Gábor Samu
Creator of this blog.
Sep 14, 2018 7 min read

A hands-on look at GPU "autoconfig" in IBM Spectrum LSF


It’s been a long time since I’ve posted to my goulash blog. I haven’t disappeared; rather, I’ve been writing articles for the IBM Accelerated Insights solution channel on HPCWire. Since then, I’ve been fortunate enough to have access to a POWER9-based developer system equipped with an NVIDIA Tesla V100 PCIe card to put through its paces. This is very timely for me, as there is some exciting new functionality in IBM Spectrum LSF known as GPU auto detect, which I recently wrote about in the article The Taming of the GPU and have been meaning to try out hands-on.

Back in the Dark Ages (no, not literally), administrators of HPC clusters had to specify in the workload scheduler configuration which nodes were equipped with GPUs, the model of the GPUs, and so on. This was relatively straightforward when nodes were equipped with single GPUs and clusters were smaller. With the proliferation of GPUs, nodes are frequently equipped with multiple GPUs, and clusters often end up with a mix of GPU models where rolling hardware upgrades have occurred. Factor in hybrid cloud environments where nodes can come and go as needed, and what seems like an easy update to the scheduler’s configuration files can become complex, quickly. And if a user requests a GPU for a submitted job but the scheduler is not fully aware of which nodes are equipped with GPUs, you can end up with under-utilization of these assets.

Enter Spectrum LSF with a new capability known as GPU auto detect, which helps simplify the administration of heterogeneous computing environments by detecting the presence of NVIDIA GPUs in nodes and automatically performing the necessary scheduler configuration.
For a detailed list of GPU support enhancements in the latest update to IBM Spectrum LSF, please refer to the following page.

My testing environment is configured as follows:

  • dual-socket POWER9 development system
  • 1 x NVIDIA Tesla V100 (PCIe)
  • Ubuntu 18.04.1 LTS (Bionic Beaver)
  • IBM Spectrum LSF Suite for Enterprise
  • NVIDIA CUDA 9.2

Note that the following assumes that NVIDIA CUDA and IBM Spectrum LSF Suite for Enterprise are installed and functioning nominally.
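If you want to double-check those prerequisites before proceeding, a couple of quick sanity checks along the following lines should be enough (a minimal sketch, assuming nvcc and nvidia-smi are already on the PATH):

# Report the installed CUDA toolkit version (9.2 in my case)
nvcc --version

# List the GPUs visible to the NVIDIA driver
nvidia-smi -L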

The latest version of IBM Spectrum LSF Suite (v10.2.0.6) ships with the following parameters enabled by default in $LSF_ENVDIR/lsf.conf:

LSF_GPU_AUTOCONFIG=Y
LSB_GPU_NEW_SYNTAX=extend

The above parameters enable the new GPU support wizardry in the product.
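On a fresh installation of the Suite there is nothing more to do. If you are working from an older installation where these parameters are missing, the sketch below shows roughly how I would check for them and apply a change; the reconfiguration commands assume LSF administrator privileges.

# Confirm the GPU auto-detect parameters are present in the active configuration
grep -E "LSF_GPU_AUTOCONFIG|LSB_GPU_NEW_SYNTAX" $LSF_ENVDIR/lsf.conf

# If they had to be added or changed, reconfigure so the daemons pick up the new values
lsadmin reconfig
badmin mbdrestart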

  1. So let’s get right into it. We start by checking if the Spectrum LSF cluster is up and running.
test@kilenc:~$ lsid
IBM Spectrum LSF 10.1.0.6, May 25 2018
Suite Edition: IBM Spectrum LSF Suite for Enterprise 10.2.0
Copyright International Business Machines Corp. 1992, 2016.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

My cluster name is Klaszter
My master name is kilenc


test@kilenc:~$ lsload
HOST_NAME       status  r15s   r1m  r15m   ut    pg  ls    it   tmp   swp   mem
kilenc              ok   0.6   0.3   0.3   0%   0.0   1    18  791G  853M  7.3G


test@kilenc:~$ bhosts
HOST_NAME          STATUS       JL/U    MAX  NJOBS    RUN  SSUSP  USUSP    RSV 
kilenc             ok              -     32      0      0      0      0      0

We confirm above that the status of the cluster is OK, meaning it’s up and ready to accept jobs. Note that I have not done any supplementary GPU configuration in Spectrum LSF apart from the two variables noted above.

  2. Eureka! Spectrum LSF has automatically detected the presence of GPUs on the system. The single GPU in this case is now configured as a Spectrum LSF resource that jobs can be scheduled against. We use the new -gpu and -gpuload options of the Spectrum LSF user commands to confirm this.
test@kilenc:~$ lshosts -gpu
HOST_NAME   gpu_id       gpu_model   gpu_driver   gpu_factor      numa_id
kilenc           0 TeslaV100_PCIE_       396.37          7.0            8


test@kilenc:~$ lsload -gpu
HOST_NAME       status  ngpus  gpu_shared_avg_mut  gpu_shared_avg_ut  ngpus_physical
kilenc              ok      1                  0%                 0%               1

test@kilenc:~$ lsload -gpuload
HOST_NAME       gpuid   gpu_model   gpu_mode  gpu_temp   gpu_ecc  gpu_ut  gpu_mut gpu_mtotal gpu_mused   gpu_pstate   gpu_status   gpu_error
kilenc              0 TeslaV100_P        0.0       46C       0.0      0%       0%      31.7G        0M            0           ok           - 

As we can see above, Spectrum LSF has correctly detected the presence of the single Tesla V100 which is present in the node. It’s also displaying a number of metrics about the GPU, including mode, temperature, and memory.

  3. Next, let’s submit some GPU workloads to the environment. I found the samples included with NVIDIA CUDA to be fairly short running on the Tesla V100, so I turned to the trusty Multi-GPU CUDA stress test, aka gpu-burn. You can read more about that utility here. To submit a GPU workload to Spectrum LSF, we use the -gpu option. This can be used to specify the detailed requirements for your GPU job, including the number of GPUs, GPU mode, GPU model, etc. For the purpose of this test, we’ll use the default value “-”, which is equivalent to the following request string: “num=1:mode=shared:mps=no:j_exclusive=no:nvlink=no”. A couple of more elaborate request strings are sketched just after the submission below.
test@kilenc:~/gpu-burn$ bsub -gpu - ./gpu_burn 300
Job <51662> is submitted to default queue <normal>.
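As an aside, the -gpu option accepts far richer request strings than the default. The two examples below are a sketch of what I could have asked for instead; the exact keywords available (gmodel and gmem in particular) depend on your LSF version, and ./my_gpu_app is simply a placeholder for your own executable.

# Request two GPUs in exclusive process mode with NVIDIA MPS enabled
bsub -gpu "num=2:mode=exclusive_process:mps=yes" ./my_gpu_app

# Request a single GPU of a specific model with at least 16 GB of GPU memory
bsub -gpu "num=1:gmodel=TeslaV100_PCIE_32GB:gmem=16G" ./my_gpu_app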
  4. Next, we confirm that the job has started successfully.
test@kilenc:~/gpu-burn$ bjobs -l 51662

Job <51662>, User <test>, Project <default>, Status <RUN>, Queue <normal>, Comm
                     and <./gpu_burn 300>, Share group charged </test>
Fri Sep 14 12:52:09: Submitted from host <kilenc>, CWD <$HOME/gpu-burn>, Reques
                     ted GPU;
Fri Sep 14 12:52:09: Started 1 Task(s) on Host(s) <kilenc>, Allocated 1 Slot(s)
                      on Host(s) <kilenc>, Execution Home </home/test>, Executi
                     on CWD </home/test/gpu-burn>;
Fri Sep 14 12:52:10: Resource usage collected.
                     MEM: 4 Mbytes;  SWAP: 0 Mbytes;  NTHREAD: 3
                     PGID: 95095;  PIDs: 95095 95096 95097 


 MEMORY USAGE:
 MAX MEM: 4 Mbytes;  AVG MEM: 4 Mbytes

 SCHEDULING PARAMETERS:
           r15s   r1m  r15m   ut      pg    io   ls    it    tmp    swp    mem
 loadSched   -     -     -     -       -     -    -     -     -      -      -  
 loadStop    -     -     -     -       -     -    -     -     -      -      -  

 EXTERNAL MESSAGES:
 MSG_ID FROM       POST_TIME      MESSAGE                             ATTACHMENT 
 0      test       Sep 14 12:52   kilenc:gpus=0;                          N     
  5. We cross-check with the NVIDIA nvidia-smi command that the gpu-burn process is running on the GPU.
test@kilenc:~/gpu-burn$ nvidia-smi
Fri Sep 14 12:52:18 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.37                 Driver Version: 396.37                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  Off  | 00000033:01:00.0 Off |                    0 |
| N/A   68C    P0   247W / 250W |  29303MiB / 32510MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     95107      C   ./gpu_burn                                 29292MiB |
+-----------------------------------------------------------------------------+
  6. Next, we use the Spectrum LSF lsload command with the -gpuload option to check the GPU utilization. This should closely match what we see above.
test@kilenc:~/gpu-burn$ lsload -gpuload
HOST_NAME       gpuid   gpu_model   gpu_mode  gpu_temp   gpu_ecc  gpu_ut  gpu_mut gpu_mtotal gpu_mused   gpu_pstate   gpu_status   gpu_error
kilenc              0 TeslaV100_P        0.0       70C       0.0    100%      29%      31.7G     28.6G            0           ok           -
  7. After 300 seconds (5 minutes), the job completes and exits without error. We inspect the history of the job using the Spectrum LSF bhist command, which shows the changes in state of the job from start to finish.
test@kilenc:~/gpu-burn$ bhist -l 51662

Job <51662>, User <test>, Project <default>, Command <./gpu_burn 300>
Fri Sep 14 12:52:09: Submitted from host <kilenc>, to Queue <normal>, CWD <$HOM
                     E/gpu-burn>, Requested GPU;
Fri Sep 14 12:52:09: Dispatched 1 Task(s) on Host(s) <kilenc>, Allocated 1 Slot
                     (s) on Host(s) <kilenc>, Effective RES_REQ <select[((ngpus
                     >0)) && (type == local)] order[gpu_maxfactor] rusage[ngpus
                     _physical=1.00] >;
Fri Sep 14 12:52:09: External Message "GPU_ALLOC="kilenc{0[0:0]}"GPU_MODELS="Te
                     slaV100_PCIE_32GB-32510{0[0]}"GPU_FACTORS="7.0{0[0]}"GPU_S
                     OCKETS="8{0[0]}"GPU_NVLINK="0[0#0]"" was posted from "_sys
                     tem_" to message box 131;
Fri Sep 14 12:52:10: Starting (Pid 95095);
Fri Sep 14 12:52:10: Running with execution home </home/test>, Execution CWD </
                     home/test/gpu-burn>, Execution Pid <95095>;
Fri Sep 14 12:52:10: External Message "kilenc:gpus=0;EFFECTIVE GPU REQ: num=1:m
                     ode=shared:mps=no:j_exclusive=no;" was posted from "test" 
                     to message box 0;
Fri Sep 14 12:57:12: Done successfully. The CPU time used is 302.0 seconds;
Fri Sep 14 12:57:12: Post job process done successfully;


MEMORY USAGE:
MAX MEM: 220 Mbytes;  AVG MEM: 214 Mbytes

Summary of time in seconds spent in various states by  Fri Sep 14 12:57:12
  PEND     PSUSP    RUN      USUSP    SSUSP    UNKWN    TOTAL
  0        0        303      0        0        0        303         

This has only been a teaser of the GPU support capabilities in Spectrum LSF. Spectrum LSF also includes support for NVIDIA DCGM, which is used to collect GPU resource utilization per job. But that’s a topic for another blog :).
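As a quick preview, enabling that integration is, to the best of my knowledge, largely a matter of pointing LSF at a running DCGM host engine via lsf.conf. The lines below are a rough sketch under the assumption that DCGM is installed and listening on its default port of 5555; check the documentation for your release before relying on them.

# Start the DCGM host engine (shipped with NVIDIA DCGM)
nv-hostengine

# In $LSF_ENVDIR/lsf.conf, tell LSF which port the DCGM host engine listens on
LSF_DCGM_PORT=5555

# Reconfigure so the LSF daemons pick up the change
lsadmin reconfig
badmin mbdrestart

Until next time!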