OpenCL & threading supported at the same time?

Hi (@rok_cesnovar)!

I am setting up CmdStan to use OpenCL as a backend, and to my surprise my toy example Stan program crashes with a segmentation fault as soon as I also compile it with threading support. The toy program does not even use threading, nor does it really make use of OpenCL, but it only samples if I enable OpenCL alone and leave threading off.

Is this expected? If yes, then some documentation noting that OpenCL & threading are not supported together would be great… If it is supposed to work, then I am happy to send along all system details, the toy model, etc.

Thanks!

Best,
Sebastian

A segmentation fault is never expected, I guess :) So yeah, if you can let me know how you got to the seg fault, that would be great. I would definitely like to track this down.

Right now, the expected behavior is that the TBB is used for reduce_sum calls and OpenCL picks up any non-UDF lpdf/lpmf calls in the model block. Cooperation between the two is nonexistent at the moment. It should definitely not seg fault, though; that is a bug!
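To make that concrete, here is a rough sketch (a made-up toy model, not yours) of how the two features currently divide the work when both STAN_THREADS and STAN_OPENCL are enabled:

functions {
  real partial_sum(array[] real y_slice, int start, int end,
                   real mu, real sigma) {
    return normal_lupdf(y_slice | mu, sigma);
  }
}
data {
  int<lower=1> N;
  array[N] real y;
  vector[N] z;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 1);
  sigma ~ normal(0, 1);
  // with STAN_THREADS: the partial sums are scheduled on the TBB thread pool
  target += reduce_sum(partial_sum, y, 1, mu, sigma);
  // with STAN_OPENCL: a built-in (non-UDF) vectorised lpdf like this one
  // is a candidate for being offloaded to the GPU by the compiler
  z ~ normal(mu, sigma);
}

The two parts go through completely separate code paths and don't coordinate with each other.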

I am still waiting for varmat support to be added to stanc3, so that I can build the rest of the missing stanc3 OpenCL support on top of it. Right now, if you want to use both at the same time you would need to manually fix the generated C++, which is what I currently do when I need maximum performance. It's obviously not something I would recommend.

As for the docs, I think I added a sentence on this somewhere; I will check and add one if I am misremembering.

Cheers,
Rok

It’s my usual performance regression test model:

blrm-exnex-28.stan (31.6 KB)
blrm_exnex-combo3.data.R (3.9 KB)

This is the output of clinfo:

Number of platforms                               1
  Platform Name                                   NVIDIA CUDA
  Platform Vendor                                 NVIDIA Corporation
  Platform Version                                OpenCL 1.2 CUDA 10.1.236
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer
  Platform Extensions function suffix             NV

  Platform Name                                   NVIDIA CUDA
Number of devices                                 1
  Device Name                                     Tesla P100-PCIE-16GB
  Device Vendor                                   NVIDIA Corporation
  Device Vendor ID                                0x10de
  Device Version                                  OpenCL 1.2 CUDA
  Driver Version                                  418.87.00
  Device OpenCL C Version                         OpenCL C 1.2
  Device Type                                     GPU
  Device Topology (NV)                            PCI-E, 0000:2f:00.0
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               56
  Max clock frequency                             1328MHz
  Compute Capability (NV)                         6.0
  Device Partition                                (core)
    Max number of sub-devices                     1
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x64
  Max work group size                             1024
  Preferred work group size multiple (kernel)     32
  Warp size (NV)                                  32
  Preferred / native vector sizes
    char                                                 1 / 1
    short                                                1 / 1
    int                                                  1 / 1
    long                                                 1 / 1
    half                                                 0 / 0        (n/a)
    float                                                1 / 1
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              17071734784 (15.9GiB)
  Error Correction support                        Yes
  Max memory allocation                           4267933696 (3.975GiB)
  Unified memory for Host and Device              No
  Integrated memory (NV)                          No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       4096 bits (512 bytes)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        917504 (896KiB)
  Global Memory cache line size                   128 bytes
  Image support                                   Yes
    Max number of samplers per kernel             32
    Max size for 1D images from buffer            134217728 pixels
    Max 1D or 2D image array size                 2048 images
    Max 2D image size                             16384x32768 pixels
    Max 3D image size                             16384x16384x16384 pixels
    Max number of read image args                 256
    Max number of write image args                16
  Local memory type                               Local
  Local memory size                               49152 (48KiB)
  Registers per block (NV)                        65536
  Max number of constant args                     9
  Max constant buffer size                        65536 (64KiB)
  Max size of kernel argument                     4352 (4.25KiB)
  Queue properties
    Out-of-order execution                        Yes
    Profiling                                     Yes
  Prefer user sync for interop                    No
  Profiling timer resolution                      1000ns
  Execution capabilities
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Kernel execution timeout (NV)                 Yes
    Concurrent copy and kernel execution (NV)     Yes
      Number of async copy engines                2
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [NV]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  Invalid device type for platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  No platform

This is with CmdStan 2.28.1 under RHEL 7.9, compiled with g++ 8.2.0. make/local (working version) is:

# use GCC compilers
MKL_CBWR=AVX
CC = gcc
CXX = g++
# threading flags, which need to be passed to compiler/linker options
#STAN_THREADS=true
#CXXFLAGS += -DSTAN_THREADS
STAN_OPENCL=true
OPENCL_DEVICE_ID=0
OPENCL_PLATFORM_ID=0
LDFLAGS_OPENCL= -L${CUDA_PATH}/lib64 -lOpenCL

With the threading lines above commented back in (STAN_THREADS=true plus the CXXFLAGS addition), things just segfault:

./blrm-exnex-28 sample data file=blrm_exnex-combo3.data.R
method = sample (Default)
  sample
    num_samples = 1000 (Default)
    num_warmup = 1000 (Default)
    save_warmup = 0 (Default)
    thin = 1 (Default)
    adapt
      engaged = 1 (Default)
      gamma = 0.050000000000000003 (Default)
      delta = 0.80000000000000004 (Default)
      kappa = 0.75 (Default)
      t0 = 10 (Default)
      init_buffer = 75 (Default)
      term_buffer = 50 (Default)
      window = 25 (Default)
    algorithm = hmc (Default)
      hmc
        engine = nuts (Default)
          nuts
            max_depth = 10 (Default)
        metric = diag_e (Default)
        metric_file =  (Default)
        stepsize = 1 (Default)
        stepsize_jitter = 0 (Default)
    num_chains = 1 (Default)
id = 1 (Default)
data
  file = blrm_exnex-combo3.data.R
init = 2 (Default)
random
  seed = 1930721324 (Default)
output
  file = output.csv (Default)
  diagnostic_file =  (Default)
  refresh = 100 (Default)
  sig_figs = -1 (Default)
  profile_file = profile.csv (Default)
num_threads = 1 (Default)
opencl
  device = -1 (Default)
  platform = -1 (Default)
opencl_platform_name = NVIDIA CUDA
opencl_device_name = Tesla P100-PCIE-16GB

Number of groups: 3
Number of strata: 2
EXNEX enabled for compounds 3/3:    [1,1,1]
EXNEX enabled for interactions 0/4: [0,0,0,0]
EXNEX mixture dimensionality 3 leads to 8 combinations.
Observation => group assignment:
Group 1: [1,2,3]
Group 2: [4,5,6,7,8,9,10]
Group 3: [11,12,13,14,15,16,17,18]

Group => stratum assignment:
1 => 1
2 => 2
3 => 2
Prior distribution on tau parameters:
Log-Normal
Segmentation fault

Anything else needed?

Thanks for looking into it!

So for now I should just disable threading (this is for our production install of OpenCL-based CmdStan)?

I think I have everything. Will look into this in the next few days, probably tomorrow.

Yeah.

I was looking for a way to spawn a GPU process for each of my threads, since that seems to be what's limiting them, and I think this answers my question: it's not supported right now.

Does that mean that the best way to accelerate a model without manually editing the C++ is to just turn on OpenCL and let the compiler do what it can? Or are there scenarios where threading with reduce_sum should also be used?

When it comes to accelerating models, I'd reach for OpenCL first. It applies to almost all vectorised distribution calls and to many functions as well (@stevebronder, do you know if OpenCL is currently compatible with the new O1 flag and varmat?), and it works without any C++ or syntax changes once enabled.

If you have any computations that can't be GPU-accelerated, then parallelising those with reduce_sum can provide an additional speed-up.

Not yet, but I know how to do it and it's on my todo list.

The most computationally expensive part of my model is multiplying a very sparse matrix by a dense vector to produce a vector of means for a ~ normal call. I expected csr_matrix_times_vector() to get picked up by OpenCL, but since it isn't, I'm thinking the best approach is probably to use map_rect to spread the sparse multiplication across multiple cores and get a vector that can be sent to the GPU for the lupdf() calculation.
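Roughly what I have in mind, as a very rough sketch: this assumes the sparse matrix has been pre-packed outside Stan into equal-length (padded) per-shard arrays holding the nonzero values, shard-local row indices, and column indices; shard_mult and the data names are just placeholders.

functions {
  // one shard: multiply its block of sparse rows by the shared dense vector
  vector shard_mult(vector beta, vector theta,
                    data array[] real x_r, data array[] int x_i) {
    int n_rows = x_i[1];             // rows handled by this shard
    int nnz = x_i[2];                // nonzeros in this shard
    vector[n_rows] mu = rep_vector(0, n_rows);
    for (k in 1:nnz) {
      int r = x_i[2 + k];            // shard-local row index of the k-th nonzero
      int c = x_i[2 + nnz + k];      // column index of the k-th nonzero
      mu[r] += x_r[k] * beta[c];
    }
    return mu;
  }
}
data {
  int<lower=1> N;                    // total rows across all shards
  int<lower=1> P;                    // columns, i.e. length of beta
  int<lower=1> J;                    // number of shards
  int<lower=1> L_r;                  // padded number of reals per shard
  int<lower=1> L_i;                  // padded number of ints per shard
  array[J, L_r] real x_r;            // per-shard nonzero values
  array[J, L_i] int x_i;             // per-shard [n_rows, nnz, rows..., cols...]
  vector[N] y;
}
transformed data {
  array[J] vector[0] theta;          // no shard-specific parameters
}
parameters {
  vector[P] beta;
  real<lower=0> sigma;
}
model {
  // map_rect runs the shards in parallel when STAN_THREADS (or MPI) is enabled;
  // shards must cover the rows in order so the concatenated result has length N
  vector[N] mu = map_rect(shard_mult, beta, theta, x_r, x_i);
  beta ~ normal(0, 1);
  sigma ~ normal(0, 1);
  y ~ normal(mu, sigma);
}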

Does this approach make sense? I'm worried about all the data copying, though.

Ah yes, unfortunately we don't have an OpenCL implementation of csr_matrix_times_vector in the Math library, so that call can't be GPU-accelerated. If you're using the recent Stan 2.29 and haven't already, I'd recommend enabling the O1 optimisations for your model. They include a specialisation for csr_matrix_times_vector, so you'll get some benefit there, and then map_rect will be your best bet after that.
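If it helps, enabling that from CmdStan is just a matter of passing the flag through to stanc, e.g. with something like this in make/local:

# ask stanc to apply the O1 optimisations (CmdStan 2.29+)
STANCFLAGS += --O1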

Update for future viewers: OpenCL ended up requiring a lot of kernel operations to move data to and from the GPU, so the fastest setup turned out to be using the O1 flag, turning OpenCL off, and using reduce_sum with the normal_lupdf call in the threaded part.

Another big performance gain came from manually converting the parts of the sparse matrix that were all 0s and 1s into indexes. If you can't get it onto the GPU, that's pretty efficient.
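For example, if part of the design matrix contains only 0s and 1s, something like this toy sketch (assuming a fixed number K of 1s per row; the names are made up) replaces the matrix multiply with plain indexing:

data {
  int<lower=1> N;                          // rows
  int<lower=1> P;                          // length of beta
  int<lower=1> K;                          // number of 1s in each row
  array[N, K] int<lower=1, upper=P> idx;   // column positions of the 1s
  vector[N] y;
}
parameters {
  vector[P] beta;
  real<lower=0> sigma;
}
model {
  vector[N] mu;
  for (n in 1:N)
    mu[n] = sum(beta[idx[n]]);             // instead of row n of the 0/1 matrix times beta
  beta ~ normal(0, 1);
  sigma ~ normal(0, 1);
  y ~ normal(mu, sigma);
}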