Taking advantage of both sparse matrices and GPUs

I’m currently using R, Stan, and specifically the functions extract_sparse_parts() and csr_matrix_times_vector(), to take advantage of having a 98.7% sparse matrix in a hierarchical linear regression model. I have also seen this regarding GPUs.

Is there currently a way to take advantage of both GPUs and sparse matrices? If so, is there an example posted anywhere? If not, and I can only choose either GPU or sparse, any idea which would be faster?

The only things planned for GPUs in the immediate future are dense operations. There are separate developments to support sparse matrices more in the Stan language. So, for now, using csr_matrix_times_vector() is about it.
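For anyone finding this thread later, here is a minimal sketch of how that recommendation looks in practice. It assumes the CSR parts were produced in R with rstan::extract_sparse_parts(X), which returns the non-zero values `w`, column indices `v`, and row start pointers `u`; the variable names and the simple priors are illustrative, not from the original model.

```
// Sketch: regression on a sparse design matrix X, passed in
// CSR form so X itself is never stored densely.
data {
  int<lower=0> N;        // rows of X (observations)
  int<lower=0> K;        // columns of X (predictors)
  int<lower=0> nz;       // number of non-zero entries in X
  vector[nz] w;          // non-zero values (parts$w in R)
  int v[nz];             // column index of each value (parts$v)
  int u[N + 1];          // row start pointers into w (parts$u)
  vector[N] y;
}
parameters {
  vector[K] beta;
  real<lower=0> sigma;
}
model {
  beta ~ normal(0, 1);   // illustrative priors
  sigma ~ normal(0, 1);
  // sparse matrix-vector product X * beta
  y ~ normal(csr_matrix_times_vector(N, K, w, v, u, beta), sigma);
}
```

The key point is that only the non-zero entries and their index arrays cross into Stan, so memory and the per-iteration multiply scale with `nz` rather than `N * K`.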

ok great, thanks for the quick response!

do you have any feel for whether GPU with dense operations would be faster or slower than CPU with sparse operations?

With the sparsity you mention, I would guess sparse CPU would be faster, especially if your matrix is big, as there will be some overhead from transferring the matrix to the GPU (memory transfer speed is often the bottleneck in GPU performance).


Ok, great, thanks. I was hoping you’d say sparse… I already have that implemented, so that answer requires less work. :)

We may have a real sparse matrix type supported at the beginning of next year. After that, adding GPU support for sparse matrices will be easier.

ok that sounds great, thanks for your help!

I just searched for this question again, and totally forgot I asked it here already! I don’t know whether to feel bad about being forgetful or good about being consistent. :)

Was there any progress on GPU support for sparse matrices?