I’m currently using R and Stan, specifically the function
csr_matrix_times_vector(), to take advantage of a 98.7%-sparse design matrix in a hierarchical linear regression model. I have also seen discussion of GPU support in Stan.
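For context, the relevant part of my model looks roughly like the sketch below (simplified to a non-hierarchical likelihood; the data names `w`, `v`, `u` are just the compressed-row-storage pieces that `rstan::extract_sparse_parts()` produces, not anything special):

```stan
data {
  int<lower=1> N;          // number of rows in the design matrix X
  int<lower=1> K;          // number of columns in X
  int<lower=0> nz;         // number of non-zero entries (~1.3% of N*K here)
  vector[nz] w;            // non-zero values of X, in row-major CSR order
  array[nz] int v;         // column index of each entry of w
  array[N + 1] int u;      // index into w where each row of X starts
  vector[N] y;             // outcome
}
parameters {
  vector[K] beta;
  real<lower=0> sigma;
}
model {
  // sparse X * beta instead of materializing the dense product
  y ~ normal(csr_matrix_times_vector(N, K, w, v, u, beta), sigma);
}
```
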
Is there currently a way to take advantage of both GPUs and sparse matrices? If so, is there an example posted anywhere? If not, and I can only choose either GPU or sparse, any idea which would be faster?