How to make a GP with a non-Gaussian likelihood more efficient?

  1. How should I select the number of basis functions? I selected it by looking at Figure 6 of the manuscript (https://github.com/gabriuma/basis_functions_approach_to_GP/blob/master/Paper/manuscript/manuscript_06.pdf): I only knew that the normalised lengthscale is greater than 0.05, and took c = 1.5, so I used 40 basis functions. I also tried 30 basis functions, and there was not much difference in performance. (See the sketch after this list for how the number of basis functions and c enter the approximation.)

  2. I adapted the code to be ‘optimal’ and it does run faster than my previous code. Thanks! However, when I change poisson_log to poisson_log_glm, it becomes slower. Is that due to the additional term in my design matrix, which changes over iterations? gp1.stan (4.1 KB)

  3. I found that the spline method s( , bs=‘gp’) is faster, but if I understand correctly, it does not estimate the kernel hyperparameters and I need to fix them as inputs, right? Is it possible to use this method while still estimating the hyperparameters?

  4. You mentioned New adaptive warmup proposal (looking for feedback)! However, I think that only helps with the warm-up period, doesn’t it?
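For reference on point 1, here is a minimal sketch of how the number of basis functions M and the boundary factor c enter the 1-D Hilbert-space approximation described in the linked manuscript, with an exponentiated quadratic kernel and a Poisson observation model. The variable names and priors are illustrative only and are not taken from gp1.stan.

```stan
functions {
  // m-th eigenfunction of the Laplacian on [-L, L], evaluated at x
  vector phi_1d(real L, int m, vector x) {
    return 1 / sqrt(L) * sin(m * pi() / (2 * L) * (x + L));
  }
  // spectral density of the exponentiated quadratic kernel
  real spd_se(real alpha, real rho, real w) {
    return square(alpha) * sqrt(2 * pi()) * rho * exp(-0.5 * square(rho * w));
  }
}
data {
  int<lower=1> N;
  vector[N] x;                 // GP input, ideally centred and scaled
  array[N] int<lower=0> y;     // Poisson counts
  int<lower=1> M;              // number of basis functions, e.g. 40
  real<lower=1> c;             // boundary factor, e.g. 1.5
}
transformed data {
  real L = c * max(abs(x));    // boundary of the approximation domain
  matrix[N, M] PHI;
  for (m in 1:M)
    PHI[:, m] = phi_1d(L, m, x);
}
parameters {
  real beta0;                  // intercept
  real<lower=0> alpha;         // GP marginal SD
  real<lower=0> rho;           // GP lengthscale
  vector[M] z;                 // standard-normal basis coefficients
}
transformed parameters {
  vector[M] diagSPD;
  vector[N] f;
  for (m in 1:M)
    diagSPD[m] = sqrt(spd_se(alpha, rho, m * pi() / (2 * L)));
  f = PHI * (diagSPD .* z);    // approximate GP function values
}
model {
  beta0 ~ normal(0, 2);
  alpha ~ normal(0, 1);
  rho ~ normal(0, 1);
  z ~ std_normal();
  y ~ poisson_log(beta0 + f);
}
```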

Sorry for my late reply, as I have only just tried these options, and thank you very much in advance for your help!


append_col and append_row can be slow, as they create a new temporary variable at every leapfrog step. It might be faster to instead create a full-size design matrix and a full-size coefficient vector in transformed parameters and assign the constant and varying terms into the desired places. Then the matrix and coefficient vector are not created again each iteration; only their values are assigned. @Bob_Carpenter might know better what the best approach would be here.
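A hedged sketch of what that could look like with poisson_log_glm, extending the HSGP sketch above with K fixed covariates and one parameter-dependent design column: the full-size design matrix and coefficient vector are declared in transformed parameters and filled by block assignment instead of append_col / append_row. X_const, t, tau, gamma and the other names here are placeholders for illustration, not the actual contents of gp1.stan.

```stan
functions {
  vector phi_1d(real L, int m, vector x) {
    return 1 / sqrt(L) * sin(m * pi() / (2 * L) * (x + L));
  }
  real spd_se(real alpha, real rho, real w) {
    return square(alpha) * sqrt(2 * pi()) * rho * exp(-0.5 * square(rho * w));
  }
}
data {
  int<lower=1> N;
  int<lower=1> K;              // number of fixed covariates
  int<lower=1> M;              // number of basis functions
  real<lower=1> c;             // boundary factor
  vector[N] x;                 // GP input
  vector[N] t;                 // input for the parameter-dependent column
  matrix[N, K] X_const;        // fixed covariates
  array[N] int<lower=0> y;
}
transformed data {
  real L = c * max(abs(x));
  matrix[N, M] PHI;
  for (m in 1:M)
    PHI[:, m] = phi_1d(L, m, x);
}
parameters {
  real beta0;                  // intercept
  vector[K] beta;              // fixed-covariate coefficients
  real gamma;                  // coefficient for the varying column
  real<lower=0> tau;           // parameter inside the varying column
  real<lower=0> alpha;         // GP marginal SD
  real<lower=0> rho;           // GP lengthscale
  vector[M] z;                 // standard-normal basis coefficients
}
transformed parameters {
  matrix[N, K + M + 1] X;      // full-size design matrix
  vector[K + M + 1] b;         // full-size coefficient vector
  vector[M] diagSPD;
  for (m in 1:M)
    diagSPD[m] = sqrt(spd_se(alpha, rho, m * pi() / (2 * L)));
  // constant blocks assigned into place (no append_col / append_row)
  X[:, 1:K] = X_const;
  X[:, (K + 1):(K + M)] = PHI;
  // the one column that actually changes between iterations (illustrative)
  X[:, K + M + 1] = exp(-t / tau);
  b[1:K] = beta;
  b[(K + 1):(K + M)] = diagSPD .* z;
  b[K + M + 1] = gamma;
}
model {
  beta0 ~ normal(0, 2);
  beta ~ normal(0, 1);
  gamma ~ normal(0, 1);
  tau ~ normal(0, 1);
  alpha ~ normal(0, 1);
  rho ~ normal(0, 1);
  z ~ std_normal();
  y ~ poisson_log_glm(X, beta0, b);
}
```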

It fixes the lengthscale but keeps the magnitude as a parameter, which makes the posterior easier.
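For what it's worth, here is a Stan analogue of that idea under the HSGP parameterisation (this is not mgcv's actual basis construction for bs = 'gp', just a sketch of the same principle): the lengthscale is passed in as data, so the weighted basis can be precomputed once in transformed data, and only the magnitude remains as a GP hyperparameter.

```stan
functions {
  vector phi_1d(real L, int m, vector x) {
    return 1 / sqrt(L) * sin(m * pi() / (2 * L) * (x + L));
  }
  real spd_se(real alpha, real rho, real w) {
    return square(alpha) * sqrt(2 * pi()) * rho * exp(-0.5 * square(rho * w));
  }
}
data {
  int<lower=1> N;
  vector[N] x;
  array[N] int<lower=0> y;
  int<lower=1> M;
  real<lower=1> c;
  real<lower=0> rho_fixed;     // lengthscale fixed as data
}
transformed data {
  real L = c * max(abs(x));
  // basis columns already scaled by unit-magnitude spectral weights
  matrix[N, M] PHI_w;
  for (m in 1:M)
    PHI_w[:, m] = sqrt(spd_se(1.0, rho_fixed, m * pi() / (2 * L)))
                  * phi_1d(L, m, x);
}
parameters {
  real beta0;
  real<lower=0> alpha;         // magnitude, the only remaining GP hyperparameter
  vector[M] z;
}
model {
  beta0 ~ normal(0, 2);
  alpha ~ normal(0, 1);
  z ~ std_normal();
  // magnitude factors out of the spectral density, so f = alpha * PHI_w * z
  y ~ poisson_log(beta0 + alpha * (PHI_w * z));
}
```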

Eventually it will include adaptive mass matrix selection, which also improves speed after warmup.
