# Question about the HMC algorithm in rstan

Hello everyone. I am using rstan to do the fitting; my code is below:

```r
stan.fit = stan(file = 'DSP with WL.stan',
                data = list(n = N,
                            cmatrix = C_Matrix,
                            y = Y),
                algorithm = "HMC",
                control = list(max_treedepth = 10),
                chains = 1, iter = staniter, warmup = stanwarmup, refresh = 1)
```

In my code, I set `algorithm = "HMC"`. As far as I know, the HMC algorithm needs the gradient of the log posterior, but I was never asked to supply it. Did I do something wrong, or does Stan calculate the gradient automatically?

I would use `algorithm = "NUTS"`, which has nicer sampling properties.
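For concreteness, the switch is just the `algorithm` argument. Here is a sketch based on the call in the original post (`N`, `C_Matrix`, `Y`, `staniter`, and `stanwarmup` are assumed to be defined as in that post; note that `max_treedepth` is a NUTS-specific setting passed via `control`):

```r
stan.fit = stan(file = 'DSP with WL.stan',
                data = list(n = N, cmatrix = C_Matrix, y = Y),
                algorithm = "NUTS",                   # rstan's default sampler
                control = list(max_treedepth = 10),   # NUTS-specific setting
                chains = 1, iter = staniter, warmup = stanwarmup, refresh = 1)
```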
I have the analytical form of the gradient of the log posterior. Is there any way to supply it as an input, to avoid the automatic gradient calculation and speed up my program?

I will try the NUTS too.

To do that you would need to write a function in C++ that calculates the gradient. @bbales2 @wds15, do yinz have links on how to add your own C++ function?

I think if you also want to supply the gradient, you need to put it in Stan Math or StanHeaders. I don't think there are any docs on that.

Are you experiencing unacceptably slow sampling with the automatically computed gradients?

Thank you!

In my code, I need to run this Stan model in each of 10,000 iterations. In each iteration the input C_matrix changes, and I calculate the posterior based on it. In each iteration I only need one sample, drawn from that posterior by the HMC or NUTS algorithm.

Do you have any idea about what I can do to accelerate my program?

Thanks so much.

I am new to Stan and not familiar with C++, so I have no idea what I could do there.

When I run this code, I set iter = staniter = 15 and warmup = stanwarmup = 15.

It takes about 20 s to 40 s to get the fit. Since I need to run it in each of 10,000 iterations, I hope I can do something to increase the speed.
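One concrete speedup, assuming the model file itself does not change between iterations: compile the model once with `stan_model()` and call `sampling()` inside the loop, so the C++ compilation cost is paid only once rather than on every call. This is only a sketch using the variable names from the original post (`N`, `C_Matrix`, `Y`, `staniter`, `stanwarmup` are assumed to be defined):

```r
library(rstan)

# Compile once, outside the loop -- compilation is the expensive one-time step
dsp_model <- stan_model(file = 'DSP with WL.stan')

for (i in 1:10000) {
  # ... update C_Matrix here, as in the original algorithm ...
  fit <- sampling(dsp_model,
                  data = list(n = N, cmatrix = C_Matrix, y = Y),
                  chains = 1, iter = staniter, warmup = stanwarmup,
                  refresh = 0)   # suppress per-iteration console output
}
```

If the 20 to 40 seconds per call is dominated by sampling rather than compilation this won't help much, but it removes any recompilation overhead from the loop.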

Why do you think you need so many iterations? A few hundred effective samples should give good estimates of even tail properties of the posterior, and for most well-designed models, the defaults usually achieve at least that.

It's also probably a bad idea to base any conclusions, timing or otherwise, on such a short warmup.

In my model I use a dynamic shrinkage process prior, so in each iteration I update many parameters; beta is just one of them. The posterior of beta depends on the other parameters, so in each iteration the posterior distribution of beta is different. What I need is one sample from each posterior, not lots of samples from one posterior. That is why I want a large number of iterations.

I know it is not a good idea to use such a small number of warmup iterations, but since I only need one sample from the posterior distribution, I don't want to spend too much time on warmup. Can you give me some advice on how many warmup iterations are enough if I just want one sample?

Hm, I'm not expert enough to know what this is, but I will say that every time I've seen folks try to achieve inference iteratively (where samples from one posterior are supplied as data for a subsequent model), it has turned out that there's a more appropriate/sensible (in the sense of probability theory) single model that could be run with all the data in a single pass (see for example @betanalpha's comment here). (An exception might be the PBPK models, where something akin to this is called "model cutting", but I'm [possibly naively] still leery of it making sense even there.)

Is your C matrix modified in a highly specific way at each iteration? If not, couldn't you just declare the C matrix as parameters with a uniform prior on the range you're interested in?

I’m not really sure what you would like to do, are you able to post example code?

If you can show us a small piece of code, including the part where the data is passed to Stan, that would help.

Note that with a large number of iterations, divergences may occur during the warm-up phase.