Since upgrading to RStudio (v1.3.1056), RTools (v4.0) and RStan (2.21.2), the time it takes my model to run seems to have roughly doubled or tripled. I appreciate it could be so many things, but I was just wondering if anything jumped out to people here?
Sorry this is such a vague question, but as my model previously took about 20 hours to run, and I’m typically running it 1-2 times per week, any speed increases are of great use. I thought it was worth a shot asking here!
Does it use ODE solvers, algebra solvers, anything special? Do you know what the bottleneck is in your model? It’s hard to say much without any details.
We did not observe any regression of this magnitude between 2.19 and 2.21, at least not that we know of.
Sure, I’m actually modelling underdispersed count data, so I use a generalised Poisson pdf. This was built for me by someone much smarter than me.
One of the quirks, though, is that I have to set init = 0, as otherwise the initialisation of the sampling fails: from what I can tell, the function returns negative infinity. Perhaps that’s off topic, but I mention it just in case it would affect anything on the speed front.
Here is the pdf:
functions {
  // Generalized Poisson with its mean (mu) and variance (sigma2) as parameters.
  real generalized_poisson_ms_log(int n, real mu, real sigma2) {
    // Reparametrization from (mu, sigma2) to the usual (theta, lambda).
    real theta = pow(mu, 1.5) / sqrt(sigma2);
    real lambda = 1 - sqrt(mu) / sqrt(sigma2);
    if ((theta + lambda * n) < 0) {
      return negative_infinity();
    } else {
      return log(theta)
             + (n - 1) * log(theta + n * lambda)
             - lgamma(n + 1)
             - n * lambda
             - theta;
    }
  }
}
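For what it’s worth, a quick way to poke at this density outside of sampling is to expose the function to R and check where it returns negative infinity. This is only a rough sketch: it assumes the functions block above is saved on its own as gp_functions.stan, and that the exposed function keeps its Stan name.

library(rstan)

# Expose the Stan functions block to R (assumes it is saved as "gp_functions.stan")
expose_stan_functions("gp_functions.stan")

# Evaluate the log-pmf on a grid for an underdispersed case (sigma2 < mu);
# the pmf should sum to roughly 1 (the underdispersed case is truncated, so not exactly)
mu <- 5
sigma2 <- 2
lp <- sapply(0:50, function(n) generalized_poisson_ms_log(n, mu, sigma2))
sum(exp(lp))

# Entries where theta + lambda * n < 0 come back as -Inf (lp[1] corresponds to n = 0)
which(is.infinite(lp))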
Does anything else happen in your model or does the model block just include that function call? There might be something else that was changed since 2.19.
Also, you mentioned that it used to take 20 hours to run. Is this because you have a very large dataset or a very complex model?
The issue there was that the threadsafe version of lgamma (lgamma_r) wasn’t available in mingw32, but I wonder if this is still the case now that it’s on RTools 4. What do you think @wds15?
Unfortunately both! I typically have about 35k data points with about 30 parameters. I do also get quite variable chain speeds, particularly as the dataset gets bigger, but the model always converges to sensible values without divergent transitions.
I’ve tried to iron out the variable chain speeds but still struggle a bit. One approach that’s helped is setting increasingly more thoughtful and sensible priors (sometimes hard to think about on the log scale), but this has only achieved so much. The restriction of having to start the warmup at init = 0 is, I feel, part of the problem, as some of the parameters’ posteriors would never have a value in that range. I would like to place sensible upper and lower bounds on these parameters away from 0, but can’t because the warmup fails without init = 0.
Is the number of leapfrog steps per iteration the same? Try: get_num_leapfrog_per_iteration(fit)
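For example (a sketch; fit_old and fit_new are just placeholder names for your 2.19 and 2.21 stanfit objects):

# Compare the average number of leapfrog steps per iteration between the two fits
mean(rstan::get_num_leapfrog_per_iteration(fit_old))
mean(rstan::get_num_leapfrog_per_iteration(fit_new))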
Do you have a saved model from the old run that you can look at by any chance? (I’m not even sure how possible it is to serialize these things, so I won’t be surprised if not.)
(Edit: what we’re doing here is trying to figure out whether this is a sampling thing, i.e. the sampler is doing something different, or just a computational thing, i.e. each gradient evaluation is simply slower.)
And what’s the effective sample size of some parameters between the two? Is it the same (in which case the new model is working much harder to do the same thing)?
I’d look at the parameters with low ESS to do this comparison.
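Something along these lines should do it (again a sketch, with fit_old / fit_new as placeholder names):

library(rstan)

# Pull n_eff for every parameter and look at the smallest values in each fit
s_old <- summary(fit_old)$summary
s_new <- summary(fit_new)$summary
head(sort(s_old[, "n_eff"]))
head(sort(s_new[, "n_eff"]))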
Yeah. I’ll send you more instructions later but what we’ll need to do is a bit of a binary search to figure out where in time this problem occurred so we can identify the commits that might have caused it.
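Until then, a coarse first pass (not the commit-level bisection, just a sketch; the version numbers below are only examples) is to install specific rstan/StanHeaders releases from the CRAN archive and re-time a short run of the model on each:

# Roll back to an older release pair and compare wall-clock time for a fixed seed and iteration count
remotes::install_version("StanHeaders", version = "2.19.2")
remotes::install_version("rstan", version = "2.19.3")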
Are you using rstan? There are ways to set init to the values you want, not just 0 or random. I don’t have the code on hand, but you can use a previous run to get the dimensions of all the parameters and set them to sensible values, like RNG draws from your priors.
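Something like this works in rstan (a sketch only: model.stan, stan_data, and the parameter names/dimensions are made up, so substitute the ones from your model):

library(rstan)

# init can be a per-chain function returning a named list of initial values
init_fun <- function(chain_id) {
  list(
    beta   = rnorm(30, 0, 0.5),  # e.g. rough draws from your priors
    sigma2 = rexp(1, 1) + 0.1
  )
}

fit <- stan(file = "model.stan", data = stan_data, chains = 4, init = init_fun)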