This is a straightforward calculation, but is there a function in rstan or, more generally, a diagnostic tool that reports, after running a model, the time required to compute 1000 effectively independent samples?
And if not, would there be interest in implementing such a tool? It’s convenient for performance tests.
But what do you do for a model with multiple parameters? I see no way to get the runtime separately for each parameter, if that's even meaningful to consider. So any statement about the time to compute a given number of effective samples will have to use the sampler's total time (I guess after warmup), but it can use parameter-specific n_eff values, right? A sketch of that calculation is below.
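Something like this minimal sketch, assuming `fit` is an existing stanfit object: `get_elapsed_time()` gives per-chain warmup and sampling times, and `summary()` exposes the parameter-specific n_eff, so you get one time-per-1000-effective-samples figure per parameter, all sharing the same total runtime.

```r
library(rstan)

# Minimal sketch, assuming `fit` is a stanfit object returned by stan().
times <- get_elapsed_time(fit)            # per-chain warmup and sampling times (seconds)
total_sampling <- sum(times[, "sample"])  # total post-warmup time across chains
# (if chains ran in parallel, max(times[, "sample"]) may better reflect wall-clock time)

n_eff <- summary(fit)$summary[, "n_eff"]  # parameter-specific effective sample sizes

# Seconds to generate 1000 effectively independent draws, per parameter
time_per_1000 <- 1000 * total_sampling / n_eff
sort(time_per_1000, decreasing = TRUE)    # slowest-mixing parameters first
```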
Revisiting this after a while.
Yes, I want to include warmup. I rarely include compilation time, which is usually much shorter than the run time. This was part of a scheme to measure the performance of a latent Gaussian model that uses a Laplace approximation.
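For that use case, the sketch above just needs the warmup column added back in, something like:

```r
# Same sketch as above, but counting warmup toward the cost
total_time <- sum(rowSums(get_elapsed_time(fit)))  # warmup + sampling, all chains
time_per_1000 <- 1000 * total_time / summary(fit)$summary[, "n_eff"]
```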