I have easily reproduced this. See the GitHub issue @stevebronder
I now have a strong candidate, I think.
Great! And you are on Linux, @rok_cesnovar?
Here is the model with profiling stuff added
blrm_exnex-profile_stan.txt (28.5 KB)
(I used the cmdstanr time() command… that is good, no?)
Yeah, Ubuntu. Let's continue on GitHub; otherwise, if any other issue pops up, this thread will become hard to follow.
cmdstan is our most direct interface, so I'd rather look for regressions using it and bash. I also ran the above with nothing else running on my computer; other running processes can take up cache, etc.
@rok_cesnovar I'm seeing the GitHub issue, but honestly I'm confused about which models and versions people are checking. Are you both using the exact same model as the one here?
That model doesn't have profiling statements in it, so I'm not sure which model yinz are using (and tbh I'm not that worried about performance regressions in models with profile statements, as I don't think such models are performance critical).
Are both of you running the same (not similar, same) scheme as what I'm doing above? I'm worried we are comparing apples and oranges in terms of timing; it would seem very weird that my computer is faster with the new version while another has a 20% slowdown.
cmdstanr's time() just measures how long the process is alive. It's fine, especially for a regression this large.
For me it's mostly that it adds another layer of complication, and using cmdstan directly gets us down to exactly what we want to measure before we spend a lot of time reverting things.
Like @rok_cesnovar suggested, let's move to GitHub.
I have seen the regression with and without the profiling statements in the model. I would also trust the cmdstanr-reported "time" values, as these come from CmdStan itself and have been wall time since 2.26 (as I recall).
I am not sure why you are not seeing a regression, but I am and @rok_cesnovar is as well… which sounds as if we are in the majority for now :D
Yeah, agreed, let's move over to GitHub so we aren't juggling two threads.
I just synced and built after a make clean-all and got a lot of substitution failures.
Edit: I re-cloned and all is well!
Does something like alpha1, alpha2, alpha3 ~ normal(0, 100);
in the model block make sense? Or is there already a way to apply the same prior to multiple parameters?
I think it's (see 6.3 Vector, matrix, and array expressions | Stan Reference Manual):
[alpha1, alpha2, alpha3] ~ normal(0, 100);
This is indeed the way to do it right now.
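To see the vector-expression syntax in context, here is a minimal complete model (the parameter names are just illustrative, carried over from the example above):

```stan
parameters {
  real alpha1;
  real alpha2;
  real alpha3;
}
model {
  // one tilde statement applies the same prior to all three parameters;
  // [alpha1, alpha2, alpha3] constructs a row_vector, and normal() is vectorized
  [alpha1, alpha2, alpha3] ~ normal(0, 100);
}
```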
Supporting the syntax in @nipnipj's example was also suggested (https://github.com/stan-dev/stanc3/pull/670#issuecomment-702840841) by @rybern in his PR, so it is something that might eventually get supported as well.
What about something like target += normal_lpdf(alpha1, alpha2, alpha3 | 0, 100)?
I would strongly prefer
target += normal_lpdf([alpha1, alpha2, alpha3] | 0, 100);
to make it clear that there's only one argument to the left of the vertical bar.
That should already work now. No?
Yes, this model compiles fine:
parameters {
  real y;
  real x;
  real z;
}
model {
  target += std_normal_lpdf([x, y, z]);
  // tilde statement equivalent of the above (ignoring constants)
  [x, y, z] ~ std_normal();
  // tilde statement exact equivalent
  target += std_normal_lupdf([x, y, z]);
}
For anyone following along with this, we sorted out the performance bug in the issue below.