Hi,
Using the same model, seed, iterations and warmup, is there a reason why RStan would yield different output than PyStan for the same input dataset?
Look forward to insights.
Thanks!
Ivan
Different implementation of the PRNG?
Avraham, thanks for the reply.
I used get_seed() in PyStan and RStan, and both return the same value as the seed I passed in.
Is there another check I can do?
Thanks.
Ivan
This is a bug if it is happening. Draws and optimization results should be identical given the same random seed. Other things, such as the printed summary of draws, are not yet identical.
Thanks for the feedback, @ariddell.
I wonder if anyone else had this same issue.
You need to set the seed in Stan, not in Python or in R.
At that point, everything should produce the same results.
If it doesn't, it should all get fixed after the refactor lands in 2.15 and we call exactly the same underlying code for sampling (rather than having a lot of complicated duplicated code across interfaces).
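For example, a minimal PyStan sketch of what this means (model_code and data here are placeholders for your model string and data dict; RStan's stan() call takes an analogous seed argument):
import pystan

# The seed goes to Stan's own sampler, not to Python's random module
# (and not to R's set.seed() on the RStan side).
sm = pystan.StanModel(model_code=model_code)
fit = sm.sampling(data=data, iter=2000, warmup=1000, chains=1, seed=6789)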
Also check whether the chain_id is the same, as it strides the RNG to ensure that multiple chains don't overlap. For example, in RStan chains=1 sets the default chain_id to 1, whereas the default chain_id in CmdStan is 0. I ran into this issue recently and had to resolve it by setting id=1 in CmdStan.
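A minimal sketch of pinning the chain ID explicitly (chain_id is PyStan 2's sampling() argument; the CmdStan line assumes its id option, as described above; sm and data are placeholders):
# PyStan: force chain ID 1 so it matches RStan's default for a single chain.
fit = sm.sampling(data=data, iter=2000, warmup=1000,
                  chains=1, seed=6789, chain_id=1)
# CmdStan equivalent (shell): ./model sample random seed=6789 id=1 data file=...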
Thanks for the response.
Update: I ran both models with a reduced warmup-to-sampling ratio and got closer outputs; initially the warmup/sampling ratio was 1:1, now it's 1:4.
@Bob_Carpenter: I set the seed within the stan() call, with the seed argument; this should rule out the issue you noted.
@betanalpha: Similar to the seed, I set the chains argument in the stan() call to 1 for both; hence, I shouldn't run into the issue you noted.
Are you running on the same hardware with the same underlying C++ compiler and compiler options? They really should produce exactly the same answer in that situation.
We're not going to try to debug this now as everything's getting changed in Stan 2.15, and that should make everything consistent. For now, there are probably variations in the algorithms that aren't updated the same way across RStan, PyStan and CmdStan, which is one of the main reasons we pulled all the important code down into C++.
I don't know what you mean by 1:1 in warmup/sampling ratio. Is that just the number of iterations you chose?
Same machine for running both.
Regarding the warmup/sampling ratio: the initial run in both interfaces had total iterations at 20000 and warmup of 10000 (i.e. 10000 sampling iterations as well). This gave different output when extracting quantile values (e.g. the 10% quantile).
The second run had the same number of iterations but warmup set to 5000 (i.e. 15000 sampling iterations); this gives closer output for both.
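For concreteness, the two configurations look roughly like this in PyStan (sm and data are placeholders for the compiled model and data; RStan's stan() takes the same iter/warmup arguments):
# Run 1: 20000 iterations with 10000 warmup -> 10000 retained draws
fit1 = sm.sampling(data=data, iter=20000, warmup=10000, chains=1, seed=6789)

# Run 2: same total iterations, 5000 warmup -> 15000 retained draws
fit2 = sm.sampling(data=data, iter=20000, warmup=5000, chains=1, seed=6789)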
As Michael's suggesting, I think it's probably a count-from-0 vs. count-from-1 issue on the chain ID. Have you tried controlling for that? Also, I think your reply is truncated.
In general, you just want to be able to replicate within MCMC standard error. The whole process is probabilistic and there's nothing special about any given seed and run.
And as I said, we're bringing all the implementations exactly in line with the forthcoming Stan 2.15 release.
@Bob_Carpenter: I set the chains argument in the stan() call to 1 for both; hence, I shouldn't run into the issue you noted.
With the lower warmup:sampling ratio, we're within the MCMC standard error, as measured by RMSE.
Looking forward to 2.15.
Thanks for all the feedback.
Ivan
@leungi, with the same compiler, same settings, same hardware, same operating system, same versions of Stan, same random seed and chain id, the runs will be identical. Not just within MCMC SE, but identical.
I think PyStan, by default, has a compiler optimization option of -O2, which will make it not run the same way. But I'm not absolutely sure.
You might have to explicitly set the seed to make them match. Just setting the number of chains might not be good enough.
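If the -O2 default is the concern, a hedged sketch of forcing a matching optimization level on the PyStan side (extra_compile_args is a StanModel argument in recent PyStan 2 releases; on the RStan side the analogous setting is CXXFLAGS in ~/.R/Makevars, and model_code is a placeholder):
# Build the model with an explicit optimization flag so it matches
# whatever CXXFLAGS the RStan build used.
sm = pystan.StanModel(model_code=model_code, extra_compile_args=["-O3"])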
As noted in a previous post, the seed and chains were explicitly set to be the same, hence the confusion.
Thanks for the feedback.
Up until recently (current develop, not sure if it's released yet) the different interfaces did not all initialize things in exactly the same way, so you might end up with one more or one fewer PRNG call in one interface vs. another. When Daniel says same x/y/z, that includes the interface and (as Daniel mentions above) the optimization level.
Python Code:
import os
import pystan

schools_code = """
data {
  int<lower=0> J;            // number of schools
  vector[J] y;               // estimated treatment effects
  vector<lower=0>[J] sigma;  // s.e. of effect estimates
}
parameters {
  real mu;
  real<lower=0> tau;
  vector[J] eta;
}
transformed parameters {
  vector[J] theta;
  theta = mu + tau * eta;
}
model {
  eta ~ normal(0, 1);
  y ~ normal(theta, sigma);
}
"""

schools_dat = {"J": 8,
               "y": [28, 8, -3, 7, -1, 1, 18, 12],
               "sigma": [15, 10, 16, 11, 9, 11, 10, 18]}

sm = pystan.StanModel(model_code=schools_code)

os.environ["STAN_NUM_THREADS"] = "2"
fit = sm.sampling(data=schools_dat, iter=500, warmup=400, chains=1, seed=6789)
=================
Both have the identical Stan file. The first 10 numbers are very different.
Did you solve your problem?
RStan, PyStan, and CmdStan should produce identical outputs.
Is the underlying version of the Stan library the same?
What happens if you replace that Python code with the following?
sm = pystan.StanModel("school.stan")
Doing that should guarantee that the same Stan code is being used with both RStan and PyStan.
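For example, a minimal sketch of the PyStan side once the model lives in school.stan (chain_id here is an extra assumption to pin the chain ID; on the RStan side, stan_model("school.stan") would read the same file, and schools_dat is as defined in the earlier post):
import pystan

# Build from the same .stan file that RStan uses, so the model code cannot differ.
sm = pystan.StanModel("school.stan")

# Fix the seed and chain ID explicitly so both interfaces stride the RNG the same way.
fit = sm.sampling(data=schools_dat, iter=500, warmup=400,
                  chains=1, seed=6789, chain_id=1)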
Same issue here.
Why is the printed summary not yet identical?
What problems do you have with the printed summary?