Probability function for simulation not acting properly

Hello,

Sometimes when I want to simulate data - and the model block contains nothing but the probability distributions I want to draw from - Stan still draws values completely different from what I specified.

For example:

if(run_estimation==0){
  alpha_rescale ~ normal(-1.6, .001); //T[-1.61, -1.59];
  lambda_rescale ~ normal(-.001, .000001); //T[-.0011, -.0009];
  for(i in 1:Nsubjs){
    alpha_fast[i] ~ normal(.5, .01); //T[.49, .51];
    beta_slow[i] ~ normal(5, .01); //T[4, 6];
    beta_fast[i] ~ normal(5, .01); //T[4.8, 5.2];
    lambda_fast[i] ~ normal(3, .01); //T[2.9, 3.1];
  }
}
This gives me values completely different from what is specified in the distributions. In addition, if I try to truncate the distributions, I get an initialization error.

Any idea what I'm doing wrong?

Thanks so much!

The reason for the initialization error is that initialization does not inspect the model block. If you truncate a distribution, you have to declare lower and upper bounds in the parameters block; otherwise the chains will try (and fail) to initialize outside the bounds.
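
For example, here's a minimal sketch of the pattern using alpha_rescale from your model (the truncation bounds are an assumption based on your commented-out T[...]):

parameters {
  // the declared bounds must match the truncation below, so the
  // chains can only initialize inside [-1.61, -1.59]
  real<lower=-1.61, upper=-1.59> alpha_rescale;
}
model {
  alpha_rescale ~ normal(-1.6, .001) T[-1.61, -1.59];
}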

When I run your model in PyStan I get a lot of warnings about low effective sample size. Longer warmup fixes the problem. Here's a traceplot from four chains with 3000 warmup and 1000 post-warmup iterations per chain.



It looks like the step size crashes early to accommodate lambda_rescale, which has a very tight prior. That works for lambda_rescale, but everything else freezes. Once the sampler gets stuck, it takes a very long time to start moving again.

You can help Stan by declaring parameters with their expected scale. This model runs much better:

data {
  int Nsubjs;
}
parameters {
  real alpha_rescale;
  // offset + multiplier sets the initial scale of the parameter
  real<offset=-.001, multiplier=.000001> lambda_rescale;
  real alpha_fast[Nsubjs];
  real beta_slow[Nsubjs];
  real beta_fast[Nsubjs];
  real lambda_fast[Nsubjs];
}
model {
  alpha_rescale ~ normal(-1.6, .001); //T[-1.61, -1.59];
  lambda_rescale ~ normal(-.001, .000001); //T[-.0011, -.0009];
  for(i in 1:Nsubjs){
    alpha_fast[i] ~ normal(.5, 0.01); //T[.49, .51];
    beta_slow[i] ~ normal(5, 0.01); //T[4, 6];
    beta_fast[i] ~ normal(5, 0.01); //T[4.8, 5.2];
    lambda_fast[i] ~ normal(3, 0.01); //T[2.9, 3.1];
  }
}
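
For what it's worth, offset/multiplier is just an affine transform under the hood. A roughly equivalent hand-written version for lambda_rescale (the raw-parameter name is made up for illustration) would be:

parameters {
  // unconstrained raw parameter; the sampler sees roughly unit scale
  real lambda_rescale_raw;
}
transformed parameters {
  // the same affine transform that offset/multiplier applies automatically
  real lambda_rescale = -.001 + .000001 * lambda_rescale_raw;
}
model {
  // implies lambda_rescale ~ normal(-.001, .000001)
  lambda_rescale_raw ~ std_normal();
}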

Hey Niko,

Yep, that did the trick, thank you!

In general, even for complex models, I've only run warmup for 300-400 iterations. It seems like I should be letting it run much longer.

Do you have any further advice for determining exactly how long the warmup phase should be if I'm not getting low effective sample size warnings?

Thanks again for all your help.
Levi

400 warmup iterations is usually enough, and I wouldn't recommend going beyond the default 1000.
The takeaway from my post is that what you put in the parameters block is important: declare lower and upper bounds on parameters with truncated distributions, and use offset and multiplier on parameters that have narrow priors. A short warmup should work if everything is scaled appropriately.
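
Condensed into a sketch that reuses the parameter names from above, the two recommendations look like this:

parameters {
  // truncated distribution: declare matching lower and upper bounds
  real<lower=-1.61, upper=-1.59> alpha_rescale;
  // narrow prior: declare the expected location and scale
  real<offset=-.001, multiplier=.000001> lambda_rescale;
}
model {
  alpha_rescale ~ normal(-1.6, .001) T[-1.61, -1.59];
  lambda_rescale ~ normal(-.001, .000001);
}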


Thank you!