Hi all,
I am doing an artificial exercise to learn how Stan deals with constraints in the transformed parameters block.
First, I generate data under a simple linear regression model with \beta = (1.0, 0.3)^T and \sigma = 0.5. Then I fit model_Ex3a.stan, a correctly specified model. Everything is fine, of course. The estimates are:
          mean  2.5% 97.5%
beta[1]  1.005 0.957 1.049
beta[2]  0.316 0.271 0.361
sigma    0.509 0.478 0.542
Now I do an experiment. Suppose that I have a constraint that \beta_1 * \beta_2 + \sigma has to be greater than a fixed number. To enforce this, I put the following into model_Ex3b.stan:
transformed parameters {
  real<lower=0.0> constraint;
  constraint = beta[1] * beta[2] + sigma;
}
We can see that the estimates in the output above clearly satisfy the bound real<lower=0.0> constraint. Therefore, when I run this code, everything is fine and the estimates remain the same.
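As a quick arithmetic sanity check (a minimal Python sketch, plugging in the posterior means reported above):

```python
# Posterior means from the fit of model_Ex3a.stan (reported above)
beta1, beta2, sigma = 1.005, 0.316, 0.509

# Value the transformed parameter takes at the posterior means
constraint = beta1 * beta2 + sigma
print(round(constraint, 3))  # 0.827, comfortably above the lower bound of 0.0
```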
The story changes when I do
transformed parameters {
  real<lower=0.9> constraint;
  constraint = beta[1] * beta[2] + sigma;
}
Obviously, the true values \beta = (1.0, 0.3)^T and \sigma = 0.5 do not satisfy this constraint (their combination is not greater than 0.9). When I fit this model (model_Ex3c.stan), two things happen:
- Biased estimates (of course)
- There were 1903 divergent transitions after warmup.
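To make the violation concrete (a minimal Python sketch, using the true simulation values from above):

```python
# True parameter values used to simulate the data
beta1, beta2, sigma = 1.0, 0.3, 0.5

# The quantity declared with lower=0.9 in model_Ex3c.stan
constraint = beta1 * beta2 + sigma
print(round(constraint, 1))  # 0.8
print(constraint > 0.9)      # False: the true values violate the lower bound
```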
I am curious about the second point, so I have several questions:
- How does Stan decide, in this case, whether to flag a transition as divergent?
- Does Stan know that the space imposed by the constraint in this case is smaller than the unconstrained parameter space?
Thank you!
Kind regards,
Trung Dung.
Ex3.txt (7.5 KB)
model_Ex3b.stan (397 Bytes)
model_Ex3c.stan (397 Bytes)
model_Ex3a.stan (296 Bytes)
R codes Ex3.R (1.8 KB)