Hi,
I am practicing with RStan by estimating a simple hierarchical model.
When I run this model with a sample size of 150, I get slightly biased estimates of the parameters mu and Sigma and no errors, after having switched to a non-centered parametrization (for comparison, I sketch the centered version I started from right after the model code below). First, would anyone have advice on resources that explain geometric ergodicity, and why non-centering helps, in a rather non-technical way?
Second, when I increase the sample size to 1500, I get very good estimates of mu and Sigma, but an Rhat for mu above 2 (with the smaller sample size, the Rhats are under 1.4 for all the coefficients) and a very low effective sample size. I have no divergent transitions.
Could someone explain why I get such a high Rhat and such a low effective sample size for this model? (I am not sure I understand what n_eff is exactly.)
Thank you very much,
I am copying my code below:
The model is:
stancode <- 'data {
  int<lower=0> J;            // number of observations (one per beta)
  real mu0;                  // prior mean for mu
  real sigma0;               // prior sd for mu
  real alpha0;               // inverse-gamma shape for Sigma
  real beta0;                // inverse-gamma scale for Sigma
  vector[J] obs;
  real<lower=0> sigmae;      // known observation noise
}
parameters {
  real<lower=0> Sigma;       // group-level sd
  real mu;                   // group-level mean
  vector[J] beta_tilde;      // standardized group effects
}
transformed parameters {
  vector[J] beta = mu + Sigma * beta_tilde;   // non-centered parametrization
}
model {
  mu ~ normal(mu0, sigma0);
  Sigma ~ inv_gamma(alpha0, beta0);
  beta_tilde ~ normal(0, 1);
  obs ~ normal(beta, sigmae);
}
'
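For reference, the centered parametrization I started from was essentially the following (only the parameters and model blocks differ; I am reconstructing it here as a sketch rather than copying it from my script):

stancode_centered <- 'data {
  int<lower=0> J;
  real mu0;
  real sigma0;
  real alpha0;
  real beta0;
  vector[J] obs;
  real<lower=0> sigmae;
}
parameters {
  real<lower=0> Sigma;
  real mu;
  vector[J] beta;                  // beta sampled directly, no beta_tilde
}
model {
  mu ~ normal(mu0, sigma0);
  Sigma ~ inv_gamma(alpha0, beta0);
  beta ~ normal(mu, Sigma);        // centered: hierarchical prior directly on beta
  obs ~ normal(beta, sigmae);
}
'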
With the following data:
J <- 150       # sample size; I also tried J <- 1500
mu <- 1
Sigma <- 0.2
beta <- rnorm(n = J, mean = mu, sd = Sigma)
sigmae <- 0.002
obs <- rnorm(n = J, mean = beta, sd = sigmae)
And the following settings:
library(rstan)
iter <- 2000
warmup <- 1000
stanmodel <- stan_model(model_code = stancode)   # compile the model first
stanfit2 <- sampling(stanmodel,
                     data = list(J = J, mu0 = 1, sigma0 = 15, alpha0 = 1,
                                 beta0 = 1, obs = obs, sigmae = sigmae),
                     iter = iter, warmup = warmup, chains = 3, cores = 1,
                     control = list(adapt_delta = 0.99, max_treedepth = 25))
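To inspect the Rhat and n_eff values I mention above, I look at the per-parameter summary (a minimal sketch using rstan's standard summary() and traceplot() accessors):

# Per-parameter mean, effective sample size and Rhat across the 3 chains
fit_summary <- summary(stanfit2, pars = c("mu", "Sigma"))$summary
print(fit_summary[, c("mean", "n_eff", "Rhat")])

# Trace plots to check whether the chains for mu are actually mixing
traceplot(stanfit2, pars = c("mu", "Sigma"))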