Thanks!
I’ve been playing with a very similar parameterization. It WORKS, but there is still something wrong.
The first approximately 400 samples take 20-30 minutes. Tailing the samples.csv file, the posterior draws for the parameters strongly match my intuition about what they should be. I even took a few draws, plotted them in R against the data, and the fits look great.
THEN, the model seems to get stuck. While the first 400 samples took 30 minutes, the next 50 samples took 8 HOURS.
What’s confusing to me is that the model starts sampling quickly, the values look great, and then it gets stuck. My guess is that Stan has somehow wandered into some pathological corner of the parameter space.
I’ve tried adjusting the parameters for the priors, but it doesn’t seem to help much.
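For example, one change I tried was tightening the scale priors; the exact values below are just illustrative:

  // illustrative only -- tighter scale priors, swapped into the model block
  group_scale ~ exponential(1);
  phi_scale ~ exponential(1);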
Here is the latest Stan model:
data {
  int<lower=0> N;                          // number of observations
  real<lower=0> score[N];                  // positive response
  int<lower=1> N_group;                    // number of groups
  int<lower=1, upper=N_group> group_f[N];  // group index per observation
}
parameters {
  // baseline intercept
  real alpha0;
  // Random effects (non-centered)
  vector[N_group] group_eta;
  real<lower=0> group_scale;
  vector[N_group] phi_eta;
  real<lower=0> phi_scale;
}
transformed parameters {
  vector[N_group] group_re;
  vector[N_group] phi_group;
  group_re = group_scale * group_eta;
  phi_group = phi_scale * phi_eta;
}
model {
  vector[N] mu;
  vector[N] phi;
  group_eta ~ normal(0, 1);
  group_scale ~ exponential(0.1);
  phi_eta ~ normal(0, 1);
  phi_scale ~ exponential(0.1);
  alpha0 ~ normal(5, 10);
  mu = exp(alpha0 + group_re[group_f]);          // group-level mean
  phi = exp(-phi_group[group_f]);                // group-level shape (inverse dispersion)
  target += gamma_lpdf(score | phi, phi ./ mu);  // mean-parameterized gamma
}
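For what it’s worth, the intent of the likelihood line is a mean-parameterized gamma: with shape $\phi$ and rate $\phi/\mu$,

$\mathrm{E}[\text{score}] = \phi \big/ (\phi/\mu) = \mu, \qquad \mathrm{Var}[\text{score}] = \phi \big/ (\phi/\mu)^2 = \mu^2/\phi,$

so $\mu$ is the group-level mean and $\phi$ acts as an inverse dispersion.

And in case it helps, here is a sketch of a generated quantities block that would pull posterior predictive draws directly, instead of reconstructing the fits in R (score_rep is just a name I made up for this sketch):

generated quantities {
  // one replicated data set per posterior draw
  real score_rep[N];
  for (n in 1:N) {
    real mu_n = exp(alpha0 + group_re[group_f[n]]);   // same mean as the model block
    real phi_n = exp(-phi_group[group_f[n]]);         // same shape as the model block
    score_rep[n] = gamma_rng(phi_n, phi_n / mu_n);    // shape, rate parameterization
  }
}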