Hi,
I am trying to write a Bayesian hierarchical model with mixed effects in Stan, following the model format from the paper Bayesian Linear Mixed Models Using Stan: A Tutorial for Psychologists, Linguists, and Cognitive Scientists (Sorensen, Hohenstein, and Vasishth). After writing my model, I compared its results with the results from stan_glmer. However, my model gives a larger standard deviation than stan_glmer does, and I don't really understand why. Could someone help me understand why this is happening? Thank you!
The stan_glmer model I used is stan_glmer(y ~ x + (x + 1 | group), data = data_long, adapt_delta = 0.99)
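For reference, this is roughly how I fit the rstanarm model and pull out the summaries I am comparing against (just a sketch; fit_glmer is my own name for the fit, and I am assuming data_long contains the columns y, x, and group used in the formula):

library(rstanarm)

fit_glmer <- stan_glmer(y ~ x + (x + 1 | group),
                        data = data_long, adapt_delta = 0.99)  # same call as above

summary(fit_glmer, digits = 3)  # posterior means, SDs, and intervals
prior_summary(fit_glmer)        # the default priors rstanarm used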
The Stan model I wrote is as follows:
data {
int<lower=1> N; //number of data points
real y[N];
real x[N];
int<lower=1> J; // number of groups
int<lower=1, upper=J> id[N]; // vector of group indices
}
parameters {
real<lower=0> sigma; // sigma of y
vector<lower=0>[2] sigma_u; // sigma of intercept and slope for random effects
cholesky_factor_corr[2] L_u; // Cholesky decomposition of the group correlation matrix
matrix[2,J] z_u; // standardized (unit normal) group-level intercepts and slopes
vector[2] beta; // fixed effects intercept and slope
}
transformed parameters{
matrix[2,J] u; // intercepts and slopes of random effects of J pairs
u = diag_pre_multiply(sigma_u, L_u) * z_u; // varying intercepts and slopes, obtained by scaling z_u with the Cholesky factor of the covariance matrix
}
model {
real mu;
//priors
L_u ~ lkj_corr_cholesky(2.0);
// the shape parameter 2.0 implies only weak prior information about the correlation between intercepts and slopes
to_vector(z_u) ~ normal(0,1); // flatten the matrix z_u to a column vector in column-major order
sigma ~ exponential(2);
beta ~ multi_normal(rep_vector(0, 2), diag_matrix(rep_vector(2, 2)) * square(25)); // weakly informative prior on the fixed-effects intercept and slope
//likelihood
for (i in 1:N){
mu = beta[1] + u[1, id[i]] + (beta[2] + u[2, id[i]]) * x[i];
y[i] ~ normal(mu, sigma);
}
}
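
And this is roughly how I run the hand-written model with rstan and look at the same quantities (again a sketch; stan_data, fit_stan, and the file name hier_model.stan are hypothetical names of mine, and I am assuming group in data_long can be coerced to consecutive integer indices):

library(rstan)

stan_data <- list(
  N  = nrow(data_long),
  y  = data_long$y,
  x  = data_long$x,
  J  = length(unique(data_long$group)),
  id = as.integer(as.factor(data_long$group))  # group indices 1..J
)

fit_stan <- stan(file = "hier_model.stan",  # hypothetical file holding the model above
                 data = stan_data,
                 control = list(adapt_delta = 0.99))

print(fit_stan, pars = c("beta", "sigma", "sigma_u"))  # compare with summary(fit_glmer)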