# How to estimate the covariance matrix of a mirt model parameter in rstan

Dear Stan users,
I am new to Stan, and I would like to know how I can use Stan to estimate the covariance matrix of model parameters. Before running into this question, I read a paper that used JAGS to estimate the covariance matrix of the parameters of an IRT model.
So I was wondering whether I could do the same in rstan. Here is my Stan code for an MIRT (multidimensional item response theory) model.

```stan
// M3PL: parameters estimated with Stan

data {
  int<lower=1> n_stu;                     // number of students
  int<lower=1> n_itm;                     // number of items
  int<lower=0,upper=1> Y[n_stu, n_itm];   // response scores
  int<lower=1> D;                         // number of dimensions
}

transformed data {
  row_vector[D] mu_theta = rep_row_vector(0, D);
  cov_matrix[D] Sigma = diag_matrix(rep_vector(1, D));  // identity covariance
}

parameters {
  vector<lower=-3,upper=3>[D] theta[n_stu];
  matrix<lower=0.5,upper=2.5>[n_itm, D] alpha;
  vector<lower=-3,upper=3>[n_itm] beta;
  vector<lower=0,upper=0.25>[n_itm] gamma;
}

model {
  for (i in 1:n_stu)
    theta[i] ~ multi_normal(mu_theta, Sigma);
  beta ~ normal(0, 1);
  to_vector(alpha) ~ lognormal(0, 0.5);
  gamma ~ uniform(0, 0.25);

  for (j in 1:n_itm) {
    for (i in 1:n_stu) {
      real p = inv_logit(1.7 * (row(alpha, j) * theta[i] + beta[j]));
      Y[i, j] ~ bernoulli(gamma[j] + (1 - gamma[j]) * p);
    }
  }
}

generated quantities {  // compute the pointwise log-likelihood for loo and WAIC
  vector[n_itm] log_lik[n_stu];
  vector[n_itm] pY[n_stu];
  for (j in 1:n_itm) {
    for (i in 1:n_stu) {
      pY[i, j] = gamma[j] + (1 - gamma[j]) * inv_logit(1.7 * (row(alpha, j) * theta[i] + beta[j]));
      log_lik[i, j] = bernoulli_lpmf(Y[i, j] | pY[i, j]);
    }
  }
}
```
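If the goal is to estimate the covariance matrix of the latent traits rather than fix `Sigma` to the identity, one common approach (a sketch of standard Stan practice, not the original poster's confirmed solution) is to put an LKJ prior on the Cholesky factor of the trait correlation matrix. For identifiability in MIRT models the trait variances are usually fixed to 1, so the covariance matrix reduces to a correlation matrix `Omega`:

```stan
// Sketch: estimate the correlation matrix of theta instead of fixing Sigma.
// Assumes the same data block as the model above; variances are fixed to 1
// for identifiability, so only the correlations are free parameters.

parameters {
  vector[D] theta[n_stu];
  cholesky_factor_corr[D] L_Omega;   // Cholesky factor of the trait correlations
  // ... item parameters (alpha, beta, gamma) as before ...
}

model {
  L_Omega ~ lkj_corr_cholesky(2);    // weakly informative prior on correlations
  for (i in 1:n_stu)
    theta[i] ~ multi_normal_cholesky(rep_vector(0, D), L_Omega);
  // ... likelihood as before ...
}

generated quantities {
  corr_matrix[D] Omega = multiply_lower_tri_self_transpose(L_Omega);
}
```

The posterior draws of `Omega` then give the estimated trait correlation matrix, and `multi_normal_cholesky` is faster and numerically more stable than `multi_normal` with a full covariance matrix.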

I would appreciate your help very much.
Young Luis.


I solved this problem myself!


Sorry we weren’t able to help you earlier. If you have the time, could you share an outline of what helped you, and possibly the final Stan code, so that others may learn from it? (But it is OK if you don’t. :-) )

Thanks!