Here’s my puzzle: When I fit an IRT model with MCMC, I get a theta scale that ranges from about -1 to 1.5. When I fit the exact same model, with the same data and priors, using MAP, I get a scale that ranges from about -0.1 to 0.8. I can’t figure out why that is, or whether it’s something I should be worried about.
The details:
I’m fitting what is essentially a two-parameter IRT model. Because I have count data, I use a binomial likelihood and put the IRT linear predictor inside the probability parameter of the distribution. There are 72 items in the model; the variable I’m interested in is theta.
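Written out (using $k[n]$ and $j[n]$ for the item and person of observation $n$, matching kk and jj in the code below), the likelihood is

$$y_n \sim \mathrm{Binomial}\!\left(X_n,\ \mathrm{logit}^{-1}\!\big(\delta_{k[n]} + \alpha_{k[n]}\,\theta_{j[n]}\big)\right)$$

with standard-normal priors on $\delta$ and $\theta$ and a normal(0, 2) prior on $\alpha$.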
data {
  int<lower=1> J;               // number of people
  int<lower=1> K;               // number of items
  int<lower=1> N;               // number of observations
  int<lower=1, upper=J> jj[N];  // person for observation n
  int<lower=1, upper=K> kk[N];  // item for observation n
  int<lower=0> X[N];            // total attempts for observation n
  int<lower=0> y[N];            // count of successful attempts for observation n
}
parameters {
  vector[K] delta;              // item intercept
  vector[K] alpha;              // discrimination/slope parameter
  vector[J] theta;              // ideology/ability
}
model {
  delta ~ normal(0, 1);         // item intercept
  alpha ~ normal(0, 2);         // discrimination/slope parameter
  theta ~ normal(0, 1);         // ideology/ability
  y ~ binomial_logit(X, delta[kk] + alpha[kk] .* theta[jj]);
}
I fit the model with some weakly informative priors at -1 or 1 to resolve reflection invariance. When I fit the model with MCMC it behaves well: chains converge, there are no divergences, ESS > 400, Rhat ≈ 1, etc. As far as I can tell there is no sign that either model is misbehaving, and the theta estimates from the two fits are essentially perfectly correlated.
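Since the puzzle is “essentially perfectly correlated, but on different scales,” here is a minimal sketch of the check I mean, with synthetic numbers standing in for the actual MAP and MCMC theta estimates (the ranges roughly mimic the -0.1..0.8 vs -1..1.5 situation):

```python
import numpy as np

# Synthetic stand-ins for the two sets of theta estimates;
# in practice these would come from the MAP fit and the MCMC posterior means.
rng = np.random.default_rng(1)
theta_map = rng.normal(0.35, 0.25, size=200)  # narrow scale, roughly -0.1..0.8
theta_mcmc = -0.25 + 1.9 * theta_map          # wider scale, roughly -1..1.5

# Correlation is (near-)perfect even though location and scale differ:
r = np.corrcoef(theta_map, theta_mcmc)[0, 1]

# The two scales are related by an affine map, recoverable by least squares:
slope, intercept = np.polyfit(theta_map, theta_mcmc, 1)
print(r, slope, intercept)
```

If the real estimates behave like this, the two fits agree on the ordering and relative spacing of people and differ only by a location/scale transformation of the latent dimension.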
So why are the theta scales in different locations, is it something I should worry about, and how do I solve it? Thanks for any insights.