Dear All,

Does anybody have experience with specifying non-compensatory IRT models in Stan? If so, could anybody point me to example code (or snippets, especially related to the likelihood)?

Thanks a ton!

Best wishes

Christoph

I don’t know about non-compensatory IRT models (or IRT models at all), but there are some other fancier IRT models over here: https://education-stan.github.io/

Hi bbbales2,

thanks for the link! After working through some of the examples provided, I think I can narrow my problem down to identification issues.

Say you have responses to `k` items. In the compensatory case the likelihood of the IRT model (i.e., the probability of a correct response) is `y ~ bernoulli_logit(alpha[ii] .* (ability[jj] - beta[ii]))`, with `[ii]` and `[jj]` being the item and person indices, respectively.
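For context, here is a minimal Stan sketch of that compensatory model in long format (names match the snippet above; priors omitted, so treat it as a skeleton rather than a complete program):

```stan
data {
  int<lower=1> N;                     // number of observations
  int<lower=1> I;                     // number of items
  int<lower=1> J;                     // number of persons
  array[N] int<lower=1, upper=I> ii;  // item index per observation
  array[N] int<lower=1, upper=J> jj;  // person index per observation
  array[N] int<lower=0, upper=1> y;   // correct / incorrect responses
}
parameters {
  vector<lower=0>[I] alpha;  // item discriminations
  vector[I] beta;            // item difficulties
  vector[J] ability;         // person abilities
}
model {
  // priors omitted for brevity
  y ~ bernoulli_logit(alpha[ii] .* (ability[jj] - beta[ii]));
}
```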

In the non-compensatory case, however, there are two latent traits (abilities) that are necessary for a correct response, and additional item parameters for each item. So, in a way, you have two components in the likelihood: `y ~ bernoulli_logit(alpha1[ii] .* (ability1[jj] - beta1[ii]) + alpha2[ii] .* (ability2[jj] - beta2[ii]))`.
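As a Stan statement, that two-component version would, I think, look like the following (a sketch; the declarations of the second set of item and person parameters are assumed to parallel the first):

```stan
model {
  // two trait components enter the same logit
  y ~ bernoulli_logit(alpha1[ii] .* (ability1[jj] - beta1[ii])
                      + alpha2[ii] .* (ability2[jj] - beta2[ii]));
}
```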

If I am not mistaken, you could also frame the problem as a two-factor model with cross-loadings. Currently, I am working on such a model specification, but for continuous outcomes. I have a model specification which ‘runs’, but I have not managed to get it to converge yet (sampling is awful, although there are no divergent transitions or treedepth warnings). I think my main problem is that the model is not identified, due to the cross-loadings (and the similarity of the factor loadings).

The likelihood of my model looks something like the following: `y = (beta1 + beta2) + (lambda1 * trait1) + (lambda2 * trait2) + (e1 + e2)`; in Stan I am working with `target += log_sum_exp(normal_lpdf(y | lt1, sigma1[ii]), normal_lpdf(y | lt2, sigma2[ii]))`, because I have an additive relationship of the components `lt1` and `lt2`, where `lt1 = beta1[ii] + lambda1[ii] .* trait1[jj]` and `lt2 = beta2[ii] + lambda2[ii] .* trait2[jj]`.
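Putting the pieces together, the relevant part of my current specification looks roughly like this (a sketch; data declarations and priors omitted):

```stan
transformed parameters {
  // additive components per observation
  vector[N] lt1 = beta1[ii] + lambda1[ii] .* trait1[jj];
  vector[N] lt2 = beta2[ii] + lambda2[ii] .* trait2[jj];
}
model {
  target += log_sum_exp(normal_lpdf(y | lt1, sigma1[ii]),
                        normal_lpdf(y | lt2, sigma2[ii]));
}
```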

I am not quite sure if I need the `log_sum_exp`. My main question, however, relates to (hard or soft) constraints to impose on the `beta1, beta2` and `lambda1, lambda2` vectors. Are there any such constraints I can introduce to better separate the intercepts and factor loadings?
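For concreteness, by a hard constraint I mean something at the declaration level, e.g. restricting one set of loadings to be positive:

```stan
parameters {
  vector<lower=0>[I] lambda1;  // loadings constrained positive
  vector[I] lambda2;           // unconstrained
}
```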

I hope this was not too confusing. Any hints and tips are appreciated!

Best wishes

Christoph

`log_sum_exp(a, b)` is `log(exp(a) + exp(b))`. I think it would be rather unusual here. Can you write out mathematically what term you want to compute there? Then I can maybe tell if it’s right or not (or ask more questions).
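To illustrate why I ask: `log_sum_exp` of two log-densities is the log of their *sum* of densities (a mixture-like construction), whereas if the relationship is additive in the mean, as your description of `lt1 + lt2` suggests, I would have expected something more like this (a sketch, assuming a single residual scale `sigma`):

```stan
model {
  target += normal_lpdf(y | lt1 + lt2, sigma);
  // equivalently: y ~ normal(lt1 + lt2, sigma);
}
```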

What sort of constraints might you want to impose? (edit: not that it’s a bad idea – just asking so I can think better about it)