Priors for item slopes in 2PL IRT

Hi all,
I’ve recently been wondering whether there is an essential benefit to specifying the priors for the item slope intercepts via logalpha and using exp(logalpha) in the nl formula, instead of choosing normal priors with a lower bound of 0. Granted, the implied prior density falls off faster near zero. But is the computation more efficient, or something like that? Or is it just a convention and you are free to choose?
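
To make this concrete, here is roughly what I mean by the two specifications (a minimal unidimensional sketch loosely following the brms IRT tutorial; y, item, and person are placeholder names, not my actual model):

library(brms)

# Variant 1: estimate log(alpha) and exponentiate inside the nl formula,
# so alpha = exp(logalpha) is positive by construction.
bf_logalpha <- bf(
  y ~ exp(logalpha) * eta,
  eta ~ 1 + (1 | item) + (1 | person),
  logalpha ~ 1 + (1 | item),
  nl = TRUE
)

# Variant 2: model alpha directly and truncate its population-level prior
# at zero. (Note that lb = 0 bounds only the population-level coefficient,
# not the item-level deviations from (1 | item).)
bf_alpha <- bf(
  y ~ alpha * eta,
  eta ~ 1 + (1 | item) + (1 | person),
  alpha ~ 1 + (1 | item),
  nl = TRUE
)
prior_alpha <- prior("normal(1, 1)", class = "b", nlpar = "alpha", lb = 0)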

I came across this question because I’m working on a framework for specifying multidimensional models in which items can load on a different number of dimensions, so I need the possibility of setting some alphas to 0, or of using 0 as the base value, which doesn’t work well with the logalphas (exp(logalpha) can never be 0).

Sincerely, Famondir

I may be misinterpreting your question, but the logalpha & exp(logalpha) trick in IRTs isn’t so much about the prior as it is about forcing alpha to be positive in order to make the model identifiable.

“Without any further restrictions, this model will likely not be identified (unless we were specifying highly informative priors) because a switch in the sign of αi can be corrected for by a switch in the sign of θp + ξi without a change in the overall likelihood. For this reason, I assume αi to be positive for all items, a sensible assumption for the VerbAgg data set where a y = 1 always implies endorsing a certain verbally aggressive behavior. There are multiple ways to force αi to be positive, one of which is to model it on the log-scale, that is to estimate log αi and then exponentiating the result to obtain the actual discrimination via αi = exp(log αi).” – https://arxiv.org/pdf/1905.09501.pdf

I think forcing it to be positive (or negative) by some other means should be fine. You could always run a simpler model both ways and compare.
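
For example (a rough sketch reusing the placeholder formulations from your post; additional priors and identification constraints, such as fixing the person SD, are omitted for brevity, and dat is assumed to hold long-format responses with columns y, item, and person):

fit_log <- brm(bf_logalpha, data = dat, family = bernoulli())
fit_dir <- brm(bf_alpha, data = dat, family = bernoulli(), prior = prior_alpha)

# compare out-of-sample predictive performance
loo_compare(loo(fit_log), loo(fit_dir))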

Thank you for your extensive answer. I experimented with an alternative specification for the item slopes alpha and found that the direct approach can be unstable if the intercept is close to or equal to 0. The model was stable with a prior like prior("normal(1, 0.5)", class = "b", nlpar = "alpha1", lb = 0). The response data were simulated with alphas drawn from N(1, sigma), for sigma in {0.4, 0.5, 0.6}.
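
For reference, the simulation looked roughly like this (a unidimensional sketch; the seed and the item easiness values xi are illustrative, not the exact ones I used):

set.seed(123)                                  # illustrative seed
n_person <- 1000; n_item <- 21; sigma <- 0.5   # sigma was 0.4, 0.5, or 0.6

theta <- rnorm(n_person)                       # person abilities
alpha <- rnorm(n_item, mean = 1, sd = sigma)   # item slopes ~ N(1, sigma)
xi    <- rnorm(n_item)                         # item easiness (illustrative)

# linear predictor alpha_i * theta_p + xi_i, then Bernoulli responses
eta <- outer(theta, alpha) + matrix(xi, n_person, n_item, byrow = TRUE)
dat <- data.frame(
  person = rep(seq_len(n_person), times = n_item),
  item   = rep(seq_len(n_item), each = n_person),
  y      = rbinom(n_person * n_item, 1, plogis(as.vector(eta)))
)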

So, unfortunately, switching to this approach can’t pool alphas to 0, which was intended in order to correctly identify the cases in which an item doesn’t load on a specific dimension.

Comparing the logalpha and alpha specifications, the logalphas resulted in smaller uncertainty intervals in the cases where alpha should be 0. Also, the Rhats reached 1 with fewer iterations. Model identification with the logalphas even worked without setting some alphas to 0 (if you ignore that the recovered dimensions weren’t in the order in which they had been specified*).

So, all in all, the logalpha specification was found to be superior.

*: The item groups were identified correctly. The simulated data contained 1000 persons and 21 items. The 3 dimensions were distinct, so the item groups were items 1-7, 8-14, and 15-21. There was no correlation between the dimensions. With real-life data or correlated dimensions, it probably is important to set some restrictions, as shown in this tutorial.
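
Schematically, the simulated loading pattern looked like this (each item loads on exactly one dimension; the nonzero loadings were drawn like the alphas above):

# 21 items x 3 uncorrelated dimensions
Lambda <- matrix(0, nrow = 21, ncol = 3)
Lambda[1:7, 1]   <- rnorm(7, mean = 1, sd = 0.5)   # items 1-7  -> dimension 1
Lambda[8:14, 2]  <- rnorm(7, mean = 1, sd = 0.5)   # items 8-14 -> dimension 2
Lambda[15:21, 3] <- rnorm(7, mean = 1, sd = 0.5)   # items 15-21 -> dimension 3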