# Ordinal IRT for Latent Variable in BRMS

Hello,

I’ve been struggling to find a clear answer, so I am hoping someone can help. I would like to run a latent variable model that estimates a latent variable from a series of ordinal indicators, then use this estimated latent variable as my independent variable (along with other covariates) in a regression. I would like to do this all within one Bayesian IRT model. Is this possible with `brms`? I think this article suggests it is doable: https://arxiv.org/pdf/1905.09501.pdf.

Can anyone clarify and/or share sample code for doing this? Thanks in advance!

I may need a little additional information to understand exactly what you’re intending to do here, but here’s my current understanding: you have a latent variable that is estimated from item-level responses to some test, and you want to include variables (ordinal variables in this case) that predict the latent variable.

If this is the case, then the `brms` formula depends on what kind of IRT model you’re running. For the sake of simplicity, I’ll assume that the items you have are dichotomous and that you want to fit a 2PL model. The formula would look something like this:

```r
latentReg.2pl <- bf(
  response ~ exp(logalpha) * eta,
  eta ~ 1 + (1 |i| item) + mo(ind1) + mo(ind2) + mo(ind3) + (1 | id),
  logalpha ~ 1 + (1 |i| item),
  nl = TRUE
)
```

Here, I’m assuming a random item model, but that could be easily changed (e.g., `1 + item` instead of `1 + (1 |i| item)`). I’m also wrapping the ordinal predictors of the latent trait in the `mo()` function since this enforces a monotonic predictor, but if you want to treat the ordinal predictors differently (e.g., with a specific contrast structure), then `mo()` can be dropped. The latent variable in the model is the random effect for `id` (or whatever index is used to keep track of participants in the data), so any predictor of that latent variable needs to appear in that line of the formula (in this case, the line for the `eta` parameter).

It’s important to remember that the data will be in long format, so the predictors specific to the latent trait will be repeated for each person in the dataframe. There’s no other special coding or data preparation needed to inform the model that these are predictors of the latent variable.
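As a rough sketch of that long-format reshape (the names `wide_dat`, `item1`–`item8`, `ind1`–`ind3`, and `id` are hypothetical placeholders for your own columns):

```r
library(tidyr)

# wide_dat: one row per person; item1-item8 hold the item responses
long_dat <- pivot_longer(
  wide_dat,
  cols = starts_with("item"),
  names_to = "item",
  values_to = "response"
)
# id and ind1-ind3 are carried along, so they repeat across each person's rows
```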

Feel free to provide additional details and context if your particular situation is not addressed by this approach.


Thanks for this suggestion. Let me provide some more context to what I am trying to do.

I am using survey data on individuals. Several survey questions tap into the latent trait I want to estimate for each person. I want to use 8 of these questions to create a score for each person’s latent trait. I want to then use the estimated latent trait for each person as an independent variable in a regression where the dependent variable is another survey question.

So, I want to run a regression where X is the estimated latent variable and Y is a binary variable from the survey. Before this, I need to estimate the latent X variable using several other survey questions that are ordinal. So, latent X is estimated using 8 ordinal variables, then the latent X is used in a separate regression to predict Y. And, I want to do this all in one model if possible.

Hopefully this isn’t too confusing. Appreciate any thoughts. Thanks!

I don’t think that this can be done in a single model in `brms`, but I’m certainly no expert. The issue is that in the IRT parameterization the latent variable is a random effect estimated for each person, meaning you need to be able to access the estimates the model gives you for those random effects to get the latent variable’s value.

You can run a joint model where the random effects of two equations are allowed to correlate:

```r
survey.resp <- bf(Y ~ 1 + (1 | gr(id, id = "p")))

latent.var <- bf(
  response ~ exp(logalpha) * eta,
  eta ~ 1 + (1 |i| item) + mo(ind1) + mo(ind2) + mo(ind3) + (1 | gr(id, id = "p")),
  logalpha ~ 1 + (1 |i| item),
  nl = TRUE
)

fit <- brm(survey.resp + latent.var, ...)
```

While the above is close to what you’re looking for, the random effects in the two equations are not the same effect. They are merely allowed to correlate with one another, which is not the same as estimating the latent variable in one equation and then using that estimate as a predictor in the other.

The issue is that the only way I know of, in this model parameterization, to get the latent variable estimates is to extract the random effect estimates themselves via `ranef()`. That’s why I think you’d have to run this in separate steps: estimate the appropriate IRT model, extract the latent trait estimate for each person via `ranef()`, and then run a second regression with those estimates as a predictor.
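A hedged sketch of that two-step workflow (the names `irt_fit`, `person_dat`, `other_covariates`, and the coefficient label `eta_Intercept` are assumptions; the exact `ranef()` dimension names depend on your model):

```r
# Step 1: extract person-level random effects from the fitted IRT model;
# ranef() returns a 3-d array indexed by [level, statistic, coefficient]
theta <- ranef(irt_fit)$id[, , "eta_Intercept"]

# Step 2: merge the posterior means into the person-level data
person_dat$theta_hat <- theta[as.character(person_dat$id), "Estimate"]

# Step 3: use the point estimates as a predictor in the second regression
fit2 <- brm(Y ~ theta_hat + other_covariates,
            data = person_dat, family = bernoulli())
```

A downside of this approach is that the second regression treats the latent trait point estimates as known, discarding their posterior uncertainty.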

An alternative approach might be to specify the latent variable directly via a column of missing values and `mi()`, as detailed here. The issue with this approach is that some post-processing won’t be available (e.g., `loo()`, `pp_check()`) since there’s no observed data to check against the estimated latent variable. Depending on your needs, the loss of that post-processing might be OK.
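For what it’s worth, a rough, untested sketch of that `mi()` idea (the column `lv` is a hypothetical all-`NA` placeholder, and identification would likely require additional constraints, such as fixing the residual SD of `lv`):

```r
person_dat$lv <- NA_real_  # placeholder column for the latent variable

f_out <- bf(Y ~ mi(lv), family = bernoulli())
f_lat <- bf(lv | mi() ~ mo(ind1) + mo(ind2) + mo(ind3), family = gaussian())

fit <- brm(f_out + f_lat + set_rescor(FALSE), data = person_dat)
```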

Can you (or anyone else) provide an example of how this would look in Stan code? In particular, I’m curious whether the two equations would be included in the model block

```stan
model {
  latent_var = item1 + item2 + ... ;              // 2PL item-response model score
  y ~ bernoulli_logit(latent_var * beta + ...);   // observational model using the score as a predictor
}
```

or if you would need to first specify something in the transformed parameters block (e.g., the latent variable model) that would subsequently be modeled as a predictor in the model block?

```stan
transformed parameters {
  eta = item1 + item2 + ... ;           // linear predictor for the 2PL model
}
model {
  x ~ bernoulli_logit(eta);             // model for the latent variable
  y ~ bernoulli_logit(x * beta + ...);  // observational model with the latent variable as a predictor
}
```

Just looking for a simple example; thanks!

Again, I’m far from an expert on IRT or `brms`, but the model could likely be specified directly in `rstan`. I don’t think `brms` currently supports the needed latent variable modeling, though maybe something creative could be done with `mi()` and a column vector of `NA`s for estimating the latent trait.

The issue as I see it is that the latent trait estimate is not a linear composite of item responses but is instead based on the item parameters and participant responses to those items. The latent trait is jointly estimated with the item parameters, meaning that to get one you have to estimate both in the same model or have item parameters from a different sample.

I would think that you need to specify an IRT model in `rstan` and then use the estimated latent trait parameter to fit your logistic regression. The Stan User’s Guide includes a section on IRT models: here. Using their 2PL model in `rstan`, you’d want to fit your logistic regression using the `alpha` parameter; however, I’m not certain that you could estimate `alpha` and then tell `rstan` to include it in a design matrix for the regression… maybe with the transformed parameters block?

At the end of the day, I’m not sure what the advantage of fitting this all in a single `rstan` call would be. Your aim is seemingly to run two models (an IRT model and then a logistic regression), and even if those are estimated at the same time, they are still two different models run sequentially (i.e., results from the first must always be estimated before the second can be). I would think it’s better to save the headache of writing and troubleshooting the code to run the models together and just fit them as separate models, but maybe there’s some nuance that I’m missing.

Now that I look back at your original modeling question, I’m also not sure what the point of using the latent trait from one set of survey questions to predict the response to another question would be. If you think that the question is related to the latent trait, which I’d assume you do if you want to run these models, then you could just add the other survey question to your IRT model. What the IRT model is doing, after all, is using the latent trait (and item information) to predict the response (i.e., `response ~ 1 + item + (1 | ID)`, where a logistic function gives the probability of a “correct” response given some knowledge of the latent trait), which seems to be the aim of the second model to begin with, with the added perk that you also estimate that item’s parameters if you include it in the IRT model. The only other possibility I can think of off the top of my head is that this is some kind of linking/equating problem, in which case I’d think it easier to just estimate the two scales’ item parameters while treating the latent trait as the same between the two. If there’s another application that would make the two separate models necessary, then let me know.

This is pretty easy to do in Stan/RStan. The code below is adapted from existing examples in the Stan User’s Guide. It should work if you dichotomize the observed indicators of the latent trait, and you can extend it to allow for polytomous indicators using examples from the User’s Guide or elsewhere.

Including both the IRT/measurement model and the regression model in one fit is a good idea because the measurement uncertainty is retained in the regression, which is more accurate than the two-step method many people use.

Note that you will need to stack your data so that each row is the unique response a particular respondent gives to a particular question.
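For example, starting from hypothetical wide data `wide_dat` with 0/1 columns `item1`–`item8` and the outcome `y`, the stacking might be done like this (column-major stacking, so `jj` and `kk` line up with `x`):

```r
J <- nrow(wide_dat)  # respondents
K <- 8               # questions

stan_data <- list(
  J  = J,
  K  = K,
  N  = J * K,
  jj = rep(seq_len(J), times = K),  # respondent index for each stacked row
  kk = rep(seq_len(K), each  = J),  # question index for each stacked row
  x  = unlist(wide_dat[, paste0("item", 1:K)], use.names = FALSE),
  y  = wide_dat$y
)
```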

```stan
data {
  int<lower=1> J;              // number of respondents
  int<lower=1> K;              // number of questions
  int<lower=1> N;              // number of observations (respondents x questions)
  int<lower=1,upper=J> jj[N];  // respondent for observation n
  int<lower=1,upper=K> kk[N];  // question for observation n
  int<lower=0,upper=1> x[N];   // stacked responses measuring the latent trait
  int<lower=0,upper=1> y[J];   // J dichotomous responses forming the regression outcome
}

parameters {
  real mu_beta;                // mean question location
  vector[J] alpha;             // latent trait for respondent j
  vector[K] beta;              // location of question k
  vector<lower=0>[K] gamma;    // discrimination of question k
  real<lower=0> sigma_beta;    // scale of question locations
  real<lower=0> sigma_gamma;   // scale of log discriminations
  real delta0;                 // regression intercept
  real delta1;                 // regression coefficient for the latent trait
}

model {
  alpha ~ std_normal();
  beta ~ normal(0, sigma_beta);
  gamma ~ lognormal(0, sigma_gamma);
  mu_beta ~ cauchy(0, 5);
  sigma_beta ~ cauchy(0, 5);
  sigma_gamma ~ cauchy(0, 5);
  delta0 ~ normal(0, 1);
  delta1 ~ normal(0, 1);
  x ~ bernoulli_logit(gamma[kk] .* (alpha[jj] - (beta[kk] + mu_beta)));
  y ~ bernoulli_logit(delta0 + delta1 * alpha);
}
```
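A minimal sketch of fitting this with `rstan` (assuming the model above is saved as `irt_reg.stan` and `stan_data` is a list containing the elements named in the data block):

```r
library(rstan)

fit <- stan(
  file = "irt_reg.stan",  # the Stan model above, saved to a file
  data = stan_data,       # list with J, K, N, jj, kk, x, y
  chains = 4, iter = 2000
)

print(fit, pars = c("delta0", "delta1"))
```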

Thank you @nomad, this is very helpful.

One point of clarification, though. My understanding of the code is that the covariate `x` enters the regression equation through `alpha`. And because `alpha` is centered and scaled in the measurement model, the interpretation of `delta1` is the expected change in the log odds of `y` for a one-standard-unit change in `alpha`.

Is that correct?

`delta1` is a logistic regression coefficient, so standard interpretations apply, yes, and `alpha` will be on the standard normal scale. However, `x` is not a covariate; it is the set of variables used as observed indicators of the latent trait (`alpha`) that the IRT model estimates. The code requires you to stack these variables into one vector.
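As a quick illustration of that interpretation (assuming `fit` is the `rstan` fit of the model above), the posterior odds ratio per one-standard-unit change in `alpha` can be summarized as:

```r
draws <- rstan::extract(fit, pars = "delta1")$delta1
quantile(exp(draws), probs = c(0.025, 0.5, 0.975))  # odds ratio per 1-SD change in alpha
```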
