OK to not model a variance of no interest?

Suppose I have a model like the following

stan_lmer(Y ~ X + (X | conditions) + (1 | Subject), data = dat, …)

and my goal is to make posterior inference about those conditions. In other words, I’m not interested in the posterior distribution of the subject effects. From the conventional statistical perspective (e.g., linear mixed-effects modeling), it would still be preferable to model the cross-subject variance when it’s substantial (e.g., for likelihood ratio testing). So my question is: from a Bayesian perspective, is the above model superior to the following one, which does not incorporate the cross-subject variability?

stan_lmer(Y ~ X + (X | conditions), data = dat, …)

I’m not sure what your subject level is here. Are there multiple Y values per subject, or is Y something like count or repeated binary-trial data?

I’d say the Bayesian perspective we adopt is more one of evaluating which predictors and coefficients are useful on a case-by-case basis when building up models. Sometimes it depends on the application: we might only care about predictions for more observations from the same groups, and sometimes we might care about predictions for new groups.

Do you just have one observation per subject? If so, then the models are equivalent from a certain perspective (assuming that residual errors and subject random effects are independent), because random subject effect (~ N(0, tau^2)) + epsilon (~ N(0, sigma^2)) ~ N(0, tau^2 + sigma^2).
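
To see the collapse numerically, here is a minimal sketch (the tau and sigma values are arbitrary, chosen only for illustration):

# With one observation per subject, an independent subject effect ~ N(0, tau^2)
# plus a residual ~ N(0, sigma^2) is indistinguishable from a single residual
# with variance tau^2 + sigma^2.
set.seed(1)
tau   <- 0.8   # hypothetical between-subject SD
sigma <- 1.2   # hypothetical residual SD
n     <- 1e6
y1 <- rnorm(n, 0, tau) + rnorm(n, 0, sigma)   # subject effect + residual
y2 <- rnorm(n, 0, sqrt(tau^2 + sigma^2))      # single collapsed residual
c(sd(y1), sd(y2), sqrt(tau^2 + sigma^2))      # all approximately equal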

In that scenario the only question is whether you are interested in tau and sigma per se. If, as you seem to imply, you do not really care, then it may not matter how you set this up. I suppose another possibility is that you might have prior information on tau or sigma, but not both, or that it is easier to specify separate priors, in which case there would be an advantage in keeping both in the model.

This is no longer the case once we talk about generalized linear mixed-effects models with concave or convex inverse link functions.
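
For instance, here is a quick sketch with a logit link (the numbers are arbitrary): averaging the inverse link over the random effect is not the same as applying the inverse link at the mean of the random effect, so the subject effect cannot simply be folded into a residual term.

# Jensen-type effect: E[plogis(eta + b)] != plogis(eta) when b ~ N(0, tau^2).
eta <- 1
tau <- 1.5
b <- rnorm(1e6, 0, tau)
mean(plogis(eta + b))   # marginal success probability, averaged over subjects
plogis(eta)             # probability at b = 0; noticeably larger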

Currently I’m only considering the Gaussian assumption.

I have one observation per condition per subject (‘condition’ is another grouping variable). In other words, each subject has K observations, where K is the number of conditions. I’m still uncertain whether it’s OK to not explicitly model the cross-subject variability.

If you ignore that you have multiple observations per subject and pretend that they are independent, as your second model does, then you tend to inflate the amount of information you have. Depending on the within-subject variability, this effect can be rather extreme (in my field it usually is, and one should never leave out some kind of subject effect in this kind of setting).
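
A minimal simulation sketch of this point (every name and number below is an assumption for illustration, not something from your data; in particular X is taken to be a subject-level covariate, the case where ignoring the grouping is most clearly anti-conservative):

library(rstanarm)

# S subjects crossed with K conditions, one observation per cell, plus a
# genuine subject intercept with substantial cross-subject variability.
set.seed(123)
S <- 30; K <- 5
dat <- expand.grid(Subject = factor(1:S), conditions = factor(1:K))
x_subj   <- rnorm(S)                 # subject-level covariate
subj_eff <- rnorm(S, 0, 1.5)         # cross-subject variability
dat$X <- x_subj[dat$Subject]
dat$Y <- 0.5 * dat$X + subj_eff[dat$Subject] + rnorm(nrow(dat), 0, 1)

fit_with    <- stan_lmer(Y ~ X + (X | conditions) + (1 | Subject), data = dat, refresh = 0)
fit_without <- stan_lmer(Y ~ X + (X | conditions), data = dat, refresh = 0)

# The second model treats the K observations per subject as independent, so its
# interval for the X coefficient typically comes out considerably narrower than
# it should be.
posterior_interval(fit_with,    pars = "X")
posterior_interval(fit_without, pars = "X")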
