- Operating System: Win 10.0.17134
- brms Version: 2.9.0
Greetings, I’m looking for advice on how to proceed with a current analytical challenge within the brms framework…
Background: I'm analysing data from a single survey given to different sampling cohorts (sample, n = 8), in which all participants (subject, n = 500) answered the same question set under two fictional scenarios (scenario). So subject is nested within sample, and scenario is fully crossed within subject. The question set contained 32 statements, each rated on the same 7-point ordinal scale (1-7, strongly disagree to strongly agree, although only the verbal labels were displayed to participants). Most of the statements are indicators of five outcome scales of interest (the number of indicators ranges from 3 to 6 per scale), and the remaining nine statements form another "scale" that, for simplicity, is scored by averaging a person's responses across its items and treated as a standardised (z-scored) continuous predictor (pred_1); a sketch of that scoring step is below. Of central interest is the effect of scenario and pred_1 on a given outcome, and whether the effect of pred_1 varies across scenario. All outcomes are correlated with each other.
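For concreteness, the scoring of pred_1 has looked something like the following, where the data frame and column names (d_long, item_type, resp) are just placeholders for my actual variables:

```r
library(dplyr)

# Hypothetical long-format data: one row per subject x item x scenario,
# with the nine predictor items flagged by item_type == "predictor"
pred_scores <- d_long %>%
  filter(item_type == "predictor") %>%
  group_by(subject) %>%
  summarise(pred_1 = mean(resp)) %>%          # average over the nine items
  mutate(pred_1 = as.numeric(scale(pred_1)))  # z-score across subjects
```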
Current Approach: So far, following Bürkner & Vuorre (2019), I've used separate multilevel ordinal regression models to predict scores for each outcome (resp):
```r
resp ~ scenario * pred_1 + (1 | item) + (1 | subject) + (1 | sample),
family = cumulative("logit")

prior <- prior("normal(0, 3)", class = "sd", group = "sample") +
  prior("normal(0, 3)", class = "sd", group = "subject") +
  prior("normal(0, 3)", class = "sd", group = "item")
```
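Put together, the full call I've been running looks roughly like this (the data-frame and fit-object names are placeholders; each outcome scale gets its own long-format data set and model):

```r
library(brms)

fit_y1 <- brm(
  resp ~ scenario * pred_1 + (1 | item) + (1 | subject) + (1 | sample),
  data   = d_y1,  # long format: one row per subject x item x scenario, outcome 1
  family = cumulative("logit"),
  prior  = prior("normal(0, 3)", class = "sd", group = "sample") +
           prior("normal(0, 3)", class = "sd", group = "subject") +
           prior("normal(0, 3)", class = "sd", group = "item")
)
```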
While not ideal (since the outcomes are correlated), this may ultimately suffice. However, I have two questions about this approach. First, does the model syntax look correct? And second, is it possible to produce a scatterplot for a given model showing:
- predicted values of the latent outcome variable on the y-axis,
- predicted values of the continuous predictor, pred_1, on the x-axis,
- points coloured by scenario, and
- overlaid lines representing the predicted scenario * pred_1 interaction?
Using marginal_effects() shows the probability of responses on the observed metric (the 1-7 scale), which isn't quite what I'm looking for. A rough sketch of the kind of figure I have in mind follows.
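Here is that sketch, assuming posterior_linpred() returns draws of the latent-scale linear predictor (thresholds excluded) for a cumulative model (if not, perhaps fitted(..., scale = "linear") is the equivalent); fit_y1 is one of the fitted models from above and the variable names match my data:

```r
library(ggplot2)

# Posterior-mean latent predictions for each observed row (the points)
eta_obs <- posterior_linpred(fit_y1)   # matrix: draws x observations
d_obs   <- fit_y1$data
d_obs$eta_hat <- colMeans(eta_obs)

# Population-level interaction lines over a grid of pred_1 by scenario
# (re_formula = NA drops the item/subject/sample effects)
nd <- expand.grid(
  pred_1   = seq(min(d_obs$pred_1), max(d_obs$pred_1), length.out = 100),
  scenario = unique(d_obs$scenario)
)
nd$eta_hat <- colMeans(posterior_linpred(fit_y1, newdata = nd, re_formula = NA))

ggplot(d_obs, aes(x = pred_1, y = eta_hat, colour = scenario)) +
  geom_point(alpha = 0.3) +
  geom_line(data = nd, size = 1) +
  labs(x = "pred_1", y = "Predicted latent outcome (logit scale)")
```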
Alternative Approach: Since the outcomes are correlated, might it make more sense to use a custom beta-binomial distribution to estimate all outcomes, as sum scores, simultaneously? An example of this approach can be found here, although I'm having difficulty mapping my situation onto the model syntax; my rough attempt is below.
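For reference, my (possibly wrong) attempt at translating that example, following the brms custom-families vignette: I'd shift each sum score to start at 0 and treat 6 × (number of items) as the number of trials. All object and variable names here are placeholders, and this only shows a single outcome; presumably the outcomes could then be combined with bf() + bf() into a multivariate model. Note the (1 | item) term drops out because summing collapses over items.

```r
library(brms)

# Beta-binomial with mean/precision parameterisation, as in the vignette
beta_binomial2 <- custom_family(
  "beta_binomial2", dpars = c("mu", "phi"),
  links = c("logit", "log"), lb = c(NA, 0),
  type = "int", vars = "vint1[n]"
)

stan_funs <- "
  real beta_binomial2_lpmf(int y, real mu, real phi, int T) {
    return beta_binomial_lpmf(y | T, mu * phi, (1 - mu) * phi);
  }
  int beta_binomial2_rng(real mu, real phi, int T) {
    return beta_binomial_rng(T, mu * phi, (1 - mu) * phi);
  }
"
stanvars <- stanvar(scode = stan_funs, block = "functions")

# sum_shifted = sum of the k items for an outcome minus k (so it runs 0 to 6k);
# trials = 6 * k, passed in via vint()
fit_bb <- brm(
  sum_shifted | vint(trials) ~ scenario * pred_1 + (1 | subject) + (1 | sample),
  data = d_sum, family = beta_binomial2, stanvars = stanvars
)
```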
Any insight, and/or additional approaches, would be greatly appreciated!