I’ve attempted to use brms to model what I would describe as a multivariate between- + within-subjects ANOVA. The output in the summary does match what I would expect given the raw data, but I wondered if you might be able to confirm that I have actually written the code appropriately, as I am quite a novice at both Bayesian modelling and, frankly, R in general (though very happy to learn as much as possible about both!). As well as checking what I’ve tried, I have a few additional questions that I would really appreciate anyone shedding some light on.
Participants are organised into 2 groups: a control group and an anxiety disorder group. Each participant gives 2 types of assessment in response to a stimulus (the multivariate part). In addition, participants give these ratings in 2 contexts, within subjects. This is how I coded it:
MANOVAattempt <- brm(
  formula = mvbind(Assessment1, Assessment2) ~ Group * Context + (1 | PPN),
  data = MANOVAData,
  family = student(),
  warmup = 1000, iter = 3000, chains = 4,
  control = list(adapt_delta = 0.975)
)
I used the mvbind approach described in the brms multivariate vignette. PPN is the participant number, so I believe (1|PPN) is the way to indicate that we receive multiple responses from each subject, but I may be mistaken.
Does such a model look to you like it is appropriately analysing the study design I described above?
In addition, I have a few further questions. There are quite a few, so I’m not expecting anyone to answer them all, or in any great detail!:
- I think the mvbind approach models the different ‘outcome’ variables as correlated. Is there a way to run essentially the same multivariate model without specifying them as correlated? I understand the output would then match what you’d get from 2 separate analyses of each variable, but I am thinking the 2 multivariate models could then be directly compared. The reason for this is that I do not think the two variables should necessarily correlate, particularly in the anxiety disorder patients, so I would like to analyse the data both ways and see whether it makes much of a difference.
- The ‘outcome’ variables are really on a scale, not merely ordinal, with increments all along the scale equal to one another. However, they are bounded from 0–100. Does this pose a serious issue for the model? I didn’t think it would be a real problem, since I see people predicting a person’s weight or height in similar models, and those variables are also bounded within reason/biological plausibility, as are many variables treated as fully continuous.
- I can see the HDI for the estimated difference in each outcome variable across the levels of the between- and within-subjects factors, and their interaction, but is there a way to get an HDI for the ‘effect size’ of a variable in the model? This may be a very silly question!
- I can also see the estimated sigma and nu values for the variables. Is there a way to specify that these might differ across the levels of the group/context factors, and receive estimates for those levels?
- I’m a little concerned about the pp_check output I receive for each variable. There is what seems to me a healthy-looking overlap of y and yrep, but the graph is a little tough to read because some yrep values tail out to +/-1000. Does this look problematic to you (I’ve uploaded an attachment which you can hopefully see)? To get the graph I just typed: pp_check(MANOVAattempt, resp = "Assessment1")
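Regarding the first question above (modelling the outcomes as uncorrelated): this is a sketch of what I was thinking, based on the set_rescor() function from the multivariate vignette; please tell me if I’ve misunderstood how it is meant to be used.

```r
# Sketch: the same model, but with the residual correlation between the
# two responses switched off via set_rescor(FALSE).
library(brms)

bf_uncorr <- bf(mvbind(Assessment1, Assessment2) ~ Group * Context + (1 | PPN)) +
  set_rescor(FALSE)

MANOVAattempt_uncorr <- brm(
  formula = bf_uncorr,
  data = MANOVAData,
  family = student(),
  warmup = 1000, iter = 3000, chains = 4,
  control = list(adapt_delta = 0.975)
)

# I was then hoping the two fits could be compared directly, e.g.:
# loo(MANOVAattempt, MANOVAattempt_uncorr)
```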
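For the second question (the 0–100 bounds), one alternative I considered, in case the bounds do turn out to matter, is rescaling the ratings to [0, 1] and using a family that respects the bounds. This is just an idea, and I don’t know whether it would actually be preferable to the student family:

```r
# Sketch: rescale the 0-100 ratings to [0, 1] and use
# zero_one_inflated_beta(), which allows exact 0s and 1s.
MANOVAData$Assessment1s <- MANOVAData$Assessment1 / 100
MANOVAData$Assessment2s <- MANOVAData$Assessment2 / 100

boundedAttempt <- brm(
  mvbind(Assessment1s, Assessment2s) ~ Group * Context + (1 | PPN),
  data = MANOVAData,
  family = zero_one_inflated_beta(),
  warmup = 1000, iter = 3000, chains = 4
)
```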
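For the effect-size question, the closest I’ve come is computing a standardized difference directly from the posterior draws. The parameter names below are guesses based on my factor levels; I’d check parnames() for the real ones:

```r
# Sketch: posterior of a standardized (Cohen's-d-like) group difference.
# "GroupAnxiety" is a guess at the dummy-coded level name; the actual
# names can be listed with parnames(MANOVAattempt).
draws <- posterior_samples(MANOVAattempt)

d_like <- draws[["b_Assessment1_GroupAnxiety"]] / draws[["sigma_Assessment1"]]

# Equal-tailed 95% interval (an HDI could come from e.g. HDInterval::hdi):
quantile(d_like, probs = c(0.025, 0.975))
```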
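For the sigma/nu question, I found the distributional-regression vignette and wondered whether something like this is what’s intended (I believe rescor has to be FALSE once sigma is predicted, but I may be wrong):

```r
# Sketch: let sigma and nu vary with Group and Context for each response
# (syntax guessed from the brms distributional-regression vignette).
bf1 <- bf(Assessment1 ~ Group * Context + (1 | PPN),
          sigma ~ Group * Context,
          nu ~ Group * Context)
bf2 <- bf(Assessment2 ~ Group * Context + (1 | PPN),
          sigma ~ Group * Context,
          nu ~ Group * Context)

distAttempt <- brm(
  bf1 + bf2 + set_rescor(FALSE),
  data = MANOVAData,
  family = student(),
  warmup = 1000, iter = 3000, chains = 4,
  control = list(adapt_delta = 0.975)
)
```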
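And for the pp_check question, to make the plot readable despite the long tails I’ve been zooming in like this (pp_check returns a ggplot object, so I assume coord_cartesian is safe to add):

```r
# Sketch: zoom the density overlay to roughly the observed 0-100 range
# so the extreme yrep tails don't dominate the x-axis.
library(ggplot2)
pp_check(MANOVAattempt, resp = "Assessment1") +
  coord_cartesian(xlim = c(-50, 150))
```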
Thank you for taking the time to see if you can help with my questions!
- Operating System: MacOS Sierra 10.12.6
- brms Version: 2.8.0