I conducted an experiment with three groups, and within each group I randomly used one of two stimulus replicates (i.e., the same experimental manipulation, but in a different context). I am interested in the difference between the three groups, but I need to control for the fact that participants saw different versions of the stimuli. What I see descriptively in the data is that the difference between the three groups is the same size for both stimuli, but the average value of y is higher for one of the two stimuli (see example). So I was thinking of conducting a multi-level ANOVA.

Here is a (simplified) example of what the group means look like:

Let's take the brms package as an example. Which of the following specifications fits my research context best?

brm(y ~ 1 + group + (1 | replicate))
brm(y ~ 1 + group + replicate + (1 | replicate))
brm(y ~ 1 + group:replicate + (1 | group:replicate))

If your dataset follows the structure above (i.e., three groups & two replicates), then you don't need random effects. You can just specify:

brm(y ~ 1 + group*replicate)

Because an individual can only belong to one group, received only one of the stimuli, and was assessed only once, there is no clustering here that needs a random effect.

Also, note the use of group*replicate instead of group:replicate: the : operator gives only the interaction effect, whereas the * operator gives the main effects plus the interaction.
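A quick way to see the difference is to inspect the design matrices the two formulas produce (a sketch with toy factor levels g1–g3 and r1–r2; the column names are just R's default dummy coding):

```r
# Toy design: 'group' with 3 levels, 'replicate' with 2 levels
d <- expand.grid(group = factor(c("g1", "g2", "g3")),
                 replicate = factor(c("r1", "r2")))

# group*replicate: intercept + 2 group dummies + 1 replicate dummy
# + 2 interaction columns = 6 columns in total
colnames(model.matrix(~ 1 + group*replicate, d))

# group:replicate alone has no main-effect columns (e.g., no
# "replicater2" dummy); group and replicate differences are only
# expressed through the interaction terms
colnames(model.matrix(~ 1 + group:replicate, d))
```

So with group*replicate the coefficient on the replicate dummy directly captures the overall shift between the two stimuli, which is exactly the adjustment described in the question.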

Hi Andrew, thanks a lot, that makes total sense.

I have one follow-up question based on the output:
I usually reported the results of the ANOVA with F statistics and p-values. When running the Bayesian ANOVA with brms, one of the three groups becomes the reference group. If group 1 becomes the reference group, what would be the Bayesian complement to a non-Bayesian post-hoc test (e.g., Tukey) for testing the difference between group 2 and group 3? Or in general: is there any "best practice" for reporting the results of a Bayesian ANOVA?