We have liking ratings of images on a 1–5 Likert scale; each image appears in one of two conditions (asymmetrical vs. symmetrical) and belongs to one of six categories. We are interested in the effect of stimulus category on symmetry preference, i.e. whether more symmetrical or more asymmetrical images are liked better. This is the head and structure of our dataset:
Here is our model configuration:
prior_m1 <-
  # fixed effects
  set_prior("normal(0,1)", class = "Intercept") +
  set_prior("normal(0,1)", class = "b", coef = "symmetry1") +
  # group-level standard deviations (error terms)
  set_prior("gamma(1,2)", class = "sd")
m1 <- brm(liking ~ symmetry + (1 | participant) + (symmetry | set),
          data = Data1,
          family = cumulative(link = "probit", threshold = "flexible"),
          iter = 20000, warmup = 2000,
          prior = prior_m1,
          chains = 4, cores = 4,
          init = "0", control = list(adapt_delta = 0.99, max_treedepth = 10),
          seed = 111)
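For completeness, here is a minimal sketch of the standard brms checks around this fit (get_prior() to see which parameter classes the priors apply to, plus basic diagnostics); this is not part of the analysis itself:

# list the parameter classes brms expects priors for in this model
get_prior(liking ~ symmetry + (1 | participant) + (symmetry | set),
          data = Data1,
          family = cumulative(link = "probit", threshold = "flexible"))

# basic convergence and fit checks
summary(m1)                  # Rhat and effective sample sizes
pp_check(m1, type = "bars")  # posterior predictive check suited to ordinal outcomes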
Then we extract the posterior draws and run the equivalence tests as follows:
post1 <- as_draws(m1)
post1 <- do.call(rbind.data.frame, post1)  # stack the chains into one data frame of draws
hyp_Sym <- equivalence_test(post1$b_symmetry1, ci = 0.89,
                            range = c(-0.1 * sd(Data$liking), 0.1 * sd(Data$liking)))
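As a side note, the rbind.data.frame conversion mangles brms's bracketed variable names, e.g. r_set[Artworks,symmetry1] becomes r_set.Artworks.symmetry1., which is why the columns are addressed that way below. An equivalent sketch that keeps the original names, using as_draws_df() (post1_df and hyp_Sym_check are just placeholder names):

post1_df <- as_draws_df(m1)   # one row per posterior draw, original variable names kept
hyp_Sym_check <- equivalence_test(post1_df$b_symmetry1, ci = 0.89,
                                  range = c(-0.1 * sd(Data$liking), 0.1 * sd(Data$liking)))
# set-level draws are then accessed with backticks, e.g. post1_df$`r_set[Artworks,symmetry1]`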
Importantly, we are interested in this effect separately for each set, because we want to compare the sets later; that is why the model treats set as a grouping factor with its own intercepts and slopes. To build the posterior distribution per set, we add the population-level effect b_symmetry1 and the set-level effects. In any other model we would also include the intercept. However, in a cumulative model the intercept depends on the particular threshold (tau) of each response level, so we think it should not be included; the sketch below shows where those thresholds live in the model.
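To make this concrete, with a 1–5 response the cumulative probit has four thresholds, which brms stores as its "Intercept" parameters; a quick way to look at them (assuming the usual brms naming, e.g. b_Intercept[1]):

# the "Intercept" parameters of this model are the four category thresholds (taus)
fixef(m1)       # posterior summaries: Intercept[1]–Intercept[4] plus symmetry1
variables(m1)   # full list of parameter names, e.g. "b_Intercept[1]", "b_symmetry1", ...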
The probit link maps the latent liking scale onto the probability that an observation with given features falls into each response category. We are interested in all of these category probabilities at the same time, so we think the threshold intercepts are not relevant for our contrast of interest. Please correct us if we are wrong. Following this logic, we compute the equivalence tests per set as follows:
hyp_A <- equivalence_test(post1$b_symmetry1 + post1$r_set.Artworks.Intercept. + post1$r_set.Artworks.symmetry1.,
                          ci = 0.89,
                          range = c(-0.1 * sd(subset(Data$liking, Data$set == "Artworks")),
                                     0.1 * sd(subset(Data$liking, Data$set == "Artworks"))))
hyp_B <- equivalence_test(post1$b_symmetry1 + post1$r_set.Bertamini.Intercept. + post1$r_set.Bertamini.symmetry1.,
                          ci = 0.89,
                          range = c(-0.1 * sd(subset(Data$liking, Data$set == "Bertamini")),
                                     0.1 * sd(subset(Data$liking, Data$set == "Bertamini"))))
hyp_D <- equivalence_test(post1$b_symmetry1 + post1$r_set.Designs.Intercept. + post1$r_set.Designs.symmetry1.,
                          ci = 0.89,
                          range = c(-0.1 * sd(subset(Data$liking, Data$set == "Designs")),
                                     0.1 * sd(subset(Data$liking, Data$set == "Designs"))))
hyp_J <- equivalence_test(post1$b_symmetry1 + post1$r_set.Jacobsen.Intercept. + post1$r_set.Jacobsen.symmetry1.,
                          ci = 0.89,
                          range = c(-0.1 * sd(subset(Data$liking, Data$set == "Jacobsen")),
                                     0.1 * sd(subset(Data$liking, Data$set == "Jacobsen"))))
hyp_P <- equivalence_test(post1$b_symmetry1 + post1$r_set.Pepperell.Intercept. + post1$r_set.Pepperell.symmetry1.,
                          ci = 0.89,
                          range = c(-0.1 * sd(subset(Data$liking, Data$set == "Pepperell")),
                                     0.1 * sd(subset(Data$liking, Data$set == "Pepperell"))))
hyp_S <- equivalence_test(post1$b_symmetry1 + post1$r_set.Sasaki.Intercept. + post1$r_set.Sasaki.symmetry1.,
                          ci = 0.89,
                          range = c(-0.1 * sd(subset(Data$liking, Data$set == "Sasaki")),
                                     0.1 * sd(subset(Data$liking, Data$set == "Sasaki"))))
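The same six tests can also be written as a loop over the set names to avoid the repetition; a sketch using the mangled column names from the data-frame conversion above (hyp_by_set is just a placeholder name):

sets <- c("Artworks", "Bertamini", "Designs", "Jacobsen", "Pepperell", "Sasaki")
hyp_by_set <- lapply(sets, function(s) {
  draws <- post1$b_symmetry1 +
    post1[[paste0("r_set.", s, ".Intercept.")]] +
    post1[[paste0("r_set.", s, ".symmetry1.")]]
  rope  <- 0.1 * sd(Data$liking[Data$set == s])
  equivalence_test(draws, ci = 0.89, range = c(-rope, rope))
})
names(hyp_by_set) <- sets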
Is this approach correct? How should we treat the main intercepts and the intercepts per set? Is there a better approach to address our question, such as liking ~ symmetry * set + (symmetry * set | participant)? And (last question, we promise): should we use a probit link in this case, or another link function, given that the probit is difficult to estimate mathematically?
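For concreteness, the alternative specification we are asking about would look roughly like this (hypothetical, we have not fitted it, and the priors would need to be extended to the set contrasts):

m2 <- brm(liking ~ symmetry * set + (symmetry * set | participant),
          data = Data1,
          family = cumulative(link = "probit", threshold = "flexible"),
          iter = 20000, warmup = 2000,
          chains = 4, cores = 4,
          control = list(adapt_delta = 0.99, max_treedepth = 10),
          seed = 111)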
We greatly value your expertise and look forward to learning from your suggestions. Thank you in advance for your time and insights.