Qualitatively negligible, but sufficiently strong evidence for indirect effects during mediation analysis in brms

Hello, I’m conducting a mediation analysis with ordinal outcomes, using brms.

My model contains:
X → M → Y where M is the mediating variable, and Y is the ordinal outcome variable.

X: a continuous variable, repeated measures within-patient
Y: an ordinal variable (rating 1 to 3), repeated measures within-patient
M: a continuous mediating variable, repeated measures within-patient

So, with “mu” as the random intercept for “patient”, the model could look like:

(1) Y = c*X + mu (c: the total effect)
(2) M = a*X + mu (a: the X → M path)
(3) Y = c'*X + b*M + mu (b: the M → Y path; c': the direct effect)

The indirect effect, according to the product of coefficients (PC) approach, would be a*b.
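Since a and b are estimated with uncertainty, the indirect effect is usually computed draw by draw from the joint posterior rather than by multiplying two point estimates. A minimal sketch of this, using simulated arrays as hypothetical stand-ins for posterior draws (in brms these would come from the fitted models, e.g. via `as_draws_df()`):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior draws for the a-path (X -> M) and b-path (M -> Y);
# in practice these would be extracted from the fitted brms models.
a_draws = rng.normal(0.05, 0.01, size=4000)
b_draws = rng.normal(0.05, 0.01, size=4000)

# Product-of-coefficients indirect effect, computed per draw so the
# credible interval reflects the joint posterior uncertainty in a and b.
indirect = a_draws * b_draws

point = np.median(indirect)
ci_low, ci_high = np.quantile(indirect, [0.025, 0.975])
print(f"indirect effect: {point:.4f} [{ci_low:.4f}, {ci_high:.4f}]")
```

Computing the product per draw (rather than multiplying posterior medians) is what gives the CI for a*b its correct width.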

First, the model fit diagnostics look reasonable.
Second, the coefficients a and b, as well as the direct and total effects, are all ~0.05 in magnitude, which I think also makes sense in the context of our data.

But the indirect effect is 0.001, with the CI barely excluding 0. It looks like I should interpret my model as having sufficient evidence for a mediation effect; however, the effect is very small. We also have a large sample size, about 500 measurements per patient with 40 patients total, which might be skewing the results. How should we report this mediation effect? Are there approaches, other than the CI, for commenting on the strength of effects and describing one as potentially existing but negligible?

Thank you so much!

If the indirect effect is in the same direction as the total effect, perhaps it would be useful to report the proportion of the total effect that is mediated (which in this case would be ~0%).
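Because the indirect and total effects come from the same posterior, the proportion mediated can also be computed per draw and reported with an interval. A sketch under the same hypothetical-draws assumption (`a_draws`, `b_draws`, `c_draws` are simulated stand-ins for posterior draws of the a-path, b-path, and total effect):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws; in practice, extracted from fitted brms models.
a_draws = rng.normal(0.05, 0.01, size=4000)   # X -> M
b_draws = rng.normal(0.05, 0.01, size=4000)   # M -> Y
c_draws = rng.normal(0.05, 0.01, size=4000)   # total effect of X on Y

# Proportion of the total effect that is mediated, per draw.
prop_mediated = (a_draws * b_draws) / c_draws

med = np.median(prop_mediated)
lo, hi = np.quantile(prop_mediated, [0.025, 0.975])
print(f"proportion mediated: {med:.3f} [{lo:.3f}, {hi:.3f}]")
```

One caveat: this ratio becomes unstable when draws of the total effect are near zero, so the interval is worth reporting alongside the point estimate.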

Thank you so much. And if it were the opposite direction in a different model, it would just be interpretable as a suppression effect, no matter how seemingly negligible?

In that case, mediated-effect/direct-effect might be a useful way to convey how much of the direct effect is being suppressed? In both cases you’d make it clear that ignoring the mediation pathway would not have a meaningful impact on the expected outcome.
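The suppression ratio follows the same per-draw pattern. In this sketch (again with hypothetical simulated draws), flipping the sign of the a-path makes the indirect effect oppose the direct effect, and the ratio expresses how much of the direct effect is cancelled:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior draws; a negative a-path makes the indirect
# effect oppose the direct effect, i.e. a suppression pattern.
a_draws = rng.normal(-0.05, 0.01, size=4000)       # X -> M
b_draws = rng.normal(0.05, 0.01, size=4000)        # M -> Y
c_prime_draws = rng.normal(0.05, 0.01, size=4000)  # direct effect c'

# Fraction of the direct effect cancelled by the indirect path, per draw.
suppression_ratio = (a_draws * b_draws) / c_prime_draws

med = np.median(suppression_ratio)
lo, hi = np.quantile(suppression_ratio, [0.025, 0.975])
print(f"indirect/direct ratio: {med:.3f} [{lo:.3f}, {hi:.3f}]")
```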

Ok makes sense, thank you! I guess it’s already qualitatively obvious how small the effect size is, and not to over interpret it. I’ll be careful on how I describe this, thank you for your suggestion.


Sorry to return to this, but are there any references I can cite for this interpretation: to make the claim that ignoring the mediation pathway does not strongly influence the expected outcome? To reiterate, my indirect effect is small but opposite in sign to the total effect.

I have looked as thoroughly as possible through the literature, starting with Preacher & Kelley (2011) and Wen & Fan (2015). Perhaps my discomfort with not having a concrete measure (like “proportion of mediated effects”) comes from the frequentist way of thinking. But if there are any other references I’d appreciate it. Thank you so much.

If you think the suppression is negligible, the options discussed above are a way to express this quantitatively; any of these quantities can be reported with uncertainty intervals.

But deciding whether the observed suppression is negligible will depend on the context and your domain knowledge; it's not really a statistical question. (From a basic research perspective, evidence that suppression, however small, is part of the overall mechanism may be useful information.)

You might get some more helpful feedback on this here.
