Comparing posteriors of predictors in brms using hypothesis


I am new to Bayesian statistics and I have fit a multivariate model in which I try to predict various EEG parameters from age.
The EEG parameters are correlated, as I am interested in “standard” parameters and in corresponding parameters to which a correction is applied.
So the model looks like this:

fit <- brm(mvbind(eeg1_standard, eeg1_corrected, eeg2_standard, ...) ~ age)

I am now interested in whether effect sizes (here, standardized regression coefficients) are smaller when I apply a correction to my EEG parameters.
Could this question be answered by:

h1 <- hypothesis(fit, "eeg1_standard_age > eeg1_corrected_age")

Thanks a lot in advance!

Sorry for not answering earlier… I don’t think your question can be straightforwardly answered with hypothesis. I would just get the posterior samples for the relevant coefficients and see how often one is bigger than the other, i.e. (the code is just a sketch):

par_names <- c("eeg1_standard_age", "eeg1_corrected_age") # Guessing the names here; you should be able to find them in the summary of the fit
s <- posterior_samples(fit, pars = par_names)

p_standard_smaller <- mean(s[[par_names[1]]] < s[[par_names[2]]])

Does that make sense?

Thank you for the input!

I tried it, and I get a value of 0.88.
However, I find it hard to interpret this statistically, as my main research question is whether the influence of age on these EEG parameters changes when I correct them.
When I use hypothesis(fit, “eeg1_standard_age < eeg1_corrected_age”) and look at the evidence ratio, I get a value of 7.6, from which I could conclude (according to Jeffreys) that there is moderate evidence for my hypothesis in the current dataset. Could you elaborate on why comparing the posteriors this way isn’t really appropriate?

Thanks a lot in advance!

First, sorry, I didn’t realize hypothesis actually does almost exactly the same thing I proposed for one-sided hypotheses. The only additional step is that hypothesis converts the probability to an evidence ratio (you may note that 0.884 / (1 - 0.884) = 7.62).
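To make the correspondence concrete, here is a small sketch (as before, the coefficient names are guesses — check them against the summary of the fit) showing that hypothesis and the manual computation agree up to the odds conversion:

```r
# Sketch, assuming `fit` is the multivariate model from above and the
# guessed coefficient names match your model.
h <- hypothesis(fit, "eeg1_standard_age < eeg1_corrected_age")
print(h)  # the Evid.Ratio column is the posterior odds

# The same evidence ratio computed by hand from the posterior probability:
p <- 0.884
p / (1 - p)  # approximately 7.62
```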

If you believe your model is correct, it tells you that you should also believe there is an 88% probability that the eeg1_standard_age coefficient is smaller than the eeg1_corrected_age coefficient. Nothing more, nothing less.

Overall, I don’t think dichotomous thinking of changes/doesn’t change is very useful. Why wouldn’t the influence change at least a little bit after a correction? For any non-trivial correction I would expect the “influence” to change at least a little 100% of the time. You may be able to evaluate the probability (or evidence ratio, if you prefer) for something like abs(eeg1_standard_age - eeg1_corrected_age) > some_value_you_consider_important, i.e. that the coefficient changes by a noticeable amount.
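As a sketch of that idea (the threshold delta below is a hypothetical placeholder — you would need to choose it from domain knowledge; the coefficient names are guesses as before):

```r
# Probability that the coefficient changes by a practically relevant amount.
s <- posterior_samples(fit, pars = c("eeg1_standard_age", "eeg1_corrected_age"))
delta <- 0.1  # hypothetical threshold, not a recommendation
# Note: check the column order/names of `s` before relying on positions.
mean(abs(s[[1]] - s[[2]]) > delta)
```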

But I can’t help but wonder whether the way you ask the question is sensible - what exactly is the correction? If your correction were eeg1_corrected = 0.1 * eeg1_standard, you would get eeg1_corrected_age = 0.1 * eeg1_standard_age, so the coefficients would differ (with high probability / evidence ratio), but I don’t think it would be sensible to say that the “influence” of age changes.

It might be more sensible to fit separate models for standard and corrected and compare them with loo or perform posterior predictive checks to see which is better “explained” by age (for some meaning of “explain”).
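A sketch of the separate fits (assuming the data frame used for the original model is called d — a hypothetical name):

```r
# Separate univariate fits for the standard and corrected parameters:
fit_standard  <- brm(eeg1_standard ~ age, data = d)
fit_corrected <- brm(eeg1_corrected ~ age, data = d)

# Graphical posterior predictive checks for each fit:
pp_check(fit_standard)
pp_check(fit_corrected)
```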

Does that make sense?


Sorry for the late answer, but thanks again for your input!

The correction is not just a simple scaling of eeg_standard; it involves calculating another individual EEG parameter, which then gets subtracted (it can be positive or negative; there is a lot of intra-individual variation). It is based on the assumption that the standard EEG is a mixture of two independent components which shouldn’t be analyzed without disentangling them.
This yields a decrease in some eeg_corrected_age parameters and an increase in others (compared to standard_eeg_age).
My research question is whether this correction is useful or rather unnecessary. Unfortunately it is very hard to determine some_value_you_consider_important, but that’s an interesting thought as well! I also had some model comparisons in mind before, I may go back to follow this plan.

Thanks for your help!


Then I would also totally look the other way: is the EEG after your correction better at explaining clinically meaningful outcomes than before the correction? If you only have age data, then you could look at the Bayesian R^2 or compare age ~ eeg_standard against age ~ eeg_corrected with loo, but it would IMHO be much more impressive if you could show that a model using your corrected value is better at predicting, say, heart disease or another actually more patient-oriented outcome than a model with the standard value.
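For example (again assuming a hypothetical data frame d; since both models predict the same outcome, age, their loo estimates are directly comparable):

```r
fit_age_standard  <- brm(age ~ eeg1_standard,  data = d)
fit_age_corrected <- brm(age ~ eeg1_corrected, data = d)

# Bayesian R^2 for each model:
bayes_R2(fit_age_standard)
bayes_R2(fit_age_corrected)

# Out-of-sample comparison via PSIS-LOO:
loo_compare(loo(fit_age_standard), loo(fit_age_corrected))
```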

Generally - you should IMHO focus on a task eeg_standard is currently used for and show that this task can be done better with your eeg_corrected. Being predicted well by age seems not very interesting in itself, and I fear that using the models the way you did does not really aim at the core of the problem.