Estimating contrasts (code peer-review)

Hello everyone,

I am looking for some ‘peer review’ of the approach I have taken to test contrasts. I have estimated a robust hierarchical model (using the student() family), and the pp_check() and loo() results look good, so I am not going into the details of the model, as I don’t think they are necessary to evaluate the question, but I am happy to follow up if useful.

The models have the form:

y ~ 1 + group * stimuli + (1 + stimuli | subject)

where:
y: continuous
group: categorical, 2 levels between subjects
stimuli: categorical, 6 levels within subjects

We are interested in estimating contrasts between the 6 levels of stimuli within the 2 levels of group, in order to test various directional hypotheses, e.g. that stimulus_1 > stimulus_2 within group_1 but not within group_2. Based on different discussions here, it seemed that brms::hypothesis() is a good way to make such comparisons, but it requires specifying the contrasts manually, whereas emmeans() provides a convenient way to compute conditional/marginal means from the posterior distribution. So I tried to replicate what hypothesis() does from the emmeans() output, using the steps below.
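
For context, this is roughly what the manual hypothesis() route looks like (a sketch only; the coefficient names are assumptions based on default dummy coding with group_1 and stimulus_1 as the reference levels, so check fixef(model) for the actual names):

require(brms)

# stimulus_1 > stimulus_2 within group_1 (the reference group)
hypothesis(model, "stimulistimulus_2 < 0")

# for group_2, the hypothesis string would additionally include the
# interaction coefficient (something like groupgroup_2:stimulistimulus_2)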

1. setup contrast terms


# each vector picks out one of the 6 stimulus EMMs (in the order emmeans reports them)
stim_1            = c(1, 0, 0, 0, 0, 0)
stim_2            = c(0, 1, 0, 0, 0, 0)
# and so on up to stim_6

# contrast() expects custom contrasts as a named list, not a concatenated vector
planned_contrasts = list(contrast_1 = stim_1 - stim_2, 
                         contrast_2 = ...)

2. compute contrasts and test directional hypotheses


require(brms)
require(emmeans)
require(tidybayes)
require(dplyr)

model %>% 
    # posterior draws of the stimulus EMMs within each group
    # (for the marginal version, use specs = ~ stimuli and drop group from group_by())
    emmeans(specs = ~ stimuli | group) %>% 
    contrast(method = planned_contrasts) %>%
    gather_emmeans_draws() %>% 
    group_by(group, contrast) %>%
    summarise(Est.error    = sd(.value),
              median_hdi(.value),
              Post.prob    = sum(.value > 0) / n(), 
              Evid.ratio01 = Post.prob / (1 - Post.prob), 
              Evid.ratio10 = 1 / Evid.ratio01)

3. hierarchical contrasts

Finally, we wanted to explore whether there are “main” effects of stimuli before looking into the effects of group, so when computing the marginal means I report separately the result of emmeans(model, specs = ~ stimuli) and the result of emmeans(model, specs = ~ stimuli | group).
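
For concreteness, the two sets of marginal means and their contrasts are computed like this (reusing the planned_contrasts list from step 1):

require(emmeans)

# "main" effect of stimuli: EMMs averaged over group
# (emmeans will note that stimuli is involved in an interaction)
emm_marginal <- emmeans(model, specs = ~ stimuli)
contrast(emm_marginal, method = planned_contrasts)

# stimulus EMMs within each group
emm_conditional <- emmeans(model, specs = ~ stimuli | group)
contrast(emm_conditional, method = planned_contrasts)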

The results seem sensible, but as I am still learning the Bayesian/brms approach, I wanted to check whether this looks like a sound approach to others. Thank you!


Unfortunately, AFAIK there are not many people familiar with emmeans on the forums (and I am not one of them), but I’ll try tagging @rvlenth (the creator of emmeans) in case they have time to answer.

Sorry for not being of more help and good luck with your model!

This looks pretty reasonable to me. All that emmeans does with a Bayesian model is apply the erstwhile frequentist method for obtaining estimated marginal means (EMMs) to each posterior draw, creating posterior draws of EMMs. Then contrast just does the contrast calculations on the posterior EMMs.

It seems that one way to establish a greater comfort level with what you are doing is to run hypothesis for a situation that you know you could also study using emmeans and contrast, and verify that you get the same results both ways.
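
For example, a check along these lines (a rough sketch; the coefficient name in the hypothesis string is an assumption that depends on your factor levels and coding, so look it up with fixef(model) first):

require(brms)
require(emmeans)

# via hypothesis(): stimulus_2 minus stimulus_1 in the reference group (group_1),
# assuming default dummy coding
hypothesis(model, "stimulistimulus_2 = 0")

# via emmeans()/contrast(): the same comparison built from the posterior EMMs
emm <- emmeans(model, specs = ~ stimuli | group)
contrast(emm, method = list(s2_minus_s1 = c(-1, 1, 0, 0, 0, 0)))

# the group_1 row of the contrast should match the hypothesis() estimate up to
# MCMC error and summary conventions (hypothesis() reports posterior means,
# while emmeans summaries of Bayesian models default to medians and HPD intervals)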


I’m not familiar with using emmeans with Bayesian models, but I often look at differences between levels of predictors by creating a posterior distribution of differences.

In a tidybayes/tidyverse framework, I usually do this with add_predicted_draws (or add_fitted_draws depending on the application), which gives you a data frame of posterior predictive draws.

Then you can use compare_levels if the design is balanced. Otherwise, it can be done manually with pivot_wider to calculate pairwise differences in y_rep between levels of your predictors. The resulting difference distribution can be summarized with means, medians, and credible intervals, compared to zero, or assessed against interval widths of interest.
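
For example, a minimal sketch of that workflow, assuming the model and variable names from the original post (dat stands in for the original data frame; note that in current tidybayes, add_fitted_draws() has been renamed add_epred_draws()):

require(tidybayes)
require(dplyr)
require(modelr)

dat %>%
  data_grid(group, stimuli) %>%                 # one row per group x stimuli combination
  add_epred_draws(model, re_formula = NA) %>%   # posterior draws of the expected value
  compare_levels(.epred, by = stimuli) %>%      # pairwise differences between stimuli, within group
  group_by(group, stimuli) %>%                  # stimuli now holds the comparison labels
  median_hdi(.epred)                            # summarise each difference distribution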

If you can share a reproducible example of your model, I’d be happy to demonstrate.
