How do I get "marginal_effects" for categorical variables to condition on an average rather than a category?

Please also provide the following information in addition to your question:

  • Operating System: Mac OS Sierra
  • brms Version: 2.4.0 (I think)

Hi!

I ran an experiment where I obtained perceptual data on listeners' sensitivity for recognizing a token in two different levels of noise (low and high). I have two types of tokens (for example, "pat"/"bat") from three speakers. In total I therefore have 12 data points per listener (2 noise levels x 2 tokens x 3 speakers).

Let’s say the following model best fit my data:

model7B.model <- brm(sensitivity_scores ~ noise_level * token + speaker + speaker:token + (1|listener), data=mydata, iter = 20000, control = list(adapt_delta = 0.99), init = 0)

It would be great if it were possible to display the marginal effects plots for the interaction between conditions, for example noise_level and token. I've experimented with the marginal_effects function, but because my conditions are categorical, marginal_effects displays only the reference category each time. For example,

marginal_effects(model7B.model, effects = "noise_level:token")

displays the interaction only for the first speaker.

Instead, I’d like to have a figure that displays what the noise_level by token interaction looks like if speakers were lumped/pooled together, rather than just for one category of data (speaker 1 / speaker 2 etc.)

I had two ideas on how I might achieve this:

Idea 1

I simply rerun the Bayesian statistics with this model:

model4B.model <- brm(sensitivity_scores ~ noise_level * token + (1|listener), data=mydata, iter = 20000, control = list(adapt_delta = 0.99), init = 0)

And then use

marginal_effects(model4B.model, effects = "noise_level:token")

to get the interaction plot of noise_level and token as if speaker were not a factor. I could then do the same for the other interaction.

Idea 2

I extracted the posterior draws using:

intercept_draws = model7B.model[["fit"]]@sim[["samples"]][[1]][["b_Intercept"]][4001:20000]
noise_level_hi_draws = model7B.model[["fit"]]@sim[["samples"]][[1]][["b_noise_levelhi"]][4001:20000]
token2_draws = model7B.model[["fit"]]@sim[["samples"]][[1]][["b_token2"]][4001:20000]
speaker2_draws = model7B.model[["fit"]]@sim[["samples"]][[1]][["b_speaker2"]][4001:20000]
speaker3_draws = model7B.model[["fit"]]@sim[["samples"]][[1]][["b_speaker3"]][4001:20000]
token2_noise_level_hi_draws = model7B.model[["fit"]]@sim[["samples"]][[1]][["b_noise_levelhi:token2"]][4001:20000]
token2_speaker2_draws = model7B.model[["fit"]]@sim[["samples"]][[1]][["b_token2:speaker2"]][4001:20000]
token2_speaker3_draws = model7B.model[["fit"]]@sim[["samples"]][[1]][["b_token2:speaker3"]][4001:20000]

(In these particular lines I used 4001:20000 to exclude the 20% warm-up values. For the purposes of this question, these lines extract the posterior draws for the first chain only.)
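As a simpler alternative to indexing into the stanfit object by hand, brms provides posterior_samples(), which already excludes warm-up and pools all chains. A minimal sketch (the exact column names depend on your factor levels, so inspect them first):

```r
library(brms)

# posterior_samples() returns post-warmup draws from all chains as a
# data.frame, so no manual warm-up exclusion is needed.
draws <- posterior_samples(model7B.model, pars = "^b_")

# Columns follow the pattern b_<coefficient name>; check what they are:
names(draws)

intercept_draws <- draws$b_Intercept
```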

I then pooled together the speakers by adding up the posterior draws of each speaker for a particular condition, say low noise level and first token, and dividing by the number of groups I added (so, 3 for the three speakers). See below:

#The next line of code adds together:
#the draws for speaker 1, low noise level and token 1 (intercept_draws);
#the draws for speaker 2, low noise level and token 1 (intercept_draws + speaker2_draws);
#the draws for speaker 3, low noise level and token 1 (intercept_draws + speaker3_draws);
#then divides by 3, for the three groups added together.
#The subsequent 3 lines do the same for the other conditions.

poolspeak_lo_noise_t1 = ((intercept_draws) + (intercept_draws + speaker2_draws) + (intercept_draws + speaker3_draws))/3

poolspeak_lo_noise_t2 = ((intercept_draws + token2_draws) + (intercept_draws + token2_draws + speaker2_draws + token2_speaker2_draws) + (intercept_draws + token2_draws + speaker3_draws + token2_speaker3_draws))/3

poolspeak_hi_noise_t1 = ((intercept_draws + noise_level_hi_draws) + (intercept_draws + noise_level_hi_draws + speaker2_draws) + (intercept_draws + noise_level_hi_draws + speaker3_draws))/3

poolspeak_hi_noise_t2 = ((intercept_draws + noise_level_hi_draws + token2_draws + token2_noise_level_hi_draws) + (intercept_draws + noise_level_hi_draws + token2_draws + token2_noise_level_hi_draws + speaker2_draws + token2_speaker2_draws) + (intercept_draws + noise_level_hi_draws + token2_draws + token2_noise_level_hi_draws + speaker3_draws + token2_speaker3_draws))/3

I plot the average of the pooled categories and the highest density intervals as error bars.
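As a sketch, that summarizing step could use tidybayes, whose point_interval helpers compute a point estimate and highest-density interval directly from a vector of draws (assuming the pooled vectors above exist):

```r
library(tidybayes)  # provides median_hdi() and related point_interval helpers

# Median and 95% highest-density interval for one pooled condition;
# repeat for the other three vectors to get the error-bar end points.
median_hdi(poolspeak_hi_noise_t1, .width = .95)
```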

I’m pretty new to Bayesian statistics, so I’m still trying to wrap my head around everything, but for idea 1, I worry that a different model will experience different “shrinkage” (model4B.model compared to model7B.model).

For idea 2, I'm assuming that the new pooled posterior draws I "create" are a reasonable representation of what the Bayesian model would draw if speaker were not a factor, but this seems to be essentially what idea 1 does, and perhaps "shrinkage" will again be an issue?

Essentially I'd like to know whether either of these ideas actually yields a valid marginal effects plot for my categorical data, conditioned on some "average" rather than on the reference category.

Any information or suggestions on how to get the correct plot would be greatly appreciated!


There is an easier solution than idea 2 that may come close to what you want.

If you use the conditions argument and set conditions = data.frame(speaker = NA), it will set all dummy variables of speaker to zero (this is also explained in the doc of ?marginal_effects). If you then use sum coding for speaker, 0 in the dummy variables will correspond to the grand mean of the speakers.
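A minimal sketch of this approach, assuming speaker is a factor with three levels and the variable names from the model above:

```r
library(brms)

# Sum (deviation) coding: the speaker dummy variables then average to
# zero across speakers, so zero corresponds to the grand mean.
mydata$speaker <- factor(mydata$speaker)
contrasts(mydata$speaker) <- contr.sum(3)

# Refit with the new coding, then condition on speaker = NA, which sets
# its dummy variables to zero, i.e. the grand mean across speakers:
model7B.sum <- brm(sensitivity_scores ~ noise_level * token + speaker +
                     speaker:token + (1 | listener),
                   data = mydata, iter = 20000,
                   control = list(adapt_delta = 0.99), init = 0)
marginal_effects(model7B.sum, effects = "noise_level:token",
                 conditions = data.frame(speaker = NA))
```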


I’m fairly confident you can use the emmeans package to compute the marginal effects of categorical variables averaged over the other categories.

something like

library(emmeans)
emmeans(model4B.model, pairwise ~ noise_level * token)

I think you are right, and you can actually use emmeans with brms models. I am not saying my solution is the only one, just that it works with marginal_effects in the current implementation.

Thank you, everyone.

I had previously tried to use conditions = data.frame(speaker = NA, token = NA) without sum coding in my model, which still left me stuck with values for the reference categories only. Using sum coding as per your suggestion works well.

Nels, your suggestion works great too. I'm looking at the documentation for emmeans, but it's not immediately obvious to me how to specify an HPD interval other than 95%?

tidybayes has support for extracting draws from emmeans objects and then summarizing them using whatever interval type you want. Something like this might work:

library(emmeans)
library(tidybayes)

model4B.model %>%
  emmeans( ~ noise_level * token) %>%
  contrast(method = "pairwise") %>%
  gather_emmeans_draws() %>%
  median_qi(.width = .90)

That will give you median and 90% quantile (equal-tailed) intervals. If you want highest-density intervals, you could use something like mode_hdi in place of median_qi. The full set of variants is [mean|median|mode]_[qi|hdi|hdci], where hdi is highest-density (not necessarily continuous) interval, and hdci is highest-density continuous interval — i.e. hdi may return multiple intervals, which might be undesirable; hdci does not, though the result is not necessarily an HDI.
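For example, swapping in one of the HDI variants is just a one-line change to the pipeline above (assuming the same model and packages are loaded):

```r
library(emmeans)
library(tidybayes)

# Same pipeline as before, but summarized with the mode and a 95%
# highest-density continuous interval instead of median + quantile interval:
model4B.model %>%
  emmeans( ~ noise_level * token) %>%
  contrast(method = "pairwise") %>%
  gather_emmeans_draws() %>%
  mode_hdci(.width = .95)
```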

For more examples of tidybayes + emmeans (including plots), see here


Wow, that's a nice option! I wasn't aware that one could combine emmeans and tidybayes!


And I didn’t realize brms was supported by emmeans now! Makes for a useful combination :).

I haven't been doing this myself, so thanks to the developer of emmeans! But I think we can improve the interplay between brms and emmeans even more. I think I need to take a closer look at this issue.


Hi @mjskay @nelsjohnson and everyone. I found emmeans interesting. It is great that it supports brms and rstanarm. I have three questions:

(1) How could I get the contrasts expressed as differences in probabilities comparing all levels to the baseline, rather than as odds ratios, when I set type = "response"? This assumes the above model is logistic.

(2) Is there in emmeans a function doing what make_conditions and conditions do in brms's marginal_effects? [Update: a solution to this issue using tidybayes is discussed on Add vignette for calculating average marginal effects and/or average predictive comparisons · Issue #139 · mjskay/tidybayes · GitHub].

(3) Is it possible in emmeans to marginalize over the other predictors, rather than have the "results ... averaged over the levels of" them?

I have opened an issue (How to marginalize rather than condition on variables to make the output of brms marginal_effects literal AME, MER, and MEM · Issue #552 · paul-buerkner/brms · GitHub) to address these questions following this discussion and wanted to know if emmeans has these functionalities implemented already to get the marginal effects proposed for brms.

Thank you in advance.