Bradley-Terry model in brms or rstanarm (lme4 notation)?

Hi!

I’d like to help my collaborators fit Bradley-Terry models:

P[\text{team }i\text{ wins against team }j] = \text{logit}^{-1}(\beta_i - \beta_j)

in R, using an interface to Stan like rstanarm or brms.

This post uses lme4:

First, we’ll create a data.frame where the outcome takes on values of 0 or 1 depending on whether team1 won the game. Each game occupies two rows in this data.frame, with each opponent taking its turn as team1.
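For example, a minimal sketch of that layout (made-up teams and results; the object and column names are just placeholders), which the glmer() call below then fits:

games <- data.frame(
  team1   = c("A", "B", "A", "C"),
  team2   = c("B", "A", "C", "A"),
  outcome = c(1,   0,   0,   1)   # each game appears twice, once from each side
)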

glmer(outcome ~ (1 | team1) + (1 | team2), family = binomial, data = games)

But this is

P[\text{team }i\text{ wins against team }j] = \text{logit}^{-1}(\beta_i^{(1)} + \beta_j^{(2)})

and doesn’t link the same team’s random effect when it enters the likelihood as team1 versus team2, right?

See a related question about lme4 on Stack Exchange.

UPDATE: I got this working in INLA! I can compare it to HMC, maybe in Stan itself, if interfaces like brms and rstanarm can’t accommodate Bradley-Terry models.

data$w <- -1   # weight of -1 so the copied team effect enters with a minus sign

inla(depvar ~ f(team1, model = "iid", values = teams) +
              f(team2, w, copy = "team1"),   # same iid effect as team1, multiplied by w = -1
     family = "binomial",
     data = data)

You can run Bradley-Terry or Thurstone models in brms (or other GLM packages) by coding the factors as specified in: Critchlow, D. E., & Fligner, M. A. (1991). Paired comparison, triple comparison, and ranking experiments as generalized linear models, and their implementation on GLIM. Psychometrika, 56(3), 517–533.


Thanks, @andymilne! I have tried to parse what the article recommends, but cannot seem to get the gist of what the authors are suggesting.

I have a set of paired comparison data, in which no preference was an option.

Each row has Option A (3 possibilities), Option B (also 3 possibilities), and the Preference: either A, B, or No Preference. Each subject did all possible comparisons, in both directions, under a set of other conditions.

How should I code the factors for a glmm approach to a Bradley-Terry model?

I assume you mean that each paired comparison was one out of three items from Option A and one out of three items from Option B.

You need to create six new variables/columns in your data frame called something like: A1, A2, A3, B1, B2, B3. The coding required for a B-T model is that, in each trial, the item presented first is coded -1, the item presented second is coded 1, all unpresented items are coded 0. For example, if the two items are A1 followed by B3, you would code it:

A1 A2 A3 B1 B2 B3
-1  0  0  0  0  1

The dependent variable Choice needs to be 1 when the item presented second is chosen, otherwise 0.
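For concreteness, here is one way the coding might be constructed with dplyr (just a sketch; the raw column names first_item, second_item, and chosen are hypothetical):

library(dplyr)

# hypothetical raw data: which item was presented first, which second, and which was chosen
pc <- data.frame(
  first_item  = c("A1", "B3", "A2"),
  second_item = c("B3", "A1", "B1"),
  chosen      = c("B3", "B3", "A2")
)

items <- c("A1", "A2", "A3", "B1", "B2", "B3")
for (it in items) {
  pc[[it]] <- case_when(
    pc$first_item  == it ~ -1L,   # presented first
    pc$second_item == it ~  1L,   # presented second
    TRUE                 ~  0L    # not presented in this trial
  )
}
pc$Choice <- as.integer(pc$chosen == pc$second_item)   # 1 if the second item was chosen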

A brms model then takes a form like:

brm(Choice ~ A1 + A2 + A3 + B1 + B2 + B3, 
    family = bernoulli(link = "logit"), 
    ...)

Including all six predictors is only possible in a Bayesian context because they are perfectly multicollinear; it is the priors that keep the model identified. In a frequentist context, one of the factors will always be omitted, so the resulting coefficients are interpreted as relative to the omitted item. E.g.,

Choice ~ A2 + A3 + B1 + B2 + B3

I actually prefer to omit one of the factors in a Bayesian context too because, if you don’t, you get additional uncertainty related to the overall level of every item, which is informed solely by the prior and can make the estimates look excessively uncertain.
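If you do keep all six, the coefficients need a proper prior to be identified (brms’s default flat priors on the population-level effects would not be enough given the perfect collinearity); for example, a sketch:

brm(Choice ~ A1 + A2 + A3 + B1 + B2 + B3,
    family = bernoulli(link = "logit"),
    prior = prior(normal(0, 1), class = "b"),
    ...)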

The formula can also contain random effects and other underlying predictors of interest; e.g.,

brm(Choice ~ A2 + A3 + B1 + B2 + B3 + X1 + X2 + ... + XN + 
            (A2 + A3 + B1 + B2 + B3 + X1 + X2 + ... + XN | Prtcpnt), 
    family = bernoulli(link = "logit"), 
    ...)

Choosing the probit link instead of logit gives you a Thurstone model.
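For example, the same sketch as above with only the link changed:

brm(Choice ~ A1 + A2 + A3 + B1 + B2 + B3, 
    family = bernoulli(link = "probit"), 
    ...)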

The intercept gives an overall bias for choosing the second versus first item presented.

Hopefully I have understood what you want to do.


Thank you! I’ll give this a shot.

Thanks again @andymilne

I think I got everything set up correctly, but could use a bit more help with interpretation.

I set up the model as:

pref_fmla <- bf(Choice ~ A2 + 
                        B1 + 
                        B2 +
                        (1|SubjID),
                center = FALSE)

The model summary looks like this:

Group-Level Effects: 
~SubjID (Number of levels: 7) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     1.58      0.47     0.81     2.64 1.00     2384     2824

Population-Level Effects: 
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept    -0.52      0.62    -1.77     0.68 1.00     2018     2525
A2            1.01      0.49     0.04     1.98 1.00     4092     3093
B1           -0.17      0.49    -1.13     0.76 1.00     3563     2438
B2           -0.70      0.58    -1.81     0.43 1.00     3849     3163

These estimates are all highly uncertain, but would I interpret this as:

Chance of 2 being preferred over 1:

inv_logit_scaled(-0.52 - 0.70)

0.23

A 23% chance of 2 being preferred over 1?

Chance of 1 being preferred if presented second:

inv_logit_scaled(-0.52 + 1.01 - 0.17)

0.58

A 58% chance of 1 being preferred over 2?

The conditional effects look like this: [plot not shown]



Just to be sure we are not misunderstanding each other with regard to the naming of variables: you mentioned in the earlier post that there are three possibilities for A and also for B, in which case there should be predictors called A3 and B3 in the model as well. Or did I misunderstand what you were referring to? Could you perhaps give some examples of the different As and Bs? Also, were there a lot of tied responses in the data? The current modelling set-up (Choice = 1 when the second item is chosen, 0 when the first is chosen) loses those responses. They can be included by changing that coding and using an ordinal model, but let’s get to that once we are sure everything is set up correctly for the non-tied responses.

Thanks!

I re-coded some of the options to form two things to be compared (a generic and a custom). There are quite a few ties/no-preference responses.

All good I think, but why in the text do you refer to items as 1 and 2 and not as A1 or B1 and A2 or B2? What does the A and B signify? Is A1 the same thing as B1, and A2 the same as B2 or completely different things? I ask because I am wondering if A and B might signify whether they come first or second? If so, the As and Bs would each need to be collapsed into a single variable (column) because the position (first/second) is coded with the -1 and 1.

To avoid throwing away all of the ties, these can be included by recoding the Choice as 1 for first item chosen, 2 for neither item chosen, and 3 for second item chosen. Then, in brm(), change the family to cumulative("logit"). (Another option would be acat("logit")). More info about Thurstone/Bradley-Terry modelling of tied responses here: Agresti, A. (1992). Analysis of ordinal paired comparison data. Journal of the Royal Statistical Society. Series C (Applied Statistics), 41(2):287–297. Including these ties might tighten up the estimates.

I think there’s a typo in the first example you provide – do you mean the chance of A1 being preferred over B2? Otherwise, I think this is the correct way to work out the probability of a particular response given a particular condition. But often with these types of models it can be easier, and less prone to user error, to make predictions for the sets of conditions you are interested in and analyse the resulting posterior distributions.
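For example, something along these lines (a sketch; fit stands for your fitted brmsfit object, and the newdata row reproduces the condition in your first calculation):

newdata <- data.frame(A2 = 0, B1 = 0, B2 = 1)
# population-level prediction only; re_formula = NA drops the SubjID effects
pp <- posterior_epred(fit, newdata = newdata, re_formula = NA)
quantile(pp, probs = c(0.025, 0.5, 0.975))   # posterior probability that the second item is chosen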

Based on your example I coded the condition presented first as A and second as B, so yes A1 is the same item as B1.

In that case you need just three columns: Option1, Option2, and Option3 (keeping with your original 3-way categorization). Each is coded -1 if first, 1 if second, otherwise 0. Then model as before.

In each trial there are three options presented, and the order of which came first was counterbalanced across two factors.

I recoded these as:

mutate(
  Option1 = case_when(A == "Option1" ~ -1L,
                      B == "Option1" ~  1L,
                      TRUE           ~  0L),
  Option2 = case_when(A == "Option2" ~ -1L,
                      B == "Option2" ~  1L,
                      TRUE           ~  0L),
  Option3 = case_when(A == "Option3" ~ -1L,
                      B == "Option3" ~  1L,
                      TRUE           ~  0L),
  Choice  = ifelse(Preference == B, 1, 0)   # 1 if the option presented second was chosen
)

Will this handle ties if I use the Bernoulli model?

I’ve also got:

mutate(
  Option1 = case_when(A == "Option1" ~ -1L,
                      B == "Option1" ~  1L,
                      TRUE           ~  0L),
  Option2 = case_when(A == "Option2" ~ -1L,
                      B == "Option2" ~  1L,
                      TRUE           ~  0L),
  Option3 = case_when(A == "Option3" ~ -1L,
                      B == "Option3" ~  1L,
                      TRUE           ~  0L),
  Choice  = case_when(Preference == A    ~ 1L,    # first item chosen
                      Preference == B    ~ 3L,    # second item chosen
                      Preference == "NP" ~ 2L),   # no preference / tie
  Choice  = factor(Choice, levels = 1:3, ordered = TRUE)
)

With the model as:

pref_fmla <- bf(Choice ~ Option1 +
                         Option2 +
                         Option3 +
                         X1 +
                         X2 +
                         (1 | SubjID),
                center = FALSE,
                family = cumulative(link = "logit",
                                    threshold = "equidistant"))

The first coding/modelling strategy will not handle ties; if you completely remove the tied responses, it should work fine with the Bernoulli family. To keep the ties, you need three response levels: 1 (first item chosen), 2 (neither), and 3 (second item chosen), and, because there are three response levels, you need a cumulative or acat family.

The second coding strategy and model family look correct for handling ties, although it’s not clear to me how Option1, 2, and 3 map to the variables in the model formula.


Thank you very much for your patience and thorough explanations!

I edited the formula syntax to clarify the mapping.

Looks fine, but you may want to remove one of Option1, Option2, and Option3 from the formula so that the resulting coefficients are interpreted as relative to the baseline Option that has been removed. This will tighten up the estimates if you are interested in the relative preferences.
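E.g., something like this (a sketch, dropping Option1 as the baseline):

pref_fmla <- bf(Choice ~ Option2 + Option3 + X1 + X2 + (1 | SubjID),
                center = FALSE,
                family = cumulative(link = "logit", threshold = "equidistant"))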