Add discrimination factor to Bradley-Terry model

Hi all,

Inspired by this source, I created the following hierarchical Bradley-Terry model:

data {
    int<lower=0> n_sets;
    int<lower=0> n_players;
    int<lower=1, upper=n_players> player_id[n_sets];                     // player A
    int<lower=1, upper=n_players> opponent_id[n_sets];                   // player B
    int<lower=0, upper=1> won[n_sets];                                   // won / lost
    int<lower=0> n_points[n_sets];                                       // total points played
}

parameters {
    vector[n_players] alpha;                                             
    real<lower=0> sigma_alpha;                                           
}

model {
    // priors
    alpha ~ normal(0, sigma_alpha);
    sigma_alpha ~ lognormal(0, 0.5);
    
    // likelihood
    won ~ bernoulli_logit(alpha[player_id] - alpha[opponent_id]);
}

I would like to include another variable, n_points, which represents the number of points played in a single set. The underlying assumption is that players of similar strength play longer sets: if A wins 11-0 (11-10), he is much (only slightly) better than B.
Apparently, this could be introduced in the form of a “discrimination” factor from item response theory.
Intuitively: every set is a question. The more points played, the more difficult the question. The question can be answered correctly (set is won) or incorrectly (set is lost).

Using the docs, I would like to transform my script into a multilevel 2PL model.

How would this be introduced into my script? I was thinking of something like this:

for (set in 1:n_sets) {
   won[set] ~ bernoulli_logit(gamma[set] * (alpha[player_id[set]] - alpha[opponent_id[set]]));
}

But the docs introduce the discrimination factor gamma in the parameters block as a parameter to be estimated, whereas in my case I already know “it” from the data and would therefore declare it in the data block. Is that the right approach? Please advise.
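If the discrimination really is a known function of the data rather than something to estimate, one option is to compute it in the transformed data block. A minimal sketch, assuming (purely for illustration) that discrimination shrinks as more points are played:

transformed data {
    vector[n_sets] gamma;
    for (set in 1:n_sets)
        gamma[set] = 1.0 / n_points[set];   // assumed mapping: longer sets discriminate less
}

model {
    // priors as before
    alpha ~ normal(0, sigma_alpha);
    sigma_alpha ~ lognormal(0, 0.5);

    // likelihood with data-derived discrimination
    for (set in 1:n_sets)
        won[set] ~ bernoulli_logit(gamma[set] * (alpha[player_id[set]] - alpha[opponent_id[set]]));
}

The loop can also be vectorized as won ~ bernoulli_logit(gamma .* (alpha[player_id] - alpha[opponent_id])); — the 1/n_points mapping is just a placeholder and any other decreasing function of n_points could be substituted.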

FWIW, it seems to me that the difference in scores doesn’t inform the “difficulty” of a match (two very weak players could play a close match). However, you could treat the difference in scores as the outcome: instead of fitting won ~ bernoulli, you could fit score_difference ~ some_suitable_discrete_distribution.
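One sketch of that alternative: if you (strongly) assume each point in a set is an independent Bernoulli trial with the same win probability, the number of points won by player A is binomial, and the score margin is then determined by points_won and n_points. Here points_won is a hypothetical new data field, not in the original model:

data {
    int<lower=0> n_sets;
    int<lower=0> n_players;
    int<lower=1, upper=n_players> player_id[n_sets];
    int<lower=1, upper=n_players> opponent_id[n_sets];
    int<lower=0> n_points[n_sets];                  // total points played
    int<lower=0> points_won[n_sets];                // points won by player A (hypothetical new field)
}

parameters {
    vector[n_players] alpha;
    real<lower=0> sigma_alpha;
}

model {
    alpha ~ normal(0, sigma_alpha);
    sigma_alpha ~ lognormal(0, 0.5);

    // each point modeled as a Bernoulli trial driven by the strength difference
    points_won ~ binomial_logit(n_points, alpha[player_id] - alpha[opponent_id]);
}

The independence assumption ignores that a set ends once a player reaches 11, so this is only a rough approximation of how table-tennis-style scores arise.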