Hi,
I have the following model, where each player has a constant skill parameter lambda:
data {
  int<lower=1> n_matches;
  int<lower=2> n_players;
  int<lower=1, upper=n_players> player_id[n_matches];
  int<lower=1, upper=n_players> opponent_id[n_matches];
  int<lower=0> n_points_won[n_matches];
  int<lower=1> n_points[n_matches];
}
parameters {
  vector[n_players - 1] raw_lambda;
}
transformed parameters {
  // player 1 is the reference, with lambda fixed to 0 for identifiability
  vector[n_players] lambda = append_row(0, raw_lambda);
  vector<lower=0,upper=1>[n_matches] phi = inv_logit(lambda[player_id] - lambda[opponent_id]);
}
model {
  raw_lambda ~ normal(0, 2);
  n_points_won ~ binomial(n_points, phi);
}
Ideally I would like to make the skill of the players time-varying by introducing penalized splines.
I read the blogpost “Random effects and penalized splines are the same thing” where the author applies penalized splines on a mixed effect model:
y = X * beta + Z * b + e
I was wondering how I could implement his approach in my binomial model. I noticed in the docs that the binomial_logit_glm() function exists in Stan. It can be read as Binomial(n | N, logit^-1(alpha + x * beta)), which looks more like the author's model.
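If I am reading the GLM form correctly, my current constant-lambda model could already be rewritten in that shape with a ±1 design matrix and a zero intercept. This is just a sketch: I am assuming the argument order binomial_logit_glm(N, x, alpha, beta) from the docs, and the design matrix X is my own construction.

```stan
transformed data {
  // one column per non-reference player: +1 for the player, -1 for the opponent
  matrix[n_matches, n_players - 1] X = rep_matrix(0, n_matches, n_players - 1);
  for (m in 1:n_matches) {
    if (player_id[m] > 1)   X[m, player_id[m] - 1] += 1;
    if (opponent_id[m] > 1) X[m, opponent_id[m] - 1] -= 1;
  }
}
model {
  raw_lambda ~ normal(0, 2);
  // alpha = 0, beta = raw_lambda; player 1 is the reference with lambda = 0
  n_points_won ~ binomial_logit_glm(n_points, X, 0, raw_lambda);
}
```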
The author introduces the smoothing by assuming
b ~ Normal(0, sigma / smoothness_parameter)
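For context, my own summary of the standard result behind that prior (not a quote from the post): the random-effect prior on b corresponds to a ridge-type penalty on the spline coefficients, which is what produces the smoothing.

```latex
% Penalized least squares view of the spline fit:
\min_{\beta,\, b} \; \lVert y - X\beta - Zb \rVert^2 + \lambda \lVert b \rVert^2
% This is the MAP estimate under the hierarchical model
%   y ~ Normal(X*beta + Z*b, sigma_e^2 * I),   b ~ Normal(0, sigma_b^2 * I),
% with smoothing parameter lambda = sigma_e^2 / sigma_b^2:
% a smaller sigma_b means stronger shrinkage of b, i.e. a smoother curve.
```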
Would my version look as follows:
...
parameters {
  vector[n_players - 1] raw_lambda;
  real<lower=0> sigma_lambda;
  real<lower=0> smoothness;
}
...
model {
  raw_lambda ~ normal(0, sigma_lambda / smoothness);
  n_points_won ~ binomial_logit_glm(n_points, ..., ...);
}
Would lambda be alpha or beta in that case?
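To make the question concrete, here is my best guess at the time-varying version: a B-spline basis B (n_times x n_basis, built outside Stan, e.g. with splines::bs in R) passed in as data, with one coefficient vector per player shrunk toward zero. All names and the overall structure here are my own guesses, not taken from the blog post.

```stan
data {
  // ... as in the original model, plus:
  int<lower=1> n_times;
  int<lower=1, upper=n_times> time_id[n_matches];  // when each match was played
  int<lower=1> n_basis;
  matrix[n_times, n_basis] B;                      // spline basis, precomputed
}
parameters {
  matrix[n_basis, n_players] b;   // spline coefficients, one column per player
  real<lower=0> sigma_b;          // shared smoothing scale
}
transformed parameters {
  matrix[n_times, n_players] lambda = B * b;  // skill curve per player over time
}
model {
  to_vector(b) ~ normal(0, sigma_b);  // penalized-spline shrinkage
  for (m in 1:n_matches)
    n_points_won[m] ~ binomial_logit(n_points[m],
        lambda[time_id[m], player_id[m]] - lambda[time_id[m], opponent_id[m]]);
}
```

In this sketch only the skill difference enters the likelihood, so some identifiability constraint (like the lambda[1] = 0 trick above, applied per basis function) would presumably still be needed.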
More importantly: am I on the right track or am I lost?