Sounds like this is similar to your previous model, but now you have a three-state outcome instead of a binary one. The main change would be `categorical_logit` instead of `bernoulli_logit`. Here's an example of how that might look:
```stan
data {
  int<lower=0> Nobs;                            // number of observations
  int<lower=1> Nplayers;                        // number of players
  int<lower=1> Nzones;                          // number of zones
  int<lower=1,upper=Nplayers> player_id[Nobs];  // player for each observation
  int<lower=1,upper=Nzones> zone_id[Nobs];      // zone for each observation
  int<lower=1,upper=3> action[Nobs];            // observed three-state outcome
}
parameters {
  vector[3] beta_player[Nplayers];  // one coefficient per outcome category
  vector[3] beta_zone[Nzones];      // one coefficient per outcome category
}
model {
  for (i in 1:Nplayers) {
    beta_player[i] ~ normal(0, 1);
  }
  for (i in 1:Nzones) {
    beta_zone[i] ~ normal(0, 1);
  }
  for (i in 1:Nobs) {
    action[i] ~ categorical_logit(beta_player[player_id[i]] + beta_zone[zone_id[i]]);
  }
}
```
The above model has some nonidentifiability: only differences between the `beta` values matter for prediction, because `categorical_logit` is invariant to adding the same constant to all three components of its argument. Because of that, the `beta`s will probably have very vague posterior distributions no matter how much data you have. Despite that, the posterior predictive distribution should be sharp.
It's possible to fix the nonidentifiability, but I left it in to keep this starting-point model simple.
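If you do want to address it, one common approach (a sketch, not tested) is to pin the first category's coefficient to zero, so the remaining two coefficients are interpreted relative to that reference category. The parameter names here (`beta_player_raw`, `beta_zone_raw`) are just illustrative:

```stan
parameters {
  // Only 2 free coefficients per player/zone; category 1 is the reference
  vector[2] beta_player_raw[Nplayers];
  vector[2] beta_zone_raw[Nzones];
}
transformed parameters {
  vector[3] beta_player[Nplayers];
  vector[3] beta_zone[Nzones];
  for (i in 1:Nplayers) {
    beta_player[i] = append_row(0, beta_player_raw[i]);
  }
  for (i in 1:Nzones) {
    beta_zone[i] = append_row(0, beta_zone_raw[i]);
  }
}
```

The `data` and `model` blocks stay the same. Note this only removes the softmax shift invariance within each linear predictor; there is still a softer trade-off between overall player and zone levels, which the `normal(0, 1)` priors keep in check.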