Let’s say I have prob which is a number between 0 and 1 and some covariates X. I want to model this like
prob ~ inv_logit(N(alpha + beta X, …))
The code below (where I flip the logit and the normal) works, but I find it unsatisfactory, since sigma could push the normal() draw outside the [0, 1] range.
Is there a better way of modeling this?
data {
  int N;
  matrix[N, 2] X;
  real<lower=0, upper=1> prob[N];
}
parameters {
  vector[2] beta;
  real alpha;
  real<lower=0> sigma;
}
model {
  beta ~ normal(0, 1);
  alpha ~ normal(0, 1);
  prob ~ normal(inv_logit(alpha + X * beta), sigma);
}
Hey there! You could use the beta_proportion distribution, which is a parameterization of the Beta distribution in terms of a location (mean) and a precision parameter, so it works well for beta regression.
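For reference (this is the standard definition from the Stan functions reference, not something stated elsewhere in this thread), beta_proportion is just the usual Beta distribution reparameterized by its mean \mu \in (0, 1) and precision \kappa > 0:

\text{beta\_proportion}(\theta \mid \mu, \kappa) = \text{Beta}(\theta \mid \mu\kappa,\ (1 - \mu)\kappa)

So \mu is the piece that gets the inv_logit(\alpha + X\beta) link, and \kappa acts as an inverse dispersion: larger \kappa means outcomes concentrate more tightly around \mu.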
You could also try modeling \text{logit(prob)} \sim N(\alpha + X\beta, \sigma), although you would need to include the Jacobian correction for the logit transformation of prob,
and probably fairly tight priors on \sigma. This is essentially wrapping inv_logit around the whole left-hand side/normal distribution, so the outcome is guaranteed to fall in [0, 1].
Cheers,
Max
Thanks Max, I have not seen Stan examples with a function on the LHS of a sampling statement; do you have an example?
For beta_proportion, would my X * beta go into the mu or the kappa?
Ok I think I figured the beta prop out
data {
  int N;
  matrix[N, 2] X;
  real<lower=0, upper=1> prob[N];
}
parameters {
  vector[2] beta;
  real alpha;
  real<lower=0> kappa;
}
transformed parameters {
  vector<lower=0, upper=1>[N] mu;
  mu = inv_logit(alpha + X * beta);
}
model {
  beta ~ normal(0, 1);
  alpha ~ normal(0, 1);
  kappa ~ gamma(1, 1);
  prob ~ beta_proportion(mu, kappa);
}
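One optional addition (my own suggestion, not something from the thread): a generated quantities block that draws replicated outcomes with beta_proportion_rng, so you can do posterior predictive checks of the fitted model.

```stan
generated quantities {
  // replicated data for posterior predictive checks:
  // compare the distribution of prob_rep against the observed prob
  real prob_rep[N];
  for (n in 1:N)
    prob_rep[n] = beta_proportion_rng(mu[n], kappa);
}
```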
Looks good! Does it work?
That should “just work”. Something like this…
logit(prob) ~ normal(alpha + X*beta, sigma);
target += .... // Jacobian correction
…but I’d go with the Beta regression anyways.
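For completeness, here is a minimal sketch of how the full logit-normal model could look (my own filling-in, not verbatim from the thread). The Jacobian term is the log absolute derivative of logit(p), i.e. -log(p) - log(1 - p); since prob is data, that term is constant with respect to the parameters, so it only matters for lp__ and model comparison, not for sampling.

```stan
data {
  int N;
  matrix[N, 2] X;
  real<lower=0, upper=1> prob[N];
}
transformed data {
  // transform the data once, up front
  vector[N] logit_prob;
  for (n in 1:N)
    logit_prob[n] = logit(prob[n]);
}
parameters {
  vector[2] beta;
  real alpha;
  real<lower=0> sigma;
}
model {
  beta ~ normal(0, 1);
  alpha ~ normal(0, 1);
  sigma ~ normal(0, 1);  // fairly tight prior, as suggested above
  logit_prob ~ normal(alpha + X * beta, sigma);
  // Jacobian of the logit transform of the data
  // (constant w.r.t. parameters; included for a correct log density)
  for (n in 1:N)
    target += -log(prob[n]) - log1m(prob[n]);
}
```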
Cheers,
Max