Hi,
I have a model with a joint distribution over a number of discrete random variables, and I want to specify my priors over this distribution via the marginal probability of each variable separately. My prior beliefs about this model are easier to think about that way. For example, for simplicity, imagine my joint probability was over two variables, p(r_d, r_m),
where r_d \in \{r_d^1, r_d^2\} and r_m \in \{r_m^1, r_m^2, r_m^3\}. Now I can easily imagine how to set priors on p(r_d) and p(r_m), but not on p(r_d, r_m).
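To make the setup concrete: each marginal is just a linear function (a row or column sum) of the flattened joint, which is what the M_r and M_m matrices below are meant to compute. A minimal NumPy sketch with made-up numbers for the 2 × 3 case:

```python
import numpy as np

# Hypothetical joint over r_d (2 levels) x r_m (3 levels); rows index r_d.
joint = np.array([[0.10, 0.25, 0.15],
                  [0.20, 0.05, 0.25]])
assert np.isclose(joint.sum(), 1.0)

p_rd = joint.sum(axis=1)  # marginal p(r_d): sum out r_m
p_rm = joint.sum(axis=0)  # marginal p(r_m): sum out r_d
```

Stacking those row/column sums as rows of a matrix acting on the flattened joint gives exactly the linear maps the Stan code below tries to use.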
At first, I thought: why not use a change of variables, kind of like the rough code below? But then I realized I can't have such a transformation, since it maps M \times N joint probabilities down to only M + N marginal ones, so it isn't invertible.
data {
  int N_r;
  int N_m;
  // Each row of M_r / M_m sums out the other variable from the flattened joint
  matrix[N_r - 1, N_r * N_m] M_r;
  matrix[N_m - 1, N_r * N_m] M_m;
}
parameters {
  simplex[N_r * N_m] joint_prob;
}
transformed parameters {
  vector[N_r - 1] alpha_d = logit(M_r * joint_prob);
  vector[N_m - 1] alpha_m = logit(M_m * joint_prob);
}
model {
  // Set some priors on alpha_d and alpha_m
  alpha_d ~ ...
  alpha_m ~ ...
}
Does anyone have suggestions for how I can specify such a prior on a joint distribution using my knowledge of the marginal probabilities?