I am not really into Bayesian statistics, but I am facing a problem that I think you guys are probably into.
I have a set of dichotomous variables A, B, C, … and I have assumptions about their probabilities P(A), P(B), … and also about the probabilities of each pair P(A&B), P(A&C), P(B&C), …
Or in other words, I have their variance-covariance matrix.
But I have no further assumptions about P(A&B&C) for example.
From this, I would like to construct the joint distribution of my variables that has maximum entropy.
How would you do this? Or how would you construct a reasonable joint distribution from this information? Is there an easy way to do this in Stan?
I hope this question is not too obvious, but I did not find the answer in BDA 3, so if you could recommend some literature, I would also be happy.
So, just to make sure I get this right: you have data A, B, C, which are binary and correlated. I think the usual lingo is that when you assign a distribution to parameters you speak of priors, and when you assign a distribution to data you call it a likelihood. It’s not hugely important, since the two are entangled anyway, and the distinction between variable and data can get blurry, too.
If you have something like a variance-covariance matrix, you can fit a multivariate Probit regression (scroll down a bit). Specifying priors can be a bit tricky, but usually in this model you treat the correlation matrix as a parameter and estimate it (the variances are set to 1 to identify the model).
Is that what you are looking for? If not feel free to follow up with questions, or maybe some details on the problem or what you are trying to achieve.
Hi @Max_Mantei !
Thanks! Yes, I would like to assign a distribution to data.
I will take a look at the link later (it takes me a while to work through these things), but maybe I can already give some details.
My problem is as follows: I have a bunch of diseases A, B, C, … and from the scientific literature I know their probabilities (e.g. 12% of people get disease A in their lifetime) and their pairwise relations (e.g. if you have disease A, you are more likely to also have disease B).
I have no real dataset at all! Only those published probabilities.
What I would like to estimate is how likely it is to have diseases A and B and C and not D and not F … all together. Unfortunately, I cannot compute these probabilities directly from the information I have, but the information does restrict their possible values.
Therefore, I would like to estimate a joint distribution of all my diseases that is consistent with the information I have. As I said, there are many distributions that satisfy these conditions, and with linear programming I would be able to obtain the range of possible values.
However, I think the distribution that satisfies these conditions and has maximum entropy would be the best one for further use in my model.
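To make the maximum-entropy idea concrete, here is a minimal sketch in R (three diseases, made-up probabilities; all names are mine). It uses the fact that the maximum-entropy distribution under these moment constraints is an exponential-family model, whose parameters can be found by minimizing the convex dual (the log-partition function minus the constraint terms) with plain optim:
# enumerate all 2^n outcome patterns of n binary variables (here n = 3)
n <- 3
X <- as.matrix(expand.grid(rep(list(0:1), n)))
# features: the indicators themselves plus all pairwise products
pairs <- combn(n, 2)
feat <- cbind(X, apply(pairs, 2, function(ij) X[, ij[1]] * X[, ij[2]]))
# assumed target moments: P(A), P(B), P(C), P(A&B), P(A&C), P(B&C)
m <- c(0.12, 0.20, 0.30, 0.05, 0.06, 0.10)
# the max-entropy solution is p(x) proportional to exp(lambda' f(x));
# lambda minimizes the convex dual: log-partition minus lambda' m
neg_dual <- function(lambda) log(sum(exp(feat %*% lambda))) - sum(lambda * m)
fit <- optim(rep(0, ncol(feat)), neg_dual, method = "BFGS")
# normalize to get the maximum-entropy joint over the 8 outcome patterns
p <- exp(feat %*% fit$par)
p <- p / sum(p)
# sanity check: the fitted moments should reproduce the targets
round(colSums(as.vector(p) * feat), 3)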
Yes, I have “only” conditional probabilities, but since P(A) determines the variance of A, and P(A|B) together with P(B) determines their pairwise joint distribution, I think I also have their variance-covariance matrix.
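For example (made-up numbers, and using P(B|A) instead of P(A|B)):
# assumed published values: P(A), P(B), and P(B|A)
p_A <- 0.12
p_B <- 0.20
p_B_given_A <- 0.25
p_AB <- p_B_given_A * p_A   # joint probability P(A & B)
var_A <- p_A * (1 - p_A)    # Bernoulli variance of A
var_B <- p_B * (1 - p_B)
cov_AB <- p_AB - p_A * p_B  # covariance of the two indicators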
I need the model to work for arbitrary diseases, but if it would help I could post a realistic example.
I see. I was just thinking about doing something like this (edit: I hope you know some R, sorry for assuming that):
# a little helper
freq_pos <- function(x) {
  sum(x > 0) / length(x)
}
# assume some marginal probs
p_A <- 0.3
p_B <- 0.6
p <- c(p_A, p_B)
# convert to z-score
mu <- qnorm(p)
# number of simulations
n_sims <- 1e6
# diagonal correlation matrix, implying Pr(B) = Pr(B|A)
S_uncorr <- diag(c(1, 1))
# draw from multivariate Normal
Y_uncorr <- MASS::mvrnorm(n = n_sims, mu = mu, Sigma = S_uncorr)
# compute marginal probs
apply(Y_uncorr, 2, freq_pos)
# identifier for the case A = 1
A_true <- Y_uncorr[, 1] > 0
# conditional frequencies given A (the second entry is Pr(B|A))
apply(Y_uncorr[A_true, ], 2, freq_pos)
# specify positive correlation, implying Pr(B) < Pr(B|A)
S_corr <- matrix(c(1, 0.5,
                   0.5, 1), nrow = 2)
Y_corr <- MASS::mvrnorm(n = n_sims, mu = mu, Sigma = S_corr)
apply(Y_corr, 2, freq_pos)
A_true <- Y_corr[, 1] > 0
apply(Y_corr[A_true, ], 2, freq_pos)
I’m just not sure how to cleverly map from specific conditional probabilities to correlations… But once that is figured out, generating data is as easy as drawing from a (multivariate) Normal distribution.
That being said… there is probably a more straightforward way to do all this…
Let p(A) and p(B) denote the marginal probabilities of events A and B and let p(A, B) denote the joint probability of these events.
Then the correlation coefficient is \rho_{AB} = \frac{p(A, B) - p(A)p(B)}{\sqrt{p(A) \left[1-p(A) \right] p(B) \left[1-p(B) \right]}}
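In R, that formula reads (made-up probabilities):
# assumed marginals and joint probability
p_A <- 0.12
p_B <- 0.20
p_AB <- 0.05
# correlation of the two binary indicators (the phi coefficient)
rho_AB <- (p_AB - p_A * p_B) /
  sqrt(p_A * (1 - p_A) * p_B * (1 - p_B))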
Hey! Well, the correlation of the two binary variables A, B that Max pointed out is something different from the correlation of the latent variables in the multivariate Probit approach.
Right now I don’t really have time to think much about it, but if there’s a way to express the correlation induced by the marginal and conditional probabilities in terms of Kendall’s \tau, then I can point you to a paper where they discuss a conversion for the latent-variable approach. (I’m on my phone right now, so I will probably link that paper tomorrow.)
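In the meantime, the latent correlation can also be found numerically: pick the \rho for which the bivariate normal orthant probability matches the target joint probability. A sketch using the mvtnorm package (made-up probabilities; this inverts the bivariate normal CDF with uniroot rather than going through the \tau conversion):
library(mvtnorm)
# assumed targets: the two marginals and the joint probability
p_A <- 0.12
p_B <- 0.20
p_AB <- 0.05
# thresholds on latent standard normals (event = latent below threshold)
z_A <- qnorm(p_A)
z_B <- qnorm(p_B)
# P(A & B) implied by a given latent correlation rho
joint_prob <- function(rho) {
  pmvnorm(upper = c(z_A, z_B), corr = matrix(c(1, rho, rho, 1), 2))
}
# solve joint_prob(rho) = p_AB for the latent (tetrachoric) correlation
rho_latent <- uniroot(function(r) joint_prob(r) - p_AB,
                      interval = c(-0.99, 0.99))$root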