# Model averaging and relationship to mixture weights

Say I have two models (M_1, M_2) that are equally likely a priori, and some data y. What is the relationship between the posterior marginal probabilities of each model and the \lambda \in [0,1] that would be estimated if the data were fitted as a mixture: L(y \mid \theta, M) = \lambda \, \pi(y \mid \theta_1, M_1) + (1-\lambda) \, \pi(y \mid \theta_2, M_2)?

The latter is trivial to code in Stan, and I'm hoping it could provide a computational shortcut to something closely related to the posterior marginal probabilities of the alternative models. Intuitively the two quantities feel like they should be directly related; it would be strange if they could vary independently, since one can think of model averaging as a mixture model.
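To make the comparison concrete, here is a toy numerical sketch (my own illustration, not from the stacking paper) contrasting the two quantities for two fixed-parameter Gaussian models, so the marginal likelihoods reduce to plain likelihoods. The data, the seed, and the means 0 and 1 are all arbitrary assumptions. Note the structural difference: the posterior model probability conditions all observations on one model, while the mixture lets each observation come from either component.

```python
import numpy as np

# Hypothetical data: 50 draws from a distribution between the two models.
rng = np.random.default_rng(1)
y = rng.normal(0.3, 1.0, size=50)

def lpdf(y, mu):
    # Log density of Normal(mu, 1) at y.
    return -0.5 * (y - mu) ** 2 - 0.5 * np.log(2 * np.pi)

lp1 = lpdf(y, 0.0)  # pointwise log p(y_i | M_1)
lp2 = lpdf(y, 1.0)  # pointwise log p(y_i | M_2)

# Posterior model probability under equal priors: uses the JOINT
# likelihood of the whole data set under each model.
lj1, lj2 = lp1.sum(), lp2.sum()
m = max(lj1, lj2)
p_m1 = np.exp(lj1 - m) / (np.exp(lj1 - m) + np.exp(lj2 - m))

# Mixture weight: each observation may come from either component.
# Maximize the mixture log likelihood over a grid of lambda.
lam = np.linspace(1e-3, 1 - 1e-3, 999)
mix_ll = np.array(
    [np.sum(np.logaddexp(np.log(l) + lp1, np.log(1 - l) + lp2)) for l in lam]
)
lam_hat = lam[mix_ll.argmax()]

print("P(M_1 | y) =", p_m1)
print("lambda_hat =", lam_hat)
```

With data like this the posterior model probability tends to be pushed toward 0 or 1 (it conditions on all n observations jointly), while the fitted mixture weight stays interior, which is one way to see that the two quantities are not the same object.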


Not necessarily the exact answer, but related discussion can be found in Section 4.3 of Using Stacking to Average Bayesian Predictive Distributions.
If that section doesn't help, ask again and I'll try to clarify.


Just what the doctor ordered. Perfect, thanks.
