Say I have two models M_1 and M_2 that are equally likely a priori, and some data y. What is the relationship between the posterior marginal probabilities of each model, p(M_k | y), and the weight \lambda \in [0,1] that would be estimated if the data were fitted as a mixture: L(y|\theta,\lambda) = \lambda \pi(y|\theta_1,M_1) + (1-\lambda) \pi(y|\theta_2,M_2)?
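To be explicit about the first quantity: by the posterior marginal probability of each model I mean the usual model-comparison quantity, which under the equal prior p(M_1) = p(M_2) = 1/2 is

p(M_k | y) = p(y | M_k) / (p(y | M_1) + p(y | M_2)), where p(y | M_k) = \int \pi(y | \theta_k, M_k) \pi(\theta_k | M_k) d\theta_k

is the marginal likelihood of model k.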
The latter is trivial to code in Stan, and I'm hoping it could provide a computational shortcut to something closely related to the posterior model probabilities of the alternative models. Intuitively, the two quantities should be directly related: it would seem bizarre if they could vary independently, since model averaging can itself be thought of as a mixture model.
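For concreteness, the mixture fit I have in mind could be coded along these lines (a minimal sketch only: the two components are assumed here to be normal location models with known unit scale, and the priors are placeholders, none of which is essential to the question):

```stan
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real theta1;                    // parameter of M_1
  real theta2;                    // parameter of M_2
  real<lower=0, upper=1> lambda;  // mixture weight
}
model {
  theta1 ~ normal(0, 10);
  theta2 ~ normal(0, 10);
  // lambda gets an implicit uniform(0, 1) prior.
  // The mixture is over the *whole* dataset, matching the likelihood
  // written above, not a per-observation mixture: each vectorized
  // normal_lpdf call returns the joint log density of all of y.
  target += log_mix(lambda,
                    normal_lpdf(y | theta1, 1),
                    normal_lpdf(y | theta2, 1));
}
```

The posterior draws of lambda from this fit are the quantity I want to compare against p(M_1 | y).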