Marginal likelihood requires multivariate integration to determine normalization constant Z

In Andrew Gelman’s paper, he states:

“Estimating the marginal likelihood is more challenging, because determining the normalization constants $Z_{\setminus k}$ requires multivariate integrations”

I have a couple of questions. If you have already found the full posterior from EP, then there should be no further concern about the normalization constant. Unless the EP exercise only derives the posterior up to a constant?

Even if the posterior is only known up to a normalization constant, in the end each marginal distribution (for a single variable) has to integrate to 1, so that should solve the problem. What’s wrong with this view?
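To make the question concrete (my own generic notation here, which may not match the paper’s):

$$
p(\theta \mid y) \;=\; \frac{p(\theta)\,\prod_{k=1}^{K} p(y_k \mid \theta)}{Z},
\qquad
Z \;=\; p(y) \;=\; \int p(\theta)\,\prod_{k=1}^{K} p(y_k \mid \theta)\,d\theta,
$$

where $Z$ is the marginal likelihood. In other words: does the normalized posterior on the left already determine $Z$, or does $Z$ have to be computed separately?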

You are right that you don’t need the marginal likelihood when using MCMC methods.
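A quick way to see why, using Metropolis–Hastings as the example: MCMC only ever evaluates the unnormalized posterior $\tilde p(\theta) = p(y \mid \theta)\,p(\theta)$, and $Z$ cancels in the acceptance probability:

$$
\alpha \;=\; \min\!\left(1,\; \frac{\tilde p(\theta')/Z \cdot q(\theta \mid \theta')}{\tilde p(\theta)/Z \cdot q(\theta' \mid \theta)}\right)
\;=\; \min\!\left(1,\; \frac{\tilde p(\theta')\,q(\theta \mid \theta')}{\tilde p(\theta)\,q(\theta' \mid \theta)}\right).
$$

Note also that “each marginal integrates to 1” doesn’t give you $Z$ for free: $\int \tilde p(\theta)\,d\theta \,/\, Z = 1$ is just the definition $Z = \int \tilde p(\theta)\,d\theta$, i.e. exactly the multivariate integral that is hard to compute.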

There are a few statistics that require the marginal likelihood itself, though, for instance Bayes factors. In this case, you can try to infer the marginal likelihood from the posterior samples, for instance using bridge sampling (see https://cran.r-project.org/web/packages/bridgesampling/index.html for an implementation in R).
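As a minimal sketch of how that package is used: the toy conjugate-normal model below is my own illustration (not from the paper or this thread), chosen so the bridge sampling estimate of $\log p(y)$ can be checked against the analytic answer.

```r
# Minimal sketch: estimating a log marginal likelihood with bridgesampling.
# Toy model (illustrative): y_i ~ N(mu, 1) with prior mu ~ N(0, 1).
library(bridgesampling)
library(mvtnorm)

set.seed(1)
n <- 20
y <- rnorm(n, mean = 0.5, sd = 1)  # simulated data

# Unnormalized log posterior (likelihood * prior), evaluated per sample row
log_posterior <- function(pars, data) {
  mu <- pars["mu"]
  sum(dnorm(data$y, mean = mu, sd = 1, log = TRUE)) +
    dnorm(mu, mean = 0, sd = 1, log = TRUE)
}

# The model is conjugate, so we can draw exact posterior samples
post_var  <- 1 / (n + 1)
post_mean <- post_var * sum(y)
samples <- matrix(rnorm(1e4, post_mean, sqrt(post_var)),
                  ncol = 1, dimnames = list(NULL, "mu"))

# Bridge sampling estimate of log p(y)
bridge <- bridge_sampler(samples = samples,
                         log_posterior = log_posterior,
                         data = list(y = y),
                         lb = c(mu = -Inf), ub = c(mu = Inf),
                         silent = TRUE)
print(bridge)

# Analytic check: marginally y ~ N(0, I + J), where J is the all-ones matrix
exact <- dmvnorm(y, sigma = diag(n) + 1, log = TRUE)
cat("exact log marginal likelihood:", exact, "\n")
```

For a model fit with Stan you would instead pass the `stanfit` object directly to `bridge_sampler()`; as I understand the package documentation, this requires the Stan model to be written with all normalizing constants included.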


Also mixture models. We can only drop normalizing constants that don’t depend on parameters.
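For example (a generic illustration): in a two-component normal mixture

$$
p(y \mid \theta) \;=\; \lambda \, \frac{1}{\sigma_1 \sqrt{2\pi}}\, e^{-(y - \mu_1)^2 / (2\sigma_1^2)}
\;+\; (1 - \lambda) \, \frac{1}{\sigma_2 \sqrt{2\pi}}\, e^{-(y - \mu_2)^2 / (2\sigma_2^2)},
$$

the factors $1/(\sigma_j \sqrt{2\pi})$ depend on parameters and sit inside a sum, so they cannot be dropped; using unnormalized component densities would change the model.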