Expressing data-averaged posterior

When studying ensembles of samples one has to consider independent and identically distributed samples, whose joint distribution is a product distribution. Formally, the distribution of N exact samples from the distribution specified by the density function \pi(x) is specified by the product density function

\pi(x_1, \ldots, x_N) = \pi(x_1) \cdot \ldots \cdot \pi(x_N) = \prod_{n = 1}^{N} \pi(x_{n}),

where each of the component density functions is the same.
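As a minimal sketch of this factorization (using a standard normal for \pi(x), which is just an illustrative choice and not from the post), the joint density of an i.i.d. ensemble is the product of identical one-dimensional densities:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # One-dimensional normal density, standing in for pi(x).
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def product_density(xs):
    # Joint density of N exact i.i.d. samples: prod_{n=1}^{N} pi(x_n).
    p = 1.0
    for x in xs:
        p *= normal_pdf(x)
    return p

samples = [0.1, -0.5, 1.2]
joint = product_density(samples)
```

In practice one works with sums of log densities rather than raw products to avoid underflow, but the structure is the same.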

The SBC method proposed in the paper looks at L posterior samples for each observation simulated from the prior predictive distribution. This corresponds to the joint distribution

\pi(\theta_1, \ldots, \theta_{L}, y, \theta') = \left[ \prod_{l = 1}^{L} \pi(\theta_{l} \mid y) \right] \pi(y \mid \theta') \, \pi(\theta').

From this joint distribution the SBC method marginalizes out y and then pushes the resulting distribution \pi(\theta_1, \ldots, \theta_{L}, \theta') forward along a one-dimensional rank function; the distribution of the resulting rank statistic turns out to be uniform.
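This uniformity can be checked numerically. The following sketch assumes a conjugate normal model (prior \theta \sim N(0, 1), likelihood y \mid \theta \sim N(\theta, 1), so the posterior is N(y / 2, \sqrt{1/2})); the model choice is an assumption for illustration, not something from the post. For each simulation we draw \theta' from the prior, y from \pi(y \mid \theta'), L posterior draws from \pi(\theta \mid y), and record the rank of \theta' among the posterior draws:

```python
import random

random.seed(0)
L = 9          # posterior draws per simulation
S = 20000      # number of SBC simulations
counts = [0] * (L + 1)

for _ in range(S):
    theta_prime = random.gauss(0.0, 1.0)        # theta' ~ pi(theta)
    y = random.gauss(theta_prime, 1.0)          # y ~ pi(y | theta')
    # Exact conjugate posterior: pi(theta | y) = N(y / 2, sqrt(1/2)).
    post = [random.gauss(y / 2.0, 0.5 ** 0.5) for _ in range(L)]
    # Rank statistic: number of posterior draws below theta', in {0, ..., L}.
    rank = sum(th < theta_prime for th in post)
    counts[rank] += 1

# If the posterior draws are exact, each of the L + 1 ranks
# occurs with probability 1 / (L + 1).
freqs = [c / S for c in counts]
```

Because the posterior here is sampled exactly, the empirical rank frequencies all come out close to 1 / (L + 1); deviations from uniformity would signal a miscalibrated posterior sampler.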

Simulating K observations from the same prior draw, and then L posterior draws for each of the resulting posterior distributions, corresponds to

\pi(\theta_{1,1}, \ldots, \theta_{l,k}, \ldots, \theta_{L,K}, y_1, \ldots, y_{K}, \theta') = \left[ \prod_{k = 1}^{K} \left[ \prod_{l = 1}^{L} \pi(\theta_{l, k} \mid y_{k}) \right] \pi(y_{k} \mid \theta') \right] \pi(\theta'),

and so on.
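The K-observation variant can be sketched the same way, again assuming the illustrative conjugate normal model from above (an assumption, not the post's model). Each prior draw \theta' now generates K observations, each with its own batch of L posterior draws and hence its own rank; marginally, each of the K rank statistics is still uniform:

```python
import random

random.seed(1)
K, L, S = 3, 9, 10000
# One histogram of ranks per observation index k.
counts = [[0] * (L + 1) for _ in range(K)]

for _ in range(S):
    theta_prime = random.gauss(0.0, 1.0)                     # theta' ~ pi(theta)
    ys = [random.gauss(theta_prime, 1.0) for _ in range(K)]  # K obs, same prior draw
    for k, y in enumerate(ys):
        # Posterior conditioned on y_k alone: N(y_k / 2, sqrt(1/2)).
        post = [random.gauss(y / 2.0, 0.5 ** 0.5) for _ in range(L)]
        counts[k][sum(th < theta_prime for th in post)] += 1
```

Note that the K ranks within one simulation are correlated (they share \theta'), even though each is marginally uniform, which is worth keeping in mind when aggregating them.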
