Expressing data-averaged posterior

I suspect the self-consistency equation below might not hold for ensembles of samples, which is what happened with the code example here.
\pi(\theta)=\int \mathrm{d}\tilde{y} \, \mathrm{d}\tilde{\theta} \, \pi(\theta \mid \tilde{y}) \, \pi(\tilde{y} \mid \tilde{\theta}) \, \pi(\tilde{\theta})
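
For concreteness, here is a minimal sketch of what I mean by checking this numerically (my own code in Python/NumPy, using a hypothetical conjugate normal-normal model so the posterior is available in closed form). With a single posterior draw per simulated data set (L = 1), the data-averaged draws do recover the prior, so this is the baseline I am comparing against:

```python
import numpy as np

# Hypothetical conjugate toy model (my own choice, not from the paper):
#   prior:      theta ~ Normal(mu0, tau0)
#   likelihood: y | theta ~ Normal(theta, sigma)   (one observation)
# The posterior is Normal in closed form, so no MCMC is needed.
rng = np.random.default_rng(0)
mu0, tau0, sigma = 0.0, 1.0, 0.5
n_sims = 100_000

theta_tilde = rng.normal(mu0, tau0, size=n_sims)   # theta~ ~ pi(theta)
y_tilde = rng.normal(theta_tilde, sigma)           # y~ ~ pi(y | theta~)

# Exact posterior parameters given each simulated y~
post_var = 1.0 / (1.0 / tau0**2 + 1.0 / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + y_tilde / sigma**2)
theta = rng.normal(post_mean, np.sqrt(post_var))   # theta ~ pi(theta | y~)

# Data-averaged posterior draws should match the prior Normal(mu0, tau0)
print("mean:", theta.mean(), "(prior mean:", mu0, ")")
print("sd:  ", theta.std(), "(prior sd:  ", tau0, ")")
```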

If so, may I ask whether this would affect the results of the eight schools example, where K = 8 (J in the code here), and the linear regression example, where K = 25 (N in the code here), from the Talts et al. SBC paper?

If only K = L = 1 is allowed, this would greatly limit the use of SBC (especially in the number of observations, K), unless I am misunderstanding. Would there be any workaround for this, i.e. could the ensemble of samples be rewritten cleverly to recover the LHS of the above equation, \pi(\theta)?
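
For what it is worth, my reading of the Talts et al. paper is that K and L enter the procedure through the rank statistic rather than through a literal product of densities, so perhaps the ranks are already the "clever rewriting"? A rough sketch of that procedure for the same toy model (my own code, not from the paper's repository; the exact conjugate posterior stands in for MCMC):

```python
import numpy as np

rng = np.random.default_rng(1)
mu0, tau0, sigma = 0.0, 1.0, 0.5
K, L, n_sims = 8, 100, 2000   # K observations per data set, L posterior draws

ranks = np.empty(n_sims, dtype=int)
for s in range(n_sims):
    theta_tilde = rng.normal(mu0, tau0)            # theta~ ~ pi(theta)
    y = rng.normal(theta_tilde, sigma, size=K)     # K obs from pi(y | theta~)
    # Exact posterior given all K conjugate observations (stands in for MCMC)
    post_var = 1.0 / (1.0 / tau0**2 + K / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + y.sum() / sigma**2)
    theta_l = rng.normal(post_mean, np.sqrt(post_var), size=L)  # L draws
    ranks[s] = np.sum(theta_l < theta_tilde)       # rank in {0, ..., L}

# For a correct implementation the ranks should be uniform on {0, ..., L}
hist, _ = np.histogram(ranks, bins=np.arange(L + 2) - 0.5)
print("min/max bin counts:", hist.min(), hist.max(),
      "expected ~", n_sims / (L + 1))
```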

\pi\left(\theta_{1,1}, \ldots, \theta_{l,k}, \ldots, \theta_{L,K}, y_{1}, \ldots, y_{K}, \theta^{\prime}\right)=\left[\prod_{k=1}^{K}\left[\prod_{l=1}^{L} \pi\left(\theta_{l,k} \mid y_{k}\right)\right] \pi\left(y_{k} \mid \theta^{\prime}\right)\right] \pi(\theta^{\prime})
= \pi(\bar{\bar{\theta}}, \bar{y}, \theta^{\prime})?

Would the following be of any help? (In the second line I treat the K factors as identical so that the product over k collapses into a power; in general it would be a sum of K distinct terms.)

\log\left(\left[\prod_{k=1}^{K}\left[\prod_{l=1}^{L} \pi\left(\theta_{l,k} \mid y_{k}\right)\right] \pi\left(y_{k} \mid \theta^{\prime}\right)\right] \pi(\theta^{\prime})\right)
= K \log\left(\left[\prod_{l=1}^{L} \pi\left(\theta_{l} \mid y\right)\right] \pi\left(y \mid \theta^{\prime}\right)\right) + \log \pi(\theta^{\prime})
= K L \log \pi\left(\theta \mid y\right) + K \log \pi\left(y \mid \theta^{\prime}\right) + \log \pi(\theta^{\prime})
= \log \pi\left(\bar{\bar{\theta}} \mid y\right) + \log \pi\left(\bar{y} \mid \theta^{\prime}\right) + \log \pi(\theta^{\prime})
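
As a quick numerical check of that manipulation (again my own sketch for the toy model): the direct log joint is a sum over both k and l, and the collapse to K L \log \pi(\theta \mid y) only goes through when all K factors coincide, i.e. when the y_k (and hence the \theta_{l,k}) are literally identical:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
mu0, tau0, sigma = 0.0, 1.0, 0.5
K, L = 4, 3

theta_prime = rng.normal(mu0, tau0)                # theta' ~ pi(theta)
y = rng.normal(theta_prime, sigma, size=K)         # y_k ~ pi(y | theta')
post_var = 1.0 / (1.0 / tau0**2 + 1.0 / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + y / sigma**2)  # posterior per y_k
theta = rng.normal(post_mean, np.sqrt(post_var), size=(L, K))  # theta_{l,k}

# Direct log joint: a sum over all k and l, with no K* or KL* collapse
log_joint = (norm.logpdf(theta, post_mean, np.sqrt(post_var)).sum()
             + norm.logpdf(y, theta_prime, sigma).sum()
             + norm.logpdf(theta_prime, mu0, tau0))

# The collapsed form K*L*log pi(theta|y) + K*log pi(y|theta') would need a
# single representative (theta, y); with distinct y_k it is not defined.
print("log joint (direct sum):", log_joint)
```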