Hi,
this may be a naive question, but I am wondering about the validity of the following idea:

Take k (for example, k = 10) random subsets of the data and run a sampling chain for every subset (say, 1000 warmup iterations, 100 sampling iterations).

For every set of 100 parameter samples, do a Bayesian update with the rest of the dataset (the k−1 other subsets), resulting in a weight for each sample. This should not be too computationally expensive, since we compute on a discrete parameter space.

All those k × 100 weighted samples together should be a good approximation of the posterior distribution.
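If I understand the proposal, it amounts to using each subset posterior as an importance-sampling proposal, with weights given by the likelihood of the held-out data. Here is a minimal toy sketch of that idea in Python/NumPy; the Bernoulli model, the grid standing in for the "discrete parameter space", and the exact grid draws standing in for the per-subset chains are all my assumptions, not something from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.01, 0.99, 99)      # discrete parameter space (assumed)
data = rng.random(1000) < 0.3           # toy full dataset, true p = 0.3
k, n_samples = 10, 100
subsets = np.array_split(data, k)       # k random-ish subsets

def log_lik(theta, d):
    """Bernoulli log-likelihood of data d at parameter theta."""
    s = d.sum()
    return s * np.log(theta) + (d.size - s) * np.log1p(-theta)

samples, weights = [], []
for i in range(k):
    # "chain" on subset i: here, exact draws from the subset posterior
    # on the grid (flat prior), standing in for 100 MCMC samples
    lp = log_lik(grid, subsets[i])
    post = np.exp(lp - lp.max())
    post /= post.sum()
    draws = rng.choice(grid, size=n_samples, p=post)

    # Bayesian update with the k-1 other subsets: weight each draw by the
    # likelihood of the held-out data, then self-normalize within the subset
    rest = np.concatenate(subsets[:i] + subsets[i + 1:])
    ll = log_lik(draws, rest)
    w = np.exp(ll - ll.max())
    w /= w.sum()

    samples.append(draws)
    weights.append(w / k)               # each subset contributes equally

samples = np.concatenate(samples)
weights = np.concatenate(weights)
est = (samples * weights).sum()         # weighted posterior-mean estimate

# reference: exact grid posterior mean on the full dataset
lp_full = log_lik(grid, data)
post_full = np.exp(lp_full - lp_full.max())
post_full /= post_full.sum()
exact = (grid * post_full).sum()
```

In this toy case the weighted estimate lands close to the exact posterior mean, but note the usual importance-sampling caveat: each subset posterior is much wider than the full posterior, so the weights are skewed and the effective sample size per subset can be far below 100.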

The advantage here is the fully parallel computing procedure, so we could take advantage of HPC clusters.
Is there something I am missing?

I see that 100 samples are not very good for estimating the marginal likelihood. Alternatively, the updating could be done sequentially for every subset, thus always using the 900 left-out samples.

Well, with any method, step one is laying it out and figuring out exactly what you're getting right and what you're getting wrong. If you're cutting data into pieces, that'd be the place to start. Maybe things can be recombined later, maybe not.

When it comes to statistical approximations, it's easy to get caught up in the idea that whatever small assumption you've had to make won't be that big a deal with whatever problem you're working on, and that you'll still get at the true posteriors you're after. Practically, though, it's hard enough to figure out a useful model and get good sampling on it even with an exact algorithm. It's fun to play with this stuff, but it's hard to trust it.

And then follow that up with Andrew Gelman, Aki Vehtari, et al. on EP (which uses the cavity distribution to mitigate some of the problems with the kind of naive subsampling you and others suggest):