Well, with any method, step one is laying everything out and figuring out exactly what you're getting right and what you're getting wrong. If you're cutting the data into pieces, that'd be the place to start. Maybe the pieces can be recombined later, maybe not.
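To make "what you're getting right" concrete, here's a toy sketch of the data-splitting idea in the one case where recombination works perfectly: a conjugate Normal model where every subposterior is Gaussian, so precision-weighted recombination (the trick behind consensus Monte Carlo-style schemes) recovers the full posterior exactly. All the names and values below are made up for illustration; with a real model the recombination step is where the error creeps in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate setup: data ~ Normal(mu, 1), prior mu ~ Normal(0, tau^2).
# Everything is Gaussian, so both the full posterior and each shard
# posterior are available in closed form.
tau2 = 4.0
x = rng.normal(1.5, 1.0, size=1000)

def gaussian_posterior(xs, prior_prec):
    # Posterior precision and mean for a Normal(mu, 1) likelihood
    # with a Normal(0, 1/prior_prec) prior on mu.
    prec = prior_prec + len(xs)
    mean = xs.sum() / prec
    return mean, prec

# Full-data posterior.
full_mean, full_prec = gaussian_posterior(x, 1.0 / tau2)

# Split into K shards, give each shard 1/K of the prior precision,
# then recombine the shard posteriors by precision weighting.
K = 10
shards = np.array_split(x, K)
stats = [gaussian_posterior(s, (1.0 / tau2) / K) for s in shards]
comb_prec = sum(p for _, p in stats)
comb_mean = sum(m * p for m, p in stats) / comb_prec

print(full_mean, comb_mean)  # identical here, because everything is Gaussian
print(full_prec, comb_prec)
```

The moment the subposteriors stop being Gaussian, the precision-weighted product is only an approximation, and comparing it against a full-data run (like the comparison above) is how you'd see what you lost.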
When it comes to statistical approximations, it's easy to get caught up in the idea that whatever small assumption you've made won't matter much for the problem at hand, and that you'll still recover the true posterior you're after. Practically, though, it's hard enough to build a useful model and get good sampling out of it even with an exact algorithm. It's fun to play with this stuff, but it's hard to trust it.
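Even in the friendliest possible setting, a "small" approximation leaves a visible mark. A hypothetical example: fit a coin-flip rate with a Beta(2, 2) prior, then compare the exact Beta posterior against a Laplace (Gaussian-at-the-mode) approximation. The data here are invented just to show the gap.

```python
import math

# Made-up data: 6 heads in 20 flips, Beta(2, 2) prior on the rate.
a0, b0 = 2.0, 2.0
n, successes = 20, 6
a, b = a0 + successes, b0 + n - successes  # exact posterior: Beta(a, b)

exact_mean = a / (a + b)
exact_sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Laplace approximation: Gaussian centered at the posterior mode, with
# variance from the negative second derivative of the log density there.
mode = (a - 1) / (a + b - 2)
curvature = (a - 1) / mode ** 2 + (b - 1) / (1 - mode) ** 2
laplace_sd = 1 / math.sqrt(curvature)

print(f"exact:   mean={exact_mean:.4f}  sd={exact_sd:.4f}")
print(f"laplace: mean={mode:.4f}  sd={laplace_sd:.4f}")
```

The mean and sd come out close but not equal, and this is a one-parameter conjugate model; in a high-dimensional hierarchical model you don't get closed forms to check against, which is exactly why the "won't be a big deal" assumption is hard to verify.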
Here's a case study from Betancourt on the 8-schools model in Stan (simple model, small data, state-of-the-art algorithm -> still really hard to get it right): http://mc-stan.org/users/documentation/case-studies/divergences_and_bias.html
Here's a post by Bob on ensemble methods: http://andrewgelman.com/2017/03/15/ensemble-methods-doomed-fail-high-dimensions/
Best of luck!
edit: changed desc. of the Bob link