When you add more data your posterior becomes more concentrated, and as it concentrates it becomes more sensitive to any incompatibility between your model and the true data generating process. In other words, the more data you have the more clearly your posterior manifests model misfit.
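As a hedged toy illustration of this (a conjugate normal model with arbitrarily chosen numbers, not your actual setup): fitting a known-variance normal location model to deliberately skewed data, the posterior standard deviation shrinks like 1/sqrt(n) while the evidence of misfit, here the sample skewness measured in approximate null standard errors, grows like sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(1)
tau, sigma = 10.0, 1.0   # prior sd for mu, and the (assumed known) obs sd

for n in [1_000, 10_000, 60_000]:
    y = rng.gamma(shape=2.0, size=n)     # skewed truth: misfit by design
    # Conjugate posterior sd for mu under the (wrong) Normal(mu, sigma) model
    post_sd = 1.0 / np.sqrt(1.0 / tau**2 + n / sigma**2)
    # Sample skewness, and roughly how many null standard errors it sits from
    # the zero skewness the normal model insists on; se(skew) ~= sqrt(6 / n)
    z = y - y.mean()
    skew = np.mean(z**3) / y.std()**3
    print(f"n={n:>6}  posterior sd={post_sd:.5f}  skew={skew:.2f}"
          f"  (~{skew / np.sqrt(6 / n):.0f} se from normal)")
```

The posterior keeps tightening around a pseudo-true value that the model can't actually reconcile with the data, which is exactly the tension that larger data sets expose.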
Often this misfit manifests as awkward posterior geometries that make computation difficult, yielding behavior not unlike what you're seeing.
Another issue is that concentrating posteriors make non-identifiabilities worse. For example, if you have collinearity then as you add more data your posterior converges towards a singular line that is extremely difficult to explore: the sampler has to make infinitesimal jumps transverse to the non-identified direction but large jumps along it (a sketch of this follows). Or if you have a multimodal model, then adding more data will generally suppress the posterior mass between the modes, obstructing transitions between them.
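Here's a hedged sketch of the collinearity case, assuming unit observation noise, a weak independent normal prior, and an exactly duplicated predictor (all placeholders, not your model): the data inform only b1 + b2, so the posterior collapses transverse to that line while the prior keeps it extended along it, and the aspect ratio the sampler has to accommodate grows like sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 1.0                                    # prior sd on each coefficient

for n in [1_000, 10_000, 60_000]:
    x = rng.normal(size=n)
    X = np.column_stack([x, x])              # exactly collinear predictors
    # With unit noise the posterior over (b1, b2) is Gaussian with
    # covariance (X'X + I/tau^2)^{-1}; only b1 + b2 is informed by the data
    cov = np.linalg.inv(X.T @ X + np.eye(2) / tau**2)
    sds = np.sqrt(np.linalg.eigvalsh(cov))   # ascending: across, then along
    print(f"n={n:>6}  sd across ridge={sds[0]:.5f}  "
          f"sd along ridge={sds[1]:.3f}  aspect ratio={sds[1] / sds[0]:,.0f}")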
So there are lots of possible reasons for the behavior you see, and the only way to narrow down what's going on is to work in steps. The 10k observations look good, but instead of jumping straight to the 60k try 20k. Carefully check the sampling diagnostics for indications of an awkward posterior geometry arising, and run posterior predictive checks to see whether any misfit is becoming significant. As you add data incrementally the pathology should hopefully rear its head before it becomes so bad that you can't explore the posterior at all. A sketch of that incremental workflow follows.
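For concreteness, here's a minimal sketch of that loop assuming a CmdStanPy + ArviZ workflow; "model.stan", "y_full.npy", the data names N and y, and a generated quantity y_rep are hypothetical stand-ins for whatever your model actually uses.

```python
import numpy as np
import matplotlib.pyplot as plt
import arviz as az
from cmdstanpy import CmdStanModel

model = CmdStanModel(stan_file="model.stan")      # your existing model
y_full = np.load("y_full.npy")                    # all 60k observations

for n in [10_000, 20_000, 40_000, 60_000]:
    fit = model.sample(data={"N": n, "y": y_full[:n]}, chains=4, seed=2024)
    print(f"--- n = {n} ---")
    print(fit.diagnose())                         # divergences, treedepth,
                                                  # E-BFMI, R-hat, ESS
    idata = az.from_cmdstanpy(
        posterior=fit,
        posterior_predictive="y_rep",             # assumes y_rep is emitted
        observed_data={"y": y_full[:n]},          # in generated quantities
    )
    az.plot_ppc(idata, data_pairs={"y": "y_rep"}, num_pp_samples=100)
    plt.show()                                    # eyeball misfit at each step
```

If the diagnostics start complaining at an intermediate size, that's the point to investigate the geometry or the misfit rather than pushing on to the full 60k.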