The weakly-informative priors that we typically talk about enforce “containment” of the posterior. The shape of the prior density impacts just how strong this containment is.

Lighter tails, like those of a Gaussian, offer stronger containment. This prevents the posterior from stretching into extreme regions of parameter space, but if the scale is wrong then that prior containment can conflict with the likelihood.

Heavier tails, like those of a Cauchy, offer very weak containment. This weaker containment offers less resistance to the likelihood, so in the case of an overaggressive scale the likelihood can still dominate the posterior. On the other hand, in the case of a diffuse likelihood the posterior will follow those heavy tails towards more extreme values. That may not sound bad, but most people have trouble grasping just how heavy those tails are! A Cauchy density with location 0 and scale 1 has appreciable mass stretching all the way out to 100, and even a bit near 1000! The model configurations out in those tails can be all kinds of problematic – for example, they might cause intermediate calculations like ODE solvers or algebraic solvers to fail.
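A quick way to see just how heavy those tails are: using only closed-form CDFs (standard library only, no Stan required), compare the two-sided tail mass of a unit Gaussian and a unit Cauchy. The thresholds 10, 100, and 1000 are just illustrative.

```python
import math

def cauchy_tail(x):
    # P(|X| > x) for a Cauchy with location 0 and scale 1,
    # using the closed-form CDF 1/2 + arctan(x)/pi
    return 2 * (0.5 - math.atan(x) / math.pi)

def gauss_tail(x):
    # P(|Z| > x) for a Gaussian with location 0 and scale 1
    return math.erfc(x / math.sqrt(2))

for x in (10, 100, 1000):
    print(f"|x| > {x}: Cauchy {cauchy_tail(x):.2e}, Gaussian {gauss_tail(x):.2e}")
```

The unit Cauchy puts roughly 6% of its mass beyond 10, 0.6% beyond 100, and 0.06% beyond 1000, while the unit Gaussian's mass beyond 10 is already around 10⁻²³ – the containment behaviors are qualitatively different, not just a matter of degree.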

Stan will sample from a Cauchy just fine, so it’s not the heavy tails themselves that worry me but rather what the extreme model configurations far in the tails can do to the overall stability of the model. I much prefer the safety of the stronger containment of the Gaussian, coupled with a careful analysis of the posterior shapes relative to the prior shapes to identify any misplaced scales (which is pretty straightforward to see). But, again, that’s just my opinion. Everyone approaches modeling differently.
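To sketch what that prior-versus-posterior comparison looks like, here is a hypothetical conjugate normal-normal example (prior N(0, tau²) on mu, data y_i ~ N(mu, sigma²) with sigma known), where the posterior is available in closed form. The numbers are made up for illustration; in practice you would do the same comparison with posterior draws from Stan.

```python
import math

def posterior(ybar, n, sigma, tau):
    # Closed-form posterior for mu under a N(0, tau^2) prior
    # and n observations y_i ~ N(mu, sigma^2) with known sigma
    prec = n / sigma**2 + 1 / tau**2       # posterior precision
    mean = (n / sigma**2) * ybar / prec    # posterior mean
    return mean, math.sqrt(1 / prec)       # mean and standard deviation

# Prior scale tau = 1, but the data concentrate near 8: the posterior
# mean ends up many prior standard deviations from the prior location,
# a clear sign that the prior scale was misplaced.
tau = 1.0
mean, sd = posterior(ybar=8.0, n=50, sigma=1.0, tau=tau)
print(f"posterior mean is {mean / tau:.1f} prior standard deviations out")
```

When the posterior piles up several prior standard deviations away from the prior location like this, the prior containment is fighting the likelihood and the scale deserves a second look.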