A general question about sampling from challenging distributions.

For some varying-intercept models, I sometimes need informative priors on the standard deviations of the so-called random effects. The inverse gamma works well when I want the standard deviation to be larger than the data alone might suggest, thanks to its heavy right tail. Under optimization it tends to behave fairly well.

However, with Stan I find that I am getting a lot of divergent transitions, even with specifications like `inv_gamma(0.5, 10)`.

In reviewing the reparameterization section of the Stan User's Guide, I noted the discussion of the benefits of reparameterizing Cauchy and Student-t distributions because of their heavy tails.

Generally speaking, would people expect the inverse gamma to present sampling challenges similar to those of the Cauchy and Student-t? If so, is there a reparameterization strategy for the inverse gamma analogous to the ones discussed in the Stan User's Guide that might be worth exploring? An even better question might be whether there are other distributions that could have a similar informative effect (encouraging larger standard deviations for the varying intercepts) but that are easier for Stan to sample.
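For concreteness, here is a rough sketch of the kind of model I have in mind, with the intercepts already in the non-centered parameterization from the User's Guide (the data and variable names here are placeholders, not my actual model):

```stan
data {
  int<lower=1> N;                        // observations
  int<lower=1> J;                        // groups
  array[N] int<lower=1, upper=J> group;  // group membership
  vector[N] y;                           // outcome
}
parameters {
  real mu;                      // grand mean
  real<lower=0> sigma_alpha;    // SD of varying intercepts
  real<lower=0> sigma_y;        // residual SD
  vector[J] alpha_raw;          // non-centered intercepts
}
transformed parameters {
  // implies alpha ~ normal(0, sigma_alpha)
  vector[J] alpha = sigma_alpha * alpha_raw;
}
model {
  alpha_raw ~ std_normal();
  sigma_alpha ~ inv_gamma(0.5, 10);  // informative prior favoring larger SDs
  sigma_y ~ exponential(1);
  mu ~ normal(0, 5);
  y ~ normal(mu + alpha[group], sigma_y);
}
```

Even with the intercepts non-centered like this, the divergences seem to persist, which is what makes me suspect the inverse gamma prior itself.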

I’d appreciate any thoughts. Thanks.

Jonathan