Half-cauchy prior on standard deviation - R vs. Stan definition of half-cauchy

I would like to put a half-Cauchy prior on the standard deviation of a regression model.

The R package ‘LaplacesDemon’ has a half-Cauchy distribution function that I can use to test/plot some different options, and it takes only ‘scale’ as its input parameter.

I believe that in Stan a half-Cauchy is specified simply by declaring a Cauchy distribution and bounding the parameter at 0 (or, for a distributional parameter such as a standard deviation, the bound at 0 is applied automatically); there doesn’t seem to be an explicit half-Cauchy option in the Stan documentation that I can find. However, the Cauchy requires both mu and sigma as inputs. I presume that for a half-Cauchy we would set mu to 0 (?), but is ‘sigma’ the same as ‘scale’? If they are not the same, how can I plot half-Cauchy priors in R using Stan’s definition of the distribution to check what they look like?

Please also provide the following information in addition to your question:

  • Operating System: macOS
  • brms Version: 2.18.1

Yes, you are correct.
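To see that the two parameterisations agree, here is a quick numerical check (a sketch in Python using scipy, which is not from this thread; the same comparison can be done in R with LaplacesDemon's `dhalfcauchy` and `dcauchy`): a half-Cauchy with a given `scale` has exactly twice the density of a Cauchy with `mu = 0` and `sigma = scale`, for x ≥ 0.

```python
# Check that half-Cauchy(scale = s) is just Cauchy(mu = 0, sigma = s)
# restricted to x >= 0 and renormalized (i.e. its density doubled).
import numpy as np
from scipy.stats import cauchy, halfcauchy

scale = 2.5  # example value; plays the role of 'scale' in LaplacesDemon and 'sigma' in Stan
x = np.linspace(0, 20, 200)

half_pdf = halfcauchy(scale=scale).pdf(x)
doubled_cauchy_pdf = 2 * cauchy(loc=0, scale=scale).pdf(x)

# The two densities agree everywhere on x >= 0.
assert np.allclose(half_pdf, doubled_cauchy_pdf)
```

So yes: ‘sigma’ in Stan’s `cauchy(mu, sigma)` is the same quantity as ‘scale’ in the half-Cauchy, and mu is set to 0.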

Thank you for your response @mike-lawrence

One further thought: When you bound the distribution at 0, does it just ignore the values that would have come below zero - so it would literally be like chopping off half the distribution?

When you declare a parameter as bounded, Stan will, behind the scenes, apply a transform that produces only proposals respecting that bound. Similarly, when a prior or likelihood involves a bounded variable, Stan renormalizes the distribution so that the area between the bounds integrates to 1. So, for a positive-bounded parameter it will only generate positive proposals, and when you use a cauchy(0, x) prior it automatically becomes a half-Cauchy centered at zero.
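Both pieces of that explanation can be sketched numerically (an illustration in Python, not from the thread; the transform shown is the log/exp transform Stan uses for lower-bounded parameters):

```python
# Two parts of the mechanism:
# 1) sampling happens on an unconstrained scale and exp() maps any real
#    proposal to a positive value, so every proposal respects the bound;
# 2) the density restricted to [0, inf) is rescaled so its area is 1.
import numpy as np
from scipy.stats import cauchy
from scipy.integrate import quad

unconstrained = np.array([-3.0, 0.0, 2.0])  # arbitrary real-valued proposals
constrained = np.exp(unconstrained)         # always positive after the transform
assert np.all(constrained > 0)

# A Cauchy(0, s) has exactly half its mass above 0...
s = 1.0
area, _ = quad(lambda x: cauchy(0, s).pdf(x), 0, np.inf)
assert np.isclose(area, 0.5)

# ...so dividing the density by that mass (i.e. doubling it) makes the
# truncated distribution integrate to 1, which is the half-Cauchy.
area_half, _ = quad(lambda x: cauchy(0, s).pdf(x) / area, 0, np.inf)
assert np.isclose(area_half, 1.0)
```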

Thanks - I’m just trying to picture this visually to check I understand the normalisation. If you imagine a density plot of the distribution, is it just as if you have covered up the half of the distribution below 0? So the renormalisation simply means the right-hand side above 0 is sampled twice as much as when the bound is not there? Or is the shape of the right-hand side above 0 changed in some way by the renormalisation?

For the case of a boundary at zero and a symmetric distribution centered at zero, yes. In the general case, the renormalization simply multiplies the portion of the density inside the bounds by whatever constant makes the area inside the bounds equal to 1.
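The general case can be checked with an asymmetric example (hypothetical, not from the thread): a Normal(1, 1) truncated below at 0, where the mass above the bound is not 0.5 and the constant is therefore not 2.

```python
# General renormalization: the constant is 1 / (mass inside the bounds).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma, lower = 1.0, 1.0, 0.0          # hypothetical truncated-normal example
mass_inside = 1 - norm(mu, sigma).cdf(lower)  # P(X > 0); about 0.84, not 0.5
const = 1 / mass_inside

# Multiplying the density inside the bounds by this constant makes it integrate to 1.
area, _ = quad(lambda x: const * norm(mu, sigma).pdf(x), lower, np.inf)
assert np.isclose(area, 1.0)

# For the symmetric Cauchy(0, s) bounded at 0, the mass above the bound is
# exactly 0.5, so the constant is 2 -- the "sampled twice as much" intuition.
```

Note the shape of the density above 0 is unchanged; only its height is scaled by the constant.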

Great - thank you!