How to make the regularized horseshoe prior asymmetric around the mean?

I’m doing Bayesian sparse regression. In my dataset, most of the \beta are distributed around, say, 60, and all ‘significant’ \beta are in the range 10-20. I think the regularized horseshoe prior is suitable for my problem.
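For reference, the parameterization I have in mind is the regularized horseshoe of Piironen & Vehtari (2017), written here with an explicit location \mu (60 in my case); the standard form has \mu = 0:

$$
\beta_j \mid \lambda_j, \tau, c \sim \mathrm{N}\!\left(\mu,\; \tau^2 \tilde{\lambda}_j^2\right), \qquad
\tilde{\lambda}_j^2 = \frac{c^2 \lambda_j^2}{c^2 + \tau^2 \lambda_j^2},
$$

$$
\lambda_j \sim \mathrm{C}^{+}(0, 1), \qquad
c^2 \sim \text{Inv-Gamma}\!\left(\frac{\nu}{2}, \frac{\nu s^2}{2}\right),
$$

where \nu and s are the slab degrees of freedom and slab scale.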

As depicted in the plot of the slopes here, I think the horseshoe prior is designed to place those ‘significant’ \beta on both sides of the mean (60 here). I’ve verified this by drawing samples from the unconditional prior and plotting a histogram of \beta.
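The check I did was roughly like the following NumPy sketch; the hyperparameter values (tau0, \nu, s, D) are illustrative assumptions, not the ones from my model:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Illustrative hyperparameters (assumptions, not my actual model settings)
mu = 60.0   # location the prior is centered on
tau0 = 1.0  # global scale (fixed here for simplicity; often given a half-Cauchy prior)
nu = 4.0    # slab degrees of freedom
s = 2.0     # slab scale
D = 1000    # number of coefficients to draw

# Half-Cauchy(0, 1) local scales
lam = np.abs(rng.standard_cauchy(D))
tau = tau0

# Slab width c^2 ~ Inv-Gamma(nu/2, nu*s^2/2)
c2 = 1.0 / rng.gamma(shape=nu / 2.0, scale=2.0 / (nu * s**2))

# Regularized local scales
lam_tilde2 = c2 * lam**2 / (c2 + tau**2 * lam**2)

# Prior draws for beta, centered at mu
beta = rng.normal(loc=mu, scale=tau * np.sqrt(lam_tilde2))

plt.hist(beta, bins=100)
plt.xlabel("beta")
plt.show()
```

The histogram of these draws is symmetric around \mu, which is what I was pointing out above.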

Questions:

  1. How can I make the regularized horseshoe prior work for my case?
  2. How should I choose the value of the slab degrees of freedom \nu? Is it related to the size of the input matrix, or to m_{eff} / p_0?

EDIT:

What I’ve tried so far

  1. I used a Laplace prior for \beta centered at 60. It worked well in some cases, but in others the estimates varied from run to run because of multimodality in the marginal distributions of \beta.
  2. I used a mixture prior with one Gaussian centered at 60 and another at 20, weighted by the expected ‘sparsity’ divided by the total length of the coefficient vector (see the sketch after this list). This improved the results a bit.
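The mixture prior in point 2 was roughly the following NumPy sketch (prior draws only; the weight and the component standard deviations are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

D = 1000    # length of the coefficient vector
p0 = 50     # expected number of 'significant' coefficients (assumption)
w = p0 / D  # mixture weight for the component near 20

# Component assignment, then a draw from the corresponding Gaussian
z = rng.random(D) < w
beta = np.where(
    z,
    rng.normal(loc=20.0, scale=5.0, size=D),  # 'significant' component
    rng.normal(loc=60.0, scale=5.0, size=D),  # bulk component around 60
)
```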

I still believe the horseshoe prior could be helpful for my problem.

@avehtari

If you have that much information, then a mixture model with informative priors on the location parameters would represent that information better. It would also allow a separate variation parameter for the betas near 60 and for the betas in the range 10-20. With the horseshoe it would be more difficult to control the variation around 60, and the horseshoe would not shrink the betas near 10-20 to that range.
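For concreteness, one way such a mixture prior could be structured is sketched below (a single prior draw in NumPy; all the specific prior choices are placeholder assumptions, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(2)

D = 1000  # number of coefficients

# Informative priors on the two component locations (placeholder values)
mu_bulk = rng.normal(60.0, 2.0)  # location of the bulk of the betas
mu_sig = rng.normal(15.0, 3.0)   # location of the 'significant' betas

# Separate variation parameters for the two groups (half-normal scales)
sigma_bulk = np.abs(rng.normal(0.0, 2.0))
sigma_sig = np.abs(rng.normal(0.0, 5.0))

# Prior on the proportion of 'significant' betas
w = rng.beta(2.0, 20.0)

# One prior draw of the coefficient vector
z = rng.random(D) < w
beta = np.where(
    z,
    rng.normal(mu_sig, sigma_sig, size=D),
    rng.normal(mu_bulk, sigma_bulk, size=D),
)
```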


Thank you for the reply! Can you suggest any reading material or pointers on how I should go about choosing a good prior for sparse regression? Is there a systematic way to choose one?

I’m new to Bayesian methods, so I’d be very grateful for any help.