I’m doing Bayesian sparse regression. In my dataset, most of the \beta cluster around roughly 60, while all the ‘significant’ \beta lie in the range 10–20. I think the regularized horseshoe prior is suitable for my problem.
As depicted in the plot of the slopes, I believe the horseshoe prior is designed to place the ‘significant’ \beta on both sides of the mean (60 here). I’ve verified this by drawing samples from the unconditional prior and plotting a histogram of \beta.
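For reference, this is a minimal NumPy sketch of how one can draw unconditional prior samples from a regularized horseshoe shifted to a nonzero centre (60 here). The hyperparameter values (`tau0`, `slab_scale`, `slab_df`, and the dimensions) are illustrative assumptions, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_shifted_reg_horseshoe(n_draws, D, mu=60.0, tau0=0.1,
                                 slab_scale=25.0, slab_df=4.0):
    """Draw prior samples of beta from a regularized horseshoe
    whose 'spike' is shifted from 0 to mu.

    tau0, slab_scale, slab_df are illustrative values only.
    """
    # global shrinkage: tau ~ half-Cauchy(0, tau0)
    tau = np.abs(tau0 * rng.standard_cauchy(size=(n_draws, 1)))
    # local shrinkage: lambda_j ~ half-Cauchy(0, 1)
    lam = np.abs(rng.standard_cauchy(size=(n_draws, D)))
    # slab width: c^2 ~ Inv-Gamma(nu/2, nu * s^2 / 2)
    c2 = 1.0 / rng.gamma(slab_df / 2.0,
                         2.0 / (slab_df * slab_scale**2),
                         size=(n_draws, 1))
    # regularized local scales (lambda-tilde)
    lam_tilde = np.sqrt(c2 * lam**2 / (c2 + tau**2 * lam**2))
    z = rng.standard_normal(size=(n_draws, D))
    # the location shift mu moves the bulk of the prior mass to 60
    return mu + tau * lam_tilde * z

beta = sample_shifted_reg_horseshoe(n_draws=4000, D=30)
# A histogram of beta shows most mass tightly around 60, with heavy
# tails that let a few coefficients escape toward e.g. 10-20.
```

The same location shift works in a full model: keep the standard horseshoe hierarchy and add a constant (or a weakly informative prior on the shift) to \beta.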
Questions:
- How can I make the regularized horseshoe prior work for my case?
- How should I choose the degrees of freedom \nu? Is it related to the size of the input matrix, or to m_{eff} or p0?
EDIT:
What I’ve tried so far
- I used a Laplace prior for \beta with mean 60. It worked well in some cases, but in others it produced unstable estimates because the marginal distributions of \beta were multimodal.
- I used a mixture model with one Gaussian centred at 60 and another at 20, weighted by expected ‘sparsity’ / total length of the vector. This improved the results somewhat.
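To make the second attempt concrete, here is a sketch of drawing from such a two-component mixture prior. The dimensions, the expected number of ‘significant’ coefficients `p0`, and the component scales are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

D = 30          # length of the coefficient vector (assumed)
p0 = 5          # expected number of 'significant' coefficients (assumed)
w = p0 / D      # mixture weight = expected 'sparsity' / vector length

# Two-component Gaussian mixture prior: bulk at 60, signals near 20.
# The scales 2.0 and 5.0 are illustrative, not from the original post.
component = rng.random(size=(4000, D)) < w
beta = np.where(component,
                rng.normal(20.0, 5.0, size=(4000, D)),   # 'signal' component
                rng.normal(60.0, 2.0, size=(4000, D)))   # bulk component
```

A mixture like this makes the sparsity assumption explicit through `w`, but unlike the horseshoe it fixes the signal location in advance, which may be why it only helped a bit.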
I still believe the horseshoe prior will help with my problem.