I normally use glmnet for variable selection (tutorial here). The brms documentation says that there is a lasso function, but I am struggling to get a working example. I get the following error:
Error: Defining priors for single population-level parameters is not allowed when using horseshoe or lasso priors (except for the Intercept).
Could someone show a simple working example of variable selection using the lasso with brms?
Please provide the code you want to get working. Also, I suggest using the horseshoe prior rather than the lasso, since the former provides much better shrinkage.
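For illustration, a minimal sketch (with placeholder simulated data, not the poster's actual code) of how the lasso prior can be set for the whole class of coefficients, which is what avoids the error quoted above:

```r
library(brms)

# Placeholder simulated data: 3 of 10 predictors are truly non-zero.
set.seed(1)
n <- 200
X <- matrix(rnorm(n * 10), n, 10)
y <- as.numeric(X[, 1:3] %*% c(2, -1.5, 1) + rnorm(n))
dat <- data.frame(y = y, X)   # columns y, X1, ..., X10

# The lasso prior is set once for the whole class "b" of
# population-level effects; adding separate priors for individual
# coefficients is what triggers the error quoted above.
fit_lasso <- brm(
  y ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 + X10,
  data = dat,
  prior = set_prior(lasso(df = 1, scale = 1)),
  chains = 2
)
summary(fit_lasso)
```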
How can I be more or less aggressive in setting coefficients equal to 0? I was assuming that df was the argument controlling this, but maybe I am wrong. None of the models I have fit have covariates set to 0.
That’s because you are in a Bayesian framework: there is no absolute shrinkage to zero. See the paper on the Bayesian lasso cited in the documentation of ?lasso. In fact, the lasso is a rather poor shrinkage prior; I suggest using the horseshoe prior instead.
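To see this concretely, one can inspect the coefficient posteriors of the sketched fit above:

```r
# Posterior summaries of the population-level effects: estimates for
# the irrelevant predictors concentrate near zero, but no estimate
# is ever exactly zero.
fixef(fit_lasso)
```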
After glancing at the paper, it seems the Bayesian lasso is a compromise between the lasso and ridge regression, but as you mentioned the coefficients don’t shrink exactly to 0. The paper also uses the double-exponential (Laplace) distribution as the prior. Here is the model with the horseshoe prior.
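A sketch of what such a horseshoe specification might look like, reusing the placeholder data from the earlier sketch:

```r
library(brms)

# Same placeholder data as above; only the prior changes. par_ratio
# encodes the expected ratio of non-zero to zero coefficients (here
# 3 out of 10), which mainly controls how aggressive the shrinkage is.
fit_hs <- brm(
  y ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 + X10,
  data = dat,
  prior = set_prior(horseshoe(df = 1, par_ratio = 3 / 7)),
  control = list(adapt_delta = 0.99),  # the horseshoe often needs this
  chains = 2
)
summary(fit_hs)
```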
What is the justification for the horseshoe prior?
Also, is it true that the smaller the df, the more regularization, with df = 1 being the most regularized?
I don’t think so. I would say that the regularization is driven mostly by the expected number of non-zero coefficients (the par_ratio argument of horseshoe()). Even so, you are not going to obtain exact zeros, although you can use the ideas in the projpred package to obtain a model with fewer coefficients that is expected to predict future data about as well.
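As a hedged illustration of that projpred workflow, applied to the horseshoe fit sketched above (function names follow projpred's documented interface):

```r
library(projpred)

# Projection predictive variable selection on the reference model
# fit_hs from the earlier sketch.
vs <- cv_varsel(fit_hs)
plot(vs, stats = "elpd")            # predictive performance vs. submodel size
nsel <- suggest_size(vs)            # heuristic choice of submodel size
solution_terms(vs)[seq_len(nsel)]   # the predictors selected at that size
```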