Beta regression using LASSO prior (Park, Casella) - Initial values rejected

*** has been resolved - working code below ***

Dear all,

I am currently trying to sample from the posterior of a model with a beta likelihood:
f(y_i | a, b) = \frac{y_i^{(a-1)}(1-y_i)^{(b-1)}}{B(a,b)}
a = \mu\cdot\phi
b = (1-\mu)\cdot\phi
where \mu is mapped through an inverse-logit link so that it is constrained to (0, 1).

I treat \phi as a scalar parameter, since even this simpler case is already not working.
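To illustrate the parameterization (a standalone R sketch with made-up values, not part of the model code):

mu  <- plogis(0.3)      # inverse-logit link keeps the mean in (0, 1)
phi <- exp(1.2)         # exp() keeps the precision positive
a   <- mu * phi
b   <- (1 - mu) * phi
y_sim <- rbeta(1000, a, b)
mean(y_sim)             # should be close to mu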

I also want to include some regularization priors on beta. Below you can see my attempt at the Bayesian LASSO (BLASSO) of Park & Casella. I have only recently started with Bayesian methods, so I am not very experienced with full posterior densities and so on.
Since the likelihood is beta, there is no \sigma^2 in my full conditional for beta, so I excluded it…
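Concretely, what my code below implements is the following simplified hierarchy (note this is not exactly Park & Casella, where the exponential prior is on \tau_j^2 and the normal scale is \sigma^2\tau_j^2):
\beta_j \mid \tau_j \sim \mathcal{N}(0, \tau_j^2), with \tau_j as the standard deviation (Stan's normal is parameterized by the standard deviation)
\tau_j \mid \lambda \sim \text{Exponential}(\lambda^2 / 2)
\lambda \sim \text{Gamma}(r, d)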

The issue:

The initial values get rejected once the matrix X is large.
If I run the code with just 40 variables, I usually get posterior samples that make sense.
Once the matrix gets larger (say 70 variables), the initial values are constantly rejected.

I have been trying to fix this for a very long time. I believe the constraints on the data and parameters are all correct.
I tested 500 different seeds.
I printed the mu and phi values and they look proper. Yet it still fails.
I hope you have an idea and can help me out…
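By "the constraints are all correct" I mean checks along these lines (just a sketch, using the variable names from the code below):

stopifnot(all(y > 0 & y < 1))   # the beta likelihood needs y strictly inside (0, 1)
stopifnot(!anyNA(y), !anyNA(x)) # no missing values in the response or predictors
summary(apply(x, 2, sd))        # rough look at the scale of the predictors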

test.csv (460.9 KB)

Working code:

# Data
library(rstan)

df_data <- read.csv( ... data attached in the post ...)
y <- df_data[, 1]
x <- as.matrix(df_data[, -1])
# Intercept-only Z, so phi is a single scalar precision as described above
dat <- list(N = length(y), M = ncol(x), J = 1,
            y = y, X = x, Z = matrix(1, nrow = length(y), ncol = 1))


write("// Stan model for beta LASSO Regression
      data {
  int<lower=1> N;
  int<lower=1> M;
  int<lower=1> J;
  vector<lower=0,upper=1>[N] y;
  matrix[N,M] X;
  matrix[N,J] Z;
}

parameters {
  vector[M] beta;
  vector < lower = 0 > [M] tau;
  vector[J] gamma;
  real alpha;
  real < lower = 0 > lambda;
  real < lower = 0 > sigma;
}

transformed parameters{
  vector < lower = 0, upper = 1 >[N] mu; // transformed linear predictor for mean of beta distribution
  vector < lower = 0 >[N] phi;           // transformed linear predictor for precision of beta distribution
  vector < lower = 0 >[N] A;             // parameter for beta distn
  vector < lower = 0 >[N] B;             // parameter for beta distn
  
  // hyperprior for lambda
  real r = 1.5;
  real d = 20;
  
  for (i in 1:N) {
    mu[i]  = inv_logit(alpha + X[i,] * beta);   
    phi[i] = exp(Z[i,] * gamma);
  }
  
  A = mu .* phi;
  B = (1.0 - mu) .* phi;
}

model {
  // priors
  lambda ~ gamma(r, d);
  tau ~ exponential(lambda^2 / 2);

  beta ~ normal(0, tau);
  
  gamma ~ normal(0,2);
  alpha ~ normal(0,2);

  // likelihood
  y ~ beta(A, B);
}

      
      generated quantities{
        vector<lower=0,upper=1>[N] y_rep;        
        
        for (n in 1:N) {
          y_rep[n] = beta_rng(A[n], B[n]);
        }
}
// The posterior predictive distribution",

"betaBLASSO.stan") # 


fit <- stan(file = 'betaBLASSO.stan',
            data = dat, seed = 3,
            warmup = 500, iter = 1000, chains = 1,
            control = list(adapt_delta = 0.99, max_treedepth = 12))

See, for example, here https://statmodeling.stat.columbia.edu/2017/11/02/king-must-die/ and here https://betanalpha.github.io/assets/case_studies/bayes_sparse_regression.html for why the Bayesian lasso is not such a good idea, even though the lasso and Bayes can each be good separately. I recommend the regularized (Finnish) horseshoe instead: https://projecteuclid.org/euclid.ejs/1513306866

For changing the initialization, see the init and init_r options.
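For example, a smaller init_r narrows the range of the random initial values (by default they are drawn uniformly from (-2, 2) on the unconstrained scale); a sketch:

fit <- stan(file = 'betaBLASSO.stan', data = dat,
            warmup = 500, iter = 1000, chains = 1,
            init_r = 0.5)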


Thanks for your response.
I will try the (regularized) horseshoe as well.
However, my problem is not about which regularization method is better, but about the initialization of the chain(s)…

Best

See the second part of my answer:

I did change init_r to several different values, without success.

Can you try with init? You need to check that A and B get reasonable values. You can set each parameter with init, then calculate the corresponding A and B and check that they are reasonable.
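For example, something along these lines (a sketch; the specific starting values are only illustrative):

# one explicit set of initial values for a single chain
init1 <- list(alpha  = qlogis(mean(dat$y)),  # start the intercept at the logit of the mean response
              beta   = rep(0, dat$M),
              gamma  = rep(0, dat$J),
              tau    = rep(1, dat$M),
              lambda = 1)

# compute A and B at these initial values before sampling
mu0  <- plogis(init1$alpha + dat$X %*% init1$beta)
phi0 <- exp(dat$Z %*% init1$gamma)
range(mu0 * phi0)        # A at the initial values
range((1 - mu0) * phi0)  # B at the initial values

fit <- stan(file = 'betaBLASSO.stan', data = dat, chains = 1,
            warmup = 500, iter = 1000, init = list(init1))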

I started from scratch and realized that I hadn't chosen the best priors.
I also changed init to 0. It is now running more or less smoothly.
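That is, the same call as above but with init = 0, so every parameter starts at zero on the unconstrained scale:

fit <- stan(file = 'betaBLASSO.stan', data = dat, seed = 3,
            warmup = 500, iter = 1000, chains = 1, init = 0,
            control = list(adapt_delta = 0.99, max_treedepth = 12))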

Although the beta likelihood makes sense, I will change it to a zero-inflated beta.
Thanks avehtari.

I have changed the code above to the working version.