How to extend the (regularized) horseshoe prior to a multiple-outcome setting?

Hi everyone,

We are trying to build a linear regression model with three outcomes, but we are having trouble incorporating the (regularized) horseshoe prior.

Incorporating the horseshoe prior in the univariate case works fine, as does the multiple-outcome setting without the horseshoe, but I’m struggling to bring the two together.

A very simple example with toy data would be the following:

set.seed(2409)
library(rstan)
library(dplyr)
library(MASS)

N <- 200 #number of observations
M <- 250 #number of predictors
M0 <- 75 #number of predictors with a nonzero (true) effect

#true values
intercept <- c(0.5, 0, -0.5)

beta <- list()
beta[[1]] <- c(rep(0.5, M0), rep(0, M-M0)) #true effects of predictors for y1
beta[[2]] <- c(rep(-0.5, M0), rep(0, M-M0)) #true effects of predictors for y2
beta[[3]] <- c(rep(0.5, M0), rep(0, M-M0)) #true effects of predictors for y3

sigma1 <- 0.2 #sigma for y1
sigma2 <- 0.5 #sigma for y2
sigma3 <- 0.7 #sigma for y3
rho <- 0.3 #correlation across the equations' errors

##this creates the errors with correlations across equations

eps <- mvrnorm(N, mu = c(0, 0, 0), Sigma = cbind(c(1, rho, rho), c(rho, 1, rho), c(rho, rho, 1)))
e1 <- eps[, 1] * sigma1
e2 <- eps[, 2] * sigma2
e3 <- eps[, 3] * sigma3

X <- matrix(rnorm(N*M), N, M) #predictor matrix

y1 <- intercept[1] + X %*% beta[[1]] + e1
y2 <- intercept[2] + X %*% beta[[2]] + e2
y3 <- intercept[3] + X %*% beta[[3]] + e3

Like I said, running the model for the univariate case is no problem. The model code is the following:

code <- "data {
  int <lower=1> N; // number of obs
  int <lower=1> M; // number of predictor variables
  matrix[N,M] X; // predictor matrix
  vector[N] y;//observed outcome
  //horseshoe prior
  real<lower=0> scale_global;
  real<lower=1> nu_global; 
  real<lower=1> nu_local;
  real<lower=0> slab_scale; 
  real<lower=1> slab_df;
}

parameters {
  
  //coefficients
  real alpha; //intercept
  vector[M] beta_hs_z; // effects of predictors (z scale)
  
  //horseshoe
  real<lower=0> aux1_global;
  real<lower=0> aux2_global;
  vector<lower=0>[M] aux1_local;
  vector<lower=0>[M] aux2_local;
  real<lower=0> caux;
  
  
  //scales
  real<lower=0,upper=pi()/2> sigma_y_unif; // scale for outcome (cauchy reparameterization)
}

transformed parameters {
  vector[M] beta_hs; // effects of predictor
  real<lower=0> sigma_y; // scale for outcome
  
  //horseshoe
  real<lower=0> tau;
  vector<lower=0>[M] lambda_hs;
  vector<lower=0>[M] lambda_hs_tilde;
  real<lower=0> c;
  
  sigma_y = 2.5 * tan(sigma_y_unif);
  tau = aux1_global * sqrt(aux2_global) * scale_global * sigma_y;
  c = slab_scale * sqrt(caux);
  lambda_hs = aux1_local .* sqrt(aux2_local);
  lambda_hs_tilde = sqrt(c^2 * square(lambda_hs) ./ (c^2 + tau^2 * square(lambda_hs)));
  beta_hs = beta_hs_z .* lambda_hs_tilde * tau;
  

}

model {
  vector[N] mu;
  
  //mu for model
  mu = alpha + X*beta_hs;

  //priors
  
  //coefficients
  alpha ~ normal(0, 1);
  beta_hs_z ~ normal(0, 1);

  
  //horseshoe
  aux1_local ~ normal(0, 1);
  aux2_local ~ inv_gamma(0.5 * nu_local, 0.5 * nu_local);
  aux1_global ~ normal(0, 1);
  aux2_global ~ inv_gamma(0.5 * nu_global, 0.5 * nu_global);
  caux ~ inv_gamma(0.5 * slab_df, 0.5 * slab_df);

  //scale
  //sigma_y_unif samples from a uniform distribution automatically 
  
  //likelihood
  y ~ normal(mu, sigma_y);
}"
model <- stan(model_code=code, 
              data = list(N = N,
                          M = M,
                          X = X,
                          y = as.numeric(y1),
                          scale_global = (M0)/((M-M0)*sqrt(N)),
                          nu_global = 1,
                          nu_local = 1,
                          slab_scale = 1,
                          slab_df = 2),
              chains = 10,
              cores = 10,
              iter = 10000,
              warmup = 1000,
              seed = 2409,
              control=list(adapt_delta=0.999, stepsize=0.001, max_treedepth=18))

The model (this example is for y_1) recovers the parameters pretty well.

Now I can extend the model to the multivariate case and can also use the horseshoe prior for one of the equations (e.g. y_1). To ensure identifiability for the “non-horseshoe equations” I set M to e.g. M = 100 and M_0 = 50.
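
Re-simulating the toy data for the smaller design just means re-running the simulation code from above with the new dimensions; a minimal sketch (reusing N, intercept, and the errors e1, e2, e3 from before):

M <- 100
M0 <- 50

beta <- list()
beta[[1]] <- c(rep(0.5, M0), rep(0, M-M0))
beta[[2]] <- c(rep(-0.5, M0), rep(0, M-M0))
beta[[3]] <- c(rep(0.5, M0), rep(0, M-M0))

X <- matrix(rnorm(N*M), N, M)

y1 <- intercept[1] + X %*% beta[[1]] + e1
y2 <- intercept[2] + X %*% beta[[2]] + e2
y3 <- intercept[3] + X %*% beta[[3]] + e3

The Stan code for this example is the following: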

code <- "data {
  int <lower=1> N; // number of obs
  int <lower=1> M; // number of predictors
  matrix[N,M] X;//predictor matrix
  vector[N] y1;//observed outcome
  vector[N] y2;//observed outcome
  vector[N] y3;//observed outcome
  //horseshoe prior
  real<lower=0> scale_global;
  real<lower=1> nu_global; 
  real<lower=1> nu_local;
  real<lower=0> slab_scale; 
  real<lower=1> slab_df;
}
transformed data {
  vector[3] y[N];
  
  for (n in 1:N) {
    y[n] = [y1[n], y2[n], y3[n]]';
  }
}
parameters {
  
  //coefficients
  real alpha[3];
  vector[M] beta_hs_y1_z;
  vector[M] beta_y2;
  vector[M] beta_y3;
  
  //horseshoe
  real<lower=0> aux1_global_y1;
  real<lower=0> aux2_global_y1;
  vector<lower=0>[M] aux1_local_y1;
  vector<lower=0>[M] aux2_local_y1;
  real<lower=0> caux_y1;
  
  //scales
  vector <lower=0, upper=pi()/2> [3] sigma_y_unif;
  cholesky_factor_corr[3] L_Omega;
}

transformed parameters {
  vector[M] beta_hs_y1;
  real<lower=0> tau_y1;
  vector<lower=0>[M] lambda_hs_y1;
  vector<lower=0>[M] lambda_hs_tilde_y1;
  real<lower=0> c_y1;
  vector <lower=0> [3] sigma_y;
  
  sigma_y = 2.5 * tan(sigma_y_unif);
  
  //horseshoe
  tau_y1 = aux1_global_y1 * sqrt(aux2_global_y1) * scale_global * sigma_y[1];
  c_y1 = slab_scale * sqrt(caux_y1);
  lambda_hs_y1 = aux1_local_y1 .* sqrt(aux2_local_y1);
  lambda_hs_tilde_y1 = sqrt(c_y1^2 * square(lambda_hs_y1) ./ (c_y1^2 + tau_y1^2 * square(lambda_hs_y1)));
  beta_hs_y1 = beta_hs_y1_z .* lambda_hs_tilde_y1 * tau_y1;
}

model {
  vector[N] mu1;
  vector[N] mu2;
  vector[N] mu3;
  matrix[3, 3] LSigma;
  vector[3] mu[N];
  
  mu1 = alpha[1] + X * beta_hs_y1; 
  mu2 = alpha[2] + X * beta_y2; 
  mu3 = alpha[3] + X * beta_y3;  

  for (n in 1:N) {
    mu[n] = [mu1[n], mu2[n], mu3[n]]';
  }
  
  LSigma = diag_pre_multiply(sigma_y, L_Omega);

  //priors
  
  //coefficients
  alpha ~ normal(0, 1);
  
  beta_hs_y1_z ~ normal(0, 1);
  beta_y2 ~ normal(0, 1);
  beta_y3 ~ normal(0, 1);
  
  //horseshoe
  aux1_local_y1 ~ normal(0, 1);
  aux2_local_y1 ~ inv_gamma(0.5 * nu_local, 0.5 * nu_local);
  aux1_global_y1 ~ normal(0, 1);
  aux2_global_y1 ~ inv_gamma(0.5 * nu_global, 0.5 * nu_global);
  caux_y1 ~ inv_gamma(0.5 * slab_df, 0.5 * slab_df);
  
  //scale
  L_Omega ~ lkj_corr_cholesky(3);
  
  //likelihood
  y ~ multi_normal_cholesky(mu, LSigma);
}
generated quantities {
  corr_matrix[3] Residual_cors = multiply_lower_tri_self_transpose(L_Omega);
  vector<lower=-1,upper=1>[3] residual_cors;

  for (l in 1:3) {
    for (j in 1:(l - 1)) {
      residual_cors[choose(l - 1, 2) + j] = Residual_cors[j, l];
    }
  }
}"
model <- stan(model_code=code, 
              data = list(N = N,
                          M = M,
                          X = X,
                          y1 = as.numeric(y1),
                          y2 = as.numeric(y2),
                          y3 = as.numeric(y3),
                          scale_global = (M0)/((M-M0)*sqrt(N)),
                          nu_global = 1,
                          nu_local = 1,
                          slab_scale = 1,
                          slab_df = 2),
              chains = 4,
              cores = 4,
              iter = 2000,
              warmup = 1000,
              control=list(adapt_delta=0.999, stepsize=0.001, max_treedepth=18))

The results are as expected:

4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

                    mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
alpha[1]            0.51       0 0.02  0.47  0.49  0.51  0.52  0.54  5749    1
alpha[2]           -0.05       0 0.04 -0.13 -0.08 -0.05 -0.02  0.03  3747    1
alpha[3]           -0.51       0 0.07 -0.65 -0.55 -0.51 -0.46 -0.38  3629    1
beta_hs_y1[1]       0.49       0 0.02  0.45  0.48  0.49  0.50  0.53  3787    1
beta_hs_y1[2]       0.47       0 0.02  0.44  0.46  0.47  0.49  0.51  3890    1
beta_hs_y1[3]       0.47       0 0.02  0.43  0.45  0.47  0.48  0.51  3676    1
beta_hs_y1[4]       0.49       0 0.02  0.46  0.48  0.49  0.51  0.53  3901    1
beta_hs_y1[5]       0.50       0 0.02  0.46  0.49  0.50  0.51  0.54  3739    1
beta_hs_y1[6]       0.50       0 0.02  0.46  0.49  0.50  0.51  0.54  3890    1
beta_hs_y1[7]       0.49       0 0.02  0.45  0.48  0.49  0.51  0.53  3266    1
beta_hs_y1[8]       0.51       0 0.02  0.47  0.49  0.51  0.52  0.54  3609    1
beta_hs_y1[9]       0.47       0 0.02  0.44  0.46  0.47  0.48  0.50  3844    1
beta_hs_y1[10]      0.51       0 0.02  0.48  0.50  0.51  0.53  0.55  4131    1
...
beta_hs_y1[40]      0.49       0 0.02  0.45  0.48  0.49  0.50  0.52  3547    1
beta_hs_y1[41]      0.52       0 0.02  0.49  0.51  0.52  0.54  0.56  3714    1
beta_hs_y1[42]      0.48       0 0.02  0.44  0.47  0.49  0.50  0.53  3869    1
beta_hs_y1[43]      0.48       0 0.02  0.45  0.47  0.48  0.49  0.51  3924    1
beta_hs_y1[44]      0.53       0 0.02  0.49  0.52  0.53  0.55  0.58  3755    1
beta_hs_y1[45]      0.48       0 0.02  0.45  0.47  0.48  0.49  0.52  4092    1
beta_hs_y1[46]      0.51       0 0.02  0.48  0.50  0.51  0.52  0.55  3836    1
beta_hs_y1[47]      0.48       0 0.02  0.45  0.47  0.49  0.50  0.52  4223    1
beta_hs_y1[48]      0.47       0 0.02  0.43  0.45  0.47  0.48  0.50  3903    1
beta_hs_y1[49]      0.50       0 0.02  0.46  0.49  0.50  0.51  0.54  3950    1
beta_hs_y1[50]      0.49       0 0.02  0.45  0.48  0.49  0.50  0.52  4004    1
beta_hs_y1[51]     -0.01       0 0.02 -0.05 -0.02 -0.01  0.00  0.02  3350    1
beta_hs_y1[52]      0.01       0 0.02 -0.02  0.00  0.01  0.02  0.04  3761    1
beta_hs_y1[53]      0.00       0 0.02 -0.03 -0.01  0.00  0.01  0.04  4502    1
beta_hs_y1[54]      0.00       0 0.02 -0.04 -0.01  0.00  0.01  0.03  3858    1
beta_hs_y1[55]      0.03       0 0.02  0.00  0.02  0.03  0.05  0.07  3582    1
beta_hs_y1[56]     -0.01       0 0.02 -0.05 -0.02 -0.01  0.00  0.01  2785    1
beta_hs_y1[57]     -0.02       0 0.02 -0.05 -0.03 -0.01  0.00  0.02  3164    1
beta_hs_y1[58]     -0.01       0 0.02 -0.05 -0.03 -0.01  0.00  0.02  3146    1
beta_hs_y1[59]      0.01       0 0.02 -0.02  0.00  0.01  0.02  0.04  3804    1
beta_hs_y1[60]      0.01       0 0.02 -0.02  0.00  0.01  0.02  0.05  3294    1
...
beta_y2[1]         -0.53       0 0.05 -0.62 -0.56 -0.53 -0.50 -0.44  3329    1
beta_y2[2]         -0.55       0 0.04 -0.64 -0.58 -0.55 -0.53 -0.47  5060    1
beta_y2[3]         -0.47       0 0.05 -0.57 -0.51 -0.47 -0.44 -0.37  2970    1
beta_y2[4]         -0.54       0 0.04 -0.63 -0.57 -0.54 -0.51 -0.45  3341    1
beta_y2[5]         -0.54       0 0.05 -0.64 -0.57 -0.54 -0.51 -0.45  3053    1
beta_y2[6]         -0.46       0 0.04 -0.54 -0.49 -0.46 -0.43 -0.37  4170    1
beta_y2[7]         -0.49       0 0.05 -0.59 -0.52 -0.49 -0.46 -0.39  2994    1
beta_y2[8]         -0.43       0 0.04 -0.51 -0.46 -0.43 -0.40 -0.35  4260    1
beta_y2[9]         -0.54       0 0.04 -0.62 -0.57 -0.54 -0.52 -0.47  4219    1
beta_y2[10]        -0.50       0 0.05 -0.60 -0.53 -0.50 -0.47 -0.40  3654    1
...
beta_y2[40]        -0.51       0 0.04 -0.60 -0.54 -0.51 -0.48 -0.43  3744    1
beta_y2[41]        -0.49       0 0.05 -0.57 -0.52 -0.49 -0.46 -0.39  3964    1
beta_y2[42]        -0.50       0 0.05 -0.60 -0.53 -0.50 -0.46 -0.40  2723    1
beta_y2[43]        -0.59       0 0.04 -0.67 -0.61 -0.59 -0.56 -0.51  3365    1
beta_y2[44]        -0.49       0 0.05 -0.59 -0.53 -0.49 -0.46 -0.39  3382    1
beta_y2[45]        -0.50       0 0.04 -0.58 -0.53 -0.50 -0.47 -0.41  3724    1
beta_y2[46]        -0.48       0 0.04 -0.56 -0.51 -0.48 -0.45 -0.40  4039    1
beta_y2[47]        -0.52       0 0.04 -0.61 -0.55 -0.52 -0.49 -0.44  3820    1
beta_y2[48]        -0.49       0 0.04 -0.58 -0.52 -0.49 -0.46 -0.40  3484    1
beta_y2[49]        -0.47       0 0.05 -0.56 -0.50 -0.47 -0.44 -0.38  3300    1
beta_y2[50]        -0.42       0 0.04 -0.50 -0.45 -0.42 -0.40 -0.34  3483    1
beta_y2[51]        -0.07       0 0.04 -0.16 -0.10 -0.07 -0.04  0.02  3356    1
beta_y2[52]         0.02       0 0.04 -0.06 -0.01  0.02  0.05  0.11  3202    1
beta_y2[53]         0.09       0 0.04  0.01  0.06  0.09  0.12  0.18  3520    1
beta_y2[54]         0.02       0 0.04 -0.07 -0.01  0.02  0.05  0.11  3269    1
beta_y2[55]         0.04       0 0.04 -0.04  0.01  0.04  0.07  0.12  3730    1
beta_y2[56]         0.02       0 0.04 -0.05  0.00  0.02  0.05  0.10  3695    1
beta_y2[57]        -0.06       0 0.05 -0.15 -0.09 -0.06 -0.03  0.03  2860    1
beta_y2[58]        -0.03       0 0.04 -0.11 -0.05 -0.03  0.00  0.06  4485    1
beta_y2[59]        -0.05       0 0.05 -0.14 -0.08 -0.05 -0.02  0.04  3897    1
beta_y2[60]         0.01       0 0.04 -0.08 -0.02  0.01  0.04  0.10  3690    1
...
beta_y3[1]          0.51       0 0.07  0.36  0.46  0.51  0.56  0.65  3181    1
beta_y3[2]          0.53       0 0.07  0.39  0.48  0.53  0.58  0.66  5689    1
beta_y3[3]          0.58       0 0.09  0.41  0.52  0.58  0.64  0.75  3351    1
beta_y3[4]          0.56       0 0.07  0.42  0.51  0.56  0.61  0.70  3376    1
beta_y3[5]          0.51       0 0.08  0.36  0.46  0.51  0.56  0.66  2536    1
beta_y3[6]          0.47       0 0.07  0.33  0.42  0.47  0.52  0.61  3779    1
beta_y3[7]          0.50       0 0.08  0.35  0.45  0.50  0.56  0.66  3133    1
beta_y3[8]          0.50       0 0.07  0.37  0.46  0.50  0.55  0.64  4461    1
beta_y3[9]          0.51       0 0.07  0.38  0.47  0.51  0.56  0.64  4831    1
beta_y3[10]         0.50       0 0.08  0.35  0.45  0.50  0.56  0.66  3576    1
...
beta_y3[40]         0.44       0 0.07  0.30  0.39  0.44  0.48  0.57  3483    1
beta_y3[41]         0.54       0 0.07  0.40  0.49  0.54  0.59  0.69  3832    1
beta_y3[42]         0.49       0 0.08  0.32  0.43  0.49  0.55  0.65  3437    1
beta_y3[43]         0.43       0 0.07  0.30  0.38  0.43  0.47  0.57  3612    1
beta_y3[44]         0.54       0 0.08  0.37  0.48  0.53  0.59  0.70  3141    1
beta_y3[45]         0.47       0 0.07  0.33  0.42  0.47  0.52  0.61  3852    1
beta_y3[46]         0.57       0 0.07  0.43  0.52  0.57  0.62  0.70  4059    1
beta_y3[47]         0.43       0 0.07  0.29  0.39  0.43  0.48  0.58  3949    1
beta_y3[48]         0.53       0 0.07  0.38  0.48  0.52  0.57  0.67  3965    1
beta_y3[49]         0.46       0 0.08  0.30  0.40  0.46  0.51  0.61  3595    1
beta_y3[50]         0.47       0 0.07  0.34  0.42  0.47  0.52  0.61  3555    1
beta_y3[51]        -0.10       0 0.07 -0.25 -0.15 -0.10 -0.05  0.04  3497    1
beta_y3[52]        -0.05       0 0.07 -0.19 -0.09 -0.04  0.00  0.10  3083    1
beta_y3[53]         0.08       0 0.07 -0.06  0.03  0.08  0.13  0.21  3893    1
beta_y3[54]        -0.01       0 0.07 -0.15 -0.06 -0.01  0.04  0.14  3637    1
beta_y3[55]         0.09       0 0.07 -0.05  0.04  0.09  0.13  0.22  3642    1
beta_y3[56]        -0.01       0 0.06 -0.14 -0.05 -0.01  0.04  0.12  4340    1
beta_y3[57]        -0.12       0 0.08 -0.26 -0.17 -0.12 -0.07  0.03  3398    1
beta_y3[58]        -0.16       0 0.07 -0.31 -0.21 -0.16 -0.12 -0.03  3908    1
beta_y3[59]         0.03       0 0.07 -0.11 -0.02  0.03  0.08  0.17  4774    1
beta_y3[60]         0.07       0 0.07 -0.07  0.02  0.07  0.12  0.21  3415    1
...
sigma_y[1]          0.19       0 0.01  0.17  0.18  0.19  0.20  0.22  3111    1
sigma_y[2]          0.44       0 0.03  0.39  0.42  0.44  0.46  0.51  2130    1
sigma_y[3]          0.72       0 0.05  0.63  0.69  0.72  0.76  0.84  1968    1
Residual_cors[1,1]  1.00     NaN 0.00  1.00  1.00  1.00  1.00  1.00   NaN  NaN
Residual_cors[1,2]  0.14       0 0.09 -0.04  0.08  0.14  0.20  0.32  5964    1
Residual_cors[1,3]  0.16       0 0.09 -0.03  0.09  0.16  0.22  0.33  5585    1
Residual_cors[2,1]  0.14       0 0.09 -0.04  0.08  0.14  0.20  0.32  5964    1
Residual_cors[2,2]  1.00       0 0.00  1.00  1.00  1.00  1.00  1.00  3994    1
Residual_cors[2,3]  0.09       0 0.10 -0.10  0.03  0.10  0.16  0.28  1778    1
Residual_cors[3,1]  0.16       0 0.09 -0.03  0.09  0.16  0.22  0.33  5585    1
Residual_cors[3,2]  0.09       0 0.10 -0.10  0.03  0.10  0.16  0.28  1778    1
Residual_cors[3,3]  1.00       0 0.00  1.00  1.00  1.00  1.00  1.00  3845    1
residual_cors[1]    0.14       0 0.09 -0.04  0.08  0.14  0.20  0.32  5964    1
residual_cors[2]    0.16       0 0.09 -0.03  0.09  0.16  0.22  0.33  5585    1
residual_cors[3]    0.09       0 0.10 -0.10  0.03  0.10  0.16  0.28  1778    1

Samples were drawn using NUTS(diag_e) at Tue Mar 24 10:56:25 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Now I naively believed that, in order to use the horseshoe prior in all equations, I would only have to copy and paste the y_1 code for the other equations:

code <- "data {
  int <lower=1> N; // number of obs
  int <lower=1> M; // number of predictors
  matrix[N,M] X;//predictor matrix
  vector[N] y1;//observed outcome
  vector[N] y2;//observed outcome
  vector[N] y3;//observed outcome
  //horseshoe prior
  real<lower=0> scale_global;
  real<lower=1> nu_global; 
  real<lower=1> nu_local;
  real<lower=0> slab_scale; 
  real<lower=1> slab_df;
}
transformed data {
  vector[3] y[N];
  
  for (n in 1:N) {
    y[n] = [y1[n], y2[n], y3[n]]';
  }
}
parameters {
  
  //coefficients
  real alpha[3];
  vector[M] beta_hs_y1_z;
  vector[M] beta_hs_y2_z;
  vector[M] beta_hs_y3_z;
  
  //horseshoe
  //y1
  real<lower=0> aux1_global_y1;
  real<lower=0> aux2_global_y1;
  vector<lower=0>[M] aux1_local_y1;
  vector<lower=0>[M] aux2_local_y1;
  real<lower=0> caux_y1;
  //y2
  real<lower=0> aux1_global_y2;
  real<lower=0> aux2_global_y2;
  vector<lower=0>[M] aux1_local_y2;
  vector<lower=0>[M] aux2_local_y2;
  real<lower=0> caux_y2;
  //y3
  real<lower=0> aux1_global_y3;
  real<lower=0> aux2_global_y3;
  vector<lower=0>[M] aux1_local_y3;
  vector<lower=0>[M] aux2_local_y3;
  real<lower=0> caux_y3;
  
  //scales
  vector <lower=0, upper=pi()/2> [3] sigma_y_unif;
  cholesky_factor_corr[3] L_Omega;
}

transformed parameters {
  vector[M] beta_hs_y1;
  real<lower=0> tau_y1;
  vector<lower=0>[M] lambda_hs_y1;
  vector<lower=0>[M] lambda_hs_tilde_y1;
  real<lower=0> c_y1;
  vector[M] beta_hs_y2;
  real<lower=0> tau_y2;
  vector<lower=0>[M] lambda_hs_y2;
  vector<lower=0>[M] lambda_hs_tilde_y2;
  real<lower=0> c_y2;
  vector[M] beta_hs_y3;
  real<lower=0> tau_y3;
  vector<lower=0>[M] lambda_hs_y3;
  vector<lower=0>[M] lambda_hs_tilde_y3;
  real<lower=0> c_y3;
  vector <lower=0> [3] sigma_y;
  
  sigma_y = 2.5 * tan(sigma_y_unif);
  
  //horseshoe
  //y1
  tau_y1 = aux1_global_y1 * sqrt(aux2_global_y1) * scale_global * sigma_y[1];
  c_y1 = slab_scale * sqrt(caux_y1);
  lambda_hs_y1 = aux1_local_y1 .* sqrt(aux2_local_y1);
  lambda_hs_tilde_y1 = sqrt(c_y1^2 * square(lambda_hs_y1) ./ (c_y1^2 + tau_y1^2 * square(lambda_hs_y1)));
  beta_hs_y1 = beta_hs_y1_z .* lambda_hs_tilde_y1 * tau_y1;
  //y2
  tau_y2 = aux1_global_y2 * sqrt(aux2_global_y2) * scale_global * sigma_y[2];
  c_y2 = slab_scale * sqrt(caux_y2);
  lambda_hs_y2 = aux1_local_y2 .* sqrt(aux2_local_y2);
  lambda_hs_tilde_y2 = sqrt(c_y2^2 * square(lambda_hs_y2) ./ (c_y2^2 + tau_y2^2 * square(lambda_hs_y2)));
  beta_hs_y2 = beta_hs_y2_z .* lambda_hs_tilde_y2 * tau_y2;
  //y3
  tau_y3 = aux1_global_y3 * sqrt(aux2_global_y3) * scale_global * sigma_y[3];
  c_y3 = slab_scale * sqrt(caux_y3);
  lambda_hs_y3 = aux1_local_y3 .* sqrt(aux2_local_y3);
  lambda_hs_tilde_y3 = sqrt(c_y3^2 * square(lambda_hs_y3) ./ (c_y3^2 + tau_y3^2 * square(lambda_hs_y3)));
  beta_hs_y3 = beta_hs_y3_z .* lambda_hs_tilde_y3 * tau_y3;
}

model {
  vector[N] mu1;
  vector[N] mu2;
  vector[N] mu3;
  matrix[3, 3] LSigma;
  vector[3] mu[N];
  
  mu1 = alpha[1] + X * beta_hs_y1; 
  mu2 = alpha[2] + X * beta_hs_y2; 
  mu3 = alpha[3] + X * beta_hs_y3;  

  for (n in 1:N) {
    mu[n] = [mu1[n], mu2[n], mu3[n]]';
  }
  
  LSigma = diag_pre_multiply(sigma_y, L_Omega);

  //priors
  
  //coefficients
  alpha ~ normal(0, 1);
  
  beta_hs_y1_z ~ normal(0, 1);
  beta_hs_y2_z ~ normal(0, 1);
  beta_hs_y3_z ~ normal(0, 1);
  
  //horseshoe
  //y1
  aux1_local_y1 ~ normal(0, 1);
  aux2_local_y1 ~ inv_gamma(0.5 * nu_local, 0.5 * nu_local);
  aux1_global_y1 ~ normal(0, 1);
  aux2_global_y1 ~ inv_gamma(0.5 * nu_global, 0.5 * nu_global);
  caux_y1 ~ inv_gamma(0.5 * slab_df, 0.5 * slab_df);
  //y2
  aux1_local_y2 ~ normal(0, 1);
  aux2_local_y2 ~ inv_gamma(0.5 * nu_local, 0.5 * nu_local);
  aux1_global_y2 ~ normal(0, 1);
  aux2_global_y2 ~ inv_gamma(0.5 * nu_global, 0.5 * nu_global);
  caux_y2 ~ inv_gamma(0.5 * slab_df, 0.5 * slab_df);
  //y3
  aux1_local_y3 ~ normal(0, 1);
  aux2_local_y3 ~ inv_gamma(0.5 * nu_local, 0.5 * nu_local);
  aux1_global_y3 ~ normal(0, 1);
  aux2_global_y3 ~ inv_gamma(0.5 * nu_global, 0.5 * nu_global);
  caux_y3 ~ inv_gamma(0.5 * slab_df, 0.5 * slab_df);
  
  //scale
  L_Omega ~ lkj_corr_cholesky(3);
  
  //likelihood
  y ~ multi_normal_cholesky(mu, LSigma);
}
generated quantities {
  corr_matrix[3] Residual_cors = multiply_lower_tri_self_transpose(L_Omega);
  vector<lower=-1,upper=1>[3] residual_cors;

  for (l in 1:3) {
    for (j in 1:(l - 1)) {
      residual_cors[choose(l - 1, 2) + j] = Residual_cors[j, l];
    }
  }
}"
model <- stan(model_code=code, 
              data = list(N = N,
                          M = M,
                          X = X,
                          y1 = as.numeric(y1),
                          y2 = as.numeric(y2),
                          y3 = as.numeric(y3),
                          scale_global = (M0)/((M-M0)*sqrt(N)),
                          nu_global = 1,
                          nu_local = 1,
                          slab_scale = 1,
                          slab_df = 2),
              chains = 4,
              cores = 4,
              iter = 2000,
              warmup = 1000,
              control=list(adapt_delta=0.999, stepsize=0.001, max_treedepth=18))

Unfortunately the model seems to fail in this case.

Inference for Stan model: a770f8efe1edc8442df6f5e6daeed073.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

                    mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
alpha[1]            0.59       0 0.22  0.16  0.44  0.58  0.73  1.03  1851 1.00
alpha[2]           -0.09       0 0.22 -0.54 -0.24 -0.09  0.06  0.33  1885 1.00
alpha[3]           -0.47       0 0.22 -0.89 -0.62 -0.47 -0.32 -0.02  1922 1.00
beta_hs_y1[1]       0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3741 1.00
beta_hs_y1[2]       0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4248 1.00
beta_hs_y1[3]       0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3595 1.00
beta_hs_y1[4]       0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3528 1.00
beta_hs_y1[5]       0.00       0 0.01 -0.02  0.00  0.00  0.00  0.00  2430 1.00
beta_hs_y1[6]       0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4274 1.00
beta_hs_y1[7]       0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4207 1.00
beta_hs_y1[8]       0.00       0 0.01  0.00  0.00  0.00  0.00  0.01  3812 1.00
beta_hs_y1[9]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.00  3107 1.00
beta_hs_y1[10]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3464 1.00
...
beta_hs_y1[40]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3526 1.00
beta_hs_y1[41]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3651 1.00
beta_hs_y1[42]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3473 1.00
beta_hs_y1[43]      0.00       0 0.01 -0.03  0.00  0.00  0.00  0.00  1964 1.00
beta_hs_y1[44]      0.00       0 0.01  0.00  0.00  0.00  0.00  0.02  2830 1.00
beta_hs_y1[45]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4209 1.00
beta_hs_y1[46]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4365 1.00
beta_hs_y1[47]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4212 1.00
beta_hs_y1[48]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4398 1.00
beta_hs_y1[49]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3387 1.00
beta_hs_y1[50]      0.00       0 0.01  0.00  0.00  0.00  0.00  0.01  3045 1.00
beta_hs_y1[51]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4268 1.00
beta_hs_y1[52]      0.00       0 0.00  0.00  0.00  0.00  0.00  0.01  4180 1.00
beta_hs_y1[53]      0.00       0 0.00  0.00  0.00  0.00  0.00  0.01  3938 1.00
beta_hs_y1[54]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3872 1.00
beta_hs_y1[55]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4095 1.00
beta_hs_y1[56]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3981 1.00
beta_hs_y1[57]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3717 1.00
beta_hs_y1[58]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3407 1.00
beta_hs_y1[59]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.00  3444 1.00
beta_hs_y1[60]      0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  4375 1.00
...
beta_hs_y2[1]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.04  2300 1.00
beta_hs_y2[2]       0.00       0 0.01 -0.03  0.00  0.00  0.00  0.01  2865 1.00
beta_hs_y2[3]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3733 1.00
beta_hs_y2[4]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3790 1.00
beta_hs_y2[5]      -0.01       0 0.02 -0.08  0.00  0.00  0.00  0.01  1556 1.00
beta_hs_y2[6]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3080 1.00
beta_hs_y2[7]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  4853 1.00
beta_hs_y2[8]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3771 1.00
beta_hs_y2[9]       0.00       0 0.02 -0.06  0.00  0.00  0.00  0.01  1903 1.00
beta_hs_y2[10]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3209 1.00
...
beta_hs_y2[40]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  3836 1.00
beta_hs_y2[41]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3094 1.00
beta_hs_y2[42]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3035 1.00
beta_hs_y2[43]     -0.02       0 0.04 -0.13 -0.02  0.00  0.00  0.00   962 1.01
beta_hs_y2[44]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3423 1.00
beta_hs_y2[45]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3352 1.00
beta_hs_y2[46]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.03  2603 1.00
beta_hs_y2[47]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  3105 1.00
beta_hs_y2[48]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  3484 1.00
beta_hs_y2[49]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3276 1.00
beta_hs_y2[50]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.04  2363 1.00
beta_hs_y2[51]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  4228 1.00
beta_hs_y2[52]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  4430 1.00
beta_hs_y2[53]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3101 1.00
beta_hs_y2[54]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3936 1.00
beta_hs_y2[55]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3872 1.00
beta_hs_y2[56]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3185 1.00
beta_hs_y2[57]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  4116 1.00
beta_hs_y2[58]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  4024 1.00
beta_hs_y2[59]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3485 1.00
beta_hs_y2[60]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  3901 1.00
...
beta_hs_y3[1]       0.00       0 0.02 -0.01  0.00  0.00  0.00  0.05  2055 1.00
beta_hs_y3[2]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  2833 1.00
beta_hs_y3[3]       0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  3022 1.00
beta_hs_y3[4]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  4275 1.00
beta_hs_y3[5]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3745 1.00
beta_hs_y3[6]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  2269 1.00
beta_hs_y3[7]       0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  3497 1.00
beta_hs_y3[8]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3891 1.00
beta_hs_y3[9]       0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  4191 1.00
beta_hs_y3[10]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  4023 1.00
...
beta_hs_y3[40]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3757 1.00
beta_hs_y3[41]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3605 1.00
beta_hs_y3[42]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3728 1.00
beta_hs_y3[43]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3732 1.00
beta_hs_y3[44]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  3242 1.00
beta_hs_y3[45]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3398 1.00
beta_hs_y3[46]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.03  2795 1.00
beta_hs_y3[47]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3901 1.00
beta_hs_y3[48]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3597 1.00
beta_hs_y3[49]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  3265 1.00
beta_hs_y3[50]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3607 1.00
beta_hs_y3[51]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3240 1.00
beta_hs_y3[52]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  3333 1.00
beta_hs_y3[53]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  2883 1.00
beta_hs_y3[54]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3739 1.00
beta_hs_y3[55]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  2930 1.00
beta_hs_y3[56]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3269 1.00
beta_hs_y3[57]      0.00       0 0.01 -0.03  0.00  0.00  0.00  0.01  3396 1.00
beta_hs_y3[58]      0.00       0 0.01 -0.04  0.00  0.00  0.00  0.01  2327 1.00
beta_hs_y3[59]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  2811 1.00
beta_hs_y3[60]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3981 1.00
...
sigma_y[1]          3.27       0 0.16  2.99  3.16  3.27  3.38  3.60  1438 1.00
sigma_y[2]          3.31       0 0.16  3.02  3.20  3.31  3.43  3.65  1477 1.00
sigma_y[3]          3.34       0 0.16  3.03  3.22  3.33  3.45  3.67  1486 1.00
Residual_cors[1,1]  1.00     NaN 0.00  1.00  1.00  1.00  1.00  1.00   NaN  NaN
Residual_cors[1,2] -0.99       0 0.00 -0.99 -0.99 -0.99 -0.99 -0.98  1833 1.00
Residual_cors[1,3]  0.98       0 0.00  0.97  0.98  0.98  0.98  0.98  2043 1.00
Residual_cors[2,1] -0.99       0 0.00 -0.99 -0.99 -0.99 -0.99 -0.98  1833 1.00
Residual_cors[2,2]  1.00     NaN 0.00  1.00  1.00  1.00  1.00  1.00   NaN  NaN
Residual_cors[2,3] -0.96       0 0.01 -0.97 -0.97 -0.96 -0.96 -0.95  1960 1.00
Residual_cors[3,1]  0.98       0 0.00  0.97  0.98  0.98  0.98  0.98  2043 1.00
Residual_cors[3,2] -0.96       0 0.01 -0.97 -0.97 -0.96 -0.96 -0.95  1960 1.00
Residual_cors[3,3]  1.00     NaN 0.00  1.00  1.00  1.00  1.00  1.00   NaN  NaN
residual_cors[1]   -0.99       0 0.00 -0.99 -0.99 -0.99 -0.99 -0.98  1833 1.00
residual_cors[2]    0.98       0 0.00  0.97  0.98  0.98  0.98  0.98  2043 1.00
residual_cors[3]   -0.96       0 0.01 -0.97 -0.97 -0.96 -0.96 -0.95  1960 1.00

Samples were drawn using NUTS(diag_e) at Tue Mar 24 09:41:03 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Does anyone have an idea how to implement the horseshoe prior in the multivariate setting correctly?

Thanks!

What are you counting as failing? The Rhats are all <= 1.01 and the n_effs are high.

Thanks for your reply! Failing in terms of “I am not getting the expected results”, not in terms of convergence :)

Oooh, I see all the coefficients are zero and the sigmas are high.

control=list(adapt_delta=0.999, stepsize=0.001, max_treedepth=18)

What happens when you just have a high adapt_delta? (Edit: And don’t do the stepsize/treedepth stuff)
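
That is, something like this (a sketch; stan_data standing in for the data list used above):

model <- stan(model_code = code,
              data = stan_data,
              chains = 4,
              cores = 4,
              iter = 2000,
              warmup = 1000,
              control = list(adapt_delta = 0.999))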

sigma_y = 2.5 * tan(sigma_y_unif);

Maybe just put a normal prior on sigma? Dunno if it’s attached to what is happening here but maybe the heavy tails make it easy for sigma to get big.
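
A minimal sketch of that change, touching only the sigma pieces of the model (everything else unchanged): drop sigma_y_unif and the sigma_y = 2.5 * tan(sigma_y_unif); line, and instead declare

parameters {
  // ...
  vector<lower=0>[3] sigma_y; //declared directly, no tan() reparameterization
  cholesky_factor_corr[3] L_Omega;
}
model {
  // ...
  sigma_y ~ normal(0, 1); //half-normal because of the <lower=0> constraint
  // ...
}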

Another thing: maybe you could comment out your data, just simulate from the prior, and look at what the implications are in terms of the regression coefficients and y? (See the sketch after the list below.)

  1. If the ys are crazy out of whack vs. what you’re generating, maybe we need to change the priors
  2. If there are any inconsistencies between the sets of betas, maybe there is a bug in the model
  3. If the betas are all crazy large or small, relative to what you’re feeding in, that tells us something
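
For the prior simulation, a minimal sketch (assuming the multivariate model above): comment out the likelihood statement so the “posterior” is just the prior, then inspect the beta_hs_* and sigma_y draws:

model {
  // ... all priors exactly as before ...

  //likelihood commented out: the fit now samples from the prior only
  //   y ~ multi_normal_cholesky(mu, LSigma);
}

(To also look at the implied y’s, mu and LSigma would have to be moved into transformed parameters, or recomputed, so that replicated outcomes can be drawn in generated quantities.)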

I’m a bit scared with Cauchys in there. Maybe first make the sigmas not Cauchy so we have means and such. Also, in terms of the horseshoe: isn’t there a parameter that in general makes things less heavy-tailed? Can we make that not super heavy-tailed? At least so we might expect things to have means and such?
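
(Presumably that parameter is the slab here: caux ~ inv_gamma(0.5 * slab_df, 0.5 * slab_df) makes the slab Student-t with slab_df degrees of freedom, so passing a larger slab_df in the data list, e.g.

slab_df = 100 #instead of 2; a large value makes the slab close to Gaussian

should make the tails of the effective prior on the coefficients much lighter.)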

I’ve tried three things now.

  1. I put a highly informative N(0.5, 0.5) prior on the sigmas.

Still all coefficients 0, still all sigmas extremely large. It’s actually the same picture as in the initial post.

  2. I set adapt_delta to an even higher value of 0.999999 and stepsize to 0.0001. EDIT: Sorry, I didn’t see your edit. I will try it now without stepsize and treedepth.

Ditto. Also for the case without stepsize and max_treedepth :(

  3. I set M and M_0 to even lower values of M=30 and M_0=5, but kept everything else as in the initial post (i.e., Cauchy(0, 2.5) for the sigmas).

Inference for Stan model: a770f8efe1edc8442df6f5e6daeed073.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

                    mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
alpha[1]            0.51       0 0.01  0.48  0.50  0.51  0.52  0.54  5149    1
alpha[2]           -0.02       0 0.03 -0.08 -0.04 -0.02  0.01  0.05  5027    1
alpha[3]           -0.55       0 0.05 -0.65 -0.59 -0.55 -0.52 -0.46  4519    1
beta_hs_y1[1]       0.51       0 0.01  0.48  0.50  0.51  0.52  0.54  3768    1
beta_hs_y1[2]       0.49       0 0.02  0.46  0.48  0.49  0.50  0.52  4114    1
beta_hs_y1[3]       0.46       0 0.02  0.43  0.45  0.46  0.47  0.49  3760    1
beta_hs_y1[4]       0.52       0 0.01  0.49  0.51  0.52  0.53  0.54  3844    1
beta_hs_y1[5]       0.47       0 0.01  0.44  0.46  0.47  0.48  0.50  3848    1
beta_hs_y1[6]       0.00       0 0.01 -0.02 -0.01  0.00  0.00  0.01  4000    1
beta_hs_y1[7]       0.00       0 0.01 -0.01  0.00  0.00  0.01  0.02  4639    1
beta_hs_y1[8]       0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  5077    1
beta_hs_y1[9]      -0.01       0 0.01 -0.03 -0.01  0.00  0.00  0.01  3120    1
beta_hs_y1[10]      0.00       0 0.01 -0.01  0.00  0.00  0.01  0.03  4229    1
beta_hs_y1[11]      0.00       0 0.01 -0.03 -0.01  0.00  0.00  0.01  3991    1
beta_hs_y1[12]      0.01       0 0.01 -0.01  0.00  0.00  0.01  0.03  3050    1
beta_hs_y1[13]      0.00       0 0.01 -0.01  0.00  0.00  0.01  0.02  4610    1
beta_hs_y1[14]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  5496    1
beta_hs_y1[15]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  5171    1
beta_hs_y1[16]     -0.01       0 0.01 -0.04 -0.01  0.00  0.00  0.01  2830    1
beta_hs_y1[17]     -0.01       0 0.01 -0.03 -0.01  0.00  0.00  0.01  4066    1
beta_hs_y1[18]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.02  4745    1
beta_hs_y1[19]      0.00       0 0.01 -0.03 -0.01  0.00  0.00  0.01  3814    1
beta_hs_y1[20]      0.00       0 0.01 -0.03 -0.01  0.00  0.00  0.01  4270    1
beta_hs_y1[21]      0.00       0 0.01 -0.01  0.00  0.00  0.01  0.02  4315    1
beta_hs_y1[22]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  5036    1
beta_hs_y1[23]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  4197    1
beta_hs_y1[24]      0.00       0 0.01 -0.01  0.00  0.00  0.01  0.03  4076    1
beta_hs_y1[25]      0.00       0 0.01 -0.02  0.00  0.00  0.00  0.01  5198    1
beta_hs_y1[26]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  4649    1
beta_hs_y1[27]     -0.01       0 0.01 -0.03 -0.01  0.00  0.00  0.01  3161    1
beta_hs_y1[28]      0.01       0 0.01 -0.01  0.00  0.01  0.02  0.04  2491    1
beta_hs_y1[29]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  4964    1
beta_hs_y1[30]      0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  4450    1
beta_hs_y2[1]      -0.45       0 0.03 -0.51 -0.47 -0.45 -0.43 -0.39  3654    1
beta_hs_y2[2]      -0.53       0 0.03 -0.59 -0.56 -0.53 -0.51 -0.47  3878    1
beta_hs_y2[3]      -0.47       0 0.03 -0.54 -0.49 -0.47 -0.45 -0.40  3915    1
beta_hs_y2[4]      -0.50       0 0.03 -0.56 -0.52 -0.50 -0.48 -0.44  3724    1
beta_hs_y2[5]      -0.55       0 0.03 -0.61 -0.57 -0.55 -0.53 -0.49  4084    1
beta_hs_y2[6]       0.01       0 0.02 -0.02  0.00  0.00  0.02  0.06  3969    1
beta_hs_y2[7]       0.00       0 0.02 -0.04 -0.01  0.00  0.00  0.03  5338    1
beta_hs_y2[8]       0.00       0 0.02 -0.03  0.00  0.00  0.01  0.05  4838    1
beta_hs_y2[9]      -0.02       0 0.03 -0.09 -0.04 -0.02  0.00  0.01  2787    1
beta_hs_y2[10]      0.00       0 0.02 -0.03  0.00  0.00  0.01  0.04  4810    1
beta_hs_y2[11]      0.00       0 0.02 -0.05 -0.01  0.00  0.00  0.03  3881    1
beta_hs_y2[12]      0.01       0 0.02 -0.02  0.00  0.00  0.02  0.05  3505    1
beta_hs_y2[13]     -0.02       0 0.02 -0.08 -0.03 -0.01  0.00  0.02  3029    1
beta_hs_y2[14]      0.00       0 0.02 -0.03  0.00  0.00  0.01  0.04  4572    1
beta_hs_y2[15]      0.01       0 0.02 -0.02  0.00  0.00  0.02  0.06  4360    1
beta_hs_y2[16]      0.00       0 0.02 -0.05 -0.01  0.00  0.00  0.03  4732    1
beta_hs_y2[17]     -0.01       0 0.02 -0.05 -0.01  0.00  0.00  0.02  4091    1
beta_hs_y2[18]     -0.01       0 0.02 -0.05 -0.01  0.00  0.00  0.02  5125    1
beta_hs_y2[19]      0.00       0 0.02 -0.04 -0.01  0.00  0.01  0.03  5515    1
beta_hs_y2[20]      0.00       0 0.02 -0.03  0.00  0.00  0.01  0.04  4984    1
beta_hs_y2[21]     -0.02       0 0.03 -0.09 -0.04 -0.01  0.00  0.01  3181    1
beta_hs_y2[22]     -0.01       0 0.02 -0.05 -0.01  0.00  0.00  0.03  4336    1
beta_hs_y2[23]      0.01       0 0.02 -0.02  0.00  0.00  0.01  0.05  4729    1
beta_hs_y2[24]      0.00       0 0.02 -0.03  0.00  0.00  0.01  0.05  4964    1
beta_hs_y2[25]     -0.01       0 0.02 -0.05 -0.01  0.00  0.00  0.02  4029    1
beta_hs_y2[26]      0.00       0 0.02 -0.04 -0.01  0.00  0.01  0.04  5019    1
beta_hs_y2[27]      0.02       0 0.02 -0.01  0.00  0.01  0.03  0.08  3329    1
beta_hs_y2[28]      0.00       0 0.02 -0.03  0.00  0.00  0.01  0.04  5182    1
beta_hs_y2[29]      0.00       0 0.02 -0.03  0.00  0.00  0.01  0.04  4900    1
beta_hs_y2[30]      0.07       0 0.04  0.00  0.04  0.07  0.09  0.14  1737    1
beta_hs_y3[1]       0.60       0 0.05  0.50  0.57  0.60  0.63  0.69  4260    1
beta_hs_y3[2]       0.45       0 0.05  0.35  0.41  0.45  0.48  0.55  3953    1
beta_hs_y3[3]       0.41       0 0.05  0.30  0.37  0.41  0.44  0.51  3856    1
beta_hs_y3[4]       0.54       0 0.05  0.44  0.50  0.54  0.57  0.63  4218    1
beta_hs_y3[5]       0.48       0 0.05  0.38  0.44  0.48  0.51  0.57  3764    1
beta_hs_y3[6]       0.01       0 0.03 -0.03  0.00  0.00  0.02  0.08  3851    1
beta_hs_y3[7]      -0.01       0 0.02 -0.07 -0.01  0.00  0.01  0.04  4338    1
beta_hs_y3[8]      -0.01       0 0.02 -0.07 -0.02  0.00  0.00  0.03  4255    1
beta_hs_y3[9]       0.00       0 0.02 -0.06 -0.01  0.00  0.01  0.04  4945    1
beta_hs_y3[10]      0.00       0 0.02 -0.06 -0.01  0.00  0.01  0.06  4671    1
beta_hs_y3[11]      0.03       0 0.04 -0.02  0.00  0.01  0.05  0.13  2845    1
beta_hs_y3[12]     -0.01       0 0.03 -0.09 -0.03 -0.01  0.00  0.03  3570    1
beta_hs_y3[13]      0.01       0 0.03 -0.03  0.00  0.00  0.02  0.09  4133    1
beta_hs_y3[14]      0.00       0 0.02 -0.05 -0.01  0.00  0.01  0.05  5206    1
beta_hs_y3[15]     -0.01       0 0.03 -0.08 -0.02  0.00  0.00  0.03  4232    1
beta_hs_y3[16]      0.01       0 0.03 -0.04  0.00  0.00  0.02  0.08  3868    1
beta_hs_y3[17]      0.01       0 0.02 -0.04 -0.01  0.00  0.02  0.07  4355    1
beta_hs_y3[18]      0.00       0 0.02 -0.05 -0.01  0.00  0.01  0.06  5744    1
beta_hs_y3[19]      0.03       0 0.04 -0.02  0.00  0.01  0.04  0.12  3647    1
beta_hs_y3[20]     -0.01       0 0.03 -0.08 -0.02  0.00  0.00  0.04  4726    1
beta_hs_y3[21]      0.01       0 0.03 -0.04 -0.01  0.00  0.02  0.08  5146    1
beta_hs_y3[22]     -0.02       0 0.03 -0.11 -0.03 -0.01  0.00  0.02  3137    1
beta_hs_y3[23]      0.00       0 0.02 -0.06 -0.01  0.00  0.01  0.04  5195    1
beta_hs_y3[24]     -0.01       0 0.03 -0.07 -0.01  0.00  0.01  0.04  5497    1
beta_hs_y3[25]      0.00       0 0.02 -0.06 -0.01  0.00  0.01  0.05  4898    1
beta_hs_y3[26]     -0.02       0 0.04 -0.11 -0.04 -0.01  0.00  0.02  3211    1
beta_hs_y3[27]     -0.02       0 0.03 -0.10 -0.03 -0.01  0.00  0.03  3925    1
beta_hs_y3[28]      0.00       0 0.02 -0.06 -0.01  0.00  0.01  0.04  4680    1
beta_hs_y3[29]     -0.01       0 0.03 -0.07 -0.01  0.00  0.01  0.05  4816    1
beta_hs_y3[30]      0.01       0 0.03 -0.04  0.00  0.00  0.02  0.09  3914    1
sigma_y[1]          0.20       0 0.01  0.18  0.19  0.20  0.21  0.22  4853    1
sigma_y[2]          0.43       0 0.02  0.39  0.41  0.43  0.44  0.48  4463    1
sigma_y[3]          0.67       0 0.04  0.61  0.65  0.67  0.70  0.74  4312    1
Residual_cors[1,1]  1.00     NaN 0.00  1.00  1.00  1.00  1.00  1.00   NaN  NaN
Residual_cors[1,2]  0.19       0 0.07  0.05  0.14  0.19  0.24  0.33  3924    1
Residual_cors[1,3]  0.17       0 0.07  0.03  0.12  0.17  0.22  0.30  5000    1
Residual_cors[2,1]  0.19       0 0.07  0.05  0.14  0.19  0.24  0.33  3924    1
Residual_cors[2,2]  1.00       0 0.00  1.00  1.00  1.00  1.00  1.00  4075    1
Residual_cors[2,3]  0.20       0 0.07  0.06  0.15  0.20  0.25  0.33  4976    1
Residual_cors[3,1]  0.17       0 0.07  0.03  0.12  0.17  0.22  0.30  5000    1
Residual_cors[3,2]  0.20       0 0.07  0.06  0.15  0.20  0.25  0.33  4976    1
Residual_cors[3,3]  1.00       0 0.00  1.00  1.00  1.00  1.00  1.00  3575    1
residual_cors[1]    0.19       0 0.07  0.05  0.14  0.19  0.24  0.33  3924    1
residual_cors[2]    0.17       0 0.07  0.03  0.12  0.17  0.22  0.30  5000    1
residual_cors[3]    0.20       0 0.07  0.06  0.15  0.20  0.25  0.33  4976    1

Samples were drawn using NUTS(diag_e) at Tue Mar 24 17:30:03 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

So in this case the results are as expected. It looks like the code works, but with only 30 coefficients I don’t need a horseshoe prior. In our actual data set we have 200 observations and at least 160 potential predictors, so the problem seems to be related to the total number of predictors.
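
(As a quick check on that hunch, one can compare the implied scale_global values, i.e. the tau_0 formula from the stan() calls above, across the designs tried so far; note that sigma is multiplied in separately inside the model, so these are just the scales passed in as data:

tau0 <- function(M0, M, N) M0 / ((M - M0) * sqrt(N))
tau0(75, 250, 200) #original toy design
tau0(50, 100, 200) #the mixed horseshoe/non-horseshoe example
tau0(5, 30, 200)   #the small run above that works
)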

Gonna go ahead and ping @avehtari and @jpiironen.

I have also simulated from the prior now. For the sigmas I used a half-normal(0, 1) for this one. These are the results, which look reasonable to me:

Inference for Stan model: 5cdf56c8dbd72cdf13034970b427787a.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

                    mean se_mean   sd  2.5%   25%   50%  75% 97.5% n_eff Rhat
alpha[1]           -0.01    0.01 1.01 -2.02 -0.69  0.00 0.69  1.94 11407    1
alpha[2]            0.01    0.01 1.03 -2.00 -0.68  0.01 0.70  2.01 10061    1
alpha[3]           -0.01    0.01 0.99 -1.99 -0.67 -0.01 0.65  1.90  7919    1
beta_hs_y1[1]      -0.02    0.01 0.48 -0.83 -0.02  0.00 0.02  0.72  3031    1
beta_hs_y1[2]       0.00    0.01 0.51 -0.78 -0.03  0.00 0.02  0.78  4229    1
beta_hs_y1[3]       0.01    0.01 0.46 -0.63 -0.02  0.00 0.02  0.78  3416    1
beta_hs_y1[4]       0.02    0.01 0.54 -0.67 -0.02  0.00 0.03  0.80  3300    1
beta_hs_y1[5]      -0.01    0.01 0.51 -0.83 -0.02  0.00 0.02  0.75  4248    1
beta_hs_y1[6]      -0.01    0.01 0.46 -0.79 -0.02  0.00 0.02  0.62  4210    1
beta_hs_y1[7]       0.00    0.01 0.51 -0.75 -0.03  0.00 0.02  0.77  4126    1
beta_hs_y1[8]      -0.01    0.01 0.45 -0.76 -0.02  0.00 0.02  0.79  3821    1
beta_hs_y1[9]       0.01    0.01 0.43 -0.67 -0.02  0.00 0.02  0.85  3703    1
beta_hs_y1[10]      0.00    0.01 0.36 -0.67 -0.02  0.00 0.02  0.71  4380    1
beta_hs_y1[11]     -0.01    0.01 0.46 -0.75 -0.03  0.00 0.02  0.79  3217    1
beta_hs_y1[12]      0.00    0.01 0.42 -0.69 -0.02  0.00 0.02  0.73  3972    1
beta_hs_y1[13]      0.00    0.01 0.44 -0.73 -0.02  0.00 0.02  0.77  4034    1
beta_hs_y1[14]      0.00    0.01 0.40 -0.75 -0.02  0.00 0.02  0.74  3998    1
beta_hs_y1[15]      0.00    0.01 0.52 -0.78 -0.03  0.00 0.02  0.88  3222    1
beta_hs_y1[16]      0.00    0.01 0.42 -0.76 -0.02  0.00 0.02  0.73  4408    1
beta_hs_y1[17]      0.00    0.01 0.45 -0.74 -0.02  0.00 0.02  0.85  3675    1
beta_hs_y1[18]      0.00    0.01 0.42 -0.73 -0.02  0.00 0.02  0.78  4082    1
beta_hs_y1[19]      0.01    0.01 0.43 -0.66 -0.02  0.00 0.02  0.84  3541    1
beta_hs_y1[20]      0.02    0.01 0.40 -0.68 -0.02  0.00 0.02  0.85  3875    1
beta_hs_y1[21]      0.01    0.01 0.37 -0.65 -0.02  0.00 0.03  0.78  3494    1
beta_hs_y1[22]      0.00    0.01 0.41 -0.78 -0.02  0.00 0.02  0.75  3734    1
beta_hs_y1[23]      0.00    0.01 0.42 -0.82 -0.02  0.00 0.02  0.74  4382    1
beta_hs_y1[24]     -0.01    0.01 0.47 -0.78 -0.02  0.00 0.02  0.65  3964    1
beta_hs_y1[25]      0.00    0.01 0.47 -0.79 -0.02  0.00 0.02  0.74  4049    1
beta_hs_y1[26]     -0.01    0.01 0.45 -0.89 -0.02  0.00 0.02  0.75  4174    1
beta_hs_y1[27]      0.00    0.01 0.45 -0.72 -0.02  0.00 0.03  0.81  3799    1
beta_hs_y1[28]      0.00    0.01 0.50 -0.84 -0.03  0.00 0.02  0.80  3951    1
beta_hs_y1[29]      0.00    0.01 0.40 -0.74 -0.02  0.00 0.02  0.80  3839    1
beta_hs_y1[30]      0.00    0.01 0.39 -0.69 -0.02  0.00 0.02  0.79  4520    1
beta_hs_y1[31]     -0.01    0.01 0.47 -0.78 -0.03  0.00 0.02  0.75  4200    1
beta_hs_y1[32]      0.00    0.01 0.60 -0.75 -0.02  0.00 0.02  0.81  3531    1
beta_hs_y1[33]     -0.01    0.01 0.48 -0.84 -0.02  0.00 0.02  0.77  3491    1
beta_hs_y1[34]      0.00    0.01 0.42 -0.74 -0.03  0.00 0.02  0.71  3307    1
beta_hs_y1[35]     -0.01    0.01 0.49 -0.70 -0.02  0.00 0.02  0.64  4160    1
beta_hs_y1[36]      0.00    0.01 0.44 -0.72 -0.02  0.00 0.02  0.71  3897    1
beta_hs_y1[37]      0.02    0.01 0.44 -0.67 -0.02  0.00 0.02  0.86  3824    1
beta_hs_y1[38]     -0.01    0.01 0.50 -0.84 -0.02  0.00 0.02  0.75  3511    1
beta_hs_y1[39]     -0.01    0.01 0.55 -0.82 -0.02  0.00 0.02  0.75  3584    1
beta_hs_y1[40]      0.00    0.01 0.47 -0.72 -0.02  0.00 0.02  0.73  3946    1
beta_hs_y1[41]      0.00    0.01 0.40 -0.77 -0.03  0.00 0.02  0.75  4030    1
beta_hs_y1[42]     -0.01    0.01 0.42 -0.79 -0.03  0.00 0.02  0.71  4398    1
beta_hs_y1[43]      0.00    0.01 0.40 -0.81 -0.02  0.00 0.02  0.76  3687    1
beta_hs_y1[44]      0.00    0.01 0.41 -0.82 -0.02  0.00 0.02  0.78  4414    1
beta_hs_y1[45]      0.01    0.01 0.42 -0.75 -0.02  0.00 0.02  0.79  4466    1
beta_hs_y1[46]     -0.01    0.01 0.38 -0.83 -0.02  0.00 0.02  0.67  3969    1
beta_hs_y1[47]      0.01    0.01 0.46 -0.86 -0.02  0.00 0.02  0.87  3672    1
beta_hs_y1[48]     -0.01    0.01 0.45 -0.82 -0.02  0.00 0.02  0.74  3470    1
beta_hs_y1[49]      0.01    0.01 0.42 -0.71 -0.02  0.00 0.02  0.82  3404    1
beta_hs_y1[50]      0.00    0.01 0.39 -0.64 -0.02  0.00 0.02  0.66  4098    1
beta_hs_y1[51]      0.00    0.01 0.41 -0.80 -0.02  0.00 0.02  0.75  4059    1
beta_hs_y1[52]      0.00    0.01 0.50 -0.76 -0.02  0.00 0.02  0.81  3504    1
beta_hs_y1[53]     -0.01    0.01 0.35 -0.76 -0.02  0.00 0.02  0.63  4198    1
beta_hs_y1[54]     -0.01    0.01 0.40 -0.79 -0.02  0.00 0.02  0.68  4283    1
beta_hs_y1[55]      0.00    0.01 0.38 -0.82 -0.02  0.00 0.02  0.67  3569    1
beta_hs_y1[56]      0.01    0.01 0.38 -0.74 -0.02  0.00 0.02  0.81  3292    1
beta_hs_y1[57]     -0.01    0.01 0.38 -0.82 -0.02  0.00 0.02  0.68  4512    1
beta_hs_y1[58]      0.00    0.01 0.40 -0.75 -0.02  0.00 0.02  0.72  4412    1
beta_hs_y1[59]      0.00    0.01 0.41 -0.78 -0.02  0.00 0.02  0.83  4447    1
beta_hs_y1[60]      0.01    0.01 0.41 -0.71 -0.02  0.00 0.02  0.76  3622    1
beta_hs_y1[61]      0.00    0.01 0.47 -0.76 -0.02  0.00 0.02  0.75  4200    1
beta_hs_y1[62]      0.00    0.01 0.41 -0.79 -0.02  0.00 0.02  0.78  4171    1
beta_hs_y1[63]     -0.01    0.01 0.44 -0.81 -0.02  0.00 0.02  0.70  3982    1
beta_hs_y1[64]      0.01    0.01 0.44 -0.75 -0.02  0.00 0.02  0.84  3663    1
beta_hs_y1[65]      0.00    0.01 0.43 -0.78 -0.02  0.00 0.02  0.75  4022    1
beta_hs_y1[66]     -0.01    0.01 0.39 -0.81 -0.02  0.00 0.02  0.70  3715    1
beta_hs_y1[67]      0.00    0.01 0.44 -0.69 -0.02  0.00 0.02  0.74  3415    1
beta_hs_y1[68]      0.01    0.01 0.51 -0.70 -0.02  0.00 0.03  0.85  3462    1
beta_hs_y1[69]      0.00    0.01 0.51 -0.83 -0.02  0.00 0.02  0.72  4016    1
beta_hs_y1[70]     -0.01    0.01 0.52 -0.79 -0.03  0.00 0.02  0.70  4898    1
beta_hs_y1[71]      0.01    0.01 0.36 -0.61 -0.02  0.00 0.03  0.74  4263    1
beta_hs_y1[72]      0.00    0.01 0.54 -0.78 -0.02  0.00 0.03  0.72  4096    1
beta_hs_y1[73]      0.00    0.01 0.44 -0.75 -0.03  0.00 0.02  0.77  4037    1
beta_hs_y1[74]      0.00    0.01 0.46 -0.84 -0.02  0.00 0.02  0.76  3915    1
beta_hs_y1[75]      0.00    0.01 0.45 -0.84 -0.02  0.00 0.02  0.67  3840    1
beta_hs_y1[76]     -0.01    0.01 0.42 -0.82 -0.03  0.00 0.02  0.73  3758    1
beta_hs_y1[77]      0.00    0.01 0.47 -0.79 -0.02  0.00 0.03  0.70  3880    1
beta_hs_y1[78]      0.00    0.01 0.43 -0.75 -0.03  0.00 0.03  0.83  3664    1
beta_hs_y1[79]      0.00    0.01 0.44 -0.81 -0.02  0.00 0.02  0.81  3509    1
beta_hs_y1[80]      0.00    0.01 0.39 -0.79 -0.02  0.00 0.02  0.72  3918    1
beta_hs_y1[81]      0.00    0.01 0.40 -0.71 -0.02  0.00 0.02  0.71  4563    1
beta_hs_y1[82]     -0.01    0.01 0.43 -0.74 -0.02  0.00 0.02  0.65  4085    1
beta_hs_y1[83]     -0.01    0.01 0.43 -0.73 -0.02  0.00 0.02  0.73  3284    1
beta_hs_y1[84]     -0.01    0.01 0.41 -0.83 -0.02  0.00 0.02  0.68  3733    1
beta_hs_y1[85]      0.01    0.01 0.43 -0.74 -0.02  0.00 0.02  0.73  3602    1
beta_hs_y1[86]      0.01    0.01 0.40 -0.69 -0.02  0.00 0.02  0.81  4530    1
beta_hs_y1[87]      0.00    0.01 0.42 -0.73 -0.02  0.00 0.02  0.70  4413    1
beta_hs_y1[88]     -0.01    0.01 0.43 -0.76 -0.02  0.00 0.02  0.68  3884    1
beta_hs_y1[89]      0.00    0.01 0.47 -0.75 -0.02  0.00 0.02  0.74  4103    1
beta_hs_y1[90]      0.00    0.01 0.45 -0.81 -0.02  0.00 0.02  0.79  3967    1
beta_hs_y1[91]      0.00    0.01 0.43 -0.75 -0.02  0.00 0.02  0.67  4160    1
beta_hs_y1[92]     -0.01    0.01 0.45 -0.83 -0.02  0.00 0.02  0.88  3500    1
beta_hs_y1[93]      0.01    0.01 0.46 -0.77 -0.02  0.00 0.03  0.80  3128    1
beta_hs_y1[94]     -0.01    0.01 0.38 -0.74 -0.02  0.00 0.02  0.65  3754    1
beta_hs_y1[95]      0.00    0.01 0.40 -0.74 -0.02  0.00 0.02  0.80  4625    1
beta_hs_y1[96]      0.00    0.01 0.43 -0.70 -0.02  0.00 0.02  0.72  4167    1
beta_hs_y1[97]      0.01    0.01 0.45 -0.74 -0.02  0.00 0.02  0.82  4062    1
beta_hs_y1[98]      0.01    0.01 0.42 -0.69 -0.02  0.00 0.02  0.72  3732    1
beta_hs_y1[99]      0.01    0.01 0.47 -0.71 -0.02  0.00 0.02  0.80  4002    1
beta_hs_y1[100]     0.00    0.01 0.40 -0.72 -0.02  0.00 0.02  0.68  3616    1
beta_hs_y2[1]       0.01    0.01 0.43 -0.74 -0.02  0.00 0.02  0.82  4269    1
beta_hs_y2[2]      -0.02    0.01 0.49 -0.91 -0.02  0.00 0.02  0.64  4243    1
beta_hs_y2[3]       0.01    0.01 0.41 -0.69 -0.02  0.00 0.02  0.76  4328    1
beta_hs_y2[4]       0.01    0.01 0.43 -0.69 -0.02  0.00 0.02  0.81  3923    1
beta_hs_y2[5]       0.00    0.01 0.35 -0.65 -0.02  0.00 0.02  0.68  4335    1
beta_hs_y2[6]       0.00    0.01 0.45 -0.76 -0.02  0.00 0.02  0.75  3914    1
beta_hs_y2[7]       0.00    0.01 0.44 -0.85 -0.02  0.00 0.03  0.75  4171    1
beta_hs_y2[8]       0.00    0.01 0.40 -0.71 -0.02  0.00 0.02  0.70  3634    1
beta_hs_y2[9]       0.00    0.01 0.46 -0.79 -0.02  0.00 0.02  0.83  4277    1
beta_hs_y2[10]      0.00    0.01 0.44 -0.72 -0.02  0.00 0.02  0.81  4004    1
beta_hs_y2[11]      0.00    0.01 0.48 -0.84 -0.02  0.00 0.02  0.84  4250    1
beta_hs_y2[12]      0.01    0.01 0.42 -0.76 -0.02  0.00 0.02  0.80  3605    1
beta_hs_y2[13]      0.00    0.01 0.40 -0.74 -0.02  0.00 0.02  0.74  3414    1
beta_hs_y2[14]      0.00    0.01 0.43 -0.73 -0.02  0.00 0.02  0.75  3644    1
beta_hs_y2[15]     -0.01    0.01 0.43 -0.68 -0.02  0.00 0.02  0.65  3655    1
beta_hs_y2[16]     -0.01    0.01 0.43 -0.83 -0.02  0.00 0.02  0.71  4510    1
beta_hs_y2[17]      0.00    0.01 0.43 -0.74 -0.02  0.00 0.02  0.79  4659    1
beta_hs_y2[18]     -0.01    0.01 0.42 -0.81 -0.02  0.00 0.02  0.72  3590    1
beta_hs_y2[19]      0.01    0.01 0.52 -0.66 -0.02  0.00 0.02  0.72  3264    1
beta_hs_y2[20]      0.00    0.01 0.52 -0.82 -0.02  0.00 0.02  0.78  3462    1
beta_hs_y2[21]      0.00    0.01 0.50 -0.75 -0.02  0.00 0.02  0.77  3755    1
beta_hs_y2[22]      0.00    0.01 0.44 -0.75 -0.02  0.00 0.02  0.71  3749    1
beta_hs_y2[23]      0.01    0.01 0.43 -0.74 -0.02  0.00 0.02  0.90  3608    1
beta_hs_y2[24]      0.01    0.01 0.47 -0.67 -0.02  0.00 0.02  0.80  3674    1
beta_hs_y2[25]      0.00    0.01 0.49 -0.83 -0.02  0.00 0.02  0.87  3547    1
beta_hs_y2[26]     -0.01    0.01 0.49 -0.85 -0.02  0.00 0.02  0.82  4302    1
beta_hs_y2[27]      0.01    0.01 0.42 -0.70 -0.02  0.00 0.02  0.75  3625    1
beta_hs_y2[28]      0.00    0.01 0.45 -0.84 -0.02  0.00 0.02  0.75  3584    1
beta_hs_y2[29]     -0.01    0.01 0.43 -0.73 -0.02  0.00 0.02  0.71  4042    1
beta_hs_y2[30]      0.01    0.01 0.41 -0.69 -0.02  0.00 0.02  0.74  3859    1
beta_hs_y2[31]      0.00    0.01 0.46 -0.72 -0.02  0.00 0.02  0.77  3403    1
beta_hs_y2[32]      0.00    0.01 0.52 -0.71 -0.02  0.00 0.02  0.79  3198    1
beta_hs_y2[33]      0.00    0.01 0.42 -0.71 -0.02  0.00 0.02  0.83  3635    1
beta_hs_y2[34]      0.00    0.01 0.43 -0.71 -0.02  0.00 0.02  0.74  3387    1
beta_hs_y2[35]     -0.01    0.01 0.46 -0.81 -0.02  0.00 0.02  0.79  3514    1
beta_hs_y2[36]      0.00    0.01 0.42 -0.75 -0.03  0.00 0.02  0.75  3684    1
beta_hs_y2[37]      0.01    0.01 0.45 -0.68 -0.02  0.00 0.02  0.74  4026    1
beta_hs_y2[38]      0.01    0.01 0.44 -0.69 -0.02  0.00 0.02  0.84  3737    1
beta_hs_y2[39]      0.00    0.01 0.49 -0.83 -0.02  0.00 0.02  0.77  3506    1
beta_hs_y2[40]      0.00    0.01 0.44 -0.69 -0.02  0.00 0.02  0.75  3828    1
beta_hs_y2[41]     -0.01    0.01 0.38 -0.70 -0.02  0.00 0.02  0.63  3628    1
beta_hs_y2[42]      0.00    0.01 0.49 -0.74 -0.02  0.00 0.02  0.72  4320    1
beta_hs_y2[43]     -0.02    0.01 0.43 -0.85 -0.02  0.00 0.02  0.70  3806    1
beta_hs_y2[44]      0.00    0.01 0.43 -0.80 -0.03  0.00 0.02  0.82  3753    1
beta_hs_y2[45]      0.00    0.01 0.44 -0.80 -0.03  0.00 0.02  0.82  4488    1
beta_hs_y2[46]      0.00    0.01 0.45 -0.76 -0.02  0.00 0.02  0.72  3498    1
beta_hs_y2[47]     -0.01    0.01 0.52 -0.79 -0.02  0.00 0.02  0.66  3703    1
beta_hs_y2[48]      0.01    0.01 0.55 -0.81 -0.02  0.00 0.03  0.89  3354    1
beta_hs_y2[49]      0.01    0.01 0.45 -0.78 -0.02  0.00 0.02  0.84  3776    1
beta_hs_y2[50]      0.00    0.01 0.49 -0.84 -0.02  0.00 0.02  0.81  3993    1
beta_hs_y2[51]      0.01    0.01 0.47 -0.76 -0.02  0.00 0.02  0.72  4187    1
beta_hs_y2[52]      0.01    0.01 0.41 -0.75 -0.02  0.00 0.02  0.75  4273    1
beta_hs_y2[53]      0.02    0.01 0.44 -0.71 -0.02  0.00 0.02  0.90  4232    1
beta_hs_y2[54]     -0.01    0.01 0.43 -0.87 -0.02  0.00 0.02  0.71  4556    1
beta_hs_y2[55]      0.00    0.01 0.44 -0.79 -0.03  0.00 0.02  0.82  3223    1
beta_hs_y2[56]      0.00    0.01 0.42 -0.69 -0.02  0.00 0.02  0.73  4006    1
beta_hs_y2[57]      0.01    0.01 0.49 -0.75 -0.02  0.00 0.02  0.87  3685    1
beta_hs_y2[58]      0.01    0.01 0.50 -0.75 -0.02  0.00 0.03  0.77  3875    1
beta_hs_y2[59]      0.00    0.01 0.43 -0.80 -0.02  0.00 0.02  0.72  4082    1
beta_hs_y2[60]      0.00    0.01 0.40 -0.69 -0.02  0.00 0.02  0.78  3810    1
beta_hs_y2[61]      0.00    0.01 0.40 -0.71 -0.02  0.00 0.02  0.69  3269    1
beta_hs_y2[62]      0.00    0.01 0.45 -0.74 -0.02  0.00 0.02  0.71  3695    1
beta_hs_y2[63]      0.00    0.01 0.40 -0.75 -0.02  0.00 0.02  0.82  3701    1
beta_hs_y2[64]      0.00    0.01 0.41 -0.72 -0.02  0.00 0.02  0.75  3859    1
beta_hs_y2[65]      0.00    0.01 0.45 -0.75 -0.02  0.00 0.02  0.75  3811    1
beta_hs_y2[66]      0.00    0.01 0.48 -0.75 -0.02  0.00 0.02  0.75  3337    1
beta_hs_y2[67]      0.00    0.01 0.44 -0.73 -0.02  0.00 0.02  0.82  3931    1
beta_hs_y2[68]     -0.01    0.01 0.44 -0.80 -0.02  0.00 0.02  0.70  4127    1
beta_hs_y2[69]     -0.01    0.01 0.41 -0.80 -0.02  0.00 0.02  0.73  3627    1
beta_hs_y2[70]      0.00    0.01 0.47 -0.81 -0.02  0.00 0.02  0.85  4183    1
beta_hs_y2[71]      0.00    0.01 0.71 -0.87 -0.02  0.00 0.02  0.85  3767    1
beta_hs_y2[72]      0.00    0.01 0.43 -0.71 -0.02  0.00 0.02  0.72  3655    1
beta_hs_y2[73]      0.00    0.01 0.49 -0.76 -0.02  0.00 0.02  0.84  3508    1
beta_hs_y2[74]      0.00    0.01 0.44 -0.80 -0.02  0.00 0.03  0.73  3793    1
beta_hs_y2[75]      0.02    0.01 0.44 -0.69 -0.02  0.00 0.03  0.96  3700    1
beta_hs_y2[76]      0.00    0.01 0.45 -0.80 -0.02  0.00 0.02  0.75  3988    1
beta_hs_y2[77]     -0.01    0.01 0.40 -0.74 -0.02  0.00 0.02  0.69  3274    1
beta_hs_y2[78]      0.01    0.01 0.47 -0.67 -0.02  0.00 0.02  0.81  3416    1
beta_hs_y2[79]      0.00    0.01 0.48 -0.80 -0.03  0.00 0.02  0.79  3633    1
beta_hs_y2[80]      0.00    0.01 0.43 -0.73 -0.02  0.00 0.02  0.82  3461    1
beta_hs_y2[81]      0.00    0.01 0.45 -0.74 -0.02  0.00 0.02  0.75  3743    1
beta_hs_y2[82]      0.00    0.01 0.44 -0.85 -0.02  0.00 0.02  0.77  3805    1
beta_hs_y2[83]      0.00    0.01 0.43 -0.73 -0.02  0.00 0.02  0.78  3631    1
beta_hs_y2[84]      0.00    0.01 0.41 -0.76 -0.02  0.00 0.02  0.67  4538    1
beta_hs_y2[85]      0.00    0.01 0.46 -0.81 -0.02  0.00 0.02  0.74  3711    1
beta_hs_y2[86]      0.01    0.01 0.46 -0.75 -0.02  0.00 0.03  0.83  3693    1
beta_hs_y2[87]      0.00    0.01 0.45 -0.75 -0.02  0.00 0.02  0.83  3415    1
beta_hs_y2[88]      0.01    0.01 0.42 -0.72 -0.02  0.00 0.02  0.83  3265    1
beta_hs_y2[89]     -0.01    0.01 0.46 -0.80 -0.02  0.00 0.02  0.74  3154    1
beta_hs_y2[90]      0.01    0.01 0.45 -0.64 -0.02  0.00 0.02  0.81  4321    1
beta_hs_y2[91]      0.00    0.01 0.46 -0.76 -0.02  0.00 0.02  0.73  4191    1
beta_hs_y2[92]      0.01    0.01 0.67 -0.83 -0.02  0.00 0.02  0.81  4245    1
beta_hs_y2[93]      0.00    0.01 0.47 -0.71 -0.02  0.00 0.02  0.82  3530    1
beta_hs_y2[94]      0.00    0.01 0.45 -0.71 -0.02  0.00 0.02  0.82  3199    1
beta_hs_y2[95]      0.00    0.01 0.42 -0.77 -0.02  0.00 0.02  0.76  3467    1
beta_hs_y2[96]      0.00    0.01 0.40 -0.67 -0.02  0.00 0.02  0.77  3615    1
beta_hs_y2[97]      0.00    0.01 0.44 -0.79 -0.02  0.00 0.02  0.78  4119    1
beta_hs_y2[98]     -0.01    0.01 0.47 -0.79 -0.02  0.00 0.02  0.67  3401    1
beta_hs_y2[99]      0.00    0.01 0.49 -0.79 -0.02  0.00 0.02  0.79  3617    1
beta_hs_y2[100]     0.00    0.01 0.43 -0.76 -0.02  0.00 0.02  0.70  3502    1
beta_hs_y3[1]       0.00    0.01 0.39 -0.80 -0.02  0.00 0.02  0.75  4204    1
beta_hs_y3[2]       0.01    0.01 0.47 -0.74 -0.02  0.00 0.02  0.82  3904    1
beta_hs_y3[3]       0.00    0.01 0.41 -0.83 -0.02  0.00 0.02  0.81  3624    1
beta_hs_y3[4]       0.00    0.01 0.56 -0.66 -0.02  0.00 0.02  0.76  3653    1
beta_hs_y3[5]      -0.01    0.01 0.49 -0.88 -0.02  0.00 0.02  0.74  3674    1
beta_hs_y3[6]       0.00    0.01 0.48 -0.77 -0.02  0.00 0.02  0.86  4181    1
beta_hs_y3[7]       0.02    0.01 0.53 -0.70 -0.02  0.00 0.02  0.75  4010    1
beta_hs_y3[8]       0.00    0.01 0.74 -0.76 -0.02  0.00 0.02  0.70  4172    1
beta_hs_y3[9]       0.00    0.01 0.50 -0.76 -0.02  0.00 0.02  0.76  3607    1
beta_hs_y3[10]      0.00    0.01 0.57 -0.76 -0.02  0.00 0.03  0.81  4138    1
beta_hs_y3[11]      0.00    0.01 0.45 -0.73 -0.02  0.00 0.03  0.73  4202    1
beta_hs_y3[12]      0.00    0.01 0.40 -0.72 -0.02  0.00 0.02  0.74  4407    1
beta_hs_y3[13]     -0.01    0.01 0.50 -0.73 -0.03  0.00 0.02  0.73  4336    1
beta_hs_y3[14]      0.00    0.01 0.39 -0.70 -0.02  0.00 0.02  0.72  3731    1
beta_hs_y3[15]     -0.01    0.01 0.52 -0.79 -0.02  0.00 0.02  0.75  4066    1
beta_hs_y3[16]      0.00    0.01 0.43 -0.66 -0.02  0.00 0.02  0.80  3788    1
beta_hs_y3[17]      0.01    0.01 0.49 -0.69 -0.02  0.00 0.02  0.77  4500    1
beta_hs_y3[18]     -0.01    0.01 0.42 -0.79 -0.02  0.00 0.02  0.68  4065    1
beta_hs_y3[19]     -0.01    0.01 0.43 -0.78 -0.02  0.00 0.02  0.75  3915    1
beta_hs_y3[20]      0.00    0.01 0.48 -0.79 -0.02  0.00 0.02  0.75  4038    1
beta_hs_y3[21]     -0.01    0.01 0.47 -0.77 -0.02  0.00 0.02  0.75  3958    1
beta_hs_y3[22]      0.00    0.01 0.46 -0.85 -0.02  0.00 0.02  0.78  3635    1
beta_hs_y3[23]      0.01    0.01 0.45 -0.76 -0.02  0.00 0.02  0.79  4144    1
beta_hs_y3[24]     -0.01    0.01 0.46 -0.78 -0.02  0.00 0.02  0.73  4096    1
beta_hs_y3[25]      0.01    0.01 0.47 -0.74 -0.02  0.00 0.02  0.72  4065    1
beta_hs_y3[26]     -0.01    0.01 0.49 -0.83 -0.02  0.00 0.02  0.74  3831    1
beta_hs_y3[27]      0.01    0.01 0.44 -0.76 -0.02  0.00 0.02  0.79  4166    1
beta_hs_y3[28]      0.00    0.01 0.46 -0.67 -0.02  0.00 0.02  0.67  4362    1
beta_hs_y3[29]      0.01    0.01 0.46 -0.68 -0.02  0.00 0.02  0.80  4249    1
beta_hs_y3[30]      0.00    0.01 0.48 -0.72 -0.03  0.00 0.02  0.72  4178    1
beta_hs_y3[31]     -0.01    0.01 0.44 -0.83 -0.02  0.00 0.02  0.68  3999    1
beta_hs_y3[32]      0.00    0.01 0.41 -0.73 -0.02  0.00 0.02  0.69  3991    1
beta_hs_y3[33]      0.01    0.01 0.69 -0.74 -0.02  0.00 0.02  0.81  3618    1
beta_hs_y3[34]      0.00    0.01 0.50 -0.73 -0.02  0.00 0.02  0.71  3203    1
beta_hs_y3[35]     -0.02    0.01 0.48 -0.79 -0.02  0.00 0.02  0.64  4099    1
beta_hs_y3[36]     -0.01    0.01 0.43 -0.77 -0.02  0.00 0.02  0.71  3337    1
beta_hs_y3[37]     -0.01    0.01 0.46 -0.66 -0.02  0.00 0.02  0.70  3027    1
beta_hs_y3[38]     -0.01    0.01 0.39 -0.71 -0.02  0.00 0.02  0.71  3905    1
beta_hs_y3[39]      0.00    0.01 0.49 -0.77 -0.02  0.00 0.03  0.75  3996    1
beta_hs_y3[40]      0.00    0.01 0.61 -0.85 -0.02  0.00 0.02  0.73  4438    1
beta_hs_y3[41]      0.00    0.01 0.49 -0.75 -0.02  0.00 0.03  0.76  3803    1
beta_hs_y3[42]      0.00    0.01 0.52 -0.78 -0.02  0.00 0.02  0.79  3486    1
beta_hs_y3[43]      0.00    0.01 0.47 -0.66 -0.02  0.00 0.02  0.75  4235    1
beta_hs_y3[44]      0.00    0.01 0.44 -0.78 -0.02  0.00 0.02  0.76  4187    1
beta_hs_y3[45]      0.01    0.01 0.44 -0.69 -0.02  0.00 0.03  0.76  3479    1
beta_hs_y3[46]     -0.01    0.01 0.49 -0.79 -0.02  0.00 0.02  0.72  3706    1
beta_hs_y3[47]      0.00    0.01 0.56 -0.74 -0.02  0.00 0.02  0.72  4347    1
beta_hs_y3[48]      0.00    0.01 0.46 -0.81 -0.02  0.00 0.02  0.73  5024    1
beta_hs_y3[49]     -0.02    0.01 0.45 -0.76 -0.02  0.00 0.02  0.60  4006    1
beta_hs_y3[50]      0.00    0.01 0.42 -0.73 -0.02  0.00 0.02  0.77  3561    1
beta_hs_y3[51]      0.00    0.01 0.46 -0.70 -0.03  0.00 0.02  0.85  4298    1
beta_hs_y3[52]      0.00    0.01 0.47 -0.83 -0.02  0.00 0.02  0.82  3792    1
beta_hs_y3[53]      0.00    0.01 0.39 -0.67 -0.02  0.00 0.02  0.77  3370    1
beta_hs_y3[54]      0.00    0.01 0.41 -0.76 -0.02  0.00 0.02  0.74  4278    1
beta_hs_y3[55]      0.00    0.01 0.47 -0.87 -0.02  0.00 0.02  0.77  3440    1
beta_hs_y3[56]      0.00    0.01 0.39 -0.74 -0.02  0.00 0.02  0.73  3630    1
beta_hs_y3[57]      0.00    0.01 0.48 -0.74 -0.02  0.00 0.02  0.83  3700    1
beta_hs_y3[58]      0.00    0.01 0.43 -0.76 -0.02  0.00 0.02  0.76  3892    1
beta_hs_y3[59]     -0.01    0.01 0.42 -0.78 -0.02  0.00 0.02  0.79  4496    1
beta_hs_y3[60]     -0.01    0.01 0.48 -0.75 -0.02  0.00 0.02  0.74  3793    1
beta_hs_y3[61]      0.00    0.01 0.38 -0.74 -0.02  0.00 0.02  0.74  4003    1
beta_hs_y3[62]      0.01    0.01 0.46 -0.70 -0.02  0.00 0.02  0.74  4367    1
beta_hs_y3[63]      0.02    0.01 0.52 -0.65 -0.02  0.00 0.02  0.82  3597    1
beta_hs_y3[64]      0.01    0.01 0.41 -0.69 -0.02  0.00 0.03  0.79  3648    1
beta_hs_y3[65]      0.00    0.01 0.43 -0.69 -0.02  0.00 0.02  0.69  4052    1
beta_hs_y3[66]      0.01    0.01 0.47 -0.69 -0.02  0.00 0.02  0.78  3633    1
beta_hs_y3[67]     -0.01    0.01 0.51 -0.82 -0.02  0.00 0.02  0.71  3925    1
beta_hs_y3[68]     -0.01    0.01 0.48 -0.81 -0.02  0.00 0.02  0.74  4121    1
beta_hs_y3[69]      0.00    0.01 0.45 -0.65 -0.02  0.00 0.02  0.67  3894    1
beta_hs_y3[70]      0.00    0.01 0.49 -0.75 -0.02  0.00 0.02  0.78  3721    1
beta_hs_y3[71]      0.01    0.01 0.62 -0.74 -0.02  0.00 0.02  0.84  4667    1
beta_hs_y3[72]      0.00    0.01 0.54 -0.76 -0.02  0.00 0.02  0.79  4094    1
beta_hs_y3[73]      0.00    0.01 0.53 -0.74 -0.02  0.00 0.03  0.73  4062    1
beta_hs_y3[74]      0.00    0.01 0.48 -0.79 -0.03  0.00 0.02  0.84  4073    1
beta_hs_y3[75]      0.01    0.01 0.52 -0.72 -0.02  0.00 0.02  0.76  3562    1
beta_hs_y3[76]      0.01    0.01 0.38 -0.69 -0.02  0.00 0.02  0.71  3755    1
beta_hs_y3[77]      0.01    0.01 0.49 -0.69 -0.02  0.00 0.02  0.78  3966    1
beta_hs_y3[78]      0.00    0.01 0.52 -0.71 -0.02  0.00 0.02  0.80  3758    1
beta_hs_y3[79]      0.00    0.01 0.46 -0.67 -0.02  0.00 0.02  0.68  4403    1
beta_hs_y3[80]      0.00    0.01 0.47 -0.71 -0.02  0.00 0.02  0.77  3886    1
beta_hs_y3[81]      0.00    0.01 0.63 -0.78 -0.02  0.00 0.02  0.63  4297    1
beta_hs_y3[82]      0.00    0.01 0.39 -0.69 -0.02  0.00 0.02  0.71  3728    1
beta_hs_y3[83]     -0.01    0.01 0.43 -0.83 -0.02  0.00 0.02  0.68  3519    1
beta_hs_y3[84]      0.00    0.01 0.47 -0.79 -0.02  0.00 0.02  0.74  4615    1
beta_hs_y3[85]      0.01    0.01 0.44 -0.79 -0.02  0.00 0.02  0.76  3930    1
beta_hs_y3[86]      0.00    0.01 0.47 -0.65 -0.02  0.00 0.02  0.71  4136    1
beta_hs_y3[87]     -0.02    0.01 0.41 -0.74 -0.02  0.00 0.02  0.61  3722    1
beta_hs_y3[88]     -0.01    0.01 0.50 -0.78 -0.02  0.00 0.02  0.71  4160    1
beta_hs_y3[89]     -0.01    0.01 0.53 -0.86 -0.02  0.00 0.02  0.71  3826    1
beta_hs_y3[90]      0.00    0.01 0.44 -0.78 -0.02  0.00 0.02  0.76  3450    1
beta_hs_y3[91]      0.00    0.01 0.44 -0.75 -0.02  0.00 0.02  0.82  3617    1
beta_hs_y3[92]      0.00    0.01 0.39 -0.75 -0.02  0.00 0.02  0.74  3887    1
beta_hs_y3[93]      0.02    0.01 0.44 -0.65 -0.02  0.00 0.02  0.72  4485    1
beta_hs_y3[94]      0.00    0.01 0.44 -0.77 -0.02  0.00 0.02  0.82  4260    1
beta_hs_y3[95]      0.00    0.01 0.45 -0.73 -0.02  0.00 0.02  0.76  3529    1
beta_hs_y3[96]      0.00    0.01 0.41 -0.77 -0.02  0.00 0.02  0.77  4242    1
beta_hs_y3[97]      0.01    0.01 0.45 -0.68 -0.02  0.00 0.02  0.87  3271    1
beta_hs_y3[98]     -0.01    0.01 0.48 -0.80 -0.02  0.00 0.02  0.67  3967    1
beta_hs_y3[99]      0.00    0.01 0.48 -0.71 -0.02  0.00 0.02  0.74  3825    1
beta_hs_y3[100]     0.00    0.01 0.48 -0.71 -0.02  0.00 0.02  0.68  4072    1
sigma_y[1]          0.80    0.01 0.60  0.03  0.33  0.67 1.17  2.23  5438    1
sigma_y[2]          0.79    0.01 0.62  0.03  0.30  0.66 1.16  2.29  6400    1
sigma_y[3]          0.80    0.01 0.59  0.03  0.34  0.69 1.15  2.21  5997    1
Residual_cors[1,1]  1.00     NaN 0.00  1.00  1.00  1.00 1.00  1.00   NaN  NaN
Residual_cors[1,2]  0.01    0.00 0.35 -0.63 -0.25  0.01 0.26  0.65 10396    1
Residual_cors[1,3]  0.01    0.00 0.35 -0.66 -0.24  0.01 0.27  0.68  8619    1
Residual_cors[2,1]  0.01    0.00 0.35 -0.63 -0.25  0.01 0.26  0.65 10396    1
Residual_cors[2,2]  1.00    0.00 0.00  1.00  1.00  1.00 1.00  1.00  4070    1
Residual_cors[2,3]  0.00    0.00 0.35 -0.67 -0.25 -0.01 0.25  0.69  7857    1
Residual_cors[3,1]  0.01    0.00 0.35 -0.66 -0.24  0.01 0.27  0.68  8619    1
Residual_cors[3,2]  0.00    0.00 0.35 -0.67 -0.25 -0.01 0.25  0.69  7857    1
Residual_cors[3,3]  1.00    0.00 0.00  1.00  1.00  1.00 1.00  1.00  3612    1
residual_cors[1]    0.01    0.00 0.35 -0.63 -0.25  0.01 0.26  0.65 10396    1
residual_cors[2]    0.01    0.00 0.35 -0.66 -0.24  0.01 0.27  0.68  8619    1
residual_cors[3]    0.00    0.00 0.35 -0.67 -0.25 -0.01 0.25  0.69  7857    1

Samples were drawn using NUTS(diag_e) at Tue Mar 24 21:09:04 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Ooo, I think you solved it with the M/M_0. Did you see the discussion of the effective number of non-zero parameters in the horseshoe paper: https://arxiv.org/pdf/1707.01694.pdf ?

If you know beforehand that you expect M_0 to be that big, you can choose the prior using the info in Section 3.3 of that paper.

Thanks, I will have a look at the paper again! But I think that section was about setting \tau_0 = \frac{M_0}{M - M_0} \frac{\sigma}{\sqrt{N}}, which is already included in the code by setting scale_global = \frac{M_0}{(M - M_0)\sqrt{N}}, which is then multiplied by sigma inside the code, as advised in Appendix C of the paper. I actually built my code based on the code in the appendix.
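
For concreteness, this is all the R arithmetic behind that input, using the toy-data values from above:

N  <- 200
M  <- 250
M0 <- 75  # prior guess for the number of relevant predictors

# tau_0 = M0 / (M - M0) * sigma / sqrt(N); the Stan code multiplies by sigma
# internally, so scale_global is just the sigma-free part:
scale_global <- M0 / ((M - M0) * sqrt(N))
scale_global  # approx 0.0303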

But I think I’m approaching a solution. I can now run the model with \sigma ~ half-N(0, 1), M = 160 and M_0 = 5, and it gives me the expected results. But even increasing M_0 to 6 or 7 leads to problems. I read in another thread that the horseshoe can handle something up to M_0 < \frac{N}{2}, so 6 or 7 significant predictors should still be far away from "a lot" of significant predictors. On the other hand, in the univariate case I can easily go up to 75 significant predictors under the same high dimensionality; in the multivariate example we don't even get close to that :) I will have a look at the paper again and play around a little bit with the priors.

Is 160 M or M_0? And what is the other one then?

I recommend switching to the simpler parameterization of the RHS, as in Appendix C.1 of Sparsity information and regularization in the horseshoe and other shrinkage priors.
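
Concretely, the simpler parameterization drops the aux1/aux2 normal-times-inverse-gamma decomposition and puts half-Student-t priors directly on lambda and tau. A minimal univariate sketch of that form (untested; names chosen to match the code in this thread):

data {
  int<lower=1> N;
  int<lower=1> M;
  matrix[N,M] X;
  vector[N] y;
  real<lower=0> scale_global;
  real<lower=1> nu_global;
  real<lower=1> nu_local;
  real<lower=0> slab_scale;
  real<lower=1> slab_df;
}
parameters {
  real alpha;
  vector[M] beta_hs_z;
  real<lower=0> tau;             // global shrinkage, sampled directly
  vector<lower=0>[M] lambda_hs;  // local shrinkage, sampled directly
  real<lower=0> caux;
  real<lower=0> sigma_y;
}
transformed parameters {
  real c = slab_scale * sqrt(caux);
  vector[M] lambda_hs_tilde
    = sqrt(c^2 * square(lambda_hs) ./ (c^2 + tau^2 * square(lambda_hs)));
  vector[M] beta_hs = beta_hs_z .* lambda_hs_tilde * tau;
}
model {
  alpha ~ normal(0, 1);
  beta_hs_z ~ normal(0, 1);
  lambda_hs ~ student_t(nu_local, 0, 1);  // half-t via the <lower=0> constraint
  tau ~ student_t(nu_global, 0, scale_global * sigma_y);
  caux ~ inv_gamma(0.5 * slab_df, 0.5 * slab_df);
  sigma_y ~ normal(0, 1);                 // half-normal via the <lower=0> constraint
  y ~ normal(alpha + X * beta_hs, sigma_y);
}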

Furthermore, if you are using only a normal model and have only 200 observations, you could also integrate beta out analytically. I can post the code if you are interested in testing it. I have successfully used it with 5000+ predictors and 100 observations.

And are the predictors correlated?

Thanks for your reply! I am trying the simpler parameterization right now! :)

Our total number of predictors M is at least 160, and we assume that a fairly large number of them can actually have an effect, but not too many either. I think M_0 could be somewhere between 50 and 75? In the univariate case the model runs fine even with M_0 = 75, but the model for the multivariate setting leads to problems.

Of course I would be very happy about the code and would like to have a look at it! :) We are only dealing with a (multivariate) normal model. The 200 observations reflect the number of observations in our data set, but the goal of our project is to create a model that also performs well with more observations, as long as the user has access to more data points. So getting the horseshoe running somehow for my multivariate case would be awesome :)

In my toy data I didn't include correlations among the predictors, but in our real data set some correlations are pretty likely.

If M_0 is large compared to the number of observations, there can be identifiability issues, which are more likely if the covariance matrix Sigma has a weakly informative prior.

Are you assuming the coefficients for different outcomes to be dependent? That is, if x_1 is relevant for predicting y_1, do you assume there is a higher probability that x_1 is also relevant for predicting y_2? You might want to use a more elaborate structure for the prior to include that information.

Here's the linear regression with the RHS prior and beta integrated out. It will probably be very slow if you have more than 1000 observations.

data {
  int<lower=0> n;				      // number of observations
  int<lower=0> d;             // number of predictors
  vector[n] y;	              // outputs
  matrix[n,d] x;				      // inputs
  real<lower=0> scale_icept;	// prior std for the intercept
  real<lower=0> scale_global;	// scale for the half-t prior for tau
  real<lower=1> nu_global;	  // degrees of freedom for the half-t priors for tau
  real<lower=1> nu_local;		  // degrees of freedom for the half-t priors for lambdas
                              // (nu_local = 1 corresponds to the horseshoe)
  real<lower=0> slab_scale;   // for the regularized horseshoe
  real<lower=0> slab_df;
}
transformed data {
  matrix[d,n] xt = x';
  vector[n] zeros = rep_vector(0.0, n);
  real scale_icept2 = scale_icept^2;
}
parameters {
  real<lower=0> sigma;
  real <lower=0> tau;         // global shrinkage parameter
  vector <lower=0>[d] lambda; // local shrinkage parameter
  real<lower=0> caux;
}
model {
  real c2 = (slab_scale * sqrt(caux))^2; // squared slab scale
  real tau2 = tau^2; // squared global scale
  vector[d] sqr_lambda = square(lambda);
  vector[d] lambda_tilde2 = c2 * sqr_lambda ./ (c2 + tau2*sqr_lambda);
  matrix[n, n] C = add_diag(diag_post_multiply(x, lambda_tilde2*tau2)*xt, sigma^2) + scale_icept2;
  
  sigma ~ std_normal();
  // half-t priors for lambdas and tau, and inverse-gamma for c^2
  lambda ~ student_t(nu_local, 0, 1);
  tau ~ student_t(nu_global, 0, scale_global*sigma);
  caux ~ inv_gamma(0.5*slab_df, 0.5*slab_df);
  
  y ~ multi_normal(zeros, C);
}
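
A possible call for the toy data, assuming the Stan code above is stored in the string code_marg (untested sketch; scale_icept = 5 is an arbitrary choice here):

model_marg <- stan(model_code = code_marg,
                   data = list(n = N,
                               d = M,
                               y = as.numeric(y1),
                               x = X,
                               scale_icept = 5,
                               scale_global = M0 / ((M - M0) * sqrt(N)),
                               nu_global = 1,
                               nu_local = 1,
                               slab_scale = 2,
                               slab_df = 10),
                   chains = 4, cores = 4, iter = 2000, warmup = 1000)
# beta is integrated out, so only the hyperparameters are sampled:
print(model_marg, c("sigma", "tau"))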

Thanks again for your help!

What I find interesting is that setting M_0 = 5 or lower in the multivariate case works flawlessly, but M_0 = 6 already leads to a problem. It looks like the prior is somehow struggling with M_0 = 6 and 7, still trying to identify the non-zero coefficients, and then "gives up" for M_0 > 7 and directly sets everything to 0. Or is 6 already a large number in the multivariate case (especially compared to the 75 non-zero coefficients for the univariate model)? You can find experiments for setting M_0 between 5 and 8 below.

I generate the data using:

set.seed(2409)
options(max.print = 9999)
library(rstan)
library(dplyr)
library(MASS)

N <- 200 #number of observations
M <- 250 #number of predictors
M0 <- 75 #number of predictors that have a significant effect

#true values
intercept <- c(0.5, 0, -0.5)

beta <- list()
beta[[1]] <- c(rep(0.5, M0), rep(0, M-M0)) #true effects of predictors for y1
beta[[2]] <- c(rep(-0.5, M0), rep(0, M-M0)) #true effects of predictors for y2
beta[[3]] <- c(rep(0.5, M0), rep(0, M-M0)) #true effects of predictors for y3

sigma1<-0.2 #sigma for y1
sigma2<-0.5 #sigma for y2
sigma3<-0.7 #sigma for y3
rho<-0.3 #correlation across sigmas

##this creates the errors with correlations across equations

eps<-mvrnorm(N,mu=c(0,0,0),Sigma=cbind(c(1,rho,rho), c(rho,1,rho),c(rho, rho, 1)))
e1<-eps[,1]*sigma1
e2<-eps[,2]*sigma2
e3<-eps[,3]*sigma3

X <- matrix(rnorm(N*M),N,M) %>% 
  data.matrix()

y1 <- intercept[1] + X %*% beta[[1]] + e1
y2 <- intercept[2] + X %*% beta[[2]] + e2
y3 <- intercept[3] + X %*% beta[[3]] + e3

and run the following model:

code <- "data {
  int <lower=1> N; // number of obs
  int <lower=1> M; // number of predictors
  matrix[N,M] X;//predictor matrix
  vector[N] y1;//observed outcome
  vector[N] y2;//observed outcome
  vector[N] y3;//observed outcome
  //horseshoe prior
  real<lower=0> scale_global;
  real<lower=1> nu_global; 
  real<lower=1> nu_local;
  real<lower=0> slab_scale; 
  real<lower=1> slab_df;
}
transformed data {
  vector[3] y[N];
  
  for (n in 1:N) {
    y[n] = [y1[n], y2[n], y3[n]]';
  }
}
parameters {
  
  //coefficients
  real alpha[3];
  vector[M] beta_hs_y1_z;
  vector[M] beta_hs_y2_z;
  vector[M] beta_hs_y3_z;
  
  //horseshoe
  //y1
  real<lower=0> tau_y1; 
  vector<lower=0>[M] lambda_hs_y1; 
  real<lower=0> caux_y1;
  //y2
  real<lower=0> tau_y2; 
  vector<lower=0>[M] lambda_hs_y2; 
  real<lower=0> caux_y2;
  //y3
  real<lower=0> tau_y3; 
  vector<lower=0>[M] lambda_hs_y3; 
  real<lower=0> caux_y3;
  
  //scales
  vector <lower=0> [3] sigma_y;
  cholesky_factor_corr[3] L_Omega;
}

transformed parameters {
  vector[M] beta_hs_y1;
  vector<lower=0>[M] lambda_hs_tilde_y1;
  real<lower=0> c_y1;
  vector[M] beta_hs_y2;
  vector<lower=0>[M] lambda_hs_tilde_y2;
  real<lower=0> c_y2;
  vector[M] beta_hs_y3;
  vector<lower=0>[M] lambda_hs_tilde_y3;
  real<lower=0> c_y3;
  
  //horseshoe
  //y1
  c_y1 = slab_scale * sqrt(caux_y1);
  lambda_hs_tilde_y1 = sqrt( c_y1^2 * square(lambda_hs_y1) ./ (c_y1^2 + tau_y1^2*square(lambda_hs_y1)) );
  beta_hs_y1 = beta_hs_y1_z .*lambda_hs_tilde_y1*tau_y1;
  //y2
  c_y2 = slab_scale * sqrt(caux_y2);
  lambda_hs_tilde_y2 = sqrt( c_y2^2 * square(lambda_hs_y2) ./ (c_y2^2 + tau_y2^2*square(lambda_hs_y2)) );
  beta_hs_y2 = beta_hs_y2_z .*lambda_hs_tilde_y2*tau_y2;
  //y3
  c_y3 = slab_scale * sqrt(caux_y3);
  lambda_hs_tilde_y3 = sqrt( c_y3^2 * square(lambda_hs_y3) ./ (c_y3^2 + tau_y3^2*square(lambda_hs_y3)) );
  beta_hs_y3 = beta_hs_y3_z .*lambda_hs_tilde_y3*tau_y3;
  }

model {
  vector[N] mu1;
  vector[N] mu2;
  vector[N] mu3;
  matrix[3, 3] LSigma;
  vector[3] mu[N];
  
  mu1 = alpha[1] + X * beta_hs_y1; 
  mu2 = alpha[2] + X * beta_hs_y2; 
  mu3 = alpha[3] + X * beta_hs_y3;  

  for (n in 1:N) {
    mu[n] = [mu1[n], mu2[n], mu3[n]]';
  }
  
  LSigma = diag_pre_multiply(sigma_y, L_Omega);

  //priors
  
  //coefficients
  alpha ~ normal(0, 1);
  
  beta_hs_y1_z ~ normal(0, 1);
  beta_hs_y2_z ~ normal(0, 1);
  beta_hs_y3_z ~ normal(0, 1);
  
  //horseshoe
  //y1
  lambda_hs_y1 ~ student_t(nu_local, 0, 1);
  tau_y1 ~ student_t(nu_global, 0, scale_global*sigma_y[1]);
  caux_y1 ~ inv_gamma(0.5*slab_df, 0.5*slab_df);
  //y2
  lambda_hs_y2 ~ student_t(nu_local, 0, 1);
  tau_y2 ~ student_t(nu_global, 0, scale_global*sigma_y[2]);
  caux_y2 ~ inv_gamma(0.5*slab_df, 0.5*slab_df);
  //y3
  lambda_hs_y3 ~ student_t(nu_local, 0, 1);
  tau_y3 ~ student_t(nu_global, 0, scale_global*sigma_y[3]);
  caux_y3 ~ inv_gamma(0.5*slab_df, 0.5*slab_df);
  
  //scale
  L_Omega~lkj_corr_cholesky(2);
  sigma_y~normal(0,1);
  
  //likelihood
     y ~ multi_normal_cholesky(mu, LSigma);
}
generated quantities {
    corr_matrix[3] Residual_cors = multiply_lower_tri_self_transpose(L_Omega);
    vector<lower=-1,upper=1>[3] residual_cors;

    // pack the off-diagonal correlations (r12, r13, r23) into a vector
    for (l in 1:3) {
      for (j in 1:(l - 1)) {
        residual_cors[choose(l - 1, 2) + j] = Residual_cors[j, l];
      }
    }
}"
model <- stan(model_code=code, 
              data = list(N = N,
                          M = M,
                          X = X,
                          y1 = as.numeric(y1),
                          y2 = as.numeric(y2),
                          y3 = as.numeric(y3),
                          scale_global = (M0)/((M-M0)*sqrt(N)),
                          nu_global = 1,
                          nu_local = 1,
                          slab_scale = 2,
                          slab_df = 10),
              chains = 4,
              cores = 4,
              iter = 2000,
              warmup = 1000,
              control=list(adapt_delta=0.99999, stepsize=0.0001, max_treedepth=12)) 

print(model, c("alpha", 
               "beta_hs_y1", 
               "beta_hs_y2",
               "beta_hs_y3",
               "sigma_y",
               "L_Omega"))

N = 200, M = 160, M_0 = 5

Inference for Stan model: 9179a975058675e62897a6d9f380704b.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

                 mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
alpha[1]         0.51       0 0.01  0.48  0.50  0.51  0.52  0.53  4793 1.00
alpha[2]        -0.02       0 0.03 -0.09 -0.04 -0.02  0.00  0.04  4811 1.00
alpha[3]        -0.55       0 0.05 -0.65 -0.58 -0.55 -0.52 -0.45  7583 1.00
beta_hs_y1[1]    0.51       0 0.01  0.48  0.50  0.51  0.52  0.54  3506 1.00
beta_hs_y1[2]    0.49       0 0.02  0.46  0.48  0.49  0.50  0.52  3780 1.00
beta_hs_y1[3]    0.46       0 0.02  0.43  0.45  0.46  0.47  0.49  3771 1.00
beta_hs_y1[4]    0.52       0 0.01  0.49  0.51  0.52  0.53  0.55  3841 1.00
beta_hs_y1[5]    0.47       0 0.01  0.44  0.46  0.47  0.48  0.49  3329 1.00
beta_hs_y1[6]    0.00       0 0.01 -0.02  0.00  0.00  0.00  0.00  3479 1.00
beta_hs_y1[7]    0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3395 1.00
beta_hs_y1[8]    0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3239 1.00
beta_hs_y1[9]    0.00       0 0.00 -0.02  0.00  0.00  0.00  0.00  3263 1.00
beta_hs_y1[10]   0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3395 1.00
...
beta_hs_y2[1]   -0.45       0 0.03 -0.52 -0.47 -0.45 -0.43 -0.39  3343 1.00
beta_hs_y2[2]   -0.53       0 0.03 -0.59 -0.55 -0.53 -0.50 -0.46  4158 1.00
beta_hs_y2[3]   -0.47       0 0.03 -0.54 -0.49 -0.47 -0.45 -0.40  3964 1.00
beta_hs_y2[4]   -0.50       0 0.03 -0.57 -0.53 -0.51 -0.48 -0.44  4097 1.00
beta_hs_y2[5]   -0.55       0 0.03 -0.61 -0.57 -0.55 -0.53 -0.49  3871 1.00
beta_hs_y2[6]    0.01       0 0.02 -0.01  0.00  0.00  0.01  0.05  2598 1.00
beta_hs_y2[7]    0.00       0 0.01 -0.02  0.00  0.00  0.00  0.02  3956 1.00
beta_hs_y2[8]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.02  3312 1.00
beta_hs_y2[9]    0.00       0 0.01 -0.05  0.00  0.00  0.00  0.01  3361 1.00
beta_hs_y2[10]   0.00       0 0.01 -0.02  0.00  0.00  0.00  0.02  3485 1.00
...
beta_hs_y3[1]    0.60       0 0.05  0.50  0.57  0.60  0.63  0.70  4024 1.00
beta_hs_y3[2]    0.45       0 0.05  0.35  0.41  0.45  0.48  0.55  3763 1.00
beta_hs_y3[3]    0.41       0 0.05  0.30  0.37  0.41  0.44  0.52  3915 1.00
beta_hs_y3[4]    0.54       0 0.05  0.44  0.50  0.54  0.57  0.63  3562 1.00
beta_hs_y3[5]    0.48       0 0.05  0.37  0.44  0.48  0.51  0.58  3825 1.00
beta_hs_y3[6]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.04  2929 1.00
beta_hs_y3[7]    0.00       0 0.01 -0.03  0.00  0.00  0.00  0.02  3480 1.00
beta_hs_y3[8]    0.00       0 0.01 -0.03  0.00  0.00  0.00  0.02  3043 1.00
beta_hs_y3[9]    0.00       0 0.01 -0.02  0.00  0.00  0.00  0.02  3337 1.00
beta_hs_y3[10]   0.00       0 0.01 -0.02  0.00  0.00  0.00  0.02  3661 1.00
...
sigma_y[1]       0.19       0 0.01  0.17  0.19  0.19  0.20  0.22  2646 1.00
sigma_y[2]       0.42       0 0.02  0.37  0.40  0.42  0.43  0.47  2525 1.00
sigma_y[3]       0.67       0 0.04  0.60  0.64  0.67  0.69  0.74  4892 1.00
L_Omega[1,1]     1.00     NaN 0.00  1.00  1.00  1.00  1.00  1.00   NaN  NaN
L_Omega[1,2]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN  NaN
L_Omega[1,3]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN  NaN
L_Omega[2,1]     0.25       0 0.08  0.10  0.20  0.25  0.30  0.40  3755 1.00
L_Omega[2,2]     0.97       0 0.02  0.92  0.95  0.97  0.98  1.00  3431 1.00
L_Omega[2,3]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN  NaN
L_Omega[3,1]     0.18       0 0.08  0.03  0.13  0.18  0.23  0.32  4447 1.00
L_Omega[3,2]     0.14       0 0.07 -0.01  0.09  0.14  0.19  0.27  5319 1.00
L_Omega[3,3]     0.97       0 0.02  0.93  0.96  0.97  0.98  0.99  4109 1.00

Samples were drawn using NUTS(diag_e) at Thu Mar 26 17:51:40 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

N = 200, M = 160, M_0 = 6

Inference for Stan model: 9179a975058675e62897a6d9f380704b.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

                 mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff  Rhat
alpha[1]         0.44    0.05 0.08  0.25  0.38  0.49  0.51  0.54     3  1.52
alpha[2]         0.04    0.05 0.09 -0.08 -0.03  0.01  0.11  0.25     4  1.48
alpha[3]        -0.61    0.05 0.10 -0.82 -0.67 -0.59 -0.54 -0.46     4  1.33
beta_hs_y1[1]    0.26    0.18 0.26  0.00  0.00  0.29  0.51  0.54     2 21.87
beta_hs_y1[2]    0.24    0.17 0.24  0.00  0.00  0.27  0.49  0.51     2 23.06
beta_hs_y1[3]    0.23    0.16 0.23  0.00  0.00  0.27  0.46  0.48     2 17.85
beta_hs_y1[4]    0.26    0.18 0.26  0.00  0.00  0.28  0.52  0.54     2 24.34
beta_hs_y1[5]    0.24    0.16 0.23  0.00  0.00  0.43  0.47  0.49     2  4.76
beta_hs_y1[6]    0.25    0.17 0.24  0.00  0.00  0.29  0.49  0.52     2 18.82
beta_hs_y1[7]    0.00    0.00 0.00 -0.01  0.00  0.00  0.00  0.01  2424  1.01
beta_hs_y1[8]    0.00    0.00 0.00 -0.01  0.00  0.00  0.00  0.01  3280  1.00
beta_hs_y1[9]    0.00    0.00 0.01 -0.02  0.00  0.00  0.00  0.00  3048  1.01
beta_hs_y1[10]   0.00    0.00 0.00 -0.01  0.00  0.00  0.00  0.01  1014  1.01
...
beta_hs_y2[1]   -0.23    0.16 0.23 -0.51 -0.45 -0.20  0.00  0.00     2 10.77
beta_hs_y2[2]   -0.26    0.18 0.26 -0.58 -0.53 -0.30  0.00  0.00     2 11.54
beta_hs_y2[3]   -0.24    0.17 0.24 -0.53 -0.48 -0.24  0.00  0.00     2 10.53
beta_hs_y2[4]   -0.25    0.18 0.25 -0.56 -0.51 -0.23  0.00  0.00     2 11.92
beta_hs_y2[5]   -0.29    0.18 0.27 -0.60 -0.55 -0.48  0.00  0.00     2  4.03
beta_hs_y2[6]   -0.22    0.16 0.22 -0.50 -0.44 -0.21  0.00  0.00     2 10.00
beta_hs_y2[7]    0.00    0.00 0.01 -0.02  0.00  0.00  0.00  0.01  3009  1.00
beta_hs_y2[8]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.02  3101  1.01
beta_hs_y2[9]    0.00    0.00 0.01 -0.04  0.00  0.00  0.00  0.01  2069  1.01
beta_hs_y2[10]   0.00    0.00 0.01 -0.02  0.00  0.00  0.00  0.02  3263  1.00
...
beta_hs_y3[1]    0.30    0.21 0.30  0.00  0.00  0.34  0.60  0.68     2  8.88
beta_hs_y3[2]    0.22    0.16 0.23  0.00  0.00  0.17  0.45  0.53     2  6.83
beta_hs_y3[3]    0.20    0.14 0.20  0.00  0.00  0.15  0.40  0.49     2  5.54
beta_hs_y3[4]    0.27    0.19 0.27  0.00  0.00  0.21  0.54  0.62     2  8.08
beta_hs_y3[5]    0.24    0.16 0.24  0.00  0.00  0.37  0.48  0.56     2  4.18
beta_hs_y3[6]    0.27    0.19 0.28  0.00  0.00  0.27  0.55  0.63     2  7.74
beta_hs_y3[7]    0.00    0.00 0.01 -0.02  0.00  0.00  0.00  0.01  3182  1.00
beta_hs_y3[8]    0.00    0.00 0.01 -0.02  0.00  0.00  0.00  0.01  3544  1.00
beta_hs_y3[9]    0.00    0.00 0.01 -0.02  0.00  0.00  0.00  0.02  3342  1.00
beta_hs_y3[10]   0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.02  3765  1.00
...
sigma_y[1]       0.67    0.34 0.48  0.18  0.19  0.58  1.14  1.25     2 11.66
sigma_y[2]       0.81    0.28 0.41  0.37  0.41  0.75  1.21  1.33     2  8.40
sigma_y[3]       1.01    0.24 0.35  0.61  0.67  0.94  1.36  1.47     2  6.74
L_Omega[1,1]     1.00     NaN 0.00  1.00  1.00  1.00  1.00  1.00   NaN   NaN
L_Omega[1,2]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN   NaN
L_Omega[1,3]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN   NaN
L_Omega[2,1]    -0.32    0.41 0.59 -0.92 -0.90 -0.49  0.27  0.39     2 11.49
L_Omega[2,2]     0.69    0.19 0.27  0.38  0.43  0.68  0.96  0.99     2 11.13
L_Omega[2,3]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN   NaN
L_Omega[3,1]     0.52    0.24 0.35  0.06  0.18  0.61  0.87  0.90     2  6.90
L_Omega[3,2]     0.08    0.03 0.07 -0.04  0.03  0.07  0.12  0.24     5  1.27
L_Omega[3,3]     0.73    0.17 0.24  0.44  0.49  0.74  0.97  0.99     2 10.00

Samples were drawn using NUTS(diag_e) at Thu Mar 26 17:49:59 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

N = 200, M = 160, M_0 = 7

Inference for Stan model: 9179a975058675e62897a6d9f380704b.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

                 mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
alpha[1]         0.37    0.01 0.09  0.19  0.32  0.37  0.43  0.55   235 1.02
alpha[2]         0.12    0.01 0.09 -0.08  0.06  0.12  0.18  0.30   217 1.02
alpha[3]        -0.68    0.01 0.10 -0.88 -0.74 -0.68 -0.62 -0.48   238 1.02
beta_hs_y1[1]    0.00    0.00 0.01  0.00  0.00  0.00  0.00  0.01  2226 1.00
beta_hs_y1[2]    0.00    0.00 0.00  0.00  0.00  0.00  0.00  0.01   369 1.02
beta_hs_y1[3]    0.00    0.00 0.01  0.00  0.00  0.00  0.00  0.02   414 1.02
beta_hs_y1[4]    0.00    0.00 0.00  0.00  0.00  0.00  0.00  0.01  2417 1.00
beta_hs_y1[5]    0.05    0.06 0.15  0.00  0.00  0.00  0.00  0.54     7 2.54
beta_hs_y1[6]    0.00    0.00 0.01  0.00  0.00  0.00  0.00  0.01  2085 1.00
beta_hs_y1[7]    0.01    0.01 0.05  0.00  0.00  0.00  0.00  0.05    78 1.05
beta_hs_y1[8]    0.00    0.00 0.00  0.00  0.00  0.00  0.00  0.00  2989 1.00
beta_hs_y1[9]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.00  2418 1.00
beta_hs_y1[10]   0.00    0.00 0.00  0.00  0.00  0.00  0.00  0.01  1977 1.00
...
beta_hs_y2[1]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01  1954 1.00
beta_hs_y2[2]    0.00    0.00 0.01 -0.02  0.00  0.00  0.00  0.00  2132 1.00
beta_hs_y2[3]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01   346 1.01
beta_hs_y2[4]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01  1584 1.00
beta_hs_y2[5]   -0.08    0.06 0.18 -0.63 -0.04  0.00  0.00  0.00     8 2.37
beta_hs_y2[6]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01  2276 1.00
beta_hs_y2[7]   -0.01    0.01 0.05 -0.02  0.00  0.00  0.00  0.01    79 1.05
beta_hs_y2[8]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01  1587 1.00
beta_hs_y2[9]    0.00    0.00 0.01 -0.03  0.00  0.00  0.00  0.00  2004 1.00
beta_hs_y2[10]   0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01  2709 1.00
...
beta_hs_y3[1]    0.00    0.00 0.01  0.00  0.00  0.00  0.00  0.02  2005 1.00
beta_hs_y3[2]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01  1042 1.00
beta_hs_y3[3]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01  2944 1.00
beta_hs_y3[4]    0.00    0.00 0.00 -0.01  0.00  0.00  0.00  0.01  2321 1.00
beta_hs_y3[5]    0.05    0.06 0.16 -0.01  0.00  0.00  0.00  0.58     7 2.47
beta_hs_y3[6]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01  1864 1.01
beta_hs_y3[7]    0.00    0.01 0.05 -0.01  0.00  0.00  0.00  0.01    82 1.05
beta_hs_y3[8]    0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.01  1088 1.00
beta_hs_y3[9]    0.00    0.00 0.00 -0.01  0.00  0.00  0.00  0.01  1809 1.00
beta_hs_y3[10]   0.00    0.00 0.01 -0.01  0.00  0.00  0.00  0.02   131 1.04
...
sigma_y[1]       1.25    0.02 0.07  1.12  1.20  1.25  1.30  1.38    19 1.19
sigma_y[2]       1.31    0.02 0.08  1.14  1.26  1.31  1.37  1.46    17 1.24
sigma_y[3]       1.43    0.02 0.08  1.28  1.38  1.43  1.48  1.59    25 1.14
L_Omega[1,1]     1.00     NaN 0.00  1.00  1.00  1.00  1.00  1.00   NaN  NaN
L_Omega[1,2]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN  NaN
L_Omega[1,3]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN  NaN
L_Omega[2,1]    -0.92    0.00 0.01 -0.94 -0.93 -0.92 -0.91 -0.89    72 1.05
L_Omega[2,2]     0.39    0.00 0.03  0.35  0.38  0.39  0.41  0.45    73 1.05
L_Omega[2,3]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN  NaN
L_Omega[3,1]     0.88    0.00 0.02  0.85  0.87  0.88  0.89  0.91    34 1.10
L_Omega[3,2]     0.03    0.00 0.03 -0.03  0.01  0.04  0.06  0.10  1106 1.00
L_Omega[3,3]     0.47    0.01 0.03  0.41  0.45  0.47  0.49  0.53    35 1.10

Samples were drawn using NUTS(diag_e) at Thu Mar 26 16:15:53 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

N = 200, M = 160, M_0 = 8

Inference for Stan model: 9179a975058675e62897a6d9f380704b.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

                 mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
alpha[1]         0.39       0 0.09  0.21  0.33  0.39  0.45  0.57  3252 1.00
alpha[2]         0.10       0 0.09 -0.08  0.04  0.10  0.17  0.29  3480 1.00
alpha[3]        -0.67       0 0.10 -0.88 -0.73 -0.66 -0.60 -0.46  3339 1.00
beta_hs_y1[1]    0.00       0 0.01  0.00  0.00  0.00  0.00  0.01  1667 1.00
beta_hs_y1[2]    0.00       0 0.00  0.00  0.00  0.00  0.00  0.01  3375 1.00
beta_hs_y1[3]    0.00       0 0.01  0.00  0.00  0.00  0.00  0.01  2790 1.00
beta_hs_y1[4]    0.00       0 0.00  0.00  0.00  0.00  0.00  0.01  2831 1.00
beta_hs_y1[5]    0.00       0 0.00  0.00  0.00  0.00  0.00  0.00  3004 1.00
beta_hs_y1[6]    0.00       0 0.01  0.00  0.00  0.00  0.00  0.01  3206 1.00
beta_hs_y1[7]    0.00       0 0.01  0.00  0.00  0.00  0.00  0.01  2557 1.00
beta_hs_y1[8]    0.00       0 0.01  0.00  0.00  0.00  0.00  0.02  2536 1.00
beta_hs_y1[9]    0.00       0 0.01 -0.02  0.00  0.00  0.00  0.00  3267 1.00
beta_hs_y1[10]   0.00       0 0.00  0.00  0.00  0.00  0.00  0.00  3230 1.00
...
beta_hs_y2[1]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3247 1.00
beta_hs_y2[2]    0.00       0 0.01 -0.02  0.00  0.00  0.00  0.00  2882 1.00
beta_hs_y2[3]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3038 1.00
beta_hs_y2[4]    0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  2613 1.00
beta_hs_y2[5]   -0.01       0 0.03 -0.12  0.00  0.00  0.00  0.00   991 1.00
beta_hs_y2[6]    0.00       0 0.00 -0.01  0.00  0.00  0.00  0.01  3251 1.00
beta_hs_y2[7]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  2248 1.00
beta_hs_y2[8]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3418 1.00
beta_hs_y2[9]    0.00       0 0.01 -0.03  0.00  0.00  0.00  0.01  2355 1.00
beta_hs_y2[10]   0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  2791 1.00
...
beta_hs_y3[1]    0.00       0 0.02  0.00  0.00  0.00  0.00  0.04  1544 1.00
beta_hs_y3[2]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3358 1.00
beta_hs_y3[3]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3063 1.00
beta_hs_y3[4]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3325 1.00
beta_hs_y3[5]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3069 1.00
beta_hs_y3[6]    0.00       0 0.01  0.00  0.00  0.00  0.00  0.01  3105 1.00
beta_hs_y3[7]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3164 1.00
beta_hs_y3[8]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  2949 1.00
beta_hs_y3[9]    0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3047 1.00
beta_hs_y3[10]   0.00       0 0.01 -0.01  0.00  0.00  0.00  0.01  3614 1.00
...
sigma_y[1]       1.32       0 0.06  1.20  1.27  1.32  1.36  1.45  2797 1.00
sigma_y[2]       1.38       0 0.07  1.25  1.33  1.37  1.42  1.52  2773 1.00
sigma_y[3]       1.49       0 0.07  1.35  1.44  1.48  1.53  1.63  2829 1.00
L_Omega[1,1]     1.00     NaN 0.00  1.00  1.00  1.00  1.00  1.00   NaN  NaN
L_Omega[1,2]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN  NaN
L_Omega[1,3]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN  NaN
L_Omega[2,1]    -0.93       0 0.01 -0.94 -0.93 -0.93 -0.92 -0.90  3375 1.00
L_Omega[2,2]     0.38       0 0.03  0.33  0.36  0.38  0.39  0.43  3358 1.00
L_Omega[2,3]     0.00     NaN 0.00  0.00  0.00  0.00  0.00  0.00   NaN  NaN
L_Omega[3,1]     0.89       0 0.01  0.86  0.88  0.89  0.90  0.92  3433 1.00
L_Omega[3,2]     0.03       0 0.03 -0.03  0.01  0.03  0.06  0.10  8740 1.00
L_Omega[3,3]     0.45       0 0.03  0.40  0.43  0.45  0.47  0.51  3462 1.00

Samples were drawn using NUTS(diag_e) at Thu Mar 26 18:37:57 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

I would say that in our setting this could be the case. So if, e.g., x_1 has a significant effect on y_1, we can assume the same for y_2 and y_3. Is there any way to account for this?

Thanks! I’ll have a look at that too!

I don't know the answer, but this is very interesting. How does this behave with respect to the number of dimensions in the multivariate normal? What does the posterior for that covariance matrix look like? Is it getting singular?
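
One way to check would be to reconstruct the residual covariance draws and look at their smallest eigenvalues, e.g. (sketch, assuming the fitted object is the model from the runs above):

post <- rstan::extract(model, pars = c("sigma_y", "Residual_cors"))
min_eig <- sapply(seq_len(nrow(post$sigma_y)), function(i) {
  # residual covariance for draw i: diag(sigma) %*% R %*% diag(sigma)
  S <- diag(post$sigma_y[i, ]) %*% post$Residual_cors[i, , ] %*% diag(post$sigma_y[i, ])
  min(eigen(S, symmetric = TRUE, only.values = TRUE)$values)
})
summary(min_eig)  # a pile-up near zero would indicate a near-singular covariance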

Yes. You could use the same lambda_1 for all effects related to x_1. The effects can still be different even if the local prior scales are the same.
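
In code, that amounts to replacing the three separate lambda vectors with a single shared one. A sketch of the multivariate model above with only that change (untested; whether tau and caux should also be shared is a separate modeling choice):

data {
  int <lower=1> N;
  int <lower=1> M;
  matrix[N,M] X;
  vector[N] y1;
  vector[N] y2;
  vector[N] y3;
  real<lower=0> scale_global;
  real<lower=1> nu_global;
  real<lower=1> nu_local;
  real<lower=0> slab_scale;
  real<lower=1> slab_df;
}
transformed data {
  vector[3] y[N];
  for (n in 1:N) y[n] = [y1[n], y2[n], y3[n]]';
}
parameters {
  real alpha[3];
  vector[M] z[3];
  vector<lower=0>[M] lambda_hs; // ONE local scale per predictor, shared by all outcomes
  real<lower=0> tau[3];         // outcome-specific global shrinkage
  real<lower=0> caux[3];
  vector<lower=0>[3] sigma_y;
  cholesky_factor_corr[3] L_Omega;
}
transformed parameters {
  vector[M] beta_hs[3];
  for (k in 1:3) {
    real c = slab_scale * sqrt(caux[k]);
    vector[M] lambda_tilde
      = sqrt(c^2 * square(lambda_hs) ./ (c^2 + tau[k]^2 * square(lambda_hs)));
    beta_hs[k] = z[k] .* lambda_tilde * tau[k];
  }
}
model {
  vector[N] mu1 = alpha[1] + X * beta_hs[1];
  vector[N] mu2 = alpha[2] + X * beta_hs[2];
  vector[N] mu3 = alpha[3] + X * beta_hs[3];
  vector[3] mu[N];
  matrix[3,3] LSigma = diag_pre_multiply(sigma_y, L_Omega);
  for (n in 1:N) mu[n] = [mu1[n], mu2[n], mu3[n]]';

  alpha ~ normal(0, 1);
  for (k in 1:3) z[k] ~ normal(0, 1);
  // shared local scales: x_j escaping shrinkage for one outcome frees it for all
  lambda_hs ~ student_t(nu_local, 0, 1);
  for (k in 1:3) tau[k] ~ student_t(nu_global, 0, scale_global*sigma_y[k]);
  caux ~ inv_gamma(0.5*slab_df, 0.5*slab_df);
  sigma_y ~ normal(0, 1);
  L_Omega ~ lkj_corr_cholesky(2);

  y ~ multi_normal_cholesky(mu, LSigma);
}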

Thanks for all your help! I haven't tried the multivariate model any further, because we decided to focus on the univariate model, which is a lot more stable.

But I have another question, not directly related to the initial one. I hope it is okay if I ask it here without opening a new topic:

What is the best way to make the (regularized) horseshoe prior narrower? The true coefficients we used in the example above are actually quite large relative to the values we truly expect. We would rather expect true values for the coefficients somewhere around 0.05 instead of 0.5. With the true coefficients set to 0.5, the horseshoe in the univariate case works flawlessly, but the results were not close to the true values in the presence of high dimensionality when I set the true coefficients to e.g. 0.1. The model didn't directly show any fitting issues (I didn't have any divergent transitions etc.); the estimates were just not really good.
On the other hand, we also ran the same model with a spike-and-slab prior. Here I set the "spike" to Normal(0, 0.01) and the "slab" to Student-t(4, 0, 0.25). Maybe this helps to give an idea of which distribution we would expect for the horseshoe. The spike-and-slab prior works well but is a lot slower than the horseshoe in the high-dimensional case.
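
(For reference, a mixture prior of this kind can only be written in Stan with the per-coefficient inclusion indicator marginalized out. A minimal univariate sketch of that construction, where the inclusion probability p_slab is an assumed extra input, and which may differ from our actual implementation:)

data {
  int<lower=1> N;
  int<lower=1> M;
  matrix[N,M] X;
  vector[N] y;
  real<lower=0,upper=1> p_slab; // prior inclusion probability
}
parameters {
  real alpha;
  vector[M] delta;
  real<lower=0> sigma_y;
}
model {
  alpha ~ normal(0, 1);
  sigma_y ~ normal(0, 1);
  // two-component mixture per coefficient: spike N(0, 0.01), slab Student-t(4, 0, 0.25)
  for (m in 1:M)
    target += log_mix(p_slab,
                      student_t_lpdf(delta[m] | 4, 0, 0.25),
                      normal_lpdf(delta[m] | 0, 0.01));
  y ~ normal(alpha + X * delta, sigma_y);
}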

Thank you in advance!

Use the regularized horseshoe prior, which has a slab scale parameter. See Figures 1 and 2 in Sparsity information and regularization in the horseshoe and other shrinkage priors for an illustration of how the spike-and-slab prior and the regularized horseshoe prior are related. The inverse-gamma prior on c^2 makes the slab part marginally a Student-t, so slab_df = 4 and slab_scale = 0.25 would match your Student-t(4, 0, 0.25) slab.

Thanks for your fast reply! When running the horseshoe with a true coefficient of 0.1, I already tried setting slab_scale to smaller values. I tried some pretty small values like 0.25 but also larger values like 1. I also set c directly to a value like 0.5. The results always look as follows (where the true values are \alpha = 1, \delta_1, \ldots, \delta_{80} = 0.1, \delta_{81}, \ldots, \delta_{240} = 0 and \sigma = 0.5, with N = 200). There are no convergence issues or any warnings:

Inference for Stan model: fa28aead507adf88c1ad9605d177650d.
4 chains, each with iter=4000; warmup=1000; thin=1; 
post-warmup draws per chain=3000, total post-warmup draws=12000.

                 mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
alpha            2.21    0.01 0.41  1.44  1.92  2.20  2.48  3.04  3588    1
delta_hs[1,1]    0.17    0.00 0.06  0.05  0.13  0.17  0.20  0.27  3459    1
delta_hs[1,2]    0.02    0.00 0.04 -0.04  0.00  0.01  0.04  0.11  4466    1
delta_hs[1,3]    0.10    0.00 0.05  0.00  0.06  0.10  0.13  0.20  2846    1
delta_hs[1,4]    0.11    0.00 0.05  0.00  0.07  0.11  0.14  0.21  3723    1
delta_hs[1,5]    0.09    0.00 0.05  0.00  0.05  0.09  0.12  0.19  3182    1
delta_hs[1,6]    0.18    0.00 0.04  0.09  0.15  0.18  0.21  0.26  4894    1
delta_hs[1,7]   -0.01    0.00 0.04 -0.09 -0.02  0.00  0.01  0.06  4881    1
delta_hs[1,8]    0.05    0.00 0.05 -0.02  0.01  0.05  0.09  0.16  4001    1
delta_hs[1,9]    0.09    0.00 0.05  0.00  0.05  0.09  0.12  0.19  2672    1
delta_hs[1,10]   0.08    0.00 0.06 -0.01  0.03  0.08  0.12  0.21  2370    1
...
delta_hs[1,70]   0.09    0.00 0.05  0.00  0.05  0.09  0.13  0.20  2905    1
delta_hs[1,71]   0.00    0.00 0.03 -0.06 -0.01  0.00  0.01  0.06  7781    1
delta_hs[1,72]   0.00    0.00 0.03 -0.07 -0.02  0.00  0.01  0.05  8060    1
delta_hs[1,73]   0.10    0.00 0.05  0.00  0.07  0.10  0.13  0.19  3272    1
delta_hs[1,74]   0.04    0.00 0.05 -0.03  0.00  0.03  0.07  0.16  2935    1
delta_hs[1,75]   0.09    0.00 0.05 -0.01  0.04  0.09  0.12  0.20  3576    1
delta_hs[1,76]   0.08    0.00 0.05  0.00  0.04  0.08  0.11  0.18  4136    1
delta_hs[1,77]   0.14    0.00 0.06  0.01  0.10  0.14  0.18  0.26  3322    1
delta_hs[1,78]   0.03    0.00 0.04 -0.03  0.00  0.02  0.05  0.12  6191    1
delta_hs[1,79]   0.17    0.00 0.05  0.05  0.13  0.17  0.20  0.27  3503    1
delta_hs[1,80]   0.13    0.00 0.05  0.02  0.10  0.14  0.17  0.23  3427    1
delta_hs[1,81]   0.01    0.00 0.03 -0.04  0.00  0.01  0.03  0.09  6891    1
delta_hs[1,82]  -0.02    0.00 0.03 -0.10 -0.04 -0.01  0.00  0.04  6358    1
delta_hs[1,83]   0.01    0.00 0.03 -0.05 -0.01  0.00  0.03  0.09  5958    1
delta_hs[1,84]   0.02    0.00 0.03 -0.04  0.00  0.01  0.03  0.09  6534    1
delta_hs[1,85]   0.09    0.00 0.05  0.00  0.05  0.09  0.13  0.20  3466    1
delta_hs[1,86]  -0.02    0.00 0.03 -0.10 -0.03 -0.01  0.00  0.04  6409    1
delta_hs[1,87]   0.00    0.00 0.03 -0.07 -0.02  0.00  0.01  0.07  6944    1
delta_hs[1,88]   0.00    0.00 0.03 -0.06 -0.01  0.00  0.01  0.06  8057    1
delta_hs[1,89]   0.02    0.00 0.03 -0.04  0.00  0.01  0.04  0.10  5509    1
delta_hs[1,90]  -0.02    0.00 0.04 -0.11 -0.04 -0.01  0.00  0.03  5830    1
...
delta_hs[1,230] -0.02    0.00 0.03 -0.10 -0.04 -0.01  0.00  0.04  4253    1
delta_hs[1,231]  0.06    0.00 0.05 -0.01  0.02  0.06  0.10  0.18  4293    1
delta_hs[1,232]  0.00    0.00 0.03 -0.06 -0.01  0.00  0.02  0.07  6894    1
delta_hs[1,233]  0.01    0.00 0.03 -0.05 -0.01  0.00  0.02  0.08  8090    1
delta_hs[1,234]  0.02    0.00 0.03 -0.03  0.00  0.01  0.04  0.10  5648    1
delta_hs[1,235]  0.01    0.00 0.04 -0.08 -0.01  0.00  0.02  0.10  5796    1
delta_hs[1,236]  0.04    0.00 0.04 -0.02  0.01  0.03  0.07  0.14  5126    1
delta_hs[1,237]  0.02    0.00 0.04 -0.05  0.00  0.01  0.04  0.11  5830    1
delta_hs[1,238] -0.04    0.00 0.04 -0.14 -0.06 -0.03  0.00  0.03  4261    1
delta_hs[1,239]  0.01    0.00 0.03 -0.06 -0.01  0.00  0.02  0.09  5630    1
delta_hs[1,240]  0.05    0.00 0.05 -0.02  0.01  0.04  0.08  0.15  4533    1
sigma_y          1.94    0.01 0.31  1.33  1.72  1.93  2.14  2.56   752    1

Samples were drawn using NUTS(diag_e) at Sun Apr 12 19:54:47 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

compared to the results of the Spike-and-Slab prior:

Inference for Stan model: 57d13803abba2cf5c2b40ab7d27f69fe.
4 chains, each with iter=20000; warmup=10000; thin=10; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

                mean se_mean    sd    2.5%     25%     50%     75%   97.5% n_eff Rhat
alpha           1.13    0.00  0.11    0.90    1.05    1.13    1.20    1.34  4018    1
delta[1,1]      0.09    0.00  0.01    0.07    0.08    0.09    0.10    0.11  3702    1
delta[1,2]      0.10    0.00  0.01    0.08    0.10    0.10    0.11    0.13  3908    1
delta[1,3]      0.10    0.00  0.01    0.08    0.09    0.10    0.10    0.12  4219    1
delta[1,4]      0.10    0.00  0.01    0.07    0.09    0.10    0.11    0.12  3633    1
delta[1,5]      0.10    0.00  0.01    0.08    0.10    0.10    0.11    0.13  4159    1
delta[1,6]      0.11    0.00  0.01    0.09    0.10    0.11    0.12    0.14  3817    1
delta[1,7]      0.10    0.00  0.01    0.08    0.10    0.10    0.11    0.12  3698    1
delta[1,8]      0.10    0.00  0.01    0.07    0.09    0.10    0.11    0.12  3735    1
delta[1,9]      0.10    0.00  0.01    0.08    0.09    0.10    0.11    0.12  3670    1
delta[1,10]     0.10    0.00  0.01    0.08    0.09    0.10    0.11    0.13  4020    1
...
delta[1,70]     0.10    0.00  0.01    0.08    0.10    0.10    0.11    0.12  4059    1
delta[1,71]     0.11    0.00  0.01    0.08    0.10    0.11    0.12    0.14  3899    1
delta[1,72]     0.10    0.00  0.01    0.08    0.09    0.10    0.10    0.12  4120    1
delta[1,73]     0.11    0.00  0.01    0.09    0.10    0.11    0.12    0.13  3836    1
delta[1,74]     0.09    0.00  0.01    0.07    0.09    0.09    0.10    0.11  3819    1
delta[1,75]     0.10    0.00  0.01    0.08    0.09    0.10    0.11    0.12  3677    1
delta[1,76]     0.09    0.00  0.01    0.07    0.08    0.09    0.09    0.11  3519    1
delta[1,77]     0.11    0.00  0.01    0.09    0.10    0.11    0.11    0.13  4103    1
delta[1,78]     0.11    0.00  0.01    0.08    0.10    0.11    0.11    0.13  3300    1
delta[1,79]     0.11    0.00  0.01    0.08    0.10    0.11    0.11    0.13  4083    1
delta[1,80]     0.09    0.00  0.01    0.07    0.09    0.09    0.10    0.12  3506    1
delta[1,81]     0.00    0.00  0.01   -0.02   -0.01    0.00    0.00    0.01  3915    1
delta[1,82]     0.00    0.00  0.01   -0.01    0.00    0.00    0.01    0.02  4028    1
delta[1,83]     0.00    0.00  0.01   -0.01    0.00    0.00    0.01    0.02  3771    1
delta[1,84]     0.00    0.00  0.01   -0.02   -0.01    0.00    0.00    0.01  4007    1
delta[1,85]     0.01    0.00  0.01   -0.01    0.00    0.01    0.01    0.02  3553    1
delta[1,86]     0.00    0.00  0.01   -0.01    0.00    0.00    0.01    0.02  3818    1
delta[1,87]     0.00    0.00  0.01   -0.01    0.00    0.00    0.01    0.02  3820    1
delta[1,88]     0.00    0.00  0.01   -0.01    0.00    0.00    0.01    0.02  4085    1
delta[1,89]     0.01    0.00  0.01   -0.01    0.00    0.01    0.01    0.02  3783    1
delta[1,90]     0.00    0.00  0.01   -0.01    0.00    0.00    0.00    0.01  4070    1
...
delta[1,230]   -0.01    0.00  0.01   -0.02   -0.01   -0.01    0.00    0.01  3627    1
delta[1,231]    0.00    0.00  0.01   -0.01    0.00    0.00    0.01    0.02  3909    1
delta[1,232]    0.00    0.00  0.01   -0.01    0.00    0.00    0.01    0.02  3715    1
delta[1,233]    0.01    0.00  0.01   -0.01    0.00    0.00    0.01    0.02  4170    1
delta[1,234]    0.01    0.00  0.01   -0.01    0.00    0.01    0.01    0.02  3746    1
delta[1,235]    0.00    0.00  0.01   -0.02   -0.01    0.00    0.00    0.02  4060    1
delta[1,236]    0.00    0.00  0.01   -0.02   -0.01    0.00    0.00    0.01  4028    1
delta[1,237]   -0.01    0.00  0.01   -0.03   -0.01   -0.01    0.00    0.01  3896    1
delta[1,238]   -0.01    0.00  0.01   -0.03   -0.01   -0.01    0.00    0.01  3989    1
delta[1,239]    0.00    0.00  0.01   -0.02   -0.01    0.00    0.00    0.01  3950    1
delta[1,240]    0.00    0.00  0.01   -0.01    0.00    0.00    0.01    0.01  4121    1
sigma_y         0.45    0.00  0.06    0.34    0.41    0.44    0.48    0.57  2769    1
lp__         -543.59    0.48 24.63 -592.70 -560.04 -543.25 -526.75 -496.91  2650    1

Samples were drawn using NUTS(diag_e) at Mon Apr 13 03:02:35 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

Do you have any idea what the problem could be? Thanks!

Did you also change the scale_global value, which controls the level of sparsity?

For this simulated data the horseshoe is obviously the wrong model and spike-and-slab is the more correct one, although now I'm starting to doubt whether I understand what your spike-and-slab model is.

How do you implement the spike-and-slab prior in Stan? The spike-and-slab I'm thinking of is not feasible in Stan for this many covariates, as there is a combinatorial explosion of possible assignments to spike or slab.

For comparison, can you run a model with just a plain normal prior, only the predictors 1,…,80, and the same sigma and N? Your spike-and-slab results seem suspiciously good.
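
Something like this would do as the baseline (untested sketch; X and y here stand for the design matrix and outcome from your latest simulation):

code_baseline <- "data {
  int<lower=1> N;
  int<lower=1> M;
  matrix[N,M] X;
  vector[N] y;
}
parameters {
  real alpha;
  vector[M] delta;
  real<lower=0> sigma_y;
}
model {
  alpha ~ normal(0, 1);
  delta ~ normal(0, 1);
  sigma_y ~ normal(0, 1);
  y ~ normal(alpha + X * delta, sigma_y);
}"
baseline <- stan(model_code = code_baseline,
                 data = list(N = N, M = 80, X = X[, 1:80], y = as.numeric(y)),
                 chains = 4, cores = 4)
print(baseline, c("alpha", "delta", "sigma_y"))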
