I’ve tried three things now.
- I put a highly informative N(0.5, 0.5) prior on the sigmas.
Still all coefficients at 0 and all sigmas extremely large. It's essentially the same picture as in the initial post.
- I set adapt_delta to an even higher value of 0.999999 and stepsize to 0.0001 (see the sketch after this list). EDIT: Sorry, I didn't see your edit. I will try it now without stepsize and max_treedepth.
Ditto. The same holds for the run without stepsize and max_treedepth :(
- I set M and M_0 to even lower values, M = 30 and M_0 = 5, but kept everything else as in the initial post (i.e., Cauchy(0, 2.5) for the sigmas).
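For reference, the second attempt corresponds to a call roughly like the one below. This is only a sketch: the file name and data list are placeholders, not my actual code; adapt_delta, stepsize, and max_treedepth are the standard rstan control arguments. The prior change in the first attempt was on the Stan side, i.e. `sigma_y ~ normal(0.5, 0.5);` in the model block.

```r
# Sketch of the rstan call for the second attempt. The file name and
# data list are placeholders; adapt_delta, stepsize, and max_treedepth
# are standard rstan control arguments.
fit <- rstan::stan(
  file   = "model.stan",  # placeholder for the model from the initial post
  data   = stan_data,     # placeholder for the data list
  chains = 4, iter = 2000, warmup = 1000,
  control = list(
    adapt_delta = 0.999999,  # very strict acceptance target
    stepsize    = 1e-4       # small initial step size
    # max_treedepth was also raised in some runs (value omitted here)
  )
)
```

With the third attempt (M = 30, M_0 = 5), the summary looks like this: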
```
Inference for Stan model: a770f8efe1edc8442df6f5e6daeed073.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
alpha[1] 0.51 0 0.01 0.48 0.50 0.51 0.52 0.54 5149 1
alpha[2] -0.02 0 0.03 -0.08 -0.04 -0.02 0.01 0.05 5027 1
alpha[3] -0.55 0 0.05 -0.65 -0.59 -0.55 -0.52 -0.46 4519 1
beta_hs_y1[1] 0.51 0 0.01 0.48 0.50 0.51 0.52 0.54 3768 1
beta_hs_y1[2] 0.49 0 0.02 0.46 0.48 0.49 0.50 0.52 4114 1
beta_hs_y1[3] 0.46 0 0.02 0.43 0.45 0.46 0.47 0.49 3760 1
beta_hs_y1[4] 0.52 0 0.01 0.49 0.51 0.52 0.53 0.54 3844 1
beta_hs_y1[5] 0.47 0 0.01 0.44 0.46 0.47 0.48 0.50 3848 1
beta_hs_y1[6] 0.00 0 0.01 -0.02 -0.01 0.00 0.00 0.01 4000 1
beta_hs_y1[7] 0.00 0 0.01 -0.01 0.00 0.00 0.01 0.02 4639 1
beta_hs_y1[8] 0.00 0 0.01 -0.02 0.00 0.00 0.00 0.01 5077 1
beta_hs_y1[9] -0.01 0 0.01 -0.03 -0.01 0.00 0.00 0.01 3120 1
beta_hs_y1[10] 0.00 0 0.01 -0.01 0.00 0.00 0.01 0.03 4229 1
beta_hs_y1[11] 0.00 0 0.01 -0.03 -0.01 0.00 0.00 0.01 3991 1
beta_hs_y1[12] 0.01 0 0.01 -0.01 0.00 0.00 0.01 0.03 3050 1
beta_hs_y1[13] 0.00 0 0.01 -0.01 0.00 0.00 0.01 0.02 4610 1
beta_hs_y1[14] 0.00 0 0.01 -0.01 0.00 0.00 0.00 0.02 5496 1
beta_hs_y1[15] 0.00 0 0.01 -0.02 0.00 0.00 0.00 0.01 5171 1
beta_hs_y1[16] -0.01 0 0.01 -0.04 -0.01 0.00 0.00 0.01 2830 1
beta_hs_y1[17] -0.01 0 0.01 -0.03 -0.01 0.00 0.00 0.01 4066 1
beta_hs_y1[18] 0.00 0 0.01 -0.02 0.00 0.00 0.00 0.02 4745 1
beta_hs_y1[19] 0.00 0 0.01 -0.03 -0.01 0.00 0.00 0.01 3814 1
beta_hs_y1[20] 0.00 0 0.01 -0.03 -0.01 0.00 0.00 0.01 4270 1
beta_hs_y1[21] 0.00 0 0.01 -0.01 0.00 0.00 0.01 0.02 4315 1
beta_hs_y1[22] 0.00 0 0.01 -0.01 0.00 0.00 0.00 0.02 5036 1
beta_hs_y1[23] 0.00 0 0.01 -0.01 0.00 0.00 0.00 0.02 4197 1
beta_hs_y1[24] 0.00 0 0.01 -0.01 0.00 0.00 0.01 0.03 4076 1
beta_hs_y1[25] 0.00 0 0.01 -0.02 0.00 0.00 0.00 0.01 5198 1
beta_hs_y1[26] 0.00 0 0.01 -0.01 0.00 0.00 0.00 0.02 4649 1
beta_hs_y1[27] -0.01 0 0.01 -0.03 -0.01 0.00 0.00 0.01 3161 1
beta_hs_y1[28] 0.01 0 0.01 -0.01 0.00 0.01 0.02 0.04 2491 1
beta_hs_y1[29] 0.00 0 0.01 -0.01 0.00 0.00 0.00 0.02 4964 1
beta_hs_y1[30] 0.00 0 0.01 -0.01 0.00 0.00 0.00 0.02 4450 1
beta_hs_y2[1] -0.45 0 0.03 -0.51 -0.47 -0.45 -0.43 -0.39 3654 1
beta_hs_y2[2] -0.53 0 0.03 -0.59 -0.56 -0.53 -0.51 -0.47 3878 1
beta_hs_y2[3] -0.47 0 0.03 -0.54 -0.49 -0.47 -0.45 -0.40 3915 1
beta_hs_y2[4] -0.50 0 0.03 -0.56 -0.52 -0.50 -0.48 -0.44 3724 1
beta_hs_y2[5] -0.55 0 0.03 -0.61 -0.57 -0.55 -0.53 -0.49 4084 1
beta_hs_y2[6] 0.01 0 0.02 -0.02 0.00 0.00 0.02 0.06 3969 1
beta_hs_y2[7] 0.00 0 0.02 -0.04 -0.01 0.00 0.00 0.03 5338 1
beta_hs_y2[8] 0.00 0 0.02 -0.03 0.00 0.00 0.01 0.05 4838 1
beta_hs_y2[9] -0.02 0 0.03 -0.09 -0.04 -0.02 0.00 0.01 2787 1
beta_hs_y2[10] 0.00 0 0.02 -0.03 0.00 0.00 0.01 0.04 4810 1
beta_hs_y2[11] 0.00 0 0.02 -0.05 -0.01 0.00 0.00 0.03 3881 1
beta_hs_y2[12] 0.01 0 0.02 -0.02 0.00 0.00 0.02 0.05 3505 1
beta_hs_y2[13] -0.02 0 0.02 -0.08 -0.03 -0.01 0.00 0.02 3029 1
beta_hs_y2[14] 0.00 0 0.02 -0.03 0.00 0.00 0.01 0.04 4572 1
beta_hs_y2[15] 0.01 0 0.02 -0.02 0.00 0.00 0.02 0.06 4360 1
beta_hs_y2[16] 0.00 0 0.02 -0.05 -0.01 0.00 0.00 0.03 4732 1
beta_hs_y2[17] -0.01 0 0.02 -0.05 -0.01 0.00 0.00 0.02 4091 1
beta_hs_y2[18] -0.01 0 0.02 -0.05 -0.01 0.00 0.00 0.02 5125 1
beta_hs_y2[19] 0.00 0 0.02 -0.04 -0.01 0.00 0.01 0.03 5515 1
beta_hs_y2[20] 0.00 0 0.02 -0.03 0.00 0.00 0.01 0.04 4984 1
beta_hs_y2[21] -0.02 0 0.03 -0.09 -0.04 -0.01 0.00 0.01 3181 1
beta_hs_y2[22] -0.01 0 0.02 -0.05 -0.01 0.00 0.00 0.03 4336 1
beta_hs_y2[23] 0.01 0 0.02 -0.02 0.00 0.00 0.01 0.05 4729 1
beta_hs_y2[24] 0.00 0 0.02 -0.03 0.00 0.00 0.01 0.05 4964 1
beta_hs_y2[25] -0.01 0 0.02 -0.05 -0.01 0.00 0.00 0.02 4029 1
beta_hs_y2[26] 0.00 0 0.02 -0.04 -0.01 0.00 0.01 0.04 5019 1
beta_hs_y2[27] 0.02 0 0.02 -0.01 0.00 0.01 0.03 0.08 3329 1
beta_hs_y2[28] 0.00 0 0.02 -0.03 0.00 0.00 0.01 0.04 5182 1
beta_hs_y2[29] 0.00 0 0.02 -0.03 0.00 0.00 0.01 0.04 4900 1
beta_hs_y2[30] 0.07 0 0.04 0.00 0.04 0.07 0.09 0.14 1737 1
beta_hs_y3[1] 0.60 0 0.05 0.50 0.57 0.60 0.63 0.69 4260 1
beta_hs_y3[2] 0.45 0 0.05 0.35 0.41 0.45 0.48 0.55 3953 1
beta_hs_y3[3] 0.41 0 0.05 0.30 0.37 0.41 0.44 0.51 3856 1
beta_hs_y3[4] 0.54 0 0.05 0.44 0.50 0.54 0.57 0.63 4218 1
beta_hs_y3[5] 0.48 0 0.05 0.38 0.44 0.48 0.51 0.57 3764 1
beta_hs_y3[6] 0.01 0 0.03 -0.03 0.00 0.00 0.02 0.08 3851 1
beta_hs_y3[7] -0.01 0 0.02 -0.07 -0.01 0.00 0.01 0.04 4338 1
beta_hs_y3[8] -0.01 0 0.02 -0.07 -0.02 0.00 0.00 0.03 4255 1
beta_hs_y3[9] 0.00 0 0.02 -0.06 -0.01 0.00 0.01 0.04 4945 1
beta_hs_y3[10] 0.00 0 0.02 -0.06 -0.01 0.00 0.01 0.06 4671 1
beta_hs_y3[11] 0.03 0 0.04 -0.02 0.00 0.01 0.05 0.13 2845 1
beta_hs_y3[12] -0.01 0 0.03 -0.09 -0.03 -0.01 0.00 0.03 3570 1
beta_hs_y3[13] 0.01 0 0.03 -0.03 0.00 0.00 0.02 0.09 4133 1
beta_hs_y3[14] 0.00 0 0.02 -0.05 -0.01 0.00 0.01 0.05 5206 1
beta_hs_y3[15] -0.01 0 0.03 -0.08 -0.02 0.00 0.00 0.03 4232 1
beta_hs_y3[16] 0.01 0 0.03 -0.04 0.00 0.00 0.02 0.08 3868 1
beta_hs_y3[17] 0.01 0 0.02 -0.04 -0.01 0.00 0.02 0.07 4355 1
beta_hs_y3[18] 0.00 0 0.02 -0.05 -0.01 0.00 0.01 0.06 5744 1
beta_hs_y3[19] 0.03 0 0.04 -0.02 0.00 0.01 0.04 0.12 3647 1
beta_hs_y3[20] -0.01 0 0.03 -0.08 -0.02 0.00 0.00 0.04 4726 1
beta_hs_y3[21] 0.01 0 0.03 -0.04 -0.01 0.00 0.02 0.08 5146 1
beta_hs_y3[22] -0.02 0 0.03 -0.11 -0.03 -0.01 0.00 0.02 3137 1
beta_hs_y3[23] 0.00 0 0.02 -0.06 -0.01 0.00 0.01 0.04 5195 1
beta_hs_y3[24] -0.01 0 0.03 -0.07 -0.01 0.00 0.01 0.04 5497 1
beta_hs_y3[25] 0.00 0 0.02 -0.06 -0.01 0.00 0.01 0.05 4898 1
beta_hs_y3[26] -0.02 0 0.04 -0.11 -0.04 -0.01 0.00 0.02 3211 1
beta_hs_y3[27] -0.02 0 0.03 -0.10 -0.03 -0.01 0.00 0.03 3925 1
beta_hs_y3[28] 0.00 0 0.02 -0.06 -0.01 0.00 0.01 0.04 4680 1
beta_hs_y3[29] -0.01 0 0.03 -0.07 -0.01 0.00 0.01 0.05 4816 1
beta_hs_y3[30] 0.01 0 0.03 -0.04 0.00 0.00 0.02 0.09 3914 1
sigma_y[1] 0.20 0 0.01 0.18 0.19 0.20 0.21 0.22 4853 1
sigma_y[2] 0.43 0 0.02 0.39 0.41 0.43 0.44 0.48 4463 1
sigma_y[3] 0.67 0 0.04 0.61 0.65 0.67 0.70 0.74 4312 1
Residual_cors[1,1] 1.00 NaN 0.00 1.00 1.00 1.00 1.00 1.00 NaN NaN
Residual_cors[1,2] 0.19 0 0.07 0.05 0.14 0.19 0.24 0.33 3924 1
Residual_cors[1,3] 0.17 0 0.07 0.03 0.12 0.17 0.22 0.30 5000 1
Residual_cors[2,1] 0.19 0 0.07 0.05 0.14 0.19 0.24 0.33 3924 1
Residual_cors[2,2] 1.00 0 0.00 1.00 1.00 1.00 1.00 1.00 4075 1
Residual_cors[2,3] 0.20 0 0.07 0.06 0.15 0.20 0.25 0.33 4976 1
Residual_cors[3,1] 0.17 0 0.07 0.03 0.12 0.17 0.22 0.30 5000 1
Residual_cors[3,2] 0.20 0 0.07 0.06 0.15 0.20 0.25 0.33 4976 1
Residual_cors[3,3] 1.00 0 0.00 1.00 1.00 1.00 1.00 1.00 3575 1
residual_cors[1] 0.19 0 0.07 0.05 0.14 0.19 0.24 0.33 3924 1
residual_cors[2] 0.17 0 0.07 0.03 0.12 0.17 0.22 0.30 5000 1
residual_cors[3] 0.20 0 0.07 0.06 0.15 0.20 0.25 0.33 4976 1
Samples were drawn using NUTS(diag_e) at Tue Mar 24 17:30:03 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
```
In this case the results are as expected, so the code itself seems to work (although with only 30 coefficients I don't really need a horseshoe prior). In our actual data set we have 200 observations and at least 160 potential predictors, so the problem seems to be related to the total number of predictors.
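As an aside, this would be consistent with the usual guidance for the horseshoe: the sensible scale of the global shrinkage parameter depends on the assumed number of relevant predictors relative to the total. A quick back-of-the-envelope check using the suggestion from Piironen & Vehtari (2017); the p0, sigma, and simulation N below are illustrative assumptions, not values from my model:

```r
# Suggested scale for the global shrinkage parameter of the (regularized)
# horseshoe, following Piironen & Vehtari (2017):
#   tau0 = p0 / (D - p0) * sigma / sqrt(N)
# p0 = assumed number of relevant predictors, D = total number of
# predictors, N = number of observations. All values are illustrative.
tau0 <- function(p0, D, N, sigma = 1) p0 / (D - p0) * sigma / sqrt(N)

tau0(p0 = 5, D = 30,  N = 200)  # ~0.014 in the small simulated setting
tau0(p0 = 5, D = 160, N = 200)  # ~0.002 with 160 predictors: much stronger shrinkage
```

With D = 160 the implied global scale is several times smaller, which may be part of why settings that work at M = 30 break down on the real data.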