I am taking the credible interval based on my total number of regressions. In my case 1 - (0.05 / G) = 0.9995 (which I guess is really limited by the number of samples/iterations I have: 250 iterations × 4 chains).
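For concreteness, here is a small sketch (in Python, though the models themselves run in brms) of the Bonferroni-style adjustment I mean, assuming G = 100 regressions; it also shows why I suspect 1000 draws is limiting:

```python
# Bonferroni-style adjustment of the credible level across G regressions
G = 100                      # total number of regressions (assumed here)
alpha = 0.05
level = 1 - alpha / G        # adjusted credible level
print(level)                 # 0.9995

# With 250 iterations x 4 chains = 1000 posterior draws, each tail of the
# 99.95% interval would be estimated from only a fraction of a draw:
draws = 250 * 4
tail_draws = draws * (1 - level) / 2
print(tail_draws)            # 0.25 -- far too few draws to resolve the quantile
```

So the endpoints of a 99.95% interval from 1000 draws sit beyond the most extreme samples, which is why more iterations seem necessary.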
I will do more experiments with 1. The value 3 was suggested in the brms documentation to avoid divergences.
I should point out that further experiments with higher non-zero slopes (greater than 1.5) result in a better false-positive rate.
Intuitively I would like to pull the zero slopes toward 0 with more strength, but I guess that is what the hs_par_ratio parameter does.
Just to confirm: in the classification framework (as you modelled it in your article) I would have 20K predictors and 13 observations. It seems a desperate situation, but it is what people often have.
Then should I really use hs_scale_global = hs_par_ratio / sqrt(13)?
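If I understand the regularized horseshoe correctly, the par_ratio shorthand in brms stands for p0 / (D - p0), i.e. the guessed ratio of non-zero to zero coefficients, and the global scale is then par_ratio / sqrt(n). A quick sketch of that calculation (p0 = 5 is a purely hypothetical guess, not from the article):

```python
import math

# Global scale for the regularized horseshoe:
#   tau0 = p0 / (D - p0) * 1 / sqrt(n)
# where p0 is the guessed number of non-zero coefficients.
# In brms, par_ratio plays the role of p0 / (D - p0),
# so scale_global = par_ratio / sqrt(n).
D = 20_000   # predictors (from the example above)
n = 13       # observations
p0 = 5       # hypothetical guess at the number of relevant predictors

par_ratio = p0 / (D - p0)
scale_global = par_ratio / math.sqrt(n)
print(scale_global)   # a very small global scale, as expected with D >> n
```

With D this large and n this small, the implied global scale is tiny, which matches the intuition of shrinking almost everything toward 0.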
My goal is not classification, by the way; ultimately it is multiple "hypothesis testing".