Solving BFMI issue - could anyone suggest a better prior?

Hi all,

This will (hopefully) be a simple question… I’m attempting to fit a varying effects model based closely on the examples in Statistical Rethinking (my model is below - sorry it’s not in raw Stan code! I can of course dig that out if need be). Unfortunately it’s not fitting terribly well (BFMI < 0.02, ESS problems, etc.), I think because of the sigma parameter - it has a mode at 0.3 but a long left tail (and possibly even a small second mode at 0). If I fix sigma at 0.3, all the problems clear up, but obviously I’d like to allow sigma to vary in the final model.

So, my question is: would anyone be able to suggest alternative priors for sigma? I presume I’m looking for something that pushes the posterior away from 0, but all the usual suspects (exponential, Cauchy, half-normal, etc.) do the opposite - their modes sit at 0. Would gamma, say, be a reasonable choice? My main worry is that this just seems a bit arbitrary!
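For what it's worth, a quick density comparison makes the difference visible: a gamma with shape > 1 (or a log-normal) has zero density at sigma = 0, so it does push mass away from the boundary, unlike the exponential. The shape/rate values below are purely illustrative (chosen so the gamma mode lands near 0.3), not recommendations:

```r
# Compare candidate priors for sigma near zero.
# Exponential: mode at 0. Gamma(shape > 1): density is 0 at sigma = 0,
# with mode at (shape - 1) / rate. Log-normal: also zero density at 0.
curve(dexp(x, rate = 1), from = 0, to = 2, ylab = "prior density", lty = 2)
curve(dgamma(x, shape = 2, rate = 3.33), add = TRUE)           # mode ~ 0.3
curve(dlnorm(x, meanlog = log(0.3), sdlog = 0.5), add = TRUE, lty = 3)

# The key boundary behaviour:
dexp(0, rate = 1)                  # 1: exponential piles up at 0
dgamma(0, shape = 2, rate = 3.33)  # 0: gamma(shape > 1) vanishes at 0
```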

Cheers


m_Pm5 <- rethinking::ulam(
  alist(
    logodds ~ dnorm(mu, sigma),
    mu <- b[cond] + a[subjn, cond],

    # adaptive priors
    vector[6]:a[subjn] ~ multi_normal(0, Rho_subjn, sigma_subjn),

    # fixed priors
    sigma ~ dexp(1),
    b[cond] ~ dnorm(1, 1),
    sigma_subjn ~ dexp(0.5),
    Rho_subjn ~ dlkjcorr(2)
  ),
  data = Pm, chains = 4, cores = 4, iter = 4000, log_lik = TRUE,
  control = list(adapt_delta = 0.99)
)

Hi,

Have you tried doing prior predictive checks? I believe the latest version of the package has a new function that can help you with that: extract.prior(). Plotting draws from your priors could help you see their implications.
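Even without refitting, you can sketch a package-free prior predictive check in base R. This is a minimal sketch assuming the priors stated in the model above (sigma ~ dexp(1), b ~ dnorm(1, 1)), ignoring the varying effects for simplicity:

```r
# Minimal prior predictive simulation for the stated priors:
# draw sigma and b from their priors, then simulate the outcome
# to see what the priors jointly imply on the log-odds scale.
set.seed(1)
n <- 1000
sigma <- rexp(n, rate = 1)     # sigma ~ dexp(1)
b     <- rnorm(n, 1, 1)        # b[cond] ~ dnorm(1, 1)
logodds_sim <- rnorm(n, mean = b, sd = sigma)
summary(logodds_sim)
```

If the simulated log-odds look wildly wider (or narrower) than anything plausible for your data, that is a hint the priors need rethinking.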