Convergence with overspecified models, low noise

Hello,

This is a continuation of a different question. I am trying to do some simulations to study dose/response curves with variation across blocks. My model is as follows:

stan_glmer(y ~ logDose * sampleName + (1 | block) + (0 + logDose | block),
           family = binomial(link = "probit"),
           data = dataSet, adapt_delta = 0.99)

The simulated data are logDose/response datasets with 5 doses, binary (0/1) responses, two samples, 2, 4, or 16 blocks, and 20 replicates at each dose.

Sometimes I might generate situations where there is very little variation across blocks and also very few blocks (two blocks total). In this situation (low variance, few blocks) the sampler seems to have a hard time estimating the variance parameters and converging. Why exactly is this? I understand that it may be due to the small number of blocks and the very small variance, but wouldn't one expect this to show up as wider credible intervals? What do people do in these situations? Is estimation not possible?

Thank you,
Ramiro

Yes, if there is no information in the data about the cross-block variance, then it is going to be dominated by the prior, which is by default exponential, and it is quite possible that there is posterior mass too close to zero. In addition to specifying QR = TRUE, you could mess around with the prior_covariance argument.
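To make the suggestion concrete, here is a hedged sketch of how the call might look with `QR = TRUE` and a non-default `prior_covariance`. The `decov()` arguments shown are illustrative, not a recommendation; raising `shape` above its default of 1 replaces the exponential prior on the scale with a gamma prior that puts less mass near zero:

```r
library(rstanarm)

# Sketch only, assuming the same simulated dataSet as in the original post.
# shape = 2 is a gamma prior on the block-level scale that pulls mass
# away from zero; the defaults (shape = 1, scale = 1) imply exponential(1).
fit <- stan_glmer(
  y ~ logDose * sampleName + (1 | block) + (0 + logDose | block),
  family = binomial(link = "probit"),
  data = dataSet,
  QR = TRUE,
  prior_covariance = decov(shape = 2, scale = 2),
  adapt_delta = 0.99
)
```

Whether a more informative prior is appropriate depends on how much block-to-block variation is plausible in the real experiment; with only two blocks, the prior will largely determine the posterior for the variance components either way.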

Dear Ben, thank you for your response. Do you have any references where I can learn more about this? If there is posterior mass very close to zero, why wouldn't it converge? I am just not clear on why it wouldn't converge if it's dominated by the prior.

Because if there is posterior mass at 0 in the constrained space, then there is posterior mass at -\infty in the unconstrained space, and it takes a long time to get to -\infty.
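A sketch of why this happens, assuming the standard log transform Stan applies to positive-constrained parameters: the sampler works with \(\zeta = \log \sigma\) rather than \(\sigma\) itself. With the Jacobian, a prior density \(p(\sigma)\) becomes

\[
p(\zeta) = p(e^{\zeta})\, e^{\zeta},
\]

so, for example, an Exponential(1) prior on \(\sigma\) gives \(p(\zeta) = \exp\!\left(\zeta - e^{\zeta}\right)\). The left tail of this density decays only like \(e^{\zeta}\), so when the likelihood carries essentially no information about \(\sigma\), the sampler must explore a long, nearly flat ramp as \(\sigma \to 0\) corresponds to \(\zeta \to -\infty\), which is what makes the chains slow to mix and hard to diagnose as converged.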