Sorry, I should have put that into more context. The scales of the predictors are so large that the chosen priors lead to a very large prior on the intercept, which is parameterised in a special way: internally, brms places the intercept prior on the intercept of the centered predictors and only transforms it back for the summary, so large predictor values inflate the reported intercept's prior. See ?set_prior for details, and the quick check right after the output below. I see the following output:
Population-Level Effects:
          Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept    -0.13     18.23   -36.15    35.96       4000 1.00
technique    -0.02      1.02    -1.99     2.01       4000 1.00
category     -0.01      1.00    -1.96     1.95       4000 1.00
subject       0.01      1.00    -2.01     1.98       4000 1.00
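To see where a prior specified with set_prior actually ends up under the default parameterisation, you can list the parameter classes first (a small sketch, assuming the same formula, data frame dat, and Poisson family as in the model below):

library(brms)

# List the parameter classes of the default (intercept-containing) model.
# The intercept gets its own class "Intercept", which internally refers to
# the intercept of the centered predictors, not the one reported above.
get_prior(tp ~ technique + category + subject,
          data = dat,
          family = poisson())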
As the output above shows, the intercept is the problem. You may put a prior on the intercept directly via
fit2 <- brm(formula = tp ~ 0 + intercept + technique + category + subject,
            data = dat,
            prior = set_prior("normal(0, 1)", class = "b"),
            family = poisson(link = "log"),
            sample_prior = "only"  # ignore the likelihood to check whether the model's priors are sane
)
In this case, the normal(0, 1) prior really applies to the intercept itself rather than to the internally transformed intercept.
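If you want to double-check, you can inspect the prior-only fit itself (a quick sketch using standard brms functions):

# With 0 + intercept, the intercept is now listed under class "b"
prior_summary(fit2)

# The prior-only draws for the intercept should now roughly follow normal(0, 1)
summary(fit2)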