Hi, I’m seeking help understanding what’s happening with my model. I can’t share the specific data, but suppose you had synthetic data generated as y = cx + b, with Gaussian noise N(0, 0.25) added to each point, so the data vary around a regression line with slope c. This closely resembles my data, and let’s suppose that at the population level the true values are c = 0.75 and b = 0. This is a multilevel model, but the groups are quite similar.
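For concreteness, here is a rough sketch of the kind of synthetic data I have in mind. All names (n_groups, df_curr) and the group-level deviation scales are illustrative assumptions, not my real setup, and I’m reading N(0, 0.25) as a standard deviation of 0.25:

```r
# Hypothetical data-generating sketch, not my actual data.
set.seed(42)
n_groups    <- 8
n_per_group <- 50
c_true <- 0.75   # population-level slope
b_true <- 0      # population-level intercept

df_curr <- do.call(rbind, lapply(seq_len(n_groups), function(g) {
  x <- runif(n_per_group, -2, 2)
  # Small group-level deviations, to mimic "quite similar" groups
  b_g <- b_true + rnorm(1, 0, 0.1)
  c_g <- c_true + rnorm(1, 0, 0.1)
  y <- b_g + c_g * x + rnorm(n_per_group, 0, 0.25)  # N(0, 0.25) noise, sd = 0.25
  data.frame(grouping = factor(g), x = x, y = y)
}))
```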
So I’m fitting a model in brms as follows:
priors <- c(
  prior(normal(0, 1), class = Intercept),
  prior(normal(0.75, 1), class = b),
  prior(normal(0, 1), class = sigma),
  prior(normal(0, 0.5), class = sd)
)

brm_int_plus_slope <- brm(
  y ~ 1 + x + (1 + x | grouping),
  data = df_curr, family = gaussian(), prior = priors,
  warmup = 500, iter = 3000, chains = 2, cores = 2
)
The resulting model fits the slope well, but the estimated intercept variance is far smaller than I would expect, on the order of +/- 0.01. As a result, the model captures very little of the variance around the origin; I would have expected a much larger intercept standard deviation to absorb that variation around the regression line. Model checks (mixing, posterior predictive checks, lagged autocorrelation) all look good. Am I mis-specifying my model in some obvious way? Is there another way to specify the priors that would let the intercept capture more of this variance? Any help would be much appreciated. Thanks.