Hi!
I’m using brms to fit a GAM with 8 covariates, only one smooth term, and all seems well in terms of model performance. But when I plot the resulting smooth term together with my raw data, the intercept sometimes (not always!) seems to be too low; see the attached image, where the purple lines (model) sit well below the points, which are binned averages of the raw data.
It’s a mixed-effects model with a lot of dispersion, and the data are all positive and modelled with a lognormal family.
brms call, simplified and in generic terms:
tmp.mod <- brm(formula = value ~ s(x1, bs = 'tp') + x2 + x3 + (1 | Area),
               data = dat, family = lognormal(),
               warmup = 1000, iter = 4000, chains = 4, cores = 4,
               prior = tmp.priors,
               control = list(adapt_delta = 0.99, max_treedepth = 12))
priors (not sure if they matter in this case):
tmp.priors <- c(set_prior('normal(0, 0.5)', class = 'b'),
                set_prior('normal(-3, 2.5)', class = 'Intercept'),
                set_prior('normal(0.5, 0.15)', class = 'sigma'))
The behaviour is the same whether I use conditional_effects or extract the predictions with epred_draws, in both cases holding x2 and x3 at their means and setting re_formula = NA (roughly as in the sketch below). I’ve tried the formula above as well as a version with "0 + Intercept" in front of the other terms.
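For reference, a simplified sketch of how I build the comparison plot (package choices and placeholder names like newdat and binned are just for illustration, not my exact code):

# Extract the smooth over x1 with x2/x3 at their means, ignoring (1 | Area),
# and compare against binned means of the raw data.
library(tidybayes)
library(dplyr)
library(ggplot2)

# Prediction grid over x1, holding x2 and x3 at their means
newdat <- data.frame(
  x1 = seq(min(dat$x1), max(dat$x1), length.out = 100),
  x2 = mean(dat$x2),
  x3 = mean(dat$x3)
)

# Posterior expected values on the response scale, random effects excluded
pred <- tmp.mod |>
  epred_draws(newdata = newdat, re_formula = NA) |>
  group_by(x1) |>
  median_qi(.epred)

# Binned averages of the raw data for comparison
binned <- dat |>
  mutate(bin = cut(x1, breaks = 20)) |>
  group_by(bin) |>
  summarise(x1 = mean(x1), value = mean(value))

ggplot() +
  geom_ribbon(data = pred, aes(x1, ymin = .lower, ymax = .upper), alpha = 0.2) +
  geom_line(data = pred, aes(x1, .epred), colour = "purple") +
  geom_point(data = binned, aes(x1, value))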
Does anyone know why this happens, and whether I’m simply missing something that makes this behaviour explainable/logical, or whether something is actually going wrong?
Let me know if I can provide additional information (I’m not allowed to share/upload the data)!
R version 4.2.2
brms 2.20.4
rstan 2.32.3