I’m debating whether to plot predictor effects manually or with conditional_effects(), and I’ve made the disturbing discovery that fitted.brmsfit(), which I use to calculate predictor effects manually, produces different expected values from conditional_effects() at its default setting of method = "fitted". It is disturbing because now I don’t know which expected values are correct. It’s a categorical model. Here’s how to reproduce the phenomenon with simple categorical data:
Just a quick thought:
My understanding is that fitted() gives you model predictions for the input data you supply, while conditional_effects() gives you conditional or marginal effects. The difference is that for the conditional effects, all predictors not part of the condition you investigate are held at their mean (continuous predictors) or at their reference category (factors).
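For example, with a toy two-predictor categorical model (made-up data, not your actual setup), the difference would look roughly like this:

```r
library(brms)

# Toy sketch (made-up data): categorical outcome y, focal predictor x,
# plus a second predictor z.
set.seed(1)
d <- data.frame(
  y = factor(sample(c("a", "b", "c"), 300, replace = TRUE)),
  x = factor(sample(c("low", "high"), 300, replace = TRUE)),
  z = rnorm(300)
)
fit <- brm(y ~ x + z, data = d, family = categorical(),
           chains = 2, refresh = 0)

# conditional_effects() varies x and holds z at its mean automatically
conditional_effects(fit, effects = "x", categorical = TRUE)

# fitted() returns expected probabilities for whatever newdata you pass,
# so to match conditional_effects() you have to fix z at its mean yourself
nd <- data.frame(x = c("low", "high"), z = mean(d$z))
fitted(fit, newdata = nd)
```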
Besides that, the documentation for conditional_effects’ method parameter reads:
Method used to obtain predictions. Can be set to "posterior_epred" (the default), "posterior_predict", or "posterior_linpred". For more details, see the respective function documentations.
So “fitted” doesn’t seem to be a valid option.
I don’t think they are supposed to do the same thing.
PS: Your code doesn’t run:
Error: The following priors do not correspond to any model parameter:
b ~ normal(0, 1)
I also think you can’t use threading and multiple cores per chain simultaneously.
Just a quick addition: fitted() is actually an outdated name for posterior_epred(). However, conditional_effects() uses robust = TRUE by default, so it reports the posterior median as its point summary, whereas fitted() defaults to the posterior mean.
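For instance, with a toy categorical fit like the one sketched above, the point estimates should line up once you request the same summary from fitted():

```r
# Quick check, reusing the toy fit and data (fit, d) from the sketch above:
nd <- data.frame(x = c("low", "high"), z = mean(d$z))

# posterior means: fitted()'s default summary (robust = FALSE)
fitted(fit, newdata = nd, robust = FALSE)

# posterior medians: what conditional_effects() uses (robust = TRUE)
fitted(fit, newdata = nd, robust = TRUE)

# the Estimate column of the robust = TRUE call should match the estimate__
# column of conditional_effects(fit, effects = "x", categorical = TRUE)
```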
Afterthought: A major source of my confusion was that when I Googled conditional_effects(), the first search result (which I mistook for up to date) was this outdated helpfile for the function. In that version, method = "fitted" is listed as the alternative to “predict”…
Not sure why the replication code wouldn’t work, though. I’m using brms 2.19.0 with cmdstan 2.32.0 and cmdstanr 0.5.3, and everything runs without issues.