Brms models: methods and tricks for checking under/overfitting

I am making my first attempts with the brms package.

Could you please share some methods and tricks to rule out possible under- or overfitting? The only thing I have done so far is extensive plotting of my initial data: the results of the regression models should be roughly in line with it.

BDA3 is free as a PDF now: http://www.stat.columbia.edu/~gelman/book/BDA3.pdf . The section on posterior predictive checks would probably be of interest, and there is other model-evaluation material in there too.
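In brms, posterior predictive checks are available directly through `pp_check()` (which wraps bayesplot). A minimal sketch, assuming a hypothetical data frame `d` with outcome `y` and predictor `x` — substitute your own formula and data:

```r
library(brms)

# Hypothetical example: a simple Gaussian regression
# (replace formula, data, and family with your own model)
fit <- brm(y ~ x, data = d, family = gaussian())

# Overlay the observed outcome density with densities simulated
# from the posterior predictive distribution; systematic mismatch
# between the dark (observed) and light (replicated) curves
# suggests model misfit
pp_check(fit, ndraws = 100)

# Compare observed vs. replicated summary statistics,
# e.g. the mean and standard deviation jointly
pp_check(fit, type = "stat_2d", stat = c("mean", "sd"))
```

If the replicated datasets reproduce features of the observed data that the model was not directly fit to (skewness, extremes, subgroup means), that is some reassurance against misfit.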

If you’re concerned about overfitting and making bad predictions, think about what those predictions would be and how you might hold out part of your data to evaluate them. See Aki’s cross-validation FAQ: https://avehtari.github.io/modelselection/CV-FAQ.html . Start with “How is cross-validation related to overfitting?” and go from there.
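For brms models specifically, approximate leave-one-out cross-validation is built in via `loo()`, with exact K-fold CV as a fallback. A sketch, assuming two hypothetical competing fits `fit1` (simpler) and `fit2` (more complex) on the same data:

```r
library(brms)

# Approximate leave-one-out CV (PSIS-LOO) for each candidate model;
# fit1 and fit2 are hypothetical brmsfit objects
loo1 <- loo(fit1)
loo2 <- loo(fit2)

# The printout includes Pareto-k diagnostics; high k values flag
# observations where the approximation is unreliable
print(loo1)

# Compare expected out-of-sample predictive performance (elpd);
# a more complex model that does not improve elpd beyond its
# standard error is a sign of overfitting
loo_compare(loo1, loo2)

# If PSIS-LOO diagnostics are poor, fall back to exact 10-fold CV
kf1 <- kfold(fit1, K = 10)
```

Because `loo()` estimates out-of-sample performance, it penalizes complexity automatically — you do not need to hand-roll a train/test split to get a first read on overfitting.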

You can also consider watching Richard McElreath’s lectures, which cover model fit and overfitting with some different visualisations and ways of testing, such as cross-validation.

These are really good lectures, and the book is also great.
