Is my model overfitting?

My model has 28 parameters and 226 observations.

p_loo < N (where N is the number of observations) and p_loo < p (where p is the number of parameters), so from my understanding my model is behaving well according to the loo package glossary — loo-glossary • loo (mc-stan.org).
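For reference, p_loo can be checked like this (just a sketch; fit is a placeholder name for the fitted brms model):

library(brms)
# sketch only -- 'fit' is a placeholder for the fitted brms model
loo_fit <- loo(fit)
print(loo_fit)  # prints elpd_loo, p_loo, and looic with their SEs
p_loo <- loo_fit$estimates["p_loo", "Estimate"]  # compare against p = 28 and N = 226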

However, when I do a LOO comparison, my model isn't significantly different from any of the other models.
Does this mean that my model is not overfitting, but that the uncertainty of my model is high, possibly due to the small number of observations?

@jd_c

It does not appear that your model is overfitting. The results from LOO seem to indicate that, in terms of predictive ability, your model may not be much better than a simple intercept-only model. In other words, if you had out-of-sample data and were going to make a prediction, your model might not predict better than a model that uses only the intercept and simply predicts the same mean value of the outcome for every new observation.
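For example, something like this makes that comparison explicit (a sketch only; fit and fit0 are placeholder names for your model and an intercept-only model):

library(brms)
# sketch only -- 'fit' and 'fit0' are placeholder names
loo_full <- loo(fit)   # your model
loo_int  <- loo(fit0)  # intercept-only model
loo_compare(loo_full, loo_int)
# if elpd_diff is small relative to se_diff, the covariates aren't
# clearly improving out-of-sample prediction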

@jd_c

If my best model has "significance", i.e. some of my predictors' credible intervals didn't include 0, is this still meaningful? Even though this model isn't significantly better than my intercept-only model…

This seems like the same question you asked in another thread, Model Selection in BRMS - #11 by Hunter24. If so, it would have been better to continue the discussion there to keep the relevant discussion in one place. I answered there; let me know if I missed something in your new post here.


It looks like @avehtari also answered your question on your other post, so have a look at that.

You aren't conducting any tests of statistical significance, so you don't want to fixate on whether or not a credible interval crosses zero. In any case, to answer this question: just because some covariates appear associated with the outcome doesn't always translate into a highly predictive model compared to a model without those covariates. For example, in the simulation below, model m2 has predictors (x1 and x3) that are truly associated with the outcome y and can even have credible intervals that don't cover zero (at least for x3), yet m2 isn't much better than the intercept-only model m1 via LOO.

set.seed(14873)

# simulate data for a fairly rare binary outcome
n <- 1000
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- rnorm(n)
a <- -3      # intercept on the logit scale, so the outcome is rare
b1 <- 0.25   # weak association with x1
b2 <- 0      # x2 has no effect
b3 <- 0.5    # moderate association with x3
p <- plogis(a + b1*x1 + b2*x2 + b3*x3) 
y <- rbinom(n, 1, p)
d <- cbind.data.frame(y, x1, x2, x3)

library(brms)
# intercept-only model vs model with all three covariates
m1 <- brm(y ~ 1, family=bernoulli, data=d, cores=4)
m2 <- brm(y ~ 1 + x1 + x2 + x3, family=bernoulli, data=d, cores=4)

# posterior summaries
m1
m2

# compare predictive performance with PSIS-LOO
loo(m1, m2, cores=1)
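In the loo() comparison at the end, the thing to look at is elpd_diff for m2 relative to its se_diff: if the difference is small compared to its standard error, then even though some of m2's coefficients look "significant", m2 isn't clearly better than m1 at out-of-sample prediction.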

If you are trying to model some causal structure, it can still be meaningful to do so. If you are just trying to make good predictions and don’t care about inference, then maybe it’s not such a great model.

I think perhaps you have had many different posts about this same model, some of which I have responded to? If so, and if I remember correctly, the standard errors for some parameters were quite high in those results - so high that it seemed maybe something was wrong, and I think I pointed that out at the time. Since this thread is a bit of a duplicate of your other post, perhaps you should post your results over there in that other thread that Aki responded to. He’s definitely a lot smarter/more experienced than I am :) I think you may have a lot of uncertainty in these models and maybe your outcome is sparse in your data.

I fixed the error by adding a polynomial function (it worked better than the spline).
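Roughly like this (a sketch only; y, x, d, and the polynomial degree are placeholders, not my actual model terms):

library(brms)
# sketch only -- 'y', 'x', 'd', and the degree are placeholders
m_poly <- brm(y ~ poly(x, 2), data=d, cores=4)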

My error is much smaller now; see below.