It does not appear that your model is overfitting. The results from LOO seem to indicate that, in terms of predictive ability, your model may not be much better than a simple intercept-only model. In other words, if you had out-of-sample data and were going to make a prediction, your model might not predict any better than a model that used only the intercept and simply predicted the same mean value of the outcome for every new observation.

If my best model has "significance", in the sense that some of my predictors' credible intervals didn't include 0, is this still meaningful? Even though this model isn't significantly better than my intercept-only model…

This seems like the same question you asked in another thread, Model Selection in BRMS - #11 by Hunter24. If that is so, then it would have been better to continue the discussion there to keep the relevant discussion in one place. I answered there; let me know if I missed something in your new post here.

It looks like @avehtari also answered your question on your other post, so have a look at that.

You aren't conducting any tests of statistical significance, so you don't want to fixate on whether or not a credible interval crosses zero. In any case, to answer this question: just because some covariates seem associated with the outcome doesn't always translate into a highly predictive model compared to a model without those covariates. For example, in the simulation below, you can have a model m2 with predictors x1, x2, and x3 that appear associated with the outcome y, some with credible intervals that don't cover zero (e.g., x3), and yet m2 isn't much better than the intercept-only model m1 via LOO.
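The simulation code itself isn't reproduced here, so this is a minimal sketch of the idea in R with brms; the seed, sample size, effect sizes, and noise level are my assumptions, while the names m1, m2, y, and x1–x3 follow the discussion above:

```r
library(brms)

set.seed(123)  # arbitrary seed for reproducibility
n  <- 100
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- rnorm(n)
# Real but weak effects relative to the residual noise: with enough data the
# coefficients can have credible intervals excluding zero while adding little
# out-of-sample predictive accuracy.
y <- 0.5 + 0.1 * x1 + 0.1 * x2 + 0.2 * x3 + rnorm(n, sd = 2)
d <- data.frame(y, x1, x2, x3)

m1 <- brm(y ~ 1, data = d)               # intercept-only model
m2 <- brm(y ~ x1 + x2 + x3, data = d)    # model with the three predictors

summary(m2)                     # inspect which credible intervals exclude zero
loo_compare(loo(m1), loo(m2))   # elpd difference vs. its SE
```

With settings like these, `loo_compare` will often show an `elpd_diff` that is small relative to `se_diff`, i.e., no clear predictive advantage for m2 despite "significant-looking" coefficients.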

If you are trying to model some causal structure, the model can still be meaningful. If you are just trying to make good predictions and don't care about inference, then maybe it's not such a great model.

I think you have had several posts about this same model, some of which I responded to? If so, and if I remember correctly, the standard errors for some parameters in those results were quite high, so high that it seemed something might be wrong, and I think I pointed that out at the time. Since this thread is a bit of a duplicate of your other post, perhaps you should post your results in that other thread that Aki responded to. He's definitely a lot smarter/more experienced than I am :) I suspect you have a lot of uncertainty in these models, and your outcome may be sparse in your data.