I am using LOO for model comparison, with increasingly complex models. The most complex model, which in theory should fit the data best, is hard to estimate: sampling often produces divergences and R-hats above 1.1. However, when I run the LOO comparison, it still says the complex model is preferred. Can I trust this result? And how could LOO prefer this model if its estimation was bad?
Not really.
LOO says that the MCMC approximation to the posterior that you obtained from your more complex model yields a higher estimated out-of-sample predictive density for the data. But the poor convergence diagnostics tell you not to trust that this MCMC approximation is actually giving you the correct posterior for that model, so the LOO estimate built on top of it is not trustworthy either.
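To see why R-hat above 1.1 signals that the chains disagree about where the posterior mass is, here is a minimal NumPy sketch of the split-R̂ statistic. This is a simplified version of the diagnostic (no rank normalization, single parameter); the function name and the simulated chains are illustrative, not from your model:

```python
import numpy as np

def split_rhat(chains):
    """Simplified split-R-hat for one parameter.

    chains: array of shape (n_chains, n_draws).
    Each chain is split in half, then between-chain variance (B)
    is compared to within-chain variance (W). Values near 1 mean
    the chains agree; values well above 1 mean they do not.
    """
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    # Split each chain into two halves -> (2 * n_chains, half) draws.
    split = chains[:, : 2 * half].reshape(2 * n_chains, half)
    m, n = split.shape
    chain_means = split.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = split.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Well-mixed chains: all sample the same distribution -> R-hat ~ 1.
good = rng.normal(size=(4, 1000))
# One stuck chain centered elsewhere -> R-hat well above 1.1.
bad = rng.normal(size=(4, 1000)) + np.array([0.0, 0.0, 0.0, 3.0])[:, None]
print(split_rhat(good), split_rhat(bad))
```

In the second case the pooled draws do not represent any single posterior, so any quantity computed from them, including the pointwise log-likelihoods that LOO averages over, inherits that error.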