Handling divergent warnings

This is a general question. I know that increasing adapt_delta and max_treedepth can help resolve divergence warnings. But what does it tell us when the warnings persist even after adjusting those settings? Does it suggest something inherently wrong with the model (for example, an unreasonable prior, or a model that doesn't reflect the true relationship in the data)? I run into these problems a lot, and I am trying to figure out what the next step should be after raising adapt_delta and max_treedepth fails.
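For reference, here is roughly how I have been bumping those settings (a minimal sketch assuming cmdstanpy; the model file, data, and specific values are just placeholders):

```python
# Minimal sketch: raise adapt_delta and max_treedepth above the defaults
# (0.8 and 10). "model.stan" and the data dict are placeholders.
from cmdstanpy import CmdStanModel

model = CmdStanModel(stan_file="model.stan")
fit = model.sample(
    data={"N": 10, "y": [0.1] * 10},  # placeholder data
    chains=4,
    adapt_delta=0.99,
    max_treedepth=12,
    seed=1,
)

# Count post-warmup divergent transitions across all chains.
print(int(fit.draws_pd()["divergent__"].sum()))
```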

Also, if the divergence warnings vanish after those adjustments, can we trust the samples and believe that they fully represent the posterior distribution, with no regions left unexplored?

Check out the resources linked in the divergences section here:
https://mc-stan.org/misc/warnings.html

Yes. And if the divergences persist, they almost always require reparameterization and/or rethinking the priors and/or constraints.
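The classic case is a hierarchical model with a funnel-shaped posterior, where a non-centered parameterization removes the geometry that produces the divergences. A minimal sketch, assuming cmdstanpy and using eight-schools-style placeholder data rather than anything from this thread:

```python
# Sketch of a non-centered hierarchical model (illustrative, not the poster's model).
from pathlib import Path
from cmdstanpy import CmdStanModel

stan_program = """
data {
  int<lower=1> J;
  vector[J] y;
  vector<lower=0>[J] sigma;
}
parameters {
  real mu;
  real<lower=0> tau;
  vector[J] theta_raw;            // non-centered: standard-normal "raw" effects
}
transformed parameters {
  vector[J] theta = mu + tau * theta_raw;  // recover the centered effects
}
model {
  mu ~ normal(0, 5);
  tau ~ normal(0, 5);             // half-normal via the lower bound
  theta_raw ~ std_normal();       // geometry is now close to an isotropic normal
  y ~ normal(theta, sigma);
}
"""

stan_file = Path("noncentered.stan")
stan_file.write_text(stan_program)

model = CmdStanModel(stan_file=str(stan_file))
data = {  # classic eight-schools numbers, used here only as a placeholder
    "J": 8,
    "y": [28, 8, -3, 7, -1, 1, 18, 12],
    "sigma": [15, 10, 16, 11, 9, 11, 10, 18],
}
fit = model.sample(data=data, chains=4, seed=1)
print(fit.diagnose())  # reports divergences, treedepth, and E-BFMI issues
```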

I’m afraid that our diagnostics are like hypothesis tests in that they can only reject. If the \widehat{R} statistic is much greater than 1, we know sampling failed, but if it’s near 1, sampling might still have failed. The best thing to do is to test using simulation-based calibration, which will validate your algorithm. Or test the end-to-end system using posterior predictive checks or cross-validation.
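As a toy illustration of the SBC rank computation, here is a sketch using a conjugate normal model whose exact posterior is known, so the ranks should come out uniform; the model, sample sizes, and chi-square check are only for illustration:

```python
# Toy SBC (simulation-based calibration) sketch with a conjugate normal-normal
# model, so the "posterior sampler" is exact and the rank histogram should be
# uniform. With a real Stan model, posterior_draws would come from the fitted model.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_draws, n_reps = 5, 100, 2000
ranks = np.empty(n_reps, dtype=int)

for r in range(n_reps):
    theta = rng.normal(0.0, 1.0)                     # draw from the prior
    y = rng.normal(theta, 1.0, size=n_obs)           # simulate data given theta
    post_mean = y.sum() / (n_obs + 1)                # exact conjugate posterior
    post_sd = np.sqrt(1.0 / (n_obs + 1))
    posterior_draws = rng.normal(post_mean, post_sd, size=n_draws)
    ranks[r] = int((posterior_draws < theta).sum())  # SBC rank statistic

# Under a correct sampler the ranks are uniform on {0, ..., n_draws};
# a chi-square statistic near its degrees of freedom is consistent with that.
counts, _ = np.histogram(ranks, bins=np.arange(n_draws + 2) - 0.5)
expected = n_reps / (n_draws + 1)
chi2 = ((counts - expected) ** 2 / expected).sum()
print(f"chi-square {chi2:.1f} vs {n_draws} degrees of freedom")
```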

For a long discussion of how to follow up on divergence warnings, see Identity Crisis.

Seeing no divergences after tweaking adapt_delta puts you in the same circumstance as seeing no divergences with the default sampler configuration: the realized Markov chains didn’t encounter any problems, but there is technically no guarantee that pathological behavior isn’t hiding somewhere the chains simply didn’t visit. That said, for sufficiently long Markov chains, especially when looking at an ensemble of Markov chains, a hidden pathology like this becomes less and less likely.
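So a reasonable follow-up is to run a larger ensemble of longer chains and confirm that none of them report divergences. A sketch assuming cmdstanpy; the model, data file, and settings are placeholders:

```python
# Sketch: run a larger ensemble of longer chains and tabulate post-warmup
# divergences per chain (cmdstanpy assumed; reuse whatever model and data
# you were already fitting).
from cmdstanpy import CmdStanModel

model = CmdStanModel(stan_file="model.stan")   # placeholder path
fit = model.sample(
    data="data.json",                          # placeholder data file
    chains=8,
    iter_sampling=4000,
    adapt_delta=0.99,
    seed=1,
)

# divergent__ should be (iterations, chains); a column of zeros means that chain
# never diverged, though as noted above this is not a guarantee.
divergent = fit.method_variables()["divergent__"]
per_chain = divergent.reshape(divergent.shape[0], -1).sum(axis=0).astype(int)
for chain, count in enumerate(per_chain, start=1):
    print(f"chain {chain}: {count} divergent transitions")
```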