Handling divergent warnings

This is a general question. I know that raising adapt_delta can help resolve warnings about divergent transitions (and raising max_treedepth helps with treedepth warnings). But what does it tell us when, after adjusting those, the divergence warnings still persist? Does it suggest something inherently wrong with the model (e.g., an unreasonable prior, or a model that doesn’t reflect the true relationship in the data)? I run into these problems a lot, and I am trying to work out what the next step should be after adapt_delta and max_treedepth fail.

Also, if the divergence warnings vanish after adjusting those settings, can we trust the samples and believe that they fully represent the posterior distribution, with no regions left unexplored?
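
For reference, here is roughly how I have been raising both settings, sketched in CmdStanPy with placeholder file names:

```python
from cmdstanpy import CmdStanModel

# Model and data file names are placeholders for your own.
model = CmdStanModel(stan_file="model.stan")
fit = model.sample(
    data="data.json",
    adapt_delta=0.99,    # default 0.8; forces smaller leapfrog step sizes
    max_treedepth=15,    # default 10; raises the NUTS tree-depth cap
)
print(fit.diagnose())    # reports divergences, treedepth saturation, E-BFMI
```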

Check out the resources linked in the divergences section here:
https://mc-stan.org/misc/warnings.html

Yes. And if they persist, they almost always require reparameterization and/or rethinking priors and/or constraints.
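
The classic case is a hierarchical model whose centered parameterization creates a funnel between the group effects and their scale. Here is a minimal sketch of the non-centered fix, written as an eight-schools-style model with illustrative names:

```python
# Eight-schools-style hierarchical model, non-centered. Names illustrative.
non_centered = """
data {
  int<lower=0> J;
  vector[J] y;
  vector<lower=0>[J] sigma;
}
parameters {
  real mu;
  real<lower=0> tau;
  vector[J] theta_raw;        // standardized group effects
}
transformed parameters {
  // Same distribution as theta ~ normal(mu, tau), but the sampler now
  // explores theta_raw, which is a priori independent of tau: no funnel.
  vector[J] theta = mu + tau * theta_raw;
}
model {
  mu ~ normal(0, 5);
  tau ~ normal(0, 5);         // half-normal via the lower bound
  theta_raw ~ std_normal();
  y ~ normal(theta, sigma);
}
"""
```

The centered version would declare theta directly and write theta ~ normal(mu, tau); as tau shrinks toward zero, that geometry narrows into a funnel the sampler cannot traverse without diverging.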

I’m afraid that our diagnostics are like hypothesis tests in that they can only reject. If the \widehat{R} statistic is much greater than 1, we know sampling failed, but if it’s near 1, sampling might still have failed. The best thing to do is test using simulation-based calibration, which will validate your algorithm, or test the end-to-end system using posterior predictive checks or cross-validation.
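
As a sketch of the end-to-end check, here is one posterior predictive p-value computed from a CmdStanPy fit, assuming a normal observation model with scalar mu and sigma (names illustrative, not from any particular model above):

```python
import numpy as np

# `fit` is a CmdStanPy fit; `mu` and `sigma` are assumed scalar parameters
# of a normal observation model (adapt the names to your own model).
mu = fit.stan_variable("mu")          # shape: (num_draws,)
sigma = fit.stan_variable("sigma")
y = np.load("y.npy")                  # placeholder for the observed data

# One replicated dataset per posterior draw, then compare a test statistic.
rng = np.random.default_rng(1)
y_rep = rng.normal(mu[:, None], sigma[:, None], size=(mu.size, y.size))
t_rep = y_rep.max(axis=1)
p = (t_rep >= y.max()).mean()         # posterior predictive p-value
print(f"P(T(y_rep) >= T(y)) = {p:.3f}")  # values near 0 or 1 flag misfit
```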