Testing for Identifiability

By identifiability I mean just that there is a unique “best” solution to the fitting problem, i.e. a single most likely set of parameters under the posterior. This makes it possible to interpret the parameters; it would not be if there were multiple equivalent solutions.

If there are multiple sets of parameters that yield exactly the same posterior, simulating data and checking parameter recovery can’t reveal this: each of those parameter sets fits the simulated data equally well.
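To make this concrete, here is a minimal sketch of the problem. The model, data, and parameter values are all hypothetical, chosen only for illustration: observations are Normal with mean `a + b`, so only the sum of the two parameters enters the likelihood, and different splits of that sum are indistinguishable from the data alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-identifiable model: y ~ Normal(a + b, 1).
# Only the sum a + b appears in the likelihood, so (a, b) = (1, 2)
# and (a, b) = (2, 1) cannot be told apart by any amount of data.
def log_likelihood(a, b, y):
    mu = a + b
    return -0.5 * np.sum((y - mu) ** 2)  # up to an additive constant

# Simulate data with a + b = 3 (e.g. "true" a = 1, b = 2).
y = rng.normal(loc=3.0, scale=1.0, size=100)

ll1 = log_likelihood(1.0, 2.0, y)  # the "true" split
ll2 = log_likelihood(2.0, 1.0, y)  # a different split, same sum
print(np.isclose(ll1, ll2))
```

The two log-likelihoods are identical, so a parameter-recovery check on simulated data can appear to succeed (or fail) purely by chance, depending on where the prior and the sampler happen to land along the ridge `a + b = const`.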

I think the crux of my question is this: if convergence and mixing are not necessary and sufficient criteria for identifiability, and if we don’t know a priori that our model is identifiable, what should the diligent Bayesian do?

PS: Here is a problem of this kind on the forum. The model works fine 4 times out of 5. What if it were 999 times out of 1000? Would the author ever have spotted it? In his case the model fits simulated data, but what if the data hadn’t been simulated? What guarantee is there that the most frequently recovered parameters are the true ones?