Model comparison - lp__

Hi everyone,

Sorry to revive this topic, but I have essentially the same issue.
I am fitting a Bayesian ANN, and I get different answers when running the same model on the same data. In other words, I get a very low MC error per chain, but the chains do not converge (i.e. high Rhats). I recently made a post about it.
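To illustrate what I mean by "low MC error per chain but high Rhat", here is a toy numpy sketch. This is a simplified, non-split Rhat, not Stan's actual diagnostic (which uses split chains and rank-normalization); the numbers are made up.

```python
import numpy as np

def rhat(draws):
    """Basic (non-split) potential scale reduction factor.

    draws: array of shape (n_chains, n_draws).
    Simplified for illustration; Stan's Rhat additionally splits
    chains and rank-normalizes the draws.
    """
    m, n = draws.shape
    chain_means = draws.mean(axis=1)
    W = draws.var(axis=1, ddof=1).mean()      # within-chain variance
    B = n * chain_means.var(ddof=1)           # between-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)

# Two chains stuck in different modes: each chain on its own is very
# tight (low MC error), but they disagree, so Rhat is far above 1.
rng = np.random.default_rng(0)
chains = np.stack([rng.normal(0.0, 0.1, 1000),
                   rng.normal(5.0, 0.1, 1000)])
print(rhat(chains))  # far above the usual 1.01 threshold
```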

In that post I was pointed to the following link:

https://www.martinmodrak.cz/2018/05/14/identifying-non-identifiability/

Under “Neural network: When ordering is not enough”, that post says:

This means that to identify the model we have to somehow choose one of those modes, but there is clearly not a “best” (much higher lp__) mode.

However, in my case I can actually see a difference in lp__ between the “solutions”. (Sorry if I mix up terminology here…)

If I run 8 chains, is there a way to automatically “pick” the best chains based on lp__, or are there better ways to do it?
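To make concrete the kind of selection I have in mind, here is a rough sketch. I am assuming the lp__ draws have been extracted into a `(n_chains, n_draws)` array; the `tol` cutoff is completely arbitrary, and I realize discarding chains this way may not be statistically justified, which is part of my question.

```python
import numpy as np

def best_chains(lp, tol=2.0):
    """Return indices of chains whose mean lp__ is within `tol`
    of the best chain's mean lp__.

    lp: array of shape (n_chains, n_draws) of lp__ draws.
    `tol` is an arbitrary cutoff chosen for illustration,
    not a principled threshold.
    """
    means = lp.mean(axis=1)
    return np.flatnonzero(means >= means.max() - tol)

# 8 hypothetical chains: five near one mode, three stuck at a worse one.
lp = np.array([[-100.0], [-101.0], [-150.0], [-100.5],
               [-149.0], [-100.2], [-151.0], [-100.8]])
print(best_chains(lp))  # → [0 1 3 5 7]
```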

(PS: my goal is to mix machine learning with structural time series models, hence I am using Stan for the ANN.)