treedepth__ 1.0e+01 ...
n_leapfrog__ 1.0e+03 ...
This means that NUTS is hitting its default internal limit (a treedepth of 10, which corresponds to 1024 leapfrog steps) on how many HMC steps to take when searching for a new sample. It's a bad sign.
What exactly is a non-identified parameter?
If you have the model:
y ~ normal(a * b, 1.0);
If y was generated as normal(0, 1.0), then there are many different values of a and b that will produce a suitable fit. If you look at the posterior density on a plot of a vs. b, anything near one of the axes would probably be an okay fit (if either a or b is near zero, the product a * b is near zero and the likelihood is high). a + b would do the same thing, except then the posterior samples would fall on the line a = -b.
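A quick way to see that ridge is to evaluate the log-likelihood on a few points. This is just a hypothetical NumPy sketch (the data y and the grid points are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=200)  # simulated data, generated with a * b = 0

def log_lik(a, b):
    # log p(y | a, b) for y ~ normal(a * b, 1), up to an additive constant
    mu = a * b
    return -0.5 * np.sum((y - mu) ** 2)

# A point far out on an axis has the same product (zero), so the same fit...
print(log_lik(0.0, 5.0) - log_lik(0.0, 0.0))  # exactly 0

# ...while a point away from both axes fits much worse.
print(log_lik(2.0, 2.0) - log_lik(0.0, 0.0))  # a large negative number
```

So the likelihood can't distinguish (0, 5) from (0, 0) at all: it's flat along both axes, which is exactly the geometry that stalls the sampler.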
This sorta stuff makes it really difficult for the sampler to explore the space. Here's a Gelman blog post on it: http://andrewgelman.com/2014/02/12/think-identifiability-bayesian-inference/
Long story short, it's really easy to accidentally build things like this into complicated models, and usually the easiest way to find and remove them is to look for correlations in posterior pairplots. To get to that point, though, your model needs to finish running. How much data are you using? Is there any way to set up a small-scale version of the problem? You might be able to find out what's wrong without having to use everything. You can also use simulated data for something like this (which might be a better idea if you aren't sure how well your model actually fits your data).
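To make the pairplot idea concrete, here's a hypothetical toy sketch of the a + b version: simulated data, a crude random-walk Metropolis sampler standing in for NUTS (just to get draws), and the correlation check you'd otherwise do visually. The priors and all the constants are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=100)  # simulated data; true a + b = 0

def log_post(theta):
    a, b = theta
    # y ~ normal(a + b, 1) likelihood plus weak normal(0, 10) priors on a, b
    ll = -0.5 * np.sum((y - (a + b)) ** 2)
    lp = -0.5 * (a ** 2 + b ** 2) / 10.0 ** 2
    return ll + lp

# Toy random-walk Metropolis, just to get posterior draws for the check
theta = np.zeros(2)
lp = log_post(theta)
draws = []
for _ in range(50_000):
    prop = theta + rng.normal(0.0, 0.3, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws.append(theta)
draws = np.array(draws)

# The "pairplot" check, numerically: a and b are almost perfectly
# anti-correlated because the draws all sit near the line a = -b.
print(np.corrcoef(draws[:, 0], draws[:, 1])[0, 1])  # close to -1
```

A scatter of draws[:, 0] against draws[:, 1] would show the same thing as a thin diagonal band, which is the signature you'd be scanning for in the real pairplots.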