Thanks to both of you.

I'm actually running 4 chains in parallel on a Core i7. A maximum likelihood estimate is easily calculated for this model, and with that in hand I rescale the data so that the ML estimates for all parameters fall between 0 and 1. Nonnegativity is a hard constraint in the model, and most ML parameter estimates will be zero. The ML estimates are fed to Stan as inits with some jitter, with the zero-bound parameters perturbed away from 0. I've run variations of the model with random inits that converge to the same posterior, but those take longer.
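For concreteness, here's a minimal sketch of the init-construction step, assuming hypothetical parameter values and names (`ml_est`, `jittered_inits`, and the jitter/floor magnitudes are all illustrative, not from the actual model):

```python
import numpy as np

rng = np.random.default_rng(1234)

# Hypothetical ML estimates for a model whose parameters are
# constrained nonnegative; several sit exactly at the zero bound.
ml_est = np.array([0.0, 0.0, 3.2, 0.7, 12.5])

# Rescale so all ML estimates fall in [0, 1].
scale = ml_est.max()
ml_scaled = ml_est / scale

def jittered_inits(theta, jitter=0.05, floor=0.01):
    """Jitter ML estimates for use as Stan inits, nudging
    zero-bound parameters strictly away from the boundary."""
    inits = theta + rng.normal(0.0, jitter, size=theta.shape)
    # Nonnegativity is a hard constraint, so clamp to a small
    # positive floor rather than letting inits sit at (or below) 0.
    return np.maximum(inits, floor)

# One init dictionary per chain (4 chains running in parallel).
inits = [{"theta": jittered_inits(ml_scaled).tolist()} for _ in range(4)]
```

Each dictionary in `inits` can then be passed to Stan as the per-chain initial values.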

Right now I'm looking at simulated data. The model and "predictors" I'm using to generate the fake data are exactly the model I'm fitting, so there's no question of model mis-specification. I don't know in detail what the posteriors should look like, but I do know what the parameter means should be, provided the prior isn't biasing things too much; that is basically what I'm investigating right now.

The run I originally queried about hit a treedepth of 10 and used the maximum number of leapfrog steps for every transition. I re-ran the model with the maximum treedepth set to 12, and this time it hit a treedepth of 11 for every transition, so except for the fact that it took over twice the wall time, everything looks good.
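For reference, the treedepth cap is set through the NUTS engine arguments in CmdStan; a sketch of the invocation, with a hypothetical model binary and file names:

```shell
# Hypothetical model/data/output names; max_depth=12 raises the
# treedepth cap from its default of 10.
./my_model sample algorithm=hmc engine=nuts max_depth=12 \
    data file=my_data.json output file=samples.csv
```

Each unit increase in `max_depth` can double the leapfrog steps per transition, which is consistent with the roughly doubled wall time.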

~~A couple of trace plots of model parameters first:~~ This first one is the parameter that had the largest ML estimate. The max_treedepth=10 run is on top. By the way, this is why I got excited about Stan: no other sampler I've tried reaches a stationary distribution in a remotely tractable number of iterations.

OK, that's not helpful. Discourse will only allow me to include one image per message as a new user, and I had 5 with various diagnostic plots. I guess I'll have to break them up.