I have a complicated model I’m trying to debug. Sparing you the details of the model for the time being, this project has brought up a more general question about model diagnostics and posterior geometry.

This question is:

What can cause treedepth issues, given that a pairplot of the parameters looks almost like a set of independent normal distributions?

I mean, usually if there’s a treedepth issue it’s either because the curvature is highly variable, or because of some kind of non-identifiability. Both of these should show up in a pairplot. Of course, if the problematic structure is rotated in the parameter space in such a way that it can’t be properly visualized using just two of the posterior dimensions at a time (as a pairplot does), then it might be present in the posterior yet undetectable with the usual tools.

Maybe I have such a case?

So my question extends to the following:

How can I check for such rotated problematic geometries?
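One idea I had (a sketch, assuming the draws are available as a NumPy array of shape `(n_draws, n_params)`): compare the pairwise correlations against the eigenvalue spectrum of the empirical covariance. A direction that is tight but rotated across many parameters shows up as a huge condition number even though every single pairwise correlation stays small. Toy demonstration with a synthetic “posterior”:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 4000
# toy posterior: one very tight direction hidden along (1, ..., 1)/sqrt(d),
# so no single 2-D margin reveals it
v = np.ones(d) / np.sqrt(d)
Sigma = np.eye(d) - (1 - 1e-3) * np.outer(v, v)
draws = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# what a pairplot "sees": the largest absolute pairwise correlation
corr = np.corrcoef(draws, rowvar=False)
max_pairwise = np.max(np.abs(corr - np.eye(d)))

# what the eigen-spectrum sees: condition number of the empirical covariance
eigvals = np.linalg.eigvalsh(np.cov(draws, rowvar=False))  # ascending order
cond = eigvals[-1] / eigvals[0]
print(max_pairwise, cond)
```

Here every pairwise correlation is around 0.05, so the pairplot looks clean, while the condition number is in the hundreds, which is exactly the kind of hidden geometry that would force long trajectories.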

Is there anything else that could cause treedepth issues that isn’t necessarily visible in a pairplot?

I’d LOVE to be able to see the actual paths of the leapfrog steps. Wonder if that would be of help?
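As far as I know Stan doesn’t expose the individual leapfrog positions (only summaries like `n_leapfrog__` and `divergent__` in the sampler diagnostics), but one could at least replay the leapfrog integrator by hand on a known log density to build intuition. A minimal sketch on a 2-D standard normal (step size and step count are made up):

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, L):
    """Standard leapfrog integrator; returns the whole position trajectory."""
    path = [q.copy()]
    p = p - 0.5 * eps * grad_U(q)          # initial half step for momentum
    for i in range(L):
        q = q + eps * p                    # full step for position
        if i < L - 1:
            p = p - eps * grad_U(q)        # full step for momentum
        path.append(q.copy())
    p = p - 0.5 * eps * grad_U(q)          # final half step for momentum
    return np.array(path), p

grad_U = lambda q: q                       # U(q) = 0.5 * q.q, a standard normal

rng = np.random.default_rng(0)
q0, p0 = rng.normal(size=2), rng.normal(size=2)
path, p_end = leapfrog(q0, p0, grad_U, eps=0.1, L=50)

# near-conservation of the Hamiltonian is the sanity check
H = lambda q, p: 0.5 * q @ q + 0.5 * p @ p
energy_error = abs(H(path[-1], p_end) - H(q0, p0))
```

`path` could then be plotted directly; on a hard geometry the same replay (with the model’s own gradient) would show the trajectory oscillating across the tight direction.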

Just as an example, here’s the pairplot of my model:

The pairplot includes all scalar parameters and one dimension of each vector parameter. The other dimensions look similar, although some are not quite as nicely behaved (but still fine, I think).

And since Stan doesn’t output its internal representation, here’s a transformation of the samples to what I think the internal representation looks like. For some parameters I used non-centered parameterisations, so that’s also shown here.
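For reference, the rules I applied (sketch with fake draws; the name `datapointSd` matches my model, the numbers are made up): for a `lower=0` parameter Stan’s unconstrained representation is the log, and for a non-centered parameter the internal variable is the standard-normal “raw” variate.

```python
import numpy as np

rng = np.random.default_rng(1)
# fake posterior draws of a positive scale parameter like datapointSd
sd_draws = rng.lognormal(mean=-1.0, sigma=0.3, size=4000)
# Stan's internal (unconstrained) representation for lower=0 is log(x)
sd_unconstrained = np.log(sd_draws)

# a non-centered parameter x = mu + sigma * x_raw maps back via:
mu, sigma = 0.5, 2.0
x = mu + sigma * rng.normal(size=4000)
x_raw = (x - mu) / sigma                 # this is what Stan actually samples
```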

For some of the parameters I put in temporary, very tight Gaussian priors, expecting that this would solve the treedepth issue and allow me to narrow in on its cause. This didn’t work, however, as you can see in the plot: it’s now all nice and Gaussian, but the treedepth issues remain.

I capped treedepth at 9, so you can see in the plot that every iteration hits the maximum treedepth. Given how Gaussian and mostly uncorrelated the pairplot looks, treedepth should be way lower. There is also no variable that is correlated with lp__, so I think this probably rules out a Neal’s-funnel-type problem.
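For concreteness, these are the two checks I mean (sketch; in my actual run the arrays come from the sampler output, here they are simulated just to show the computation):

```python
import numpy as np

rng = np.random.default_rng(2)
n_iter, max_td = 1000, 9
# simulated sampler diagnostics: every iteration saturates the treedepth cap
treedepth = np.full(n_iter, max_td)
lp = rng.normal(size=n_iter)             # stand-in for lp__
param = rng.normal(size=n_iter)          # stand-in for one parameter column

frac_at_max = np.mean(treedepth >= max_td)
r = np.corrcoef(param, lp)[0, 1]         # a funnel shows clearly nonzero |r|
print(frac_at_max, round(r, 3))
```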

The pairplot includes the iteration number as a variable, so you can see the traceplots of the parameters. For some parameters the trace looks very ugly.

My question is in principle independent of any particular model, but, just as background, here’s some info about this particular example model, to give you an idea of what type of model I’m having these problems with:

This model has a log link function and a Poisson distribution. It has an observation-level error term (called “datapointEffect”) with its dispersion parameter being “datapointSd”. It has two Gaussian process terms, one being used as a nonparametric predictor of something that varies over time (over two disconnected timeframes; parameters with the word “season” in them), the other one to account for temporal autocorrelation. It also has a single parametric predictor (“yearEffect”).
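To make that structure concrete, here’s a toy generative version in NumPy (my names, not the actual Stan code; one time axis, squared-exponential kernels, made-up hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 40
t = np.arange(T, dtype=float)

def se_cov(t, sd, ls):
    """Squared-exponential covariance with a small jitter for stability."""
    d = t[:, None] - t[None, :]
    return sd**2 * np.exp(-0.5 * (d / ls) ** 2) + 1e-8 * np.eye(len(t))

seasonEffect = rng.multivariate_normal(np.zeros(T), se_cov(t, 1.0, 5.0))
autocorrEffect = rng.multivariate_normal(np.zeros(T), se_cov(t, 0.5, 2.0))
yearEffect = 0.1                                    # single parametric predictor
datapointSd = 0.3
datapointEffect = rng.normal(0.0, datapointSd, T)   # observation-level term

log_mu = 1.0 + yearEffect * (t / T) + seasonEffect + autocorrEffect + datapointEffect
y = rng.poisson(np.exp(log_mu))                     # Poisson with log link
```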

I already tried all sorts of reparametrisations (e.g. combining the observation-level error term with the autocorrelation Gaussian field), but so far nothing has solved the issue or even enabled me to find its cause.