I have a model (in fact the one I presented in NYC) that relies on the incomplete and complete gamma functions. For some data sets (which look mostly like my simulated data but clearly aren't), the acceptance rate never quite reaches the 0.8 target (or wherever it's set), so the stepsize tanks even though there are no divergences. If I decrease the target acceptance rate to 0.6 I do get divergences, but maybe there's some region in between where things would be OK. Probably not.
My understanding is that the culprit here is probably the gamma function: the gradient approximations aren't accurate enough, so numerical error accumulates, the acceptance rate stays too low, and the stepsize keeps decreasing. Is there any other way (besides accumulation of numerical error) that this scenario can happen with our NUTS implementation?
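One quick way to probe the gradient-error hypothesis (independent of the actual model, which I don't have here) is to compare an analytic derivative of the regularized lower incomplete gamma function against a finite difference of a series evaluation. This is just a sketch: `gammainc_series`, `dPdx`, and `fd` are my own throwaway helpers, not anything from the model or from Stan, and the series form is only one of several ways the function might be computed internally.

```python
import math

def gammainc_series(a, x, tol=1e-15, max_terms=500):
    """Regularized lower incomplete gamma P(a, x) via the standard
    power series P(a, x) = x^a e^{-x} * sum_{n>=0} x^n / Gamma(a+n+1).
    A stand-in for however the function is actually evaluated."""
    term = 1.0 / math.gamma(a + 1.0)
    s = term
    for n in range(1, max_terms):
        term *= x / (a + n)
        s += term
        if term < tol * s:
            break
    return s * x ** a * math.exp(-x)

def dPdx(a, x):
    """Exact derivative: d/dx P(a, x) = x^(a-1) e^{-x} / Gamma(a)."""
    return x ** (a - 1.0) * math.exp(-x) / math.gamma(a)

def fd(a, x, h=1e-6):
    """Central finite difference through the series evaluation."""
    return (gammainc_series(a, x + h) - gammainc_series(a, x - h)) / (2.0 * h)

# Compare at a few (a, x) pairs to see how much error the
# approximate evaluation leaks into the gradient.
for a, x in [(0.5, 0.1), (2.0, 1.5), (10.0, 8.0)]:
    exact, approx = dPdx(a, x), fd(a, x)
    print(f"a={a}, x={x}: rel_err={abs(approx - exact) / abs(exact):.2e}")
```

If the relative error at the parameter values your sampler actually visits is much larger than machine precision, that would support the error-accumulation story; if it's tiny everywhere, the low acceptance rate probably has another cause.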