Hi, I am writing up an analysis involving MCMC estimation of a model using Hamiltonian Monte Carlo with the No-U-Turn Sampler (NUTS). Is there a description somewhere of the algorithm's theoretical running time in Big O notation? I am referring to the running time of a hypothetical single draw of a parameter from its posterior distribution. I can give the reader some insight into the number of parameters my model has, the number of warmup iterations, and so on, but I am stuck when I try to explain how long it takes NUTS itself to do its sampling. The discussion I have seen online all refers to the number of minutes a particular model takes to run, but I am hoping for a more theoretical description than that.

I recognize that this may be a somewhat naive question, because the sampler is non-deterministic: after all, one worst-case running time would be that the algorithm runs forever, with proposals improbably being rejected every time. But nonetheless, there should still be some notion of theoretical (say, expected or worst-case) running time for a stochastic algorithm, right?
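To make concrete the kind of accounting I am hoping for, here is my own back-of-the-envelope sketch (a toy model, not Stan's actual implementation): if each leapfrog step requires one gradient evaluation, which for a simple model costs something like O(d) in the number of parameters d, and NUTS doubles its trajectory up to some maximum tree depth, then the worst-case cost of one draw would look like O(2^max_treedepth * d).

```python
# Toy per-draw cost model for a NUTS-like sampler (my own sketch, not
# Stan's implementation). Assumes: one gradient evaluation per leapfrog
# step, gradient cost proportional to d, and trajectory doubling up to
# max_treedepth, giving 2**max_treedepth - 1 leapfrog steps in the worst case.

def worst_case_gradient_work(d, max_treedepth=10):
    """Worst-case gradient work for one draw, in O(d)-cost gradient units."""
    # Tree doubling: depth j adds 2**j leapfrog steps, so a fully grown
    # tree performs 1 + 2 + 4 + ... = 2**max_treedepth - 1 steps total.
    leapfrog_steps = 2**max_treedepth - 1
    return leapfrog_steps * d  # i.e., O(2**max_treedepth * d)

# e.g., a 100-parameter model at the default depth cap of 10:
print(worst_case_gradient_work(d=100))  # 1023 steps * 100 = 102300
```

Is a statement along these lines defensible, or is the per-draw cost usually characterized some other way?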

Thanks very much,

Richard