All of these (including NUTS) are part of GPstuff, and I have run a lot of experiments with them, though not much of that is published. Elliptical slice sampling (ESS) works only on the conditional posterior of the latents given hyperparameter values, so it is used by alternating ESS updates of the latents with updates of the hyperparameters by some other method. ESS is very good for the latents, but the alternating scheme destroys the efficiency, as there is usually a complicated dependency between the latents and the hyperparameters. There is a surrogate-ESS variant which also makes joint updates; it helps a little, but not enough to become popular. I'm not aware of GP-specific software that uses dynamic HMC to sample latents and hyperparameters jointly, so the Stan results are interesting in that sense. For log-concave likelihoods, INLA is much more efficient than long MCMC runs with similar accuracy (though it can be slightly worse for Bernoulli likelihoods).
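For concreteness, here is a minimal sketch of one ESS update of the latents given fixed hyperparameters (following the Murray, Adams & MacKay construction); the function and variable names are my own, not GPstuff's, and in the alternating scheme you would interleave calls to this with some other sampler for the hyperparameters:

```python
import numpy as np

def elliptical_slice(f, log_lik, chol_K, rng):
    """One elliptical slice sampling update of latents f with prior N(0, K),
    where chol_K is a (lower) Cholesky factor of K and log_lik(f) is the
    log-likelihood of the latents. Returns the new latent vector."""
    # Auxiliary draw from the Gaussian prior defines the ellipse.
    nu = chol_K @ rng.standard_normal(f.shape)
    # Log slice threshold: log_lik(f) + log(u), u ~ Uniform(0, 1).
    log_y = log_lik(f) + np.log(rng.uniform())
    # Initial proposal angle and shrinking bracket.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        # Point on the ellipse through f and nu.
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new
        # Shrink the bracket towards theta = 0 (which recovers f itself,
        # so the loop is guaranteed to terminate).
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```

The alternating (Gibbs-style) scheme is then just `f = elliptical_slice(f, log_lik, chol_K, rng)` followed by a hyperparameter update that recomputes `chol_K`; the efficiency loss mentioned above comes from the strong coupling between `f` and the hyperparameters across those two steps.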