Cook et al. spike at 0

I think someone also mentioned thinning, though I'm not sure exactly where. It seems that a little bit of thinning could be a good idea, but I don't think any amount of thinning makes the autocorrelation go away entirely (i.e., more thinning gives diminishing returns).
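
To make that concrete, here's a toy sketch (mine, not anything from the actual runs: a synthetic AR(1) chain stands in for real MCMC draws, and the 0.9 coefficient is arbitrary) of how lag-1 autocorrelation behaves as you thin more aggressively:

```python
# Toy check: thinning reduces autocorrelation, but each extra level of
# thinning costs proportionally more discarded draws.
import numpy as np

rng = np.random.default_rng(1)

# Simulate a correlated "chain": x_t = 0.9 * x_{t-1} + noise
n = 50_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal()

def lag1_autocorr(draws):
    """Sample lag-1 autocorrelation of a 1-D array of draws."""
    d = draws - draws.mean()
    return (d[:-1] * d[1:]).sum() / (d * d).sum()

for thin in (1, 2, 5, 10, 20, 50):
    print(f"thin={thin:3d}  lag-1 autocorr ~ {lag1_autocorr(x[::thin]):.3f}")
```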

I'm super unsure about this. @betanalpha wrote the HMC paper I read; maybe he could give a more concrete answer?

Sorry for the three posts in a row. I had another question, and I thought I'd post it here in case anyone else finds it helpful.

I don’t really understand the rationale behind binning if we are not using a chi-squared or other hypothesis test.

I would really appreciate it if someone could elaborate on that. Thanks!

- Kolmogorov–Smirnov statistic
- Wasserstein metric

The corresponding tests assume independent draws, which is what explains the thinning.
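
For example, a minimal sketch (not from this thread; iid uniform draws stand in for the real per-replication statistics, and the thinning interval is arbitrary) of computing both quantities against a Uniform(0, 1) reference without any binning:

```python
# Compare (thinned) draws of the statistic to its Uniform(0, 1) reference
# using the KS test and the 1-D Wasserstein distance instead of binning.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
q = rng.uniform(size=10_000)   # stand-in for the per-replication statistics
q_thin = q[::10]               # thin so the independence assumption is closer to holding

# Kolmogorov–Smirnov test against the Uniform(0, 1) reference distribution
ks_stat, ks_pval = stats.kstest(q_thin, "uniform")

# 1-D Wasserstein distance to a uniform reference sample
ref = rng.uniform(size=q_thin.size)
w = stats.wasserstein_distance(q_thin, ref)

print(f"KS statistic {ks_stat:.4f} (p = {ks_pval:.3f}), Wasserstein ~ {w:.4f}")
```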

Aki

One thing I didn’t see in the discussion is the possibility of an RNG artifact (not sure how that would happen, but you never know). So I tried with multiple RNGs, including the recent xorshift1024, and the PDFs generated from @seantalts' code look almost exactly the same regardless of the RNG. I think this is good to know.
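
In case it's useful, this is roughly what I mean by trying multiple RNGs, as a sketch (NumPy doesn't ship xorshift1024, so PCG64 and Philox stand in for different RNG families, and the uniform draws stand in for the real pipeline):

```python
# Crude check for RNG artifacts: regenerate the same statistic with
# different bit generators and compare the resulting distributions.
import numpy as np
from numpy.random import Generator, PCG64, Philox
from scipy import stats

def simulate(bitgen, n=20_000):
    """Stand-in for the real pipeline: produce n replications of the statistic."""
    rng = Generator(bitgen)
    return rng.uniform(size=n)

a = simulate(PCG64(123))
b = simulate(Philox(123))

# If the two distributions differ noticeably, suspect the pipeline, not the RNG.
print(stats.ks_2samp(a, b))
```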

A side note: some time ago I tried replacing Stan’s RNG with xorshift1024, with puzzling results: while xorshift1024 generated numbers much faster than the L’Ecuyer RNG when run alone, the whole inference was slower with xorshift1024. I didn’t have time to dig in further, but it is on my “hopefully get to this sometime” list.


Once the PRNG is reasonable, there’s not much to improve except in very extreme cases.

You need to run multiple trials to deal with inter-run variance. We’re pretty careful to always pass by reference to maintain the same RNG state, so it’s probably not copying.
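
A rough sketch of the multiple-trials idea (assumptions mine: `one_trial` is just a stand-in for one full run of the experiment, and 20 seeds is arbitrary):

```python
# Repeat the whole experiment under several seeds and look at the spread
# of the summary statistic, rather than trusting a single run.
import numpy as np
from scipy import stats

def one_trial(seed, n=5_000):
    """Stand-in for one full run of the experiment; returns its KS statistic."""
    rng = np.random.default_rng(seed)
    q = rng.uniform(size=n)
    return stats.kstest(q, "uniform").statistic

trials = np.array([one_trial(seed) for seed in range(20)])
print(f"KS statistic over 20 runs: mean {trials.mean():.4f}, sd {trials.std(ddof=1):.4f}")
```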