Hey all,

I’ve been using the Cook, Gelman, and Rubin software validation method to jointly validate my data-generating and fitting models, and in many cases I was seeing a spike at 0 in the histogram of

`sum(posterior_thetas <= theta0) / length(posterior_thetas)`

which should look like a uniform(0, 1) distribution. I discovered some bugs in my models, but eventually realized that to make that spike go away I had to do two separate things:

1. Replace `<=` with `>` in the sum above.
2. In places where I was generating a positive value (typically for scale parameters), it was not sufficient to do e.g. `fabs(normal_rng(0, 5))`; instead I had to loop, redrawing until I got a value greater than or equal to zero.
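For context, here is a minimal sketch of the check I'm running, written in Python rather than my actual R/Stan code, using a conjugate normal–normal model so the posterior is exact (all names and settings here are illustrative, not from my real models):

```python
import numpy as np

rng = np.random.default_rng(1)

# Conjugate model: theta ~ N(0, prior_sd^2), y_i | theta ~ N(theta, sigma^2).
prior_sd, sigma, n_obs, n_draws, n_reps = 1.0, 1.0, 10, 1000, 500

qs = []
for _ in range(n_reps):
    theta0 = rng.normal(0.0, prior_sd)            # "true" parameter from the prior
    y = rng.normal(theta0, sigma, size=n_obs)     # simulate data given theta0
    # Exact posterior for this conjugate model
    post_var = 1.0 / (1.0 / prior_sd**2 + n_obs / sigma**2)
    post_mean = post_var * (y.sum() / sigma**2)
    posterior_thetas = rng.normal(post_mean, np.sqrt(post_var), size=n_draws)
    # The quantile statistic whose histogram should be uniform(0, 1)
    qs.append(np.mean(posterior_thetas <= theta0))

qs = np.array(qs)
print(qs.mean(), qs.min(), qs.max())
```

When the generating and fitting models agree, the `qs` values should be (approximately) uniform; a spike at 0 is what tipped me off that something was wrong.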

#1 above was implemented in R, so I suspect some kind of weird R-specific rounding or coercion, but I'd love to know what is actually happening. #2 above was implemented in Stan; when I compared the distributions of random numbers drawn with the two techniques, the quantiles agreed to within two decimal places and the histograms looked extremely similar at several levels of granularity.
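For #2, this is the kind of comparison I ran, sketched in Python rather than Stan: folding a normal draw with an absolute value versus rejection-sampling until the draw is non-negative should both yield a half-normal by symmetry, and empirically their quantiles line up:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Technique 1: fold the draw, analogous to fabs(normal_rng(0, 5)) in Stan.
folded = np.abs(rng.normal(0.0, 5.0, size=n))

# Technique 2: rejection -- redraw until the value is non-negative.
def rejection_half_normal(size, sd):
    out = np.empty(size)
    for i in range(size):
        x = rng.normal(0.0, sd)
        while x < 0.0:
            x = rng.normal(0.0, sd)
        out[i] = x
    return out

rejected = rejection_half_normal(n, 5.0)

# Both should be half-normal(0, 5), so the quantiles should agree closely.
probs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(folded, probs))
print(np.quantile(rejected, probs))
```

This is why I'm puzzled that the while-loop version behaved differently in my validation runs even though the two samplers look distributionally identical.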

Any ideas?

cc @betanalpha and @jonah, who have been helping me with this so far.