When I use the Stan example models for tests (ADVI evaluation, SBC, etc.), I’ve noticed that some of the data are randomly generated (for example). This randomness can make comparisons difficult. Though it is easy to control from the user’s side by dumping the data first, I am wondering if it makes sense to add a fixed random seed to those randomly generated data files.
Also, several data file names do not match the corresponding model names, particularly those in misc/.
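To illustrate what I mean by fixing the seed, here is a minimal sketch of a data-generation script for a hypothetical regression model. The model, file name, and variable names are illustrative, not taken from the example-models repo; the point is only that seeding the generator makes the dumped data file reproducible across runs.

```python
# Sketch: reproducible data generation for a hypothetical Stan regression
# model. Names (generate_data, regression.data.json) are illustrative only.
import json
import numpy as np

SEED = 1234  # fixed seed so regenerated data files are identical


def generate_data(n=100, seed=SEED):
    # Use a seeded generator rather than global numpy state
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=n)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=n)
    return {"N": n, "x": x.tolist(), "y": y.tolist()}


if __name__ == "__main__":
    data = generate_data()
    # Dump in the JSON format CmdStan can read directly
    with open("regression.data.json", "w") as f:
        json.dump(data, f)
```

With a fixed seed, two people running the script get byte-identical data, so test results (SBC ranks, ADVI comparisons) are directly comparable without everyone dumping and sharing their own copies.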
For ages, I’ve been wanting to throw away all those old flaky tests and start building up ones we can trust, following what @betanalpha is doing in the test repo.
I don’t think anyone cares what otherwise happens to the models in example-models, other than the ones that are the bases of case studies, so feel free to modify them however you want.
You can either work in your own GitHub branch to make changes, or we can give you permission to create branches on stan-dev if you don’t already have it.