I got some strange results when running the same Stan model with the same data using cmdStan compared to previous PyStan runs.
Here’s a run of the same thing with each interface side by side (and ignoring the lack of convergence/mixing of this multi-channel GP model, about which I posted before):
It looked like the cmdStan runs were not changing the proposal for hundreds of iterations, yet the traces of the model parameters looked “normal”, in that they were changing and exploring parameter space. Additionally, this only happened when scaling up from 20 to 40 channels in the model.
I initially thought it was a more serious problem with the inference itself, but now I’m convinced it is just how cmdStan logs the values. The .csv output has values like
21725800 (when loaded back into Python), while PyStan has ones like
21725161.65147942, so the latter looks normal while the former shows big steps and flatlines over a large number of iterations.
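For concreteness, here is a small Python sketch of what I think is going on (the specific digits are illustrative, not copied from my runs): writing a draw with only ~6 significant figures collapses nearby values, which would produce exactly the step-and-flatline pattern I’m seeing.

```python
# Illustration (not CmdStan itself): formatting a double with only
# 6 significant figures, as a CSV writer might, discards the
# low-order digits that distinguish nearby draws.
full = 21725161.65147942           # a PyStan-style full-precision draw

rounded_text = f"{full:.6g}"       # 6 significant figures
reloaded = float(rounded_text)     # what Python sees on re-reading

print(rounded_text)                # '2.17252e+07'
print(reloaded)                    # 21725200.0 -- nearby draws collapse
```

Consecutive draws that differ only past the sixth significant figure all round to the same written value, so the loaded trace sits flat until a move large enough to change the sixth digit comes along.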
My question is whether it is possible to make cmdStan log the traces with greater precision. Alternatively, just to be sure, could this be caused by anything else (assuming the sampler is working properly and the HMC proposals are actually exploring parameter space as expected)?
Thanks in advance.