Hello,
I’m throwing this out there; it may not make sense at all to the core devs.
Would quantization (i.e., reduced floating-point precision) make sense in Stan at estimation, storage, or posterior-generation time? Would it improve speed and storage requirements?
I’m afraid that lower precision may be problematic for the HMC steps, with error accumulating over the leapfrog integration, but usually the sampler doesn’t take that many steps, does it?
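To make the accumulation concern concrete, here is a minimal sketch (plain NumPy, not Stan code) that leapfrog-integrates a simple harmonic oscillator for many steps in float32 vs float64 and compares the drift in the Hamiltonian. The function name and step count are just illustrative choices:

```python
import numpy as np

def leapfrog_energy_drift(dtype, n_steps=10_000, eps=0.01):
    # Harmonic oscillator: H = p^2/2 + q^2/2, so dU/dq = q.
    q = np.asarray(1.0, dtype=dtype)   # position
    p = np.asarray(0.0, dtype=dtype)   # momentum
    eps = dtype(eps)
    h0 = 0.5 * (p * p + q * q)         # initial Hamiltonian
    for _ in range(n_steps):
        p = p - eps / dtype(2) * q     # half step for momentum
        q = q + eps * p                # full step for position
        p = p - eps / dtype(2) * q     # half step for momentum
    return float(abs(0.5 * (p * p + q * q) - h0))

print("float64 drift:", leapfrog_energy_drift(np.float64))
print("float32 drift:", leapfrog_energy_drift(np.float32))
```

The float32 drift comes out several orders of magnitude larger, which is the kind of thing that could distort the acceptance probability over a long trajectory.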
For posterior manipulation (e.g. summarisation, diagnostics, visualization, etc.), especially when one has many parameters, it may bring speed and memory improvements at virtually no cost.
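As a hypothetical illustration of that storage side (names and sizes are made up), one could downcast a float64 posterior sample to float32 and check that the usual summaries barely move:

```python
import numpy as np

rng = np.random.default_rng(1)
draws64 = rng.normal(size=(4_000, 500))  # 4000 draws x 500 parameters
draws32 = draws64.astype(np.float32)     # half the memory

print("memory:", draws64.nbytes // 2**20, "MB ->", draws32.nbytes // 2**20, "MB")
print("max |mean diff|:", np.max(np.abs(draws64.mean(0) - draws32.mean(0))))
print("max |sd diff|  :", np.max(np.abs(draws64.std(0) - draws32.std(0))))
```

The differences in means and standard deviations are tiny relative to Monte Carlo error, which is what makes me think the cost would be negligible there.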
Where am I wrong?