I recognize that discrete parameters have been requested many times, and I’m fully convinced that most such models are computed much more efficiently by marginalization/Rao-Blackwellization, and that many models that can’t be marginalized are intractable anyway. However, I’m seeing a number of other Bayesian inference packages offer mixture-type methods for this problem. For example, Turing.jl offers “compositional inference,” which combines Gibbs samplers with NUTS to sample both discrete and continuous variables, and PyMC3 lets you do something similar.
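(For concreteness, here is a minimal sketch of the marginalization I mean, using a made-up two-component Gaussian mixture in plain Python: instead of sampling a discrete indicator z, you sum it out with a log-sum-exp, leaving a purely continuous density HMC can handle. All names and numbers are my own illustration, not anyone’s API.)

```python
import math

def normal_logpdf(y, mu, sigma):
    # log density of N(y | mu, sigma)
    return (-0.5 * math.log(2 * math.pi) - math.log(sigma)
            - 0.5 * ((y - mu) / sigma) ** 2)

def log_sum_exp(a, b):
    # numerically stable log(exp(a) + exp(b))
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def marginal_loglik(y, theta, mu0, mu1, sigma):
    # Sum out the discrete indicator z in {0, 1}:
    # log p(y) = log[ theta * N(y|mu0,sigma) + (1-theta) * N(y|mu1,sigma) ]
    lp0 = math.log(theta) + normal_logpdf(y, mu0, sigma)
    lp1 = math.log(1 - theta) + normal_logpdf(y, mu1, sigma)
    return log_sum_exp(lp0, lp1)

# Sanity check against the direct (non-log) sum over z:
y, theta, mu0, mu1, sigma = 1.3, 0.4, 0.0, 2.0, 1.0
direct = (theta * math.exp(normal_logpdf(y, mu0, sigma))
          + (1 - theta) * math.exp(normal_logpdf(y, mu1, sigma)))
print(abs(marginal_loglik(y, theta, mu0, mu1, sigma) - math.log(direct)) < 1e-12)
```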
Given the quality of governance I’ve seen on this project, I’m assuming there’s a reason this isn’t in Stan. As someone with a passable understanding of HMC and essentially no understanding of how Gibbs sampling works, is there a reason I shouldn’t trust these methods?