I have been reading about discrete mixture models, and I know that in Stan it is necessary to marginalize over the latent categorical variables. It seems, however, that one can easily recover the latent discrete probabilities (or sample the indicators with a random number generator) from the transformed parameter (usually called lp in the user's guide) that must be computed anyway.
My question is whether there is any disadvantage to inferring the discrete latent parameters this way, compared with sampling them directly using MCMC algorithms that support discrete parameters.
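To make the recovery step concrete, here is a minimal sketch in Python rather than Stan, assuming a two-component normal mixture with fixed weights, means, and scales (all illustrative values, not from any particular model). It computes the same per-component lp terms a marginalized Stan model accumulates, then normalizes them with a softmax to get the latent class probabilities:

```python
import math
import random

def normal_lpdf(y, mu, sigma):
    # log density of Normal(mu, sigma) evaluated at y
    return -0.5 * ((y - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def latent_probs(y, weights, mus, sigmas):
    # lp[k] = log(weights[k]) + log p(y | component k): the per-component
    # terms a marginalized mixture model computes before log_sum_exp
    lp = [math.log(w) + normal_lpdf(y, m, s)
          for w, m, s in zip(weights, mus, sigmas)]
    # normalize on the log scale: P(z = k | y) = softmax(lp)[k]
    m = max(lp)
    unnorm = [math.exp(v - m) for v in lp]
    total = sum(unnorm)
    return [u / total for u in unnorm]

probs = latent_probs(0.9, [0.5, 0.5], [0.0, 1.0], [1.0, 1.0])
# if a draw of the discrete indicator is wanted rather than its probabilities:
z = random.choices([0, 1], weights=probs)[0]
```

Doing this per posterior draw (in Stan, inside generated quantities with categorical_rng) yields draws of the indicators without the sampler ever touching a discrete parameter.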
Thank you for your attention.
So the sampler in Stan (a variant of HMC) only supports continuous parameters. I think early in Stan's development there were experiments mixing in other samplers for the discrete parameters, but the chains didn't mix well.
For the most part we expect that if you can integrate out a random variable (be it continuous or discrete), it will make your posterior easier to sample. So if you can integrate out these discrete random variables, do.
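The point about integrating out the discrete variable can be sketched as follows, again in Python with illustrative parameter values. Summing the indicator z out of the joint leaves a log likelihood that depends only on the continuous parameters, and it is smooth in them, which is exactly what a gradient-based sampler like HMC needs:

```python
import math

def normal_lpdf(y, mu, sigma):
    # log density of Normal(mu, sigma) evaluated at y
    return -0.5 * ((y - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def log_sum_exp(xs):
    # numerically stable log(sum(exp(x) for x in xs))
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def mixture_loglik(ys, weights, mus, sigmas):
    # For each observation, the discrete indicator is summed out of the joint:
    #   p(y) = sum_k weights[k] * p(y | component k)
    # computed on the log scale; the result is a differentiable function of
    # the continuous parameters alone.
    total = 0.0
    for y in ys:
        lp = [math.log(w) + normal_lpdf(y, m_, s)
              for w, m_, s in zip(weights, mus, sigmas)]
        total += log_sum_exp(lp)
    return total

ll = mixture_loglik([-0.2, 0.1, 1.3], [0.5, 0.5], [0.0, 1.0], [1.0, 1.0])
```

This is the same computation Stan's log_mix and log_sum_exp functions perform inside the model block of a marginalized mixture.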
Mixture models themselves can be quite difficult: Identifying Bayesian Mixture Models
There’s a new function to make HMMs easier to work with: Hidden Markov Models | Stan Functions Reference