That’s one of the reasons we’re trying very hard to reach out to grad students and new users in the scientific community, which in itself plays into the next comment.
I can’t stress that enough. It helps that a lot of people besides the core Stan developers are publishing books, tutorials, and papers.
If you do have the resources to learn it, the tool itself becomes important. That plays into things like being able to debug, evaluate, and so on. A lot of that is tied into the ecosystem in R, Python, etc. We still feel like we’re playing catch-up in getting all of our tooling up to the standards we’d like for our own applied work.
Lots of things, like developing more than one model in sequence, are pretty painful because of naming, cut-and-paste, etc.
We’ve been pushing hard on rigorous evaluation of both models and software. We’re going to have a public paper soon on how to do all this, based on scaling up the Cook-Gelman-Rubin diagnostics and making them more robust while retaining sensitivity.
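The core idea behind the Cook-Gelman-Rubin diagnostic is easy to sketch on a toy model: draw a parameter from the prior, simulate data given it, fit the model, and check where the prior draw ranks among the posterior draws. If the inference is correct, those ranks are uniform. Here is a minimal sketch, assuming a conjugate normal-normal model so the posterior can be sampled exactly; all function and parameter names are mine, not from any Stan interface.

```python
import math
import random

def cgr_ranks(n_sims=500, n_obs=10, n_draws=20, seed=1):
    """Cook-Gelman-Rubin-style check on a toy conjugate model:
    prior theta ~ N(0, 1), data y_i ~ N(theta, 1).
    If inference is calibrated, the rank of the prior draw among the
    posterior draws is uniform on {0, ..., n_draws}."""
    rng = random.Random(seed)
    ranks = []
    for _ in range(n_sims):
        theta0 = rng.gauss(0, 1)                          # draw from the prior
        y = [rng.gauss(theta0, 1) for _ in range(n_obs)]  # simulate data
        post_mean = sum(y) / (n_obs + 1)                  # exact conjugate posterior
        post_sd = math.sqrt(1.0 / (n_obs + 1))
        draws = [rng.gauss(post_mean, post_sd) for _ in range(n_draws)]
        ranks.append(sum(d < theta0 for d in draws))      # rank statistic
    return ranks
```

In practice the posterior draws would come from the fitted sampler rather than a closed form, and the robustness work mentioned above is about testing those ranks for uniformity reliably at scale.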
In higher dimensions, neither Gibbs nor Metropolis is going to mix well.
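The poor mixing is easy to see empirically for random-walk Metropolis: with a fixed step size, the acceptance rate collapses as the dimension grows, so the chain barely moves. Here is a minimal sketch on a standard normal target; the function name and defaults are my own illustration.

```python
import math
import random

def rw_metropolis_accept_rate(dim, step=0.5, iters=20000, seed=0):
    """Estimate the acceptance rate of random-walk Metropolis with a
    fixed per-coordinate step size on a standard normal target in
    `dim` dimensions."""
    rng = random.Random(seed)
    x = [0.0] * dim
    lp = -0.5 * sum(xi * xi for xi in x)      # log density up to a constant
    accepts = 0
    for _ in range(iters):
        prop = [xi + step * rng.gauss(0, 1) for xi in x]
        lp_prop = -0.5 * sum(pi * pi for pi in prop)
        if math.log(rng.random()) < lp_prop - lp:   # Metropolis accept step
            x, lp = prop, lp_prop
            accepts += 1
    return accepts / iters
```

In one dimension this step size is accepted most of the time; in a hundred dimensions almost every proposal is rejected, which is exactly why Stan uses gradient-based HMC/NUTS instead.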
It’s not something we’ve been thinking about lately, but we should come back to it with Stan 3. The problem we have is that we don’t require users to write directed graphical models, so it’s hard for us to infer structure from a Stan program. That makes it hard to do discrete Gibbs efficiently. The reason we’re not motivated to literally add discrete sampling is that it’s horribly inefficient, and you usually can’t recover the parameters from simulated data. What we would very much like to be able to do is automatically marginalize discrete parameters out of a model (we could add samples back if people want to do inefficient inference, or calculate expectations directly if people want to do efficient inference).
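To make the marginalization idea concrete, here is a sketch in Python of the standard pattern for a two-component mixture: instead of sampling the discrete component indicator z, sum it out with log-sum-exp, and recover its posterior probability afterward as an expectation. In a Stan program this is the usual `log_sum_exp` idiom; the helper names below are mine.

```python
import math

def log_normal(y, mu, sigma):
    """Log density of N(mu, sigma) at y."""
    return -0.5 * ((y - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def log_sum_exp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def mixture_loglik(y, lam, mu1, mu2, sigma=1.0):
    """Marginalize the discrete indicator z out of a two-component
    mixture: log p(y) = log sum_z p(z) p(y | z)."""
    return log_sum_exp(math.log(lam) + log_normal(y, mu1, sigma),
                       math.log(1.0 - lam) + log_normal(y, mu2, sigma))

def component_prob(y, lam, mu1, mu2, sigma=1.0):
    """Posterior P(z = 1 | y), recovered after marginalization --
    the 'calculate expectations' route mentioned above."""
    l1 = math.log(lam) + log_normal(y, mu1, sigma)
    return math.exp(l1 - mixture_loglik(y, lam, mu1, mu2, sigma))
```

Because the marginalized density is smooth in the continuous parameters, HMC can sample it efficiently, and the discrete quantity comes back as a probability rather than a noisy sequence of 0/1 draws.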