I think you need to step back and reconsider how you’re approaching your generative model. I had the same kinds of questions when I first started learning Stan & Bayesian generative modelling, and it took me a long time to grasp how they reflected a fundamental misunderstanding on my part.

None of your parameters can be discrete, so a parameter cannot be distributed as a Bernoulli outcome. If you have a latent variable that you’re thinking of as discrete, it’s more likely that it’s actually a continuous thing (e.g. a probability, or a propensity) that we can only observe discretely. If you don’t have those observations, and instead have some other variable that you hypothesize the latent variable influences, the proper thing to do is leave the latent variable as continuous and model the influence of this continuous latent variable on the outcome you can observe.
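As a rough sketch of what I mean (all names here are hypothetical, not from your model): rather than a discrete latent indicator, you give each unit a continuous latent propensity and let that propensity drive the binary thing you actually observe:

```stan
data {
  int<lower=0> N;
  array[N] int<lower=0, upper=1> y;  // the discretely-observed outcome
}
parameters {
  real mu;               // population location of the latent propensity
  real<lower=0> sigma;   // population scale
  vector[N] eta;         // continuous latent propensity -- NOT a discrete state
}
model {
  mu ~ normal(0, 1);
  sigma ~ normal(0, 1);
  eta ~ normal(mu, sigma);     // hierarchical prior on the continuous latent variable
  y ~ bernoulli_logit(eta);    // the propensity influences what we can observe
}
```

Everything stays continuous in the parameters block, so HMC can work with it, and the discreteness lives only in the data.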

Similarly, no random sampling is permitted in the model or transformed parameters sections in Stan, because that would imply a stochastic gradient, which would generally break HMC. I think there’s been work on adding a degree of stochasticity, but as far as I understand that’s aimed at very complicated models where the compute time for the non-stochastic representation is extraordinary. More typically (at least, this was the case when I had similar questions), wanting to grab a random sample of-or-from something in a Stan model reflects a misunderstanding of what’s actually going on in a Stan program. For me, the key insight was that while the `x ~ distribution(...)` syntax *looked* like it was doing random sampling, it was in fact expressing the model, and that behind the scenes `x` is not being sampled from `distribution`.
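To make that concrete (variable names here are just for illustration): a sampling statement is simply shorthand for incrementing the accumulated log density, so the two lines below express the same model (up to constant terms that `~` drops):

```stan
model {
  x ~ normal(mu, sigma);            // no random draw happens here; this just
                                    // adds the normal log density to the target
  // equivalently (and keeping the normalizing constants):
  // target += normal_lpdf(x | mu, sigma);
}
```

Once you see `~` as “add this term to the log posterior,” it’s clearer why there’s no place in the model block for drawing random numbers; actual simulation belongs in generated quantities with the `_rng` functions.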