Generating random numbers in the model

Sampling doesn’t optimize the log density, but the optimizer does.

Yes, at the moment I’m mainly interested in ML/MAP.

What you’re suggesting is often done for things like multiple imputation, where a full joint analysis is too costly. […] But we don’t support these things in Stan, because they’re not based on a model plus full Bayesian inference; instead, they’re procedural.
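For concreteness, here is a minimal sketch of the "make it a parameter" approach: the missing covariate values are declared as parameters, so they are handled by the model itself rather than imputed procedurally. All names here (N_obs, x_mis, alpha, beta, etc.) are illustrative assumptions, not taken from any model in this thread.

```stan
// Sketch: missing covariate values treated as parameters in a simple
// normal regression, so inference handles them jointly with the other
// parameters instead of relying on a separate imputation step.
data {
  int<lower=0> N_obs;           // observations with x observed
  int<lower=0> N_mis;           // observations with x missing
  vector[N_obs] x_obs;
  vector[N_obs] y_obs;
  vector[N_mis] y_mis;
}
parameters {
  vector[N_mis] x_mis;          // missing covariate values as parameters
  real mu_x;
  real<lower=0> sigma_x;
  real alpha;
  real beta;
  real<lower=0> sigma_y;
}
model {
  // same covariate model for observed and missing entries
  x_obs ~ normal(mu_x, sigma_x);
  x_mis ~ normal(mu_x, sigma_x);
  // same regression for both groups of observations
  y_obs ~ normal(alpha + beta * x_obs, sigma_y);
  y_mis ~ normal(alpha + beta * x_mis, sigma_y);
}
```

With x_mis declared as parameters, sampling averages over the missing values, while optimization (ML/MAP) finds a joint mode in the parameters and the missing values together.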

Yes, it is multiple imputation in a way. Now I see what you were suggesting above when you told me to make it a parameter — thanks. But this amounts to a very different computational method. I’d be curious to see how the two compare — but in order to compare them, I’d still need to implement the original simulated likelihood, probably by passing in random numbers as part of the data.
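A minimal sketch of what the simulated-likelihood version might look like, with the random numbers passed in as data: standard normal draws are generated outside Stan, supplied as a matrix, and averaged over to approximate the marginal likelihood. The structure and names (K draws per observation, a latent scale tau) are assumptions for illustration, not the original model.

```stan
// Sketch: simulated (Monte Carlo) likelihood using pre-generated
// standard normal draws z passed in as data. Each observation's
// likelihood is approximated by averaging over K simulated latent values.
data {
  int<lower=1> N;
  int<lower=1> K;               // number of simulation draws per observation
  vector[N] y;
  matrix[N, K] z;               // pre-generated standard normal draws
}
parameters {
  real mu;
  real<lower=0> tau;            // scale of the simulated latent effect
  real<lower=0> sigma;
}
model {
  for (n in 1:N) {
    vector[K] lp;
    for (k in 1:K)
      lp[k] = normal_lpdf(y[n] | mu + tau * z[n, k], sigma);
    // log of the average likelihood over the K simulated draws
    target += log_sum_exp(lp) - log(K);
  }
}
```

Because the draws z are fixed data, the log density is a deterministic function of the parameters, so it can be optimized for ML/MAP as well as sampled, which would allow the comparison with the parameter-based version above.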