Generated quantities block

I have a more general query regarding the generated quantities block and how it carries out its sampling. I have a parameter that is simulated from a normal distribution in the generated quantities block. This model is mis-specified: the quantity being simulated cannot be negative, so a normal distribution isn't a good approximation for its distribution, and when it returns negatives they are subsequently rejected (since I set lower = 0). My question is why, when I remove this calculation from the generated quantities block entirely, the autocorrelation improves in the other parameters. I'm a little confused about how the generated quantities block carries out its iterations. Does it affect the autocorrelation between the other samples? Roughly speaking, when the sampler rejects draws in the generated quantities block, does that mean the samples being used for that iteration are rejected too, thereby affecting n_eff in my other parameters? (Forgive my lack of understanding; I'm a beginner to Stan and Bayesian statistics.)

Nothing in the generated quantities block should have any effect on anything outside the generated quantities block. The generated quantities block is only evaluated after a new proposal has been chosen by the leapfrog process. I think what you are seeing is just random differences, but you shouldn't be relying on lower = 0 in the generated quantities block in the first place. If you need truncated normal draws in generated quantities, use a while loop until you get a positive realization, or use the inverse CDF method after calculating the lower bound on the uniform draw.
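For example, the while-loop version might look something like this (just a minimal sketch; mu and sigma here stand in for whatever mean and scale your model actually uses):

generated quantities {
  real pred = -1;  // sentinel value so the loop runs at least once
  while (pred <= 0) {
    pred = normal_rng(mu, sigma);  // redraw until the realization is positive
  }
}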

Thanks for the reply!

Do you know where I could see some examples of using these methods rather than using lower = 0? Also, what difference does this make?

~ Many thanks

It is pretty much the same for the normal distribution, in the sense that you first have to figure out the lower bound on the uniform draw, which is Phi((0 - mu) / sigma). Then inv_Phi(uniform_rng(lower_bound, 1.0)) * sigma + mu will be positive and there won't be any rejections.
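In Stan code, that might look like the following (again just a sketch, with mu and sigma standing in for your actual mean and scale):

generated quantities {
  real lb = Phi((0 - mu) / sigma);  // probability mass of the normal below zero
  real pred = inv_Phi(uniform_rng(lb, 1)) * sigma + mu;  // inverse CDF draw, always positive
}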

Thank you, that was really helpful. I will definitely try to code that in, as the rejections occur a lot.

As a side note, I feel I should have mentioned that the difference in n_eff after removing the parameter that was causing issues in the generated quantities block was striking: it went up by thousands, which makes me think it is not random and something else is going on. The parameter I was simulating in the generated quantities block looks something like this:

Pred[j] = normal_rng(theta * weight[j] * agacoef * agacov[j], theta * weight[j] * agecoef * agecov[j] * omega[1])

and when removed it caused n_eff to go up by thousands in the parameters theta, agacoef, and some others, which was very strange. Could it be my parameterisation of Pred?

This does create (pretty sure) a ledge that the HMC integrator can "fall off", and that would mess up adaptation, among other things.

But Stan only evaluates the generated quantities once per iteration, after the leapfrog steps are done; it does not evaluate them at each leapfrog step. So I am not seeing how it should make any difference to NUTS.