In a previous topic, @bbbales2 helped me to understand that the generated quantities block is run for each sample draw.

I interpreted this as essentially a for loop.

```
// Conceptual pseudocode: the block runs once per posterior draw.
for (i in 1:"number of sample draws") {
  generated quantities {
    some_post_pred = some_distribution_rng(parameters[i]);
  }
}
```

I then went ahead and coded up a more complex generated quantity.

```
generated quantities {
  vector[N] y_new;
  int b_new;
  vector[N] x_new;

  b_new = 1000;            // sentinel so the loop body runs at least once
  while (b_new >= 20) {    // regenerate until b_new < 20
    for (i in 1:N) {
      x_new[i] = -1;       // rejection sampling to truncate the normal at 0
      while (x_new[i] < 0) {
        x_new[i] = normal_rng(mean(x), sd(x));
      }
    }
    b_new = poisson_rng(sum(x_new) * lambda); // works when lambda is replaced with its true value, 0.012
  }
  for (i in 1:N) {
    y_new[i] = normal_rng(beta * x_new[i], sigma);
  }
}
```

What I want the generated quantities block to produce is two posterior predictions: `b_new` for some randomly generated `x_new` such that `b_new` is less than 20, and `y_new` for the same `x_new` that made `b_new` less than 20. How's that for a tongue twister?

When I fit the model as written, it stalls (at least it takes > 24 hours). But when I replace `lambda` with its true value, 0.012, it does what I want. Any idea why this would be the case?

It might help me, and others, if someone could explain how Stan evaluates the generated quantities block. Should we think of it as iterating over sample draws, or as something else?

If this quantity can't be calculated in Stan, my backup approach would be to iterate over each draw after extracting the model fit in R. Of course, my preference would be to run everything in Stan, since then I can be sure the distribution parameterizations are identical.
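For reference, here is a minimal sketch of that backup approach, redoing the same rejection logic once per posterior draw after extraction. I've written it in Python with NumPy (the same loop translates directly to R over `rstan::extract()` output); the posterior draws below are fake stand-ins, and all the sizes and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the data and extracted posterior draws.
n_draws, N = 200, 20
x = rng.normal(30.0, 5.0, size=N)                    # observed predictor
beta = rng.normal(0.5, 0.05, size=n_draws)           # fake draws of beta
sigma = np.abs(rng.normal(1.0, 0.1, size=n_draws))   # fake draws of sigma
lam = np.abs(rng.normal(0.012, 0.002, size=n_draws)) # fake draws of lambda

b_new = np.empty(n_draws, dtype=int)
y_new = np.empty((n_draws, N))

for s in range(n_draws):
    # Reject and regenerate x_new until the implied b_new is < 20.
    while True:
        # Truncate the normal at 0 by resampling any negative values.
        x_new = rng.normal(x.mean(), x.std(ddof=1), size=N)
        while (x_new < 0).any():
            neg = x_new < 0
            x_new[neg] = rng.normal(x.mean(), x.std(ddof=1), size=neg.sum())
        b = rng.poisson(x_new.sum() * lam[s])
        if b < 20:
            break
    b_new[s] = b
    # y_new uses the same accepted x_new and this draw's beta and sigma.
    y_new[s] = rng.normal(beta[s] * x_new, sigma[s])
```

One advantage of doing this outside Stan is that a draw whose `lambda` makes `b_new < 20` astronomically unlikely only stalls that one iteration of a plain loop you can interrupt and inspect, rather than the whole fit.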

I’ve attached a reproducible example below (.R and .stan). Thanks for all the help.

gen_quant_test.stan (697 Bytes)

gen_quant_test.R (1.1 KB)