But this brings up a more general question of where to do posterior summarization.
It’s a mix. Do it wherever it is easiest/makes the most sense. But usually if you already wrote the model in Stan it’s easy to just copy-paste a few things and compute whatever you want in the generated quantities.
generated quantities {
  vector[10] y_pred;
  real E_y_pred;  // expectation of the posterior prediction
  for (i in 1:10)
    y_pred[i] = normal_rng(beta * x_new[i], sigma);
  E_y_pred = mean(y_pred);
}
You could do this, but it isn’t the biggest use case. Mostly you do your statistics on the output of generated quantities, not inside it. You can do it inside, but the simpler and more common pattern is to generate the quantities in Stan and summarize them outside (if that makes any sense…). More like this:
generated quantities {
  real y_pred = normal_rng(beta * x_new, sigma);
}
Then once the fit finishes, you can just print the fit (using whatever interface you’re using) and see the posterior statistics on y_pred (mean, sd, credible intervals), or make plots of the samples of y_pred directly.
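If you’d rather pull the draws out and summarize them yourself, here’s a minimal Python sketch. It assumes you’ve already extracted the y_pred draws from your fit into a plain list (e.g. via fit.stan_variable("y_pred") in CmdStanPy); the toy numbers below are made up:

```python
import statistics

# Made-up stand-in for the posterior draws of y_pred pulled from a fit.
y_pred_draws = [1.9, 2.1, 2.0, 2.4, 1.8, 2.2, 2.0, 2.1]

# Posterior mean and sd of the predictive quantity.
post_mean = statistics.mean(y_pred_draws)
post_sd = statistics.stdev(y_pred_draws)

# Crude 90% credible interval from the empirical 5% / 95% quantiles
# (just index into the sorted draws).
ordered = sorted(y_pred_draws)
lo = ordered[int(0.05 * (len(ordered) - 1))]
hi = ordered[int(0.95 * (len(ordered) - 1))]
```

In practice you’d reach for numpy.percentile or ArviZ instead of hand-rolled quantiles; the point is just that the summarization happens on the draws, outside Stan.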
But it seems strange that you’d only have one y value you’re modeling.
Generated quantities usually mirror the model block, in that you use the parameters from the draw you’re in to generate data that (hopefully) looks like the stuff you measured. It’s all about posterior predictive stuff.
So if the model block (not using vectorization) looks like:
for (n in 1:N) {
  y[n] ~ normal(mu, sigma);
}
Then the generated quantities block looks like:

vector[N] y_pred;
for (n in 1:N) {
  y_pred[n] = normal_rng(mu, sigma);
}
That’ll leave you with 4000 copies of your dataset (one per posterior draw, with Stan’s default of 4 chains × 1000 draws each) that you can plot and analyze and whatever. It’s a lot of data. If you have anything other than a small model, you probably gotta be selective about the generated stuff to not overload yourself.
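One way to keep that manageable is to collapse the draws-by-N predictive matrix into per-observation summaries as soon as you load it, rather than hauling the full matrix around. A minimal Python sketch, with a tiny made-up matrix standing in for the real draws:

```python
import statistics

# Made-up posterior predictive draws: rows are draws, columns are the
# N data points (here 4 draws x 3 observations for illustration).
y_pred_draws = [
    [1.0, 5.0, 9.0],
    [1.2, 5.1, 8.8],
    [0.9, 4.9, 9.3],
    [1.1, 5.0, 9.1],
]

# Collapse over draws: keep one posterior predictive mean per
# observation instead of draws x N raw values.
n_obs = len(y_pred_draws[0])
pred_means = [
    statistics.mean(draw[j] for draw in y_pred_draws) for j in range(n_obs)
]
```

You can then overlay pred_means (and per-observation intervals computed the same way) on the observed y for a quick posterior predictive check.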