You’re only going to be able to write down the posterior for the variance in closed form if you have a conjugate prior, which is an inverse gamma for the variance (or a gamma for the precision). So you’re still going to need to code up the sampling for that.
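To sketch what the conjugate case looks like (assuming the mean `mu` is known, so the inverse gamma prior on the variance is conjugate on its own; the helper name below is my own), the posterior is just a parameter update:

```python
import random

def inv_gamma_posterior(alpha, beta, y, mu):
    """Conjugate update for sigma^2 ~ InvGamma(alpha, beta) with
    y_i ~ Normal(mu, sigma) and mu known.  The posterior is
    InvGamma(alpha + n / 2, beta + 0.5 * sum((y_i - mu)^2))."""
    n = len(y)
    ss = sum((yi - mu) ** 2 for yi in y)
    return alpha + n / 2, beta + 0.5 * ss

# Example: simulate data and update a vague InvGamma(1, 1) prior.
random.seed(1)
mu, sigma = 2.0, 1.5
y = [random.gauss(mu, sigma) for _ in range(1000)]
a_post, b_post = inv_gamma_posterior(1.0, 1.0, y, mu)

# Posterior mean of sigma^2 is b / (a - 1); with this much data it
# should sit near the true value sigma^2 = 2.25.
print(b_post / (a_post - 1))
```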
@mike-lawrence provides an example here: *PSA: using stantargets for SBC(-esque) checks, & big speedups for big gaussian likelihoods*
Your scheme looks right, except that Stan’s `normal` uses a scale (standard deviation) parameterization, not a variance parameterization. That means replacing
```stan
y ~ normal(mu, sigma);
target += -2 * log(sigma);  // p(mu, sigma) propto 1 / sigma^2
```
with
```stan
mean(y) ~ normal(mu, sigma / sqrt(n));
variance(y) ~ gamma((n - 1) / 2, (n - 1) / 2 / sigma^2);
target += -2 * log(sigma);
```
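As a quick numerical sanity check on that factorization (a sketch in Python rather than Stan; the helper names are my own), the sufficient-statistic log density should differ from the full-data log likelihood only by a constant that does not depend on `mu` or `sigma`:

```python
import math
import random
from statistics import mean, variance  # variance divides by n - 1

def lp_full(y, mu, sigma):
    """Full-data log likelihood: sum of Normal(mu, sigma) log densities."""
    return sum(-0.5 * math.log(2 * math.pi) - math.log(sigma)
               - 0.5 * ((yi - mu) / sigma) ** 2 for yi in y)

def lp_suff(y, mu, sigma):
    """Sufficient-statistic version: mean(y) ~ Normal(mu, sigma / sqrt(n))
    and variance(y) ~ Gamma((n - 1) / 2, (n - 1) / (2 * sigma^2))."""
    n = len(y)
    ybar, s2 = mean(y), variance(y)
    # Normal log density for the sample mean.
    sd = sigma / math.sqrt(n)
    lp = (-0.5 * math.log(2 * math.pi) - math.log(sd)
          - 0.5 * ((ybar - mu) / sd) ** 2)
    # Gamma log density for the sample variance.
    a = (n - 1) / 2
    b = a / sigma ** 2
    lp += a * math.log(b) - math.lgamma(a) + (a - 1) * math.log(s2) - b * s2
    return lp

random.seed(0)
y = [random.gauss(0.0, 1.0) for _ in range(50)]

# The gap should be the same constant for any (mu, sigma).
gaps = [lp_full(y, m, s) - lp_suff(y, m, s)
        for m, s in [(0.0, 1.0), (1.3, 0.7), (-2.0, 3.1)]]
print(gaps)
```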
Here, it’s better to just compute `mean(y)`, `variance(y)`, and `(n - 1) / 2` once and store them. Also, I’d be careful to check that the variance required here is the sample variance (divides by `n - 1`) rather than the MLE (divides by `n`).
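To illustrate the `n - 1` vs. `n` distinction, Python’s standard library happens to expose the two conventions as separate functions:

```python
from statistics import variance, pvariance

y = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(y)

print(variance(y))   # sample variance, divides by n - 1
print(pvariance(y))  # MLE / population variance, divides by n

# The two are related by a factor of n / (n - 1).
assert abs(variance(y) - pvariance(y) * n / (n - 1)) < 1e-12
```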