In for a penny, in for a pound, as they say. If you’re incrementing target manually, then you have to make the appropriate increment for truncations, too. But I would only do that where it matters. In this case,
y ~ normal(0, 1) T[-1, 1];
because the parameters of the normal distribution are constant here, the truncation only contributes a constant to the target, assuming y is appropriately constrained to (-1, 1). As far as sampling goes, the above is equivalent to the following.
y ~ normal(0, 1);
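To see why the two are equivalent, here is a quick numerical check, sketched in plain Python rather than Stan (the normal density and CDF are written out with stdlib functions): with constant parameters and bounds, the truncated log density differs from the untruncated one by the same constant for every y, so the posterior geometry is unchanged.

```python
import math

def normal_lpdf(y, mu, sigma):
    # log density of normal(mu, sigma)
    z = (y - mu) / sigma
    return -0.5 * z * z - math.log(sigma) - 0.5 * math.log(2 * math.pi)

def normal_cdf(y, mu, sigma):
    # normal CDF via the error function
    return 0.5 * (1 + math.erf((y - mu) / (sigma * math.sqrt(2))))

def truncated_lpdf(y, mu, sigma, lb, ub):
    # log density of normal(mu, sigma) truncated to [lb, ub]
    log_norm = math.log(normal_cdf(ub, mu, sigma) - normal_cdf(lb, mu, sigma))
    return normal_lpdf(y, mu, sigma) - log_norm

# With mu = 0, sigma = 1 fixed, the shift is constant in y.
d1 = truncated_lpdf(0.2, 0, 1, -1, 1) - normal_lpdf(0.2, 0, 1)
d2 = truncated_lpdf(-0.7, 0, 1, -1, 1) - normal_lpdf(-0.7, 0, 1)
print(abs(d1 - d2) < 1e-12)  # prints True: the shift doesn't depend on y
```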
Now if you have normal(mu, sigma)
and mu
and/or sigma
are parameters, then the truncation term is no longer constant but depends on those parameters. In that case, you need to add the effect of truncation manually, which just normalizes by a difference in CDFs.
y ~ normal(mu, sigma) T[-1, 1];
is equivalent to:
target += normal_lupdf(y | mu, sigma);
target += -log_diff_exp(normal_lcdf(1 | mu, sigma),
                        normal_lcdf(-1 | mu, sigma));
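As a sanity check on that normalization (again a sketch in plain Python rather than Stan; the particular values of mu and sigma are illustrative), the manually normalized density should integrate to 1 over (-1, 1) even when mu and sigma are not 0 and 1:

```python
import math

def normal_pdf(y, mu, sigma):
    z = (y - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(y, mu, sigma):
    return 0.5 * (1 + math.erf((y - mu) / (sigma * math.sqrt(2))))

def truncated_pdf(y, mu, sigma, lb, ub):
    # pdf divided by the CDF difference, matching the two target statements
    return normal_pdf(y, mu, sigma) / (
        normal_cdf(ub, mu, sigma) - normal_cdf(lb, mu, sigma))

# crude midpoint quadrature over (-1, 1) with non-trivial mu, sigma
mu, sigma, lb, ub = 0.3, 0.8, -1.0, 1.0
n = 100_000
h = (ub - lb) / n
total = h * sum(truncated_pdf(lb + (i + 0.5) * h, mu, sigma, lb, ub)
                for i in range(n))
print(abs(total - 1.0) < 1e-6)  # prints True: the density is properly normalized
```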
The full set of rules is in the reference manual. In math, if
y \sim \text{normal}(\mu, \sigma) \ T[-1, 1],
then
p(y \mid \mu, \sigma) = \dfrac{\text{normal}^\text{pdf}(y \mid \mu, \sigma)}{\text{normal}^\text{cdf}(1 \mid \mu, \sigma) - \text{normal}^\text{cdf}(-1 \mid \mu, \sigma)},
where \text{normal}^\text{pdf} is the probability density function for the normal and \text{normal}^\text{cdf} is the cumulative distribution function. We’re just working above on the log scale and trying to keep things stable, where
\text{logDiffExp}(a, b) = \log \left(\exp(a) - \exp(b)\right).
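Stan provides log_diff_exp as a built-in; a common stable formulation, sketched here in Python, rewrites it as a + log1p(-exp(b - a)) for a > b, so neither exp(a) nor exp(b) is evaluated directly:

```python
import math

def log_diff_exp(a, b):
    # stable log(exp(a) - exp(b)), assuming a > b
    return a + math.log1p(-math.exp(b - a))

# matches the naive formula where the naive one doesn't overflow
a, b = 2.0, 1.0
naive = math.log(math.exp(a) - math.exp(b))
print(abs(log_diff_exp(a, b) - naive) < 1e-12)  # prints True

# still finite where exp(1000) would overflow a double
print(log_diff_exp(1000.0, 999.0))  # about 1000 + log(1 - exp(-1))
```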
If you want the normalization constants in the normal, then replace _lupdf
with _lpdf.