Sampling from truncated lognormal

I think rejection sampling is the easiest to code, though the loop can effectively get stuck if you’re unlucky and most of the lognormal’s mass sits above the truncation point:

generated quantities{
  real<lower = 0, upper = 20> y_hat_level[NUMBER_OF_TRAINING_POINTS];

  for(training_point in 1:NUMBER_OF_TRAINING_POINTS){
    // initial draw from the untruncated lognormal
    real generated = lognormal_rng(mu[training_point], sigma);

    // redraw until the sample respects the upper bound; the lognormal is
    // strictly positive, so only the upper truncation needs checking
    while(generated > 20) {
      generated = lognormal_rng(mu[training_point], sigma);
    }

    y_hat_level[training_point] = generated;
  }
}

The other option is computing the inverse CDF of the truncated distribution. Then if you generate a uniform random number on [0, 1], you can push it through that inverse CDF to get a draw directly, with no rejection loop.
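
For the lognormal this works out nicely on the underlying normal scale: the inverse CDF of the truncated distribution is just the normal inverse CDF applied to a rescaled uniform draw. A sketch, reusing the names from above (Phi and inv_Phi are Stan’s standard-normal CDF and inverse CDF):

generated quantities{
  real<lower = 0, upper = 20> y_hat_level[NUMBER_OF_TRAINING_POINTS];

  for(training_point in 1:NUMBER_OF_TRAINING_POINTS){
    // CDF of the truncation point, computed on the log (normal) scale
    real p_upper = Phi((log(20) - mu[training_point]) / sigma);

    // uniform draw restricted to [0, p_upper], pushed back through the
    // normal inverse CDF and exponentiated
    real u = uniform_rng(0, p_upper);

    y_hat_level[training_point] = exp(mu[training_point] + sigma * inv_Phi(u));
  }
}

Unlike the rejection loop, this costs exactly one draw per data point no matter where the truncation point sits.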

Yet another option is to add y_hat_level as a parameter (declared with the same bounds) and add a sampling statement to the model block so that it gets drawn by MCMC along with everything else:

y_hat_level ~ lognormal(mu, sigma);

I suspect in this case you’d get mixing problems, though. You could probably sample this on the log scale with a constrained non-centered parameterization and then exp-transform the result (see Non-centered parameterisation with boundaries).
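
Roughly, that might look like the sketch below. This assumes mu and sigma are declared earlier in the parameters block (bounds can only reference data and previously declared parameters) and a Stan version that accepts vector-valued bounds. I’ve also added the truncation normalizer term, since without it the extra sampling statements would pull on the posterior for mu and sigma:

parameters {
  // standard-normal scale; the upper bound enforces exp(mu + sigma * raw) < 20
  vector<upper = (log(20) - mu) / sigma>[NUMBER_OF_TRAINING_POINTS] y_hat_raw;
}

model {
  y_hat_raw ~ std_normal();

  // cancel the truncation normalizer so these extra terms
  // don't distort the posterior for mu and sigma
  for(n in 1:NUMBER_OF_TRAINING_POINTS)
    target += -normal_lcdf((log(20) - mu[n]) / sigma | 0, 1);
}

generated quantities {
  vector<lower = 0, upper = 20>[NUMBER_OF_TRAINING_POINTS] y_hat_level
    = exp(mu + sigma * y_hat_raw);
}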
