# Log normal distribution in Stan

Hi,

I am wondering how to specify a log-normal prior in Stan. For example, are the following three programs equivalent?

```stan
parameters {
  real<lower = 0> theta;
  real<lower = 0> sigma;
  real mu;
}
model {
  theta ~ lognormal(mu, sigma);
}
```


and

```stan
parameters {
  real<lower = 0> theta;
  real<lower = 0> sigma;
  real mu;
}
model {
  log(theta) ~ normal(mu, sigma);
}
```


and

```stan
parameters {
  real<lower = 0> theta;
  real<lower = 0> sigma;
  real mu;
}
transformed parameters {
  real<lower = 0> theta_transformed = log(theta);
}
model {
  theta_transformed ~ normal(mu, sigma);
}
```


Thx!

Hey there!

No, these are not equivalent. 1) is the correct way to do it. 2) is missing a Jacobian correction: since you are applying a non-linear transform to one of the parameters (declared in the parameters block), you need to add the log absolute determinant of the Jacobian of the transform to the target density. See this chapter of the user guide:

> A transformation samples a parameter, then transforms it, whereas a change of variables transforms a parameter, then samples it. Only the latter requires a Jacobian adjustment.

Once you take care of that, the results of 1) and 2) should be equivalent. Without the correction, they will probably still be close, but not quite the same.

Alternative 3) is almost the same as 2), but the lower bound on theta_transformed is wrong: if \theta \in (0, \infty), then \log\theta \in (-\infty, \infty), so no lower bound should be declared there.
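To see why the Jacobian term works out to -log(theta) here: the log-normal log density at theta equals the normal log density evaluated at log(theta), plus log |d/dθ log θ| = -log(theta). A quick numerical check in pure-stdlib Python (function names are my own, not Stan's):

```python
import math

def normal_lpdf(x, mu, sigma):
    # log density of Normal(mu, sigma) at x
    return -math.log(sigma) - 0.5 * math.log(2 * math.pi) \
           - 0.5 * ((x - mu) / sigma) ** 2

def lognormal_lpdf(x, mu, sigma):
    # log density of LogNormal(mu, sigma) at x > 0, written out directly
    return -math.log(x * sigma) - 0.5 * math.log(2 * math.pi) \
           - (math.log(x) - mu) ** 2 / (2 * sigma ** 2)

# the two ways of writing the prior agree once the -log(theta) term is added
for theta in (0.1, 1.0, 7.5):
    for mu, sigma in ((0.0, 1.0), (-2.0, 0.5)):
        lhs = lognormal_lpdf(theta, mu, sigma)
        rhs = normal_lpdf(math.log(theta), mu, sigma) - math.log(theta)
        assert abs(lhs - rhs) < 1e-12
```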

Cheers,
Max


Hi Max,

I am wondering how I should incorporate the Jacobian adjustment in 2) and 3). For example, to correct 2), should I write instead:

```stan
model {
  log(theta) ~ normal(mu, sigma);
}
target += -log(theta);
```


Also, for 3): according to the definitions of transformation and change of variables, it seems that 3) is actually a transformation (since I define the transformed parameter). In that case, do I also need the Jacobian adjustment, or do I just need to correct the lower bound declaration?

Thx!


Hey hey! :)

The line `target += -log(theta);` should be inside the model block. Other than that, it's correct.

I see how this can be confusing. But read this quote again:

> A transformation samples a parameter, then transforms it, whereas a change of variables transforms a parameter, then samples it. Only the latter requires a Jacobian adjustment.

So, a transformation samples the variable and then transforms it. In code:

```stan
parameters {
  real log_theta;
  real<lower = 0> sigma;
  real mu;
}
transformed parameters {
  real<lower = 0> theta = exp(log_theta);
}
model {
  log_theta ~ normal(mu, sigma);
}
```


Here, you don’t need a Jacobian adjustment.
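As a sanity check on this transformation approach, here is a small pure-stdlib Python simulation (the values of `mu`, `sigma`, and `n` are just illustrative): drawing on the log scale and exponentiating, the sample mean matches the analytical LogNormal mean exp(mu + sigma^2/2).

```python
import math
import random

random.seed(1)
mu, sigma, n = 0.5, 0.3, 200_000

# transformation: sample z ~ Normal(mu, sigma), then transform theta = exp(z)
draws = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]
mc_mean = sum(draws) / n

# analytical mean of LogNormal(mu, sigma)
analytic_mean = math.exp(mu + sigma ** 2 / 2)

# Monte Carlo mean agrees with the analytical value to within sampling error
assert abs(mc_mean - analytic_mean) < 0.01
```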

If you transform the variable and then sample it, you have a change of variables, and you do need a Jacobian correction:

```stan
parameters {
  real<lower = 0> theta;
  real<lower = 0> sigma;
  real mu;
}
transformed parameters {
  real log_theta = log(theta);
}
model {
  log_theta ~ normal(mu, sigma);
  target += -log(theta);
}
```


The easiest way to remember this: check whether the left-hand side of the sampling statement is a parameter declared in the parameters block (no adjustment needed) or a (non-linearly) transformed parameter (adjustment needed).
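To put numbers on that rule of thumb: without the `target += -log(theta);` line, the model block contributes only the normal log density at log(theta), which misses the true log-normal log density by exactly log(theta). A stdlib-Python sketch (the example values are arbitrary):

```python
import math

mu, sigma, theta = 0.0, 1.0, 2.5  # arbitrary example values

# contribution of `log_theta ~ normal(mu, sigma)` alone (no Jacobian term)
unadjusted = -math.log(sigma) - 0.5 * math.log(2 * math.pi) \
             - 0.5 * ((math.log(theta) - mu) / sigma) ** 2

# with the Jacobian term `target += -log(theta);` added
adjusted = unadjusted - math.log(theta)

# log density that `theta ~ lognormal(mu, sigma)` targets
lognormal = -math.log(theta * sigma) - 0.5 * math.log(2 * math.pi) \
            - (math.log(theta) - mu) ** 2 / (2 * sigma ** 2)

assert abs(adjusted - lognormal) < 1e-12                        # adjusted version matches
assert abs((unadjusted - lognormal) - math.log(theta)) < 1e-12  # error is exactly log(theta)
```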

Cheers!
Max


Hi Max,

Thanks so much and that’s very clear!
