Edited: (This is marked as the solution, but see Aki’s post below as well)
If you’re putting a lognormal distribution on a parameter (I’m assuming y is data here), you need the Jacobian adjustment discussed in the manual. For data it’s not necessary [Edited: this is the bit that Aki points out isn’t always true], because data are constant and the Jacobian adjustment ends up being a constant too. But you may as well use lognormal anyway (these things are usually coded so that Stan skips the computation when it’s not needed).
`y ~ lognormal(log(mu + X * a), sigma);` should be fine as a bit of code, but you’ll have to make sure whatever goes into the log stays positive. The most obvious way to do that is to pass it through an exp, but then your predictors are back on the log scale, which isn’t what you wanted. If you think X is a predictor for y on the original scale, then why not
`y ~ normal(X * a + mu, sigma);`? Or just take the log of the elements of X and do things on the log scale?
The be-careful watch-for-your-Jacobian stuff comes up when you need to evaluate the probability of the transformed variable itself, which happens whenever you put it on the left-hand side of a sampling statement.
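To make the change of variables concrete: if z = log(y) is normal, the density of y itself picks up a Jacobian factor 1/y, so on the log scale `lognormal_lpdf(y | mu, sigma) = normal_lpdf(log(y) | mu, sigma) - log(y)`. A minimal Python sketch (not Stan; the density formulas are written out by hand) checking this identity numerically:

```python
import math

def normal_lpdf(x, mu, sigma):
    # log density of Normal(mu, sigma) at x
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

def lognormal_lpdf(y, mu, sigma):
    # log density of LogNormal(mu, sigma) at y > 0, written out directly
    z = math.log(y)
    return (-0.5 * ((z - mu) / sigma) ** 2
            - math.log(sigma) - 0.5 * math.log(2 * math.pi)
            - z)  # this -log(y) term is exactly the Jacobian adjustment

# z = log(y) has Jacobian dz/dy = 1/y, i.e. -log(y) on the log scale
for y in [0.1, 1.0, 2.5, 10.0]:
    lhs = lognormal_lpdf(y, 0.3, 1.2)
    rhs = normal_lpdf(math.log(y), 0.3, 1.2) - math.log(y)
    assert abs(lhs - rhs) < 1e-12
```

The -log(y) term is constant when y is data, which is why it often drops out of MCMC, but it is still part of the density.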
Fantastic, thanks for the quick response. This is really helpful.
It’s necessary for data, for example, if you use LOO with the log score (elpd in the loo package) to compare models. In survival analysis, for instance, the Weibull
`y ~ weibull(alpha, sigma);` is a common alternative to the log-normal. If you were to compute
`log_lik[n] = normal_lpdf(log_y[n] | mu[n], sigma);`
you would get a wrong result in the LOO comparison. You could add the Jacobian adjustment yourself, but it’s clearer to have
`log_lik[n] = lognormal_lpdf(y[n] | mu[n], sigma);`
Eek, thanks Aki. I’ll adjust my answer.
@tmalsburg, I think the conclusion is to just use lognormal if you’re working with a lognormal. When and where you need these Jacobian adjustments depends on what you’re doing and how you’re doing it. But this one is free :D.