I mistakenly elaborated with the multivariate version. For the univariate case where you want
\mathrm{logit}(\alpha) \sim \mathsf{Normal}(\mu, \sigma),
it looks like this:
parameters {
  real<lower = 0, upper = 1> alpha;
  ...
model {
  logit(alpha) ~ normal(mu, sigma);
  target += -log(alpha) - log1m(alpha);  // log Jacobian: log |d/d alpha logit(alpha)|
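For reference, here's the whole thing as a self-contained program (with mu and sigma supplied as data purely to make the sketch runnable; that part is my assumption):

data {
  real mu;
  real<lower = 0> sigma;
}
parameters {
  real<lower = 0, upper = 1> alpha;
}
model {
  logit(alpha) ~ normal(mu, sigma);       // distribution on a transform of alpha
  target += -log(alpha) - log1m(alpha);   // log Jacobian of the logit change of variables
}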
The better way to code this is to make logit_alpha the parameter and define alpha as a transform; that way you don’t need the Jacobian adjustment, because the distribution is applied directly to a parameter rather than to a nonlinear change of variables.
parameters {
  real logit_alpha;
  ...
transformed parameters {
  real alpha = inv_logit(logit_alpha);
  ...
model {
  logit_alpha ~ normal(mu, sigma);
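Spelled out as a self-contained sketch (same assumption that mu and sigma come in as data):

data {
  real mu;
  real<lower = 0> sigma;
}
parameters {
  real logit_alpha;  // unconstrained parameter on the logit scale
}
transformed parameters {
  real<lower = 0, upper = 1> alpha = inv_logit(logit_alpha);
}
model {
  logit_alpha ~ normal(mu, sigma);  // no Jacobian: distribution is on the parameter itself
}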
@Aslak_Grinsted’s case is confusing, because it’s essentially putting a lot of distributions on ymax, one for each of the N observations. I don’t see how you could define a parameter logit_y_div_ymax and hope to recover ymax from it.
parameters {
  vector[N] logit_y_div_ymax;
  ...
transformed parameters {
  vector[N] ymax = ... some function of y and logit_y_div_ymax ... ???
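To see why, suppose the intent is logit_y_div_ymax[n] = logit(y[n] / ymax). Writing z_n for that component, inverting gives
\mathrm{ymax} = y_n / \mathrm{logit}^{-1}(z_n),
so each of the N components pins down its own value of ymax and there’s no single well-defined inverse.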
P.S. Also, I’d suggest not taking any chances and defining yobsmax in transformed data as:

transformed data {
  real<lower = 0> yobsmax = max(y);
}
or even just defining the bound directly in the declaration:
real<lower = max(y)> ymax;
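In context, that declaration looks something like this (the data block is my assumption about how y comes in):

data {
  int<lower = 1> N;
  vector<lower = 0>[N] y;
}
parameters {
  real<lower = max(y)> ymax;  // lower bound computed from data
}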
P.P.S. You can also vectorize the loop:
(y .* ymax) ./ (ymax - y) ~ lognormal(mu, sigma); // drops constant terms, but is more efficient
target += sum(log(y)) + N * log(ymax) - sum(log(ymax - y));
The .* and ./ are elementwise vector multiplication and division. And of course, given that ymax is a parameter, you need a Jacobian here (though it may simplify given that y is data).
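Putting the pieces together, a full sketch might look like this (my assembly, with flat priors on mu and sigma for brevity, and the adjustment line from above, which you’d want to double-check against the Jacobian for your model):

data {
  int<lower = 1> N;
  vector<lower = 0>[N] y;
}
parameters {
  real<lower = max(y)> ymax;  // upper bound on y, so ymax > max(y)
  real mu;
  real<lower = 0> sigma;
}
model {
  (y .* ymax) ./ (ymax - y) ~ lognormal(mu, sigma);            // vectorized, drops constants
  target += sum(log(y)) + N * log(ymax) - sum(log(ymax - y));  // adjustment as above
}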