If my model has some positive parameter, is it better (in terms of sampling efficiency, avoiding divergences, etc.) to put a bound on that parameter or to use something like `exp(par_unbounded)` in my code (where `par_unbounded` is an unbounded parameter)?

It depends on where you want to put the prior. If you want to put the prior on the unconstrained parameter, you can do that and then transform; if you want to put a prior on the constrained parameter, you need to apply a Jacobian adjustment. Here are three ways to implement a parameter with a lognormal prior.

```
parameters {
  real log_alpha;
}
transformed parameters {
  real<lower=0> alpha = exp(log_alpha);
}
model {
  log_alpha ~ normal(mu, sigma);
}
```

```
parameters {
  real<lower=0> alpha;
}
model {
  alpha ~ lognormal(mu, sigma);
}
```

The above two programs put the same distribution on `alpha`. The one below does *not*, because it's missing the change-of-variables adjustment.

```
parameters {
  real log_alpha;
}
transformed parameters {
  real alpha = exp(log_alpha);
}
model {
  alpha ~ lognormal(mu, sigma); // WRONG! NEEDS JACOBIAN ADJUSTMENT
}
```

To fix this third model, you need to implement the change of variables by hand, which can be done by adding this statement (the log absolute derivative of the inverse transform) to the model block:

```
target += log_alpha; // = log(abs(d/d(log_alpha) exp(log_alpha)))
```

Give it a try for some fixed values of `mu` and `sigma`.
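For instance, here is a quick numerical check in plain Python (not Stan; `mu`, `sigma`, and the evaluation point are arbitrary values chosen just for illustration). It shows that a normal log density on the unconstrained `log_alpha` equals the lognormal log density on `alpha` plus the Jacobian term `log_alpha`:

```python
import math

def normal_lpdf(x, mu, sigma):
    # log density of Normal(mu, sigma) at x
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def lognormal_lpdf(x, mu, sigma):
    # log density of LogNormal(mu, sigma) at x > 0
    return normal_lpdf(math.log(x), mu, sigma) - math.log(x)

mu, sigma = 0.3, 1.2   # arbitrary fixed hyperparameters
log_alpha = 0.7        # arbitrary point on the unconstrained scale
alpha = math.exp(log_alpha)

# First program: normal prior directly on the unconstrained parameter
lp_unconstrained = normal_lpdf(log_alpha, mu, sigma)

# Third program plus the fix: lognormal prior on alpha plus the Jacobian term
lp_constrained = lognormal_lpdf(alpha, mu, sigma) + log_alpha

print(lp_unconstrained, lp_constrained)  # identical up to floating point
```

The two log densities match exactly, which is what the `target += log_alpha;` statement accomplishes inside Stan.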
