Hi,

I have recently been reading the Stan User’s Guide, and I am thinking about the following two situations:

Scenario 1: Suppose theta is a parameter in my model and I know theta is constrained to be positive. If I want to put a diffuse normal prior on log(theta), should I write the following:

```
parameters {
  real<lower=0> theta;
}
model {
  log(theta) ~ normal(0, 100);  // Stan's normal takes the sd, so sd = 100
}
```

Or would it be better to directly define the parameter as log_theta, as below, and then manually transform log_theta back to theta in generated quantities:

```
parameters {
  real log_theta;  // unconstrained, since log(theta) can be negative
}
model {
  log_theta ~ normal(0, 100);
}
generated quantities {
  real<lower=0> theta = exp(log_theta);
}
```

In each of these two approaches, do I need a Jacobian adjustment for the parameter transformation?
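If the adjustment is needed in the first version, I guess it would look something like this (my attempt, not sure it is right): since u = log(theta) has |du/dtheta| = 1/theta, I would add log(1/theta) = -log(theta) to the target.

```stan
model {
  log(theta) ~ normal(0, 100);
  // Jacobian adjustment for the change of variables u = log(theta):
  // log|du/dtheta| = log(1/theta) = -log(theta)
  target += -log(theta);
}
```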

Scenario 2: Suppose Y is the observed data in my model and I know Y is constrained to be positive (all the observed Y are positive). In the model, Y is normal:

```
data {
  int<lower=0> N;
  real<lower=0> Y[N];
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  Y ~ normal(mu, sigma);  // second argument is the sd
}
```

But I am curious: even if I force mu to be positive, a normal distribution can still generate negative Y, which contradicts my restriction that Y is positive. In this scenario, should I constrain Y as in Scenario 1? (I should probably not use a log transformation here, since Y itself is normal, but maybe a truncated normal would work?)
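For example, is a lower truncation at zero like the following what I should use here? (I looped because I believe vectorized truncation is not supported.)

```stan
model {
  for (n in 1:N) {
    // truncated normal: Y restricted to [0, infinity)
    Y[n] ~ normal(mu, sigma) T[0, ];
  }
}
```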

In summary, I am wondering what the best solution is under Scenario 1 and Scenario 2, respectively. For Scenario 2 in particular, I think I am confused about the difference between Bayesian linear regression and Bayesian truncated/censored linear regression.

Thx!