If the data I am using is extreme, and therefore pushes a parameter in my model too low for the model to work properly, can I transform the variable so that it works? For instance, something like this:

```
transformed parameters {
  vector<lower=0>[I] nu = exp(log_nu);
}
model {
  mu_nu ~ normal(0, 5);
  sigma_nu ~ cauchy(0, 5);
  log_nu ~ normal(mu_nu, sigma_nu);
  target += sum(log_nu);  // Jacobian I believe I need
}
```

The problem is that while this makes my model run well, the log_lik in the generated quantities seems to be off from what it should be. Why does this transformation not seem to carry through properly to log_lik for the sake of model comparison? I assumed that if nu is now different from what it would otherwise be, the other parameters in my model would adjust and compensate for the exponential transformation of nu.

First off, you don't need a Jacobian if you are putting a prior on `log_nu` rather than on the transformed parameter `nu`. This is probably where your problem is.

It's usually easier to let Stan handle the transforms. The following three programs are equivalent:

```
parameters {
  vector<lower=0>[I] nu;
  ...
}
model {
  nu ~ lognormal(mu, sigma);
  ...
}
}
```

```
parameters {
  vector[I] log_nu;
  ...
}
transformed parameters {
  vector[I] nu = exp(log_nu);
  ...
}
model {
  log_nu ~ normal(mu, sigma);
  ...
}
```

```
parameters {
  vector[I] log_nu;
  ...
}
transformed parameters {
  vector[I] nu = exp(log_nu);
  ...
}
model {
  target += sum(log_nu);
  nu ~ lognormal(mu, sigma);
  ...
}
```

In the last program you need the Jacobian because you transform `log_nu` to `nu` and then put a distribution on `nu`.
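The equivalence of these programs can be checked outside Stan. Here is a small numerical sketch (not from the thread; Python with SciPy, where `lognorm` and `norm` stand in for Stan's `lognormal` and `normal` densities):

```python
import numpy as np
from scipy import stats

mu, sigma = 0.3, 0.8
nu = np.array([0.1, 0.5, 2.0, 7.5])  # some positive test values
log_nu = np.log(nu)

# Program 1: lognormal density on nu directly.
lp1 = stats.lognorm.logpdf(nu, s=sigma, scale=np.exp(mu))

# Program 2: normal density on log_nu -- this is a density over
# log_nu itself, so no Jacobian term is needed.
lp2 = stats.norm.logpdf(log_nu, loc=mu, scale=sigma)

# Program 3: lognormal on nu plus the log-Jacobian of nu = exp(log_nu),
# which is log|d nu / d log_nu| = log_nu.
lp3 = lp1 + log_nu

# Programs 2 and 3 give the same log density over log_nu:
print(np.allclose(lp2, lp3))  # True
```

The identity being exercised is lognormal(nu | mu, sigma) = normal(log nu | mu, sigma) - log(nu), which is exactly the `target += sum(log_nu)` adjustment in the third Stan program.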


Thanks. If I were to go ahead and use this setup:

```
parameters {
  vector[I] log_nu;
  ...
}
transformed parameters {
  vector[I] nu = exp(log_nu);
  ...
}
model {
  log_nu ~ normal(mu, sigma);
  ...
}
```

would the log_lik for the model be roughly equivalent in terms of LOO to what I would get from this?

```
parameters {
  vector<lower=0>[I] log_nu;
  ...
}
model {
  log_nu ~ normal(mu, sigma);
  ...
}
```

I ask because what I am really trying to do is make nu larger: the data sometimes wants the original (or log_nu) to be around 0.003, which won't work with my model, but if I can make the nu's at least 0.5 it works well. So if I take the exp of log_nu, in theory it should work the same (as if my model were actually able to handle a log_nu of 0.003)? Put simply: does taking an exponential transformation of nu, or of any parameter, heavily affect the log_lik output in generated quantities for LOO comparison?

I'm afraid I couldn't follow the question because the two things you wrote look identical; one just defines `nu` in addition to `log_nu`.

Whether you do the transform yourself or let Stan do it, the results will be the same.

Transforming the parameters won't make them fit to larger values; for that you need to change the likelihood and/or prior. On the other hand, transforming will lead to different expectations if the transform is non-linear (e.g., log(mean(x)) isn't the same as mean(log(x))).
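To illustrate that last point, here is a quick simulation sketch (not from the thread; plain NumPy, with made-up values mu = 0 and sigma = 1): for draws of `log_nu` from a normal, the log of the mean of `nu = exp(log_nu)` differs from the mean of `log_nu` by roughly sigma^2 / 2.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
log_nu = rng.normal(mu, sigma, size=1_000_000)  # draws of log_nu
nu = np.exp(log_nu)                             # corresponding draws of nu

print(np.log(nu.mean()))  # log(mean(nu)) ~ mu + sigma**2 / 2 = 0.5
print(log_nu.mean())      # mean(log(nu)) ~ mu = 0.0
```

So summaries of `nu` and summaries of `log_nu` answer different questions, even though the two parameterizations define the same posterior.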