Hi everyone,

Suppose I have a lognormally distributed variable `y` defined as follows:

```
parameters {
  real mu;
  real<lower=0> sigma;
  real x;
}
transformed parameters {
  real y;
  y = exp(x);
}
model {
  x ~ normal(mu, sigma);
}
```

Suppose I also have some additional information on `y` (i.e. the value on the real scale), e.g. a normal prior, and I want to add this, e.g.:

```
model {
  x ~ normal(mu, sigma);
  y ~ normal(mu_prior, sigma_prior);
}
```

(Obviously you could solve this directly by defining the parameter on the real scale rather than the log scale, but that isn’t possible in the broader problem this is embedded in.)

My two questions:

- Do I have to increment the log probability for the sampling-after-change-of-variables? (I think the answer is yes.)
- Assuming the answer is yes, do I then need to add `target += y`, given that `dy/dx = y`?
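To double-check the `dy/dx = y` step, here is a throwaway numerical check (plain Python, nothing Stan-specific; the helper name is mine): a finite difference of `exp` agrees with `y = exp(x)` itself.

```python
import math

def dexp(x, h=1e-6):
    # central finite-difference approximation of d/dx exp(x)
    return (math.exp(x + h) - math.exp(x - h)) / (2 * h)

# dy/dx = exp(x) = y at a few points
for x in (-2.0, 0.0, 1.5):
    y = math.exp(x)
    assert abs(dexp(x) - y) < 1e-6 * max(1.0, y)
```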

Thanks in advance!

Have you considered specifying `y` as lognormal directly? Like:

```
y ~ lognormal(mu, sigma);
```

As for your question more broadly, it doesn’t make sense to have a prior on both `x` and `y` in your situation, since one is just a transformation of the other. Otherwise, `y` is no longer `exp(x)`. In other words, all of your prior information should be for `x`.

If you’re interested in a model where `y` is equal to `exp(x)` plus a little ‘extra’, an alternative would be for `y` to be the sum of two parameters:

```
parameters {
  real mu;
  real<lower=0> sigma;
  real x;
  real x_add;
}
transformed parameters {
  real y;
  y = exp(x) + x_add;
}
model {
  x ~ normal(mu, sigma);
  x_add ~ normal(mu_prior, sigma_prior);
}
```

For more info about a change-of-variables with a lognormal distribution specifically, the Stan manual has a great section that covers the specifications and adjustments: https://mc-stan.org/docs/2_22/stan-users-guide/changes-of-variables.html
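As a numerical illustration of the change of variables that chapter derives (a small Python check with illustrative `mu`, `sigma`, and `y` values; function names are mine): the density of `y = exp(x)` is the normal density of `log(y)` divided by `y`, which matches a finite difference of the lognormal CDF.

```python
import math

def normal_pdf(z, mu, sigma):
    # density of Normal(mu, sigma) at z
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lognormal_cdf(y, mu, sigma):
    # P(Y <= y) = P(X <= log y) for Y = exp(X), X ~ Normal(mu, sigma)
    z = (math.log(y) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# change of variables: p_Y(y) = p_X(log y) * |dx/dy| = p_X(log y) / y
mu, sigma, y, h = 0.3, 1.2, 2.5, 1e-6
pdf_from_cdf = (lognormal_cdf(y + h, mu, sigma) - lognormal_cdf(y - h, mu, sigma)) / (2 * h)
pdf_from_jacobian = normal_pdf(math.log(y), mu, sigma) / y
assert abs(pdf_from_cdf - pdf_from_jacobian) < 1e-8
```

On the log scale this is the adjustment the manual writes as subtracting `log(y)` (equivalently, subtracting `x`) from the unadjusted log density.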

Thanks for the fast reply. Unfortunately I can’t do this.

The issue is that I actually have a time series process that I have assumed follows a lognormal random walk (because I know that the parameter must be positive). However, I also have an informative prior for the process on the real scale.

So the problem looks more like this:

```
data {
  int n;
  vector[n] y_obs;
}
parameters {
  vector[n] x;
  real mu;
  real<lower=0> sigma;
  real<lower=0> sigma_obs;
}
transformed parameters {
  vector[n] y;
  y = exp(x);
}
model {
  for (t in 2:n) { // evolution
    x[t] ~ normal(x[t - 1], sigma);
  }
  for (t in 1:n) { // observation equation
    y_obs[t] ~ lognormal(y[t], sigma_obs);
  }
}
```

Now, I have an additional prior I want to include, but it’s only easily expressible on `y`, not on `x`.

Thanks for the pointer to the Stan manual. I have read it, and my question still remains.

So if I’m reading the model correctly, `y[t]` is log-normal with mean `y[t-1]` and SD `sigma`? What additional prior information are you trying to include?

Suppose some prior research has supplied a constraint that `y[t] ~ normal(6, 2)` for all `t`, and I want to add this to the model. How would I do this?

I’m uncertain whether I can just literally add `y[t] ~ normal(6, 2)` to the code, due to the change-of-variables considerations.

(Note that this is a placeholder: I can say that my additional prior is a standard function, but it might not be normal.)

This sort of thing was discussed on Gelman’s blog (see also Lakeland’s follow-up).

The short answer is that yes, you can just literally add `y[t] ~ normal(6, 2)` to the code.
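For concreteness, here is a minimal Python sketch of what that literal addition does to the log target (illustrative values, function names mine, observation equation omitted for brevity): it simply adds the `normal(6, 2)` log-density evaluated at `y[t] = exp(x[t])` on top of the random-walk terms, which is exactly what the extra `~` statement contributes in Stan.

```python
import math

def normal_lpdf(z, mu, sigma):
    # log density of Normal(mu, sigma) at z
    return -0.5 * ((z - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def log_target(x, sigma, mu_extra=6.0, sigma_extra=2.0):
    # random-walk (evolution) prior on x, as in the model above
    lp = sum(normal_lpdf(x[t], x[t - 1], sigma) for t in range(1, len(x)))
    # extra information on y = exp(x), added literally:
    # the contribution of `y[t] ~ normal(6, 2)` for each t
    lp += sum(normal_lpdf(math.exp(xt), mu_extra, sigma_extra) for xt in x)
    return lp

lp = log_target([1.7, 1.8, 1.75], sigma=0.1)  # illustrative state values
```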