I just want to double-check that I am not thinking about change of variables incorrectly. I have a Poisson model in which I am numerically marginalizing over so-called unobserved data (`xnobs`). To do this, I have to put a distribution on these data, over which I then marginalize. If they were uniformly distributed, the change of variables would add nothing to the log prob. However, I need to put a non-constant distribution on them (here a lognormal), and then in the marginalization (buried in the `log_sum_exp`) I put a *different* distribution on them.

Does this also require adding the change-of-variables term? The marginalization has to be perfectly normalized, so I am thinking the answer is *yes*.
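For intuition, here is a toy check of that normalization argument in Python (this is not the Stan model itself; `p` and `q` are made-up distributions). If the draws of `x` come from one distribution `q` but the quantity being marginalized is an expectation under a different distribution `p`, then the `log p(x) - log q(x)` correction is exactly what keeps the result normalized, and dropping it biases the answer:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins: p is the distribution the marginalization
# "wants" (LogNormal(0, 1)); q is the distribution the draws actually
# come from (a wider LogNormal(1, 2)).
p = stats.lognorm(s=1.0, scale=np.exp(0.0))
q = stats.lognorm(s=2.0, scale=np.exp(1.0))

x = q.rvs(size=200_000, random_state=rng)

# Correct: weight each draw by p(x)/q(x) -- i.e. add the extra
# log-density term before exponentiating and averaging.
logw = p.logpdf(x) - q.logpdf(x)
est_weighted = np.mean(x * np.exp(logw))

# Wrong: pretend the draws already came from p and skip the term.
est_unweighted = np.mean(x)

truth = np.exp(0.5)  # E_p[x] for LogNormal(0, 1), analytically
print(est_weighted, est_unweighted, truth)
```

The weighted estimate recovers the expectation under `p`; the unweighted one silently computes the expectation under `q` instead, which is the same failure mode as putting one density on the parameters and a different one inside the marginalization without accounting for it.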

```stan
parameters {
  real mu;
  real<lower=0> sigma;
  //real<lower=0> Lambda_raw;
  //real log_Lambda0;
  real<lower=0> Lambda;
  real<lower=0> Lambda0_raw;
  vector<lower=0>[N] xobs_true;
  positive_ordered[M] xnobs_true;
  vector<lower=0>[M] xnobs;
}
transformed parameters {
  real Lambda0 = Lambda0_raw * M;
  // real Lambda = Lambda_raw * 100;
}
model {
  /* Priors */
  mu ~ normal(0, 10);
  sigma ~ normal(0, 10);
  Lambda ~ gamma(10, 0.06);
  //Lambda_raw ~ normal(0, 1);
  //log_Lambda0 ~ normal(0, log(M));
  Lambda0_raw ~ normal(0, 1);

  /* Observed likelihood */
  xobs_true ~ lognormal(mu, sigma);
  flux_obs ~ lognormal(log(xobs_true), flux_sigma);
  target += lognormal_lpdf(xnobs | log(boundary), 10);
  target += log_static_prob;
  target += N * log(Lambda);

  /* Non-observed likelihood */
  // xnobs_true ~ lognormal(mu, sigma);
  target += lognormal_lpdf(xnobs_true | mu, sigma);
  for (m in 1:M) {
    target += log_sum_exp(log(Lambda0) + lognormal_lpdf(xnobs[m] | log(boundary), 10),
                          log(Lambda)
                            + lognormal_lpdf(xnobs[m] | log(xnobs_true[m]), flux_sigma)
                            + log_p_ndet_int(log10(xnobs[m]), boundary, strength));
  }
  target += -Lambda - Lambda0;
}
```