Question on change of variables

I just want to double-check that I am not thinking about change of variables incorrectly. I have a Poisson model where I am trying to numerically marginalize over so-called unobserved data (xnobs). To do this, I have to put a distribution on these data over which I marginalize. Now, if they were uniformly distributed, the change of variables would add nothing to the log probability. However, I need to put a non-constant distribution (here a lognormal) on them, and then in the marginalization (buried in the log_sum_exp) I put a different distribution on them.

Does this also require adding the change-of-variables adjustment? The marginalization has to be perfectly normalized… so I am thinking the answer is yes.


parameters {
  real mu;
  real<lower=0> sigma;
  //real<lower=0> Lambda_raw;
  //real log_Lambda0;

  real<lower=0> Lambda;
  real<lower=0> Lambda0_raw;

  vector<lower=0>[N] xobs_true;

  positive_ordered[M] xnobs_true;
  vector<lower=0>[M] xnobs;
}

transformed parameters {

  real Lambda0 = 0 + Lambda0_raw * M;
  // real Lambda = 0 + Lambda_raw * 100;

}


model {
  /* Priors */
  mu ~ normal(0, 10);
  sigma ~ normal(0, 10);
  Lambda ~ gamma(10, .06);
  //Lambda_raw ~ normal(0, 1);
  //log_Lambda0 ~ normal(0, log(M));
  Lambda0_raw ~ normal(0, 1);

  /* Observed likelihood */
  xobs_true ~ lognormal(mu, sigma);
  flux_obs ~ lognormal(log(xobs_true), flux_sigma);

  target += lognormal_lpdf(xnobs | log(boundary), 10);

  
  target += log_static_prob;
  target += N*log(Lambda);

  /* Non observed likelihood. */
  //  xnobs_true ~ lognormal(mu, sigma);

  target += lognormal_lpdf(xnobs_true | mu, sigma);
  
  /* Marginalize each unobserved object over the two components */
  for (m in 1:M) {
    target += log_sum_exp(log(Lambda0) + lognormal_lpdf(xnobs[m] | log(boundary), 10),
                          log(Lambda)
                          + lognormal_lpdf(xnobs[m] | log(xnobs_true[m]), flux_sigma)
                          + log_p_ndet_int(log10(xnobs[m]), boundary, strength));
  }

  target += -Lambda - Lambda0;
}



I don’t think I understand what you are trying to do here, so I am not completely certain my answer is relevant. It would help if you could share the math (or some other description of what the model represents); I am unable to understand it from the code alone. I also don’t understand what log_p_ndet_int is doing.

You would need to include the Jacobian for the change of variables whenever your target involves densities/probabilities of random variables that are derived from parameters. Unless it is hidden in log_p_ndet_int, this does not seem to be happening in the model, but as I said, I don’t understand exactly what is happening, so I might be wrong.
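To illustrate why the Jacobian matters when a density is placed on a transformed quantity, here is a small numeric sketch (made-up parameters, not from your model): putting a normal density on log(x) and integrating over x only normalizes if the log-Jacobian term 1/x is included, which is exactly the lognormal density.

```python
import math

def normal_pdf(z, mu, sigma):
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Put a normal density on log(x) and integrate over x on a grid.
mu, sigma, dx = 0.0, 0.5, 0.001
with_jac = 0.0     # includes the Jacobian factor 1/x (a proper lognormal)
without_jac = 0.0  # omits the Jacobian
x = dx
while x < 200.0:
    p = normal_pdf(math.log(x), mu, sigma)
    without_jac += p * dx
    with_jac += p / x * dx
    x += dx

print(round(with_jac, 2))     # ≈ 1.0: normalized
print(round(without_jac, 2))  # ≈ 1.13 = exp(mu + sigma^2/2): not a density in x
```

The same logic applies inside a marginalization: each term in the sum has to be a properly normalized density in the variable being integrated out.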

Best of luck with your model!


You are correct. I was thinking about this very incorrectly.

The log_p_ndet_int is just an inv_logit that acts as a software selection boundary for each object.

The issue was not actually a change of variables. It was that I was using an ordering to make the objects distinguishable, and when I added those objects’ density onto the target in vector form, I destroyed that property.

Instead, I need to add the density inside the log_sum_exp individually, object by object. I will post a corrected model soon.
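For anyone following along, the distinction can be checked numerically (with made-up log weights, not values from the model): marginalizing each object with its own log_sum_exp and then summing is not the same quantity as summing the component vectors first and taking a single log_sum_exp.

```python
import math

def log_sum_exp(a, b):
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# Hypothetical two-component log weights for M = 3 objects,
# standing in for the log(Lambda0) + lpdf and log(Lambda) + lpdf terms.
comp1 = [-1.2, -3.4, -0.7]
comp2 = [-2.5, -1.1, -4.0]

# Correct: marginalize each object separately, then sum over objects.
per_object = sum(log_sum_exp(a, b) for a, b in zip(comp1, comp2))

# Wrong: sum the vectors first, then take one log_sum_exp.
pooled = log_sum_exp(sum(comp1), sum(comp2))

print(per_object != pooled)  # → True: these are different quantities
```

The per-object version corresponds to a product of per-object mixtures; the pooled version corresponds to a mixture of products, which is a different model.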
