A lot of DTs when using offset/multiplier vs manual NCP

Hi,

I’m using CmdStan v2.36.0 via {cmdstanr}. I’ve been seeing an unexpected difference between using the offset/multiplier notation for NCP and doing the non-centering manually. Below is a simple model that doesn’t use any data (it just does prior prediction), and it results in divergent transitions on about 75% of iterations. As written it does the NCP manually, which works fine, but if I comment out the raw_pr lines and uncomment the lines needed for offset/multiplier, I get the DTs (the swapped variant is written out after the model below).

This is how I run the model

mm$sample(list(), parallel_chains = 4, seed = 453678)
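
For context, mm is the compiled model object, presumably created along these lines (the file name here is only illustrative, not from the post):

library(cmdstanr)
mm <- cmdstan_model("prior_only.stan")  # file containing the Stan program below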
transformed data {
  int n = 200;
}

parameters {
  real m;
  real<lower=0> s;
  // vector<offset = m, multiplier = s>[n] pr;
  vector[n] raw_pr;
}

transformed parameters {
  vector[n] pr = m + s * raw_pr;
}

model {
  m ~ normal(-0.55, 1.0);
  s ~ normal(0, 0.5);
  // pr ~ normal(m, s);
  raw_pr ~ std_normal();
}
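
Written out, the offset/multiplier variant described above (raw_pr lines commented out, the commented lines uncommented) is:

transformed data {
  int n = 200;
}

parameters {
  real m;
  real<lower=0> s;
  vector<offset = m, multiplier = s>[n] pr;
}

model {
  m ~ normal(-0.55, 1.0);
  s ~ normal(0, 0.5);
  pr ~ normal(m, s);
}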

This is a real problem. It’s not a bug in the sense of an implementation error; it’s a failure of our current initialization and warmup procedure. See this ongoing thread for an explanation of what’s going on: Offset multiplier initialization - #44 by spinkney


Ok, thanks! Sorry for creating a new thread for the same problem. I thought that one was mostly about initialization. I’ll keep an eye on that thread to see how things develop.

It is—the initialization turns out to be the problem. If you sample from the stationary distribution to start, everything should be fine.
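
For example, here is a minimal sketch using {cmdstanr}’s init argument to draw per-chain inits from the priors (init_fun and its details are my own illustration, not from the thread):

# Draw inits consistent with the priors so pr starts near m + s * z
# instead of at the default uniform(-2, 2) values on the unconstrained scale.
init_fun <- function() {
  m <- rnorm(1, -0.55, 1.0)
  s <- abs(rnorm(1, 0, 0.5))  # half-normal, respecting <lower=0>
  list(m = m, s = s, pr = rnorm(200, mean = m, sd = s))
}

mm$sample(list(), parallel_chains = 4, seed = 453678, init = init_fun)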