I have just noticed a bit of curious behaviour, and I would be interested to know what is going on.

I have a model (for fitting a Poisson process) that runs cleanly when implemented one way, but generates a slew of warnings (though the final result is, as far as I can see, no different) when implemented in another, seemingly trivially different, way.

The core of the model is W, a vector of Gaussians with scale (standard deviation) W_s.

If I implement this as:

transformed parameters {
  …[W[i] * W_s…
}

model {
  …
  W_s ~ normal(A, B);
  W ~ std_normal();
  …
}

Then everything is fine. If instead I implement:

transformed parameters {
  …[W[i]]…
}

model {
  …
  W_s ~ normal(A, B);
  W ~ normal(0, W_s);
  …
}

I get a slew of warnings (Bayesian Fraction of Missing Information was low; bulk effective sample size too low; tail effective sample size too low), and an obvious linear correlation between W_s and energy__ in the pairs plot.

As far as I can see, the two implementations should be identical, so can anybody tell me what I am missing?
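For what it's worth, the equivalence I am assuming is that scaling a standard normal by W_s gives the same distribution as drawing directly from normal(0, W_s). A quick simulation sketch (with a hypothetical fixed W_s = 2.5, purely for illustration; in the real model W_s is itself a parameter) confirms the two match distributionally:

```python
import numpy as np

rng = np.random.default_rng(0)
W_s = 2.5            # hypothetical fixed scale, for illustration only
n = 100_000

# Non-centered style: sample standard normals, then scale,
# mirroring W ~ std_normal() plus the W[i] * W_s transform
W_noncentered = rng.standard_normal(n) * W_s

# Centered style: sample directly with scale W_s,
# mirroring W ~ normal(0, W_s)
W_centered = rng.normal(0.0, W_s, size=n)

# Both empirical standard deviations should be close to W_s
print(np.std(W_noncentered), np.std(W_centered))
```

So the two parameterizations agree as distributions; my question is why the sampler treats them so differently.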