The parameters `muP`, `P`, and `O` are additive. That means you can add a constant to `muP` and subtract the same constant from `P` and get the same likelihood. That’s pretty much the definition of non-identified.
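To see the non-identifiability concretely, here’s a minimal numerical sketch (assuming a toy model in which `muP`, `P`, and `O` enter the likelihood only through their sum; your actual model is presumably more involved):

```python
import math

def log_lik(y, muP, P, O, sigma=1.0):
    """Toy normal log likelihood where muP, P, O enter only via their sum."""
    mu = muP + P + O
    return sum(-0.5 * ((yi - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2.0 * math.pi))
               for yi in y)

y = [0.3, -1.2, 0.8]  # made-up data
c = 5.0
# Shifting muP up by c and P down by c gives the identical likelihood,
# so the data cannot distinguish the two parameter settings.
print(log_lik(y, 1.0, 2.0, 0.5) == log_lik(y, 1.0 + c, 2.0 - c, 0.5))
```

Any prior that depends only on the sum leaves the ridge in place; you need a prior (or constraint) that breaks the symmetry.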
Also true for HMC. There is the issue of rounding (not so bad near zero as near one), but also the fact that we transform parameters, so rounding really turns into overflow on the unconstrained scale. I don’t recall what “parameter expansion” is. We just suggest rescaling if you really have variables with very tiny posterior scales (0 plus or minus some small epsilon) or very large ones.
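As a sketch of how the transform turns extreme scales into overflow, and how rescaling avoids it (illustrative Python, not Stan internals):

```python
import math

# A positive parameter sigma is sampled on the unconstrained scale as
# u = log(sigma) and recovered as sigma = exp(u). If sigma's posterior
# scale is extreme, u can wander far enough that exp() overflows.
try:
    sigma = math.exp(1000.0)
except OverflowError:
    sigma = float("inf")
print(sigma)  # inf

# Rescaling keeps the unconstrained value well-behaved: sample a
# unit-scale quantity and multiply by a fixed, known scale.
scale = 1e-8                     # hypothetical known scale of the parameter
sigma_rescaled = scale * math.exp(0.3)  # tiny sigma, moderate unconstrained value
print(sigma_rescaled > 0.0)      # True: no overflow or underflow
```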
Non-centered parameterization helps for hierarchical parameters that aren’t well identified by the prior or data. You’re doing that with the `P_std` and `O_std` thing already.
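For reference, the non-centered idea in miniature (a Python sketch; I’m assuming `P_std` is a standard normal draw, as in the usual setup):

```python
import random

random.seed(0)
mu, tau = 2.0, 0.5  # hypothetical hierarchical location and scale

# Centered: draw P ~ normal(mu, tau) directly (hard for HMC when tau is small).
# Non-centered: draw P_std ~ normal(0, 1), then set P = mu + tau * P_std.
# The two give the same distribution for P, but the non-centered version
# decouples P_std from tau in the sampler's geometry.
P_std = [random.gauss(0.0, 1.0) for _ in range(100_000)]
P = [mu + tau * z for z in P_std]

mean_P = sum(P) / len(P)
print(round(mean_P, 1))  # close to mu
```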
Now that I look at it, your non-centering of the scale isn’t the usual approach. Usually, to get a non-centered lognormal distribution, you’d do something like this:
```stan
parameters {
  real sigmaP_unc_std;
}
transformed parameters {
  real sigmaP_unc = sigmaP_unc_std * sigmaP_scale;
  real sigmaP = exp(sigmaP_unc);
}
model {
  // implies sigmaP_unc ~ normal(0, sigmaP_scale), i.e.,
  // sigmaP ~ lognormal(0, sigmaP_scale); no Jacobian adjustment
  // needed because the prior is on the raw parameter sigmaP_unc_std
  sigmaP_unc_std ~ normal(0, 1);
}
```
What you’re doing is scaling after the fact. If you really want `sigmaP_scale` to be the lognormal scale, then you need to work on the right scale and recompute as above, or let Stan do the Jacobian and work on the positive scale, with the declarations you have and the definition

```stan
real<lower = 0> sigmaP_std;
real<lower = 0> sigmaP = pow(sigmaP_std, sigmaP_scale);
```
There’s nothing wrong with what you have; there’s just not a standard interpretation for your `sigmaP_scale`, whereas above it’s the lognormal scale parameter.
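A quick numerical check that the positive-scale version matches the lognormal interpretation (Python sketch; this assumes `sigmaP_std` gets a `lognormal(0, 1)` prior, which is my guess at your declarations):

```python
import math
import random

random.seed(1)
sigmaP_scale = 0.7  # hypothetical value

# If sigmaP_std ~ lognormal(0, 1), then sigmaP = sigmaP_std ^ sigmaP_scale
# has log(sigmaP) = sigmaP_scale * log(sigmaP_std) ~ normal(0, sigmaP_scale),
# i.e., sigmaP ~ lognormal(0, sigmaP_scale), matching the exp() version.
draws = [math.exp(random.gauss(0.0, 1.0)) ** sigmaP_scale
         for _ in range(200_000)]
logs = [math.log(d) for d in draws]

# Root mean square of the log draws; their mean is ~0, so this
# estimates the lognormal scale of sigmaP.
sd = (sum(x * x for x in logs) / len(logs)) ** 0.5
print(round(sd, 1))  # close to sigmaP_scale
```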