I know that PyStan’s log_prob functionality has been discussed before. In all of those cases (and, I suspect, in mine) the issue turned out to involve the unconstrained vs. constrained scale not being accounted for. In my case, though, I don’t see how that can be the problem (but I’m happy to be proved wrong).
Suppose I specify a model:
data {
int<lower=0> N;
real y[N];
}
parameters {
real mu;
}
model {
y ~ normal(mu, 2);
}
If I use PyStan to compile and then fit the model above using the data:
{'N':1, 'y':[0]}
and then call:
stanfit.log_prob([1])
I obtain:
-0.125 = -(0 - 1)^2 / (2 × 2^2),
which makes sense.
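For reference, here is a quick check of that arithmetic in plain Python (a minimal sketch of the unnormalized normal log density, not PyStan itself):

```python
# Unnormalized log density of y ~ normal(mu, sigma): the normal kernel
# with all additive constants dropped (sigma is fixed data here).
def normal_kernel(y, mu, sigma):
    return -(y - mu) ** 2 / (2 * sigma ** 2)

# Matches stanfit.log_prob([1]) for the first model with y = [0]:
print(normal_kernel(0.0, 1.0, 2.0))  # -0.125
```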
I then change the model so that sigma is an (unconstrained) parameter:
data {
int<lower=0> N;
real y[N];
}
parameters {
real mu;
real sigma;
}
model {
y ~ normal(mu, sigma);
}
Then I run:
stanfit.log_prob([1, 2])
which now returns -0.8181471805599453. This does not equal the value from the original model, even though sigma takes the same value.
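For comparison, neither the bare normal kernel nor the fully normalized log density reproduces that number; this sketch just computes both by hand:

```python
import math

y, mu, sigma = 0.0, 1.0, 2.0

# Normal kernel with additive constants dropped, as in the first model:
kernel = -(y - mu) ** 2 / (2 * sigma ** 2)
# Fully normalized normal log density:
full = -0.5 * math.log(2 * math.pi * sigma ** 2) + kernel

print(kernel)  # -0.125
print(full)    # roughly -1.7371, also not -0.8181471805599453
```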
This seems odd to me, since sigma is declared without constraints, so (I think?) Stan should not transform it at all. PyStan’s functionality for unconstraining parameters confirms this:
stanfit.unconstrain_pars({"mu":1, "sigma":2})
which returns array([1., 2.]).
I have also tried all of the following:
stanfit.log_prob([1, 2], adjust_transform=True) [= -0.8181471805599453]
stanfit.log_prob([1, 2], adjust_transform=False) [= -0.8181471805599453]
stanfit.log_prob([1, np.log(2)], adjust_transform=True) [= -0.6741715699211395]
stanfit.log_prob([1, np.log(2)], adjust_transform=False) [= -0.6741715699211395]
I am using PyStan version 2.19.1.1.
Anyone got any ideas?