Dear Stan community
First of all thanks for the great job! Stan really nails it!
My question regards the use of lp__ for model comparison.
If I understand correctly: if I have written my model with the syntax
x ~ normal(m, s) instead of
target += normal_lpdf(x | m, s), then my lp__ is the log joint minus a constant. Is that correct?
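To make the question concrete, here is a minimal sketch of the two ways of writing the same statement (the toy model itself is just an illustration, not from the original post):

```stan
data {
  int<lower=0> N;
  vector[N] x;
}
parameters {
  real m;
  real<lower=0> s;
}
model {
  // Sampling-statement form: drops additive constants from lp__
  x ~ normal(m, s);

  // Equivalent target-increment form: keeps the normalizing
  // constants of the density in lp__
  // target += normal_lpdf(x | m, s);
}
```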
I want to compare an approximate inference algorithm - some complex stuff that I did not code with Stan - with NUTS, so I would like to know the value of the constant for model comparison.
Is there a way to get that back without re-running everything?
I am using CmdStan on a remote cluster (I have to run lots of simulations with lots of data) and I would like to keep the runs I have already done, if possible.
Thanks for your precious help
Only if your parameters block contains exclusively unconstrained parameters. Otherwise, the
lp__ column of the output will also include Jacobian terms.
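To illustrate the Jacobian point with a sketch (this is not Stan code; the helper name and toy log joint are my own): for a parameter declared real&lt;lower=0&gt; sigma, Stan samples on the unconstrained scale u = log(sigma) and adds the log Jacobian of the inverse transform, log|d sigma / d u| = u, to lp__.

```python
import numpy as np

def lp_on_unconstrained(u, log_joint):
    """Hypothetical sketch: evaluate lp__ on the unconstrained scale
    for a single lower-bounded parameter sigma = exp(u)."""
    sigma = np.exp(u)
    # log-Jacobian of sigma = exp(u) is log(exp(u)) = u
    return log_joint(sigma) + u

# Toy log joint (up to a constant): sigma ~ Exponential(1)
log_joint = lambda sigma: -sigma

print(lp_on_unconstrained(0.0, log_joint))  # sigma = 1 -> -1.0
```

So even with target += statements, lp__ differs from the log joint on the constrained scale by these Jacobian terms whenever the parameters block declares constraints.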
No. Although a more principled way of evaluating approximate inference algorithms is dropping any day now.
OK great, thanks for the fast answer. If I change the syntax to
target +=, will that solve my problem?
That will mean that the results match what you would calculate outside of Stan (if you adjusted for the change of variables), but I don’t think
lp__ is much use for comparing models / algorithms. ELPD is good, but that just uses the likelihood, and only for one observation at a time in the case of the PSIS correction. The marginal likelihood has its proponents, but that does not correspond to lp__ either.
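As a numerical check of the dropped constant (my own sketch, assuming m and s are parameters): for x ~ normal(m, s) with N observations, the sampling statement drops the additive term -0.5 * log(2*pi) per observation, since it does not depend on the parameters.

```python
import numpy as np

N = 10
x = np.linspace(-1.0, 1.0, N)
m, s = 0.0, 1.0

# Fully normalized log density, as normal_lpdf would contribute
full = np.sum(-0.5 * np.log(2 * np.pi) - np.log(s)
              - 0.5 * ((x - m) / s) ** 2)

# What the ~ statement contributes: parameter-free constants dropped
dropped = np.sum(-np.log(s) - 0.5 * ((x - m) / s) ** 2)

constant = -0.5 * N * np.log(2 * np.pi)
print(np.isclose(full - dropped, constant))  # -> True
```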
True! I do not intend to use lp__ to select the best model, but rather to measure the KL divergence between the VB approximation (via its ELBO) and the true posterior, just to know how badly (or well) I’m doing with VB. If I’m not wrong, with an accurate lp__ I can estimate the log marginal likelihood log p(y) as
Would you advise some other way to do that?
I would wait a couple of days.
Yes, but Did It Work?: Evaluating Variational Inference [arXiv:1802.02538]