Following @mitzimorris's feedback, the design doc readme, what @jonah said (excepting the ServerStan case), and mirroring the original gq post, I wanted to advertise the cmdstan issue I opened. From the issue description:
Currently there exists no good method (using cmdstan(py)) that takes a set of parameter values and (re)computes `lp__` and its gradient; see Log_prob_grad via csv files for cmdstan. This would be good to have for calibrating ODE solver configurations.
Changes required are:
- Add `stan/services/sample/standalone_lpg.hpp`
- Modify `cmdstan/command.hpp`
Eventually this method should be exposed via CmdStanPy or its R equivalent.
(Tests are of course also required. And documentation. Almost forgot the documentation.)
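To make the intent of the proposed service concrete, here is a minimal, self-contained sketch (not the actual implementation; the toy model and helper names are made up) of what "take a set of parameter values and (re)compute `lp__` and its gradient" means, including the kind of finite-difference check one could use in the tests:

```python
import math

def log_prob(theta, data):
    # Toy model standing in for a compiled Stan model:
    # y ~ normal(mu, 1) with a flat prior on mu, so lp__ is the log likelihood.
    mu = theta[0]
    return sum(-0.5 * (y - mu) ** 2 - 0.5 * math.log(2 * math.pi) for y in data)

def grad_log_prob(theta, data):
    # Analytic gradient of the toy log density w.r.t. mu: sum(y - mu).
    mu = theta[0]
    return [sum(y - mu for y in data)]

def fd_grad(f, theta, eps=1e-6):
    # Central finite differences: a generic way to validate any gradient
    # the proposed standalone_lpg service would return.
    g = []
    for i in range(len(theta)):
        up, dn = theta[:], theta[:]
        up[i] += eps
        dn[i] -= eps
        g.append((f(up) - f(dn)) / (2 * eps))
    return g

data = [0.3, -1.2, 2.1]
theta = [0.5]  # one "draw" of parameter values, e.g. read back from a CSV
analytic = grad_log_prob(theta, data)
numeric = fd_grad(lambda t: log_prob(t, data), theta)
```

In the real service the roles of `log_prob`/`grad_log_prob` are played by the autodiff machinery of the compiled model; the point is only that the input is a set of parameter values and the output is `lp__` plus its gradient.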
Just like the `standalone_generate` function, the `standalone_lpg` function operates on parameter draws read from CmdStan CSV files.
Stan fork: GitHub - funko-unko/stan at feature/issue-1012-add-log_prob_grad
Cmdstan fork: GitHub - funko-unko/cmdstan at feature/issue-1012-add-log_prob_grad
So far code contains preliminary tests and a (hopefully) working implementation. I still have to do some more tests myself, but in principle this is the direction this would take.
Edit:
As this method goes especially well together with the soon-to-be-released adjoint ODE solver, for which we don't really know what good configurations are, we might consider speeding up the review process so that this feature makes it into the next release. I'll prepare a (non-adjoint) use case, which can also be used in the documentation, ASAP.
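The calibration workflow this enables can be sketched as follows. Everything here is hypothetical (`evaluate_lp` stands in for the proposed service, and the tolerance-proportional error model is invented for illustration): re-evaluate `lp__` for the same draws under successively tighter solver tolerances and stop once `lp__` stabilizes.

```python
def evaluate_lp(draw, tol):
    # Stand-in for the proposed service: the "true" lp__ of a toy model
    # plus an error term that shrinks with the (hypothetical) solver tolerance.
    true_lp = -0.5 * sum(x * x for x in draw)
    return true_lp + tol * 10.0

def calibrate(draw, tols, lp_tol=1e-3):
    # Tighten the tolerance until lp__ changes by less than lp_tol
    # between successive settings, then accept that tolerance.
    prev = None
    for tol in tols:
        lp = evaluate_lp(draw, tol)
        if prev is not None and abs(lp - prev) < lp_tol:
            return tol
        prev = lp
    return tols[-1]

chosen = calibrate([0.1, -0.4], [1e-2, 1e-4, 1e-6, 1e-8])
```

In practice one would run this over many posterior draws (and also watch the gradient, not just `lp__`), but the loop structure is the same.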
Also, I checked and the gq method does not appear to have a design doc, maybe this doesn’t need one either?
As @betanalpha has voiced concerns about the adequacy of the adjoint configuration options, and this feature provides users with the tool needed to ensure that both `lp__` and its gradient are computed to good enough precision, I have tagged you here as well.
Note that we don’t actually appear to know what “good enough precision” really means. For draws from the posterior of the planetary motion test case, I think I observed relative (elementwise) errors in the gradient of up to 100%.
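For clarity, here is the error metric I mean (the gradient values below are made up; the tight-tolerance gradient is treated as the reference):

```python
def relative_errors(g_ref, g_approx):
    # Elementwise relative error |a - r| / |r|; undefined where the
    # reference component is exactly zero.
    return [abs(a - r) / abs(r) if r != 0 else float("inf")
            for r, a in zip(g_ref, g_approx)]

g_tight = [1.00, -2.50, 0.04]   # made-up reference gradient (tight tolerances)
g_loose = [1.02, -2.40, 0.08]   # made-up gradient at loose tolerances

errs = relative_errors(g_tight, g_loose)
```

Note how the last component is off by 100% in relative terms even though its absolute error is tiny; small gradient components are exactly where this metric blows up, which is why it is unclear what "good enough" should mean.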
Edit 2: I think I'll also tag @rok_cesnovar.