I’m cc-ing Dustin and Andrew in case they’re not already following this.
Frank and crew are building marginal maximum a posteriori (MMAP)
estimation into Anglican under the name “Bayesian optimization for
probabilistic programs”.
I can’t actually tell what they’re doing algorithmically, because I
don’t understand the Anglican code examples and can’t find a paper
outlining an algorithm anywhere; they cite it but don’t link it.
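For reference, here’s the standard definition of the MMAP problem being discussed (this is the general formulation, not a claim about Anglican’s specific implementation): maximize the posterior over the parameters of interest after marginalizing out the nuisance latent variables.

```latex
\theta^{*}_{\mathrm{MMAP}}
  = \arg\max_{\theta} \, p(\theta \mid y)
  = \arg\max_{\theta} \int p(\theta, x \mid y) \, dx
```

Here $\theta$ are the target parameters, $x$ the latent variables being marginalized, and $y$ the data. The intractable integral is what makes this harder than plain MAP estimation.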
Always good to see others working on this sort of thing. Once we have it working in Stan, that will be cool, because Stan is open-source, so for any problems where users don’t want to wait forever, Anglican can just call the Stan program and do the solution in finite time.
I chatted briefly with the authors during NIPS. I think it’s great work for generic inference. My high-level understanding is that it is a form of marginal optimization using techniques from Bayesian optimization/sequential experimental design. They do not use any gradients, which is what GMO is all about.
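To make the contrast concrete, here is a minimal, hedged sketch of gradient-free marginal optimization on a toy model. The model, the candidate-search outer loop, and all names here are my own illustration: a plain random search over candidates stands in for the Bayesian-optimization/sequential-design machinery the authors actually use, and a simple Monte Carlo average stands in for their marginal-likelihood estimator.

```python
import math
import random

random.seed(0)

# Toy model (assumed for illustration): x ~ Normal(0, 1) is a nuisance
# latent, and y ~ Normal(theta + x, 1). Marginalizing x gives
# y | theta ~ Normal(theta, sqrt(2)), so the marginal optimum is theta = y.

def normal_logpdf(v, mu, sigma):
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (v - mu) ** 2 / (2 * sigma ** 2))

def log_marginal(theta, y, n_mc=2000):
    # Monte Carlo estimate of log p(y | theta): sample the latent x from
    # its prior and average the likelihood (log-sum-exp for stability).
    # Note: no gradients of the model are used anywhere.
    logs = [normal_logpdf(y, theta + random.gauss(0.0, 1.0), 1.0)
            for _ in range(n_mc)]
    m = max(logs)
    return m + math.log(sum(math.exp(s - m) for s in logs) / n_mc)

def marginal_map(y, candidates, n_mc=2000):
    # Gradient-free outer optimization over theta: score each candidate
    # by its estimated log marginal and keep the best. (Real BO would
    # fit a surrogate to these noisy scores and pick candidates adaptively.)
    return max(candidates, key=lambda t: log_marginal(t, y, n_mc))

y_obs = 1.5
cands = [random.uniform(-5.0, 5.0) for _ in range(200)]
theta_hat = marginal_map(y_obs, cands)
print(theta_hat)  # should land near y_obs = 1.5
```

The key point of the sketch is the division of labor: an inner noisy estimator of the marginal objective, and an outer derivative-free search over it. GMO instead pushes gradients through the marginalization, which is exactly the axis on which the two approaches differ.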
Thanks much for the summary in language I can follow.
That wasn’t clear to me from the top-level write-ups.