This came up during my presentation on the project at Aalto University. It seems one method for doing MCMC on latent Gaussian models is elliptical slice sampling (https://arxiv.org/pdf/1001.0175.pdf). Has this been compared to HMC or INLA?
It seems that your Laplace approximation functionality will need higher-order functions. Have you considered delaying things a bit with respect to the Stan language parts? I mean, Stan 3 should make this a lot easier to deal with, and the new parser appears to be around the corner.
(Sounds like nice progress…looking forward to it).
It’s true higher-order functions are a pain, but it’s not too bad. Whether I wait or not depends on when Stan 3 comes out, and what I need to run experiments and do research. Likely, I’ll continue with a prototype and save the release version for Stan 3.
There’s some stuff but it’s all pre-NUTS.
All of these (including NUTS) are part of GPstuff, and I have run a lot of experiments, though not much of that has been published. Elliptical slice sampling (ESS) works only on the conditional posterior given the hyperparameter values, so it is used by alternating ESS updates of the latents with updates of the hyperparameters by some other method. ESS is very good for the latents, but the alternation destroys efficiency, as there is usually a complicated dependency between latents and hyperparameters. There is a surrogate-ESS that also does joint updates, which helps a little, but not enough to be popular. I’m not aware of GP-specific software that uses dynamic HMC to sample latents and hyperparameters jointly, so the Stan results are interesting in that sense. For log-concave likelihoods, INLA is much more efficient than long MCMC, with similar accuracy (though it can be slightly worse for Bernoulli).
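For reference, the inner ESS update is short. This is a minimal sketch of one elliptical slice sampling step for latents with prior N(0, Σ), following the Murray, Adams & MacKay (2010) paper linked above; the function name and arguments are illustrative, not GPstuff’s or Stan’s API. The alternating scheme described above would call this step, then update the hyperparameters with some other sampler:

```python
import numpy as np

def ess_step(f, log_lik, chol_Sigma, rng):
    """One elliptical slice sampling update of the latents f,
    assuming prior N(0, Sigma) with chol_Sigma a Cholesky factor of Sigma."""
    # Draw the auxiliary ellipse variable nu ~ N(0, Sigma)
    nu = chol_Sigma @ rng.standard_normal(f.shape)
    # Log-likelihood threshold for slice sampling
    log_y = log_lik(f) + np.log(rng.uniform())
    # Initial proposal angle and shrinking bracket
    theta = rng.uniform(0.0, 2.0 * np.pi)
    theta_min, theta_max = theta - 2.0 * np.pi, theta
    while True:
        f_prop = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_prop) > log_y:
            return f_prop  # accept: on the slice
        # Shrink the bracket towards theta = 0 (the current state) and retry
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```

Note the step has no tuning parameters and always terminates, since the bracket shrinks towards the current state, which is itself on the slice.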
@fabio asks about the roadmap for the embedded Laplace approximation on another thread. I’m replying here given the topic of this thread.
I’m curious about the roadmap to have the Laplace approximation in Stan…
Currently, we have:
- A paper on coupling HMC with the Laplace approximation, acting as a proof of concept.
- A Stan Con notebook with demo code in Stan and instructions on how to install Stan with the right branch to use the prototype embedded Laplace.
- A branch that generalizes the differentiation algorithm. In the paper we wrote code to handle any user-specified covariance matrix; this generalization additionally handles user-specified likelihoods, provided the code is amenable to forward-mode autodiff, which is required for higher-order autodiff. So most Stan functions will work here, except those that use special differentiation techniques (e.g. ODEs). Not only is this code more flexible, it also turns out to be faster.
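To make the above concrete, the core of an embedded Laplace approximation is a Newton iteration for the mode of the conditional posterior of the latents. This is a minimal numpy sketch of the classical GP formulation (in the style of Rasmussen & Williams, Algorithm 3.1), not the Stan-math implementation; the function names and the restriction to a diagonal likelihood Hessian are simplifying assumptions for illustration:

```python
import numpy as np

def laplace_mode(K, grad_log_lik, hess_diag_log_lik, n_iter=50):
    """Newton iterations for the mode of p(f | y, theta) ∝ N(f; 0, K) * lik(y | f),
    assuming the likelihood factorizes so its Hessian in f is diagonal.
    Returns the mode f_hat and W = -diag(Hessian of log lik) at the mode."""
    f = np.zeros(K.shape[0])
    for _ in range(n_iter):
        W = -hess_diag_log_lik(f)          # diagonal of -d^2 log lik / df^2
        b = W * f + grad_log_lik(f)
        # Solve (K^-1 + W) f_new = b without forming K^-1, via
        # B = I + W^(1/2) K W^(1/2), which is well conditioned:
        sqrtW = np.sqrt(W)
        B = np.eye(len(f)) + sqrtW[:, None] * K * sqrtW[None, :]
        a = np.linalg.solve(B, sqrtW * (K @ b))
        f = K @ (b - sqrtW * a)
    W = -hess_diag_log_lik(f)
    return f, W
```

The Gaussian approximation is then N(f_hat, (K^-1 + W)^-1); the hard part the branch addresses is differentiating f_hat and the resulting marginal likelihood with respect to the hyperparameters and the likelihood’s own parameters.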
What we want next:
- An easy-to-use interface, at least for common likelihoods, for which we have either empirical or theoretical evidence that the approximation is sound.
- A special routine to handle diagonal covariance matrices. Eventually we’ll generalize this to something that handles sparse matrices.
- Broadening the class of optimizers we support so that we can handle less conventional likelihoods efficiently (there are some likelihoods for which we get accurate results, but pretty slowly). This is a big one if we want to take advantage of the flexibility afforded by the general differentiation algorithm.
- Improved diagnostics based on importance sampling.
All this work is happening in Stan-math, and we’ll write careful documentation so that folks can access the prototype code. I’m tempted to include it as an experimental feature in Stan, similar to what other packages such as Eigen or TensorFlow Probability do. Getting this into Stan’s release requires some additional steps, including code review, discussion of the algorithm’s scope, and meeting certain documentation standards. To be clear, we’re not simply implementing a well-known algorithm; we’re extending existing methods and experimenting with new ones.
Thanks for your work and for the work of those who contribute to advances on this!
Hello, I tried out the code in GitHub - charlesm93/StanCon2020: Code for the talk "Approximate Bayesian inference for latent Gaussian models in Stan" for the Laplace approximation, and it uses rstan, which is a bit problematic. Is there any Laplace approximation code that uses the cmdstanr framework?
The notebook uses cmdstanr to run Stan. The rstan package is loaded only for convenience functions, such as extract. I agree this is not ideal; I would prefer to do this using JSON files and the posterior and bayesplot packages (I wasn’t too familiar with them back when I wrote the StanCon notebook). I could maybe whip up an updated notebook over the weekend.
Still, you don’t need any particular version of rstan to run the R scripts, as long as it supports the basic utility functions. Would that be fine, or does that still cause problems?
Following up on a related installation issue raised by @emilija, see here: Can not install cmdstan with prototype functions · Issue #1 · charlesm93/StanCon2020 · GitHub.
The install script I wrote currently requires users to run OCaml, which is not ideal. I believe @rok_cesnovar worked out a way to precompile stanc3 for specific branches to make it easy to install, and that we had something worked out for the embedded Laplace prototype. @rok_cesnovar, can you help me dig this out?
@stevebronder maybe you have some ideas too
(let’s aim to update the StanCon notebook with a pure cmdstanr version, using the variadic arguments you implemented)
it should be very easy to make a testing tarball that users could simply install by calling:
cmdstanr::install_cmdstan(release_url = "path/to/tar.gz")
I took the liberty of merging the recent develop/master branches into the two branches on stanc3/math. Hopefully you are fine with that. I’ll post instructions as soon as it’s built (about an hour or so; may post them tomorrow if it takes longer).
EDIT: There were a few changes that broke the branch somehow, so it might take me a bit more time to prepare this.
Me, too! @Jonah: How hard would it be to move copies of these functions into cmdstanr? I have the same problem with my notebooks, and I’ve taken to avoiding the RStan dependency by pulling arguments out by name or by position, which is not ideal. The only one of these functions I use is extract, and I really miss it from RStan.
@rok_cesnovar Following up on this. @stevebronder and I are working on a new branch with a prototype embedded Laplace approximation (it’s faster and it uses variadic arguments). Maybe it makes sense to focus on this once we’ve developed it a bit more, and then rewrite the case study with the new function.
@emilija Is there a timeline on your end we should be mindful of?
Yeah, I think that might be more sensible. I updated the branches with the latest develop/master, but I think there are additional things we need to update there, as the example models failed to compile. Might give it another go later, but if you have a newer branch, that sounds even better.
Is it a question of days, weeks, or months? I am currently working on a project that requires a quite complicated multilevel SEM model on a very big data set. Thus, I would like to try nested Laplace approximation algorithms so that these models compute as quickly and precisely as possible. Do you see potential in using the Laplace approximation for these models?
Is there any documentation or examples of using the Laplace approximation while optimizing different parameters?
I am really interested in trying ILA too. I would try to reproduce some of the models in Blangiardo & Cameletti.
I’m hoping to get the compiler stuff up and running this week, so hopefully two weeks or so.
btw, the topic of this thread should be "integrated Laplace approximation" (ILA), as we’re doing Laplace inside some other integration (HMC in the case of Stan), and not Laplace inside Laplace (which is what "nested Laplace" originally means)
i don’t want to seem pushy, but… any news on ILA? It’s just that I don’t have forum-surfing skills and sometimes I lose pieces of information…