Any relevance of this recent paper to Stan users / the development of Stan?

I am curious how the development team and other Stan users think about this recent paper:

“Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale”

It sounds like an important development.

Is there anything to learn from this development? Could the development of Stan benefit from it?

Unfortunately there isn’t much to be exploited in what they present. One of their main arguments is that gradients aren’t technically feasible for the existing particle physics simulators (which is true only if you’re not willing to do any work to update the code, which I find to be a poor argument, but then again there are reasons I’m no longer a particle physicist), and so they proceed to implement a random walk Metropolis sampler, which we know scales terribly with dimension (a quick illustration below). Unsurprisingly they offer no validation or critique of that random walk Metropolis fit, so the issue is never addressed.
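
To make the scaling concern concrete, here is a minimal toy sketch of my own (not code from the paper, and the function names are just illustrative): random walk Metropolis on a d-dimensional standard normal target with a fixed proposal scale. The acceptance rate collapses as the dimension grows, which is the basic reason gradient-free samplers struggle on the kinds of high-dimensional posteriors Stan targets.

```python
# Toy demonstration: random-walk Metropolis acceptance rate vs. dimension.
import numpy as np

def rw_metropolis(log_prob, x0, n_steps, step_size, rng):
    """Random-walk Metropolis with an isotropic Gaussian proposal; returns acceptance rate."""
    x = np.array(x0, dtype=float)
    lp = log_prob(x)
    accepted = 0
    for _ in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.shape)
        lp_prop = log_prob(proposal)
        # Metropolis accept/reject: accept with probability min(1, pi(proposal)/pi(x))
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = proposal, lp_prop
            accepted += 1
    return accepted / n_steps

rng = np.random.default_rng(0)
std_normal_lp = lambda x: -0.5 * np.dot(x, x)  # unnormalized log density of N(0, I)

# With the proposal scale held fixed, acceptance collapses as dimension grows.
for d in (1, 10, 100, 1000):
    rate = rw_metropolis(std_normal_lp, np.zeros(d), 2000, 0.5, rng)
    print(f"dimension {d:4d}: acceptance rate {rate:.2f}")
```

To keep the acceptance rate reasonable you have to shrink the proposal scale roughly like d^{-1/2}, at which point the chain explores the posterior only by a slow diffusion, whereas the gradient-based dynamic HMC in Stan can continue to make large, informed moves.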

In other words, this is coming from the perspective of “let’s build a probabilistic programming language around an existing math library and use whatever algorithms can work within that scope, even if the available algorithms are poor,” whereas in Stan we have taken the approach of “let’s figure out the scope of models that we can fit with the best algorithms available and then build a language and math library around that, even if it might exclude existing software.”

As applied fields evolve their internal software and the Stan math library incorporates more and more features, the approaches will converge a bit and we’ll be able to incorporate the overlap into the scope of Stan.
