Reverse mode SDE idea

(This is mostly for Stan math devs)

@James_Savage sent me the tweet thread below

https://twitter.com/DavidDuvenaud/status/1215347970159382534?s=19

Var doesn't have any template parameters, so we would need a new type, but could we just store the Gaussian noise in a var and keep doing something similar to what we do now?

3 Likes

When we move up to C++17, adding this would be a lot easier because class template arguments are automatically deduced from the constructor, so

template <typename Arith, typename NoiseArith = double>
class var {
  using Scalar = Arith;
  using Noise = NoiseArith;

  // stuff here

  explicit var(Arith x) {
    // ...
  }

  var(Arith x, Noise noise) {
    // ...
  }
};

var<double> a(10.0);                // fine!
var b(10.0);                        // fine!
var<double, float> c(10.0, -1.96);  // also fine!
var d(10.0, -1.96);                 // also fine!

Gives us full backwards compatibility!

3 Likes

That looks really interesting. I’ve been reading through how autograd works in Dougal Maclaurin’s thesis.

I think this stuff is now getting implemented in JAX. Here’s their Autodiff cookbook

Also, this is very relevant for the kinds of econometric equilibrium models we want to fit.

Neat! I’ll have to give that a read-through

The main bummer is the wait for C++17, but once R bumps up its compiler requirements we can look at how Python handles C++17 and go from there

I’m not sure what the need for augmenting vars is. The SDE method is basically an adjoint ODE method with implicit marginalization over the diffusion, and it can be implemented in a similar way to our current ODE solvers. On the forward pass the Brownian bridge realization reconstructor is set up and stored, and then on the backwards pass the Jacobian-vector product is computed using a reverse-time SDE solve and the fixed Brownian bridge realization.

Oh, it needs to be Jim to get your attention (cf. Adjoint sensitivity method for stochastic differential equations)

I met David Duvenaud before Xmas and he said he would be happy to help get this into Stan.

Although it seems Mike already knows what to do.

1 Like

LOL sorry about that

1 Like

Coincidentally @charlesm93, @vianeylb, and I have been working through old adjoint methods (and a shiny new, albeit somewhat obscure one). The first step is to expose the CVODES adjoint method, which will require storing enough checkpointing and interpolation state during the forward solve to support the reverse solve. Charles has expressed interest in working this out with me.

Once that’s been demonstrated we can then think about this SDE method. The real novelty of the adjoint method is that it defines a well-posed derivative of the SDE solution, which is what allows us to have SDEs at all, let alone with efficient Jacobian-vector products.

3 Likes