Is it currently possible to generate random numbers in the model block to implement particle filters for sequential models (instead of using ODEs), or are there other ways of implementing Sequential Monte Carlo within each HMC step?
Alternatively, is there an implementation of the (deterministic) filters like the Unscented Kalman Filter (or at least the Extended Kalman Filter)?
No. The evaluation of the target must be deterministic in Stan. Just the regular Kalman filter.
I’d think that despite the stochastic method for approximating the likelihood, the estimate should be smooth with respect to the parameters, so in principle HMC would still work. But thanks for the answer.
I don’t think there’s much more to the UKF or EKF than that, so it should be feasible to implement?
@maxbiostat, you’ve seen anything like this for SIR models?
For ctsem, I implemented an extended Kalman filter in Stan. In the continuous-time form the UKF was too costly. https://cdriver.netlify.com/post/ctsem-quick-start/
There are some nonlinear examples in the latent growth curve post and the manual.
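For readers unfamiliar with the mechanics, the EKF recursion really is short. Here is a minimal NumPy sketch for a made-up scalar state-space model (the transition, noise values, and function names are invented for illustration; this is not the ctsem code, and a real Stan version would live in the transformed parameters or model block):

```python
import numpy as np

def ekf_loglik(y, theta, q=0.1, r=0.5):
    """Extended Kalman filter log-likelihood for a toy scalar model:
        x_t = theta * sin(x_{t-1}) + w_t,   w_t ~ N(0, q)
        y_t = x_t + v_t,                    v_t ~ N(0, r)
    The only step beyond a regular Kalman filter is linearizing the
    nonlinear transition at the current state estimate."""
    x, P, ll = 0.0, 1.0, 0.0
    for yt in y:
        # Predict: mean through f, variance through the Jacobian F
        F = theta * np.cos(x)           # df/dx evaluated at x
        x = theta * np.sin(x)
        P = F * P * F + q
        # Update: scalar case, observation matrix H = 1
        S = P + r                       # innovation variance
        K = P / S                       # Kalman gain
        v = yt - x                      # innovation
        ll += -0.5 * (np.log(2.0 * np.pi * S) + v * v / S)
        x += K * v
        P *= (1.0 - K)
    return ll
```

Since everything here is deterministic given the parameters, this style of filter is compatible with Stan's model block as-is.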
No, but then again, I haven’t been looking into this. If you get an implementation working, please share. In fact, share even if it doesn’t work and we can try and get it working together.
Cool. I don’t know the size of your system, but general particle filters have been implemented and are usable in R/Python, so a UKF should be efficient in Stan at least in some cases. Thanks.
I haven’t been working on this either, but I thought it would be straightforward enough to implement a simple S(E)IR model. With the limited data, though, a model accounting for stochasticity might do a little better than a simple ODE (and I would get to have an implementation of a filter in Stan that I think I may be using at some point anyway).
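To make concrete what Stan's determinism requirement rules out, here is a bootstrap particle filter likelihood sketched in Python for the same kind of toy scalar model used above (a stand-in for a stochastic S(E)IR transition; the model and all values are invented for illustration):

```python
import numpy as np

def pf_loglik(y, theta, n_part=1000, q=0.1, r=0.5, seed=1):
    """Bootstrap particle filter estimate of log p(y | theta) for a toy
    model: x_t = theta*sin(x_{t-1}) + N(0, q), y_t = x_t + N(0, r).
    The rng calls inside the loop are exactly what the model block
    cannot do."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_part)    # particles from the prior
    ll = 0.0
    for yt in y:
        # Propagate every particle through the stochastic transition
        x = theta * np.sin(x) + rng.normal(0.0, np.sqrt(q), n_part)
        # Weight particles by the observation density N(yt | x, r)
        logw = -0.5 * (np.log(2.0 * np.pi * r) + (yt - x) ** 2 / r)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())      # running log-likelihood
        # Multinomial resampling
        x = rng.choice(x, size=n_part, p=w / w.sum())
    return ll
```

The resampling step also makes the estimate a discontinuous function of theta for any fixed seed, which matters for gradient-based samplers like HMC.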
Sure, in some cases it is fine, but while the discrete-time UKF is not much more costly than the EKF, the repeated integration in continuous-time cases makes it a lot more costly, so I decided it wasn’t worth it.
It would be kind of neat if the restrictions on random number generation were not fixed in Stan itself, but rather imposed only with respect to the fitting method.
Right. In continuous time it will be very costly, and particle filters even more so.
I’m not sure if the concern is misuse of random numbers in the model block, or if the idea is that they are not needed at all, because this is one example where they could be.
That and it would mess up the leapfrog integrator for the Hamiltonian ODEs.
I have to think more about how that’s different from the deterministic filters, but I’d think it could be done with some clever way of evaluating likelihoods along the HMC path, maybe using a single draw of random uniforms as cumulative densities. That could allow leapfrogging through parameter space without requiring different (and costly) random draws for each leapfrog step.
Probably not trivial, but it’s an interesting methodological problem, and it would be good to further remove limitations from HMC.
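One way the fixed-draws idea could be sketched (in Python, for the same kind of toy model as above; everything here is an invented illustration, not a worked-out method): draw the noise variables once, outside the sampler, and reuse them at every likelihood evaluation. With Gaussian process noise, pre-drawn standard normals play the same role as fixed uniforms pushed through the inverse CDF.

```python
import numpy as np

def fixed_draw_loglik(y, theta, eps, q=0.1, r=0.5):
    """Likelihood estimate for the toy model
        x_t = theta*sin(x_{t-1}) + N(0, q),  y_t = x_t + N(0, r),
    with the process noise driven by a FIXED array eps of standard
    normal draws (shape: len(y) x n_particles), drawn once up front.
    Holding eps constant makes the estimate a deterministic, smooth
    function of theta, so it could in principle be leapfrogged through.
    There is no resampling (that would reintroduce discontinuities),
    so this is plain importance sampling with the prior as proposal."""
    n_part = eps.shape[1]
    x = np.zeros(n_part)
    logw = np.zeros(n_part)             # cumulative log-weights
    for t, yt in enumerate(y):
        x = theta * np.sin(x) + np.sqrt(q) * eps[t]
        logw += -0.5 * (np.log(2.0 * np.pi * r) + (yt - x) ** 2 / r)
    m = logw.max()
    return m + np.log(np.exp(logw - m).mean())
```

Calling this twice with the same eps returns exactly the same value, unlike an estimator that redraws noise at every evaluation; the open question is how well such a fixed-draw estimator behaves over long series, where importance-sampling weights are known to degenerate without resampling.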