I am using PyStan to attempt to fit a model to some time-series data, all of which I have attached below.

My model comprises 3 ODEs, and I am using integrate_ode_rk45 to numerically integrate them.
One peculiarity of my model is that it has step changes in two of its parameters, which occur at different times during the experiment. I have implemented these step changes using simple if statements in the ODE function.
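To illustrate what I mean (this is a hypothetical 3-state system with made-up parameters and switch times, not my actual model), the structure is something like the following, written in Python/SciPy terms:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative switch times: each parameter jumps at its own time.
T_SWITCH_K1 = 2.0
T_SWITCH_K2 = 4.0

def rhs(t, y):
    # Step changes implemented with simple if statements (here as
    # conditional expressions), as described above.
    k1 = 1.0 if t < T_SWITCH_K1 else 0.2
    k2 = 0.5 if t < T_SWITCH_K2 else 1.5
    y1, y2, y3 = y
    dy1 = -k1 * y1            # first state decays into the second
    dy2 = k1 * y1 - k2 * y2   # second state fed by first, decays into third
    dy3 = k2 * y2             # third state accumulates
    return [dy1, dy2, dy3]

sol = solve_ivp(rhs, (0.0, 6.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
print(sol.success, sol.y[:, -1])
```

The discontinuities in k1 and k2 are exactly the points where things seem to go wrong.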

When I go to fit this model I regularly get error messages along the lines of “Exception: normal_lpdf: Location parameter is -nan”, which I have confirmed (by investigating with print statements) is indeed the case.

By adding a lot of print statements to my code I have figured out that the issues arise at (or just after) the discontinuity points at which the parameters change (i.e. those that undergo step changes). I assume the cause of the errors is therefore that the gradient computation in the integrate_ode_rk45 solver is being thrown off by the step change.

Does anyone know how I might go about overcoming this issue? I expect forcing the ODE solver to use a smaller time step might help, but I couldn't see an option for this in the Stan reference manual.

It may be of interest that I previously had a similar model (which also included step changes in parameters) working fine; the problem only emerged when I changed the structure of the ODEs.

Skimming over your code, it does look OK. Feeding discontinuities to an ODE solver is always hard. Could you please try the integrate_ode_adams solver? It uses the non-stiff solver from Sundials… at the moment the documentation is missing, but it has the same interface as the usual ODE functions (and we already have an open issue for documenting this).

You seem to have time points around the jumps, which should help the integrator adaptively slow down around them, so that should be good; just make sure you don't output exactly at the jumps.

Also note that there are “advanced” ODE function calls where you can specify the absolute and relative tolerances as well as the maximal number of steps between time points. This allows you to implicitly control the step sizes used (though the default call you are using is already very strict).
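To see what tightening the tolerances does, here is a sketch in Python/SciPy (for illustration only; this is not Stan's solver, but the adaptive step-size behaviour is analogous). Tighter tolerances force the adaptive solver to take more, smaller steps:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Simple exponential decay standing in for a real model.
    return [-1.0 * y[0]]

# Same problem, two tolerance settings: the adaptive solver takes
# more (hence smaller) steps when the tolerances are tightened.
loose = solve_ivp(rhs, (0.0, 5.0), [1.0], rtol=1e-3, atol=1e-6)
tight = solve_ivp(rhs, (0.0, 5.0), [1.0], rtol=1e-10, atol=1e-12)
print(len(loose.t), len(tight.t))
```

So even without an explicit step-size option, the tolerance arguments give you indirect control over the step sizes.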

Other than that, maybe plot the decay rates which you model, and also do some prior predictive checks, i.e. sample from your prior and evaluate the resulting simulations. Maybe the prior admits parameters that are too extreme?

Assuming you're integrating the interval [t_0, t_1] with a discontinuity at t_d, try splitting it up: integrate the two intervals [t_0, t_d] and [t_d, t_1] separately, using the state at t_d as the initial condition for the second interval. That way the solver never steps across the discontinuity.
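A sketch of the idea in Python/SciPy terms (using an illustrative one-state decay model with a made-up jump time, not your actual system):

```python
import numpy as np
from scipy.integrate import solve_ivp

T_D = 2.0  # discontinuity time t_d (illustrative)

def rhs_before(t, y):
    return [-1.0 * y[0]]  # parameter value before the jump

def rhs_after(t, y):
    return [-0.2 * y[0]]  # parameter value after the jump

# Integrate [t_0, t_d], then restart at t_d with the final state as
# the new initial condition and integrate [t_d, t_1]. Each solve now
# sees a smooth right-hand side.
first = solve_ivp(rhs_before, (0.0, T_D), [1.0], rtol=1e-10, atol=1e-12)
second = solve_ivp(rhs_after, (T_D, 5.0), first.y[:, -1],
                   rtol=1e-10, atol=1e-12)

# Closed-form check: exp(-1.0 * t_d) * exp(-0.2 * (t_1 - t_d))
expected = np.exp(-1.0 * T_D) * np.exp(-0.2 * (5.0 - T_D))
print(second.y[0, -1], expected)
```

In Stan this would mean two calls to the integrator, with the output of the first call feeding the initial state of the second.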

I have just tried to install the cvodes package in order to use the ode_adams solver, but the installation has crashed my PC twice now. I will try to figure out a workaround.

Yizhang… very good suggestion! Disappointed in myself for not thinking of that sooner!