Is it possible to solve delay differential equations?


I am currently working on parameter inference for ODE models in Stan and was wondering if it is also possible to use delay differential equations (DDEs) in Stan. I know there is no explicit DDE solver in Stan, but is it feasible to implement one manually?

Furthermore, I was wondering whether people would consider a different Bayesian inference solution if the bulk of one's work is based on time-series data and ODE models.

I know that I am in a Stan forum, but since I am a beginner in the field of Bayesian statistics/inference and am happy about any advice, I thought I would just ask.

Thanks for your help and advice.


If you can reduce it to an ODE with unknown initial conditions and parameters, then yes. I’m not familiar with DDEs myself—@wds15 should know.

Not sure what you mean by a different Bayesian inference solution. Different from MCMC?


Sorry, I am not familiar with these either.

From glancing over Wikipedia to see what DDEs are, I would say that you are very limited in implementing these "manually". The only case I can think of is when you use forcing functions. These forcing functions can be evaluated at any given time point within your ODE functor. However, I am not sure people would still call this a DDE.

Hmm… maybe (really maybe) you could emulate a DDE using a nested ODE integration scheme? It would likely be a numerical nightmare, and maybe this is nonsense.


If you can formulate it such that the delay is an integer number of time steps of, say, an Euler or Heun discretisation, and you don't need complicated features like event handling and discontinuity propagation (e.g. dde23 in MATLAB), then it works OK. Numerically it resembles an AR model with non-zero coefficients at large time lags.
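To make the fixed-step idea concrete, here is a small Python sketch (illustrative only, not Stan; the test equation y'(t) = -y(t - 1) and the constant history are made up for the example). The delay is forced to be an integer number of Euler steps:

```python
import numpy as np

def euler_dde(f, history, dt, delay_steps, n_steps):
    # Fixed-step Euler for y'(t) = f(y(t), y(t - tau)), where
    # tau = delay_steps * dt. y[k] is the state at time (k - delay_steps) * dt,
    # so history supplies y on the grid t = -tau, ..., 0 (delay_steps + 1 values).
    y = np.empty(n_steps + delay_steps + 1)
    y[:delay_steps + 1] = history
    for k in range(delay_steps, delay_steps + n_steps):
        y[k + 1] = y[k] + dt * f(y[k], y[k - delay_steps])
    return y

# Example: y'(t) = -y(t - 1) with constant history y = 1 for t <= 0.
dt = 0.01
d = round(1.0 / dt)                  # the delay as an integer number of steps
sol = euler_dde(lambda y, yd: -yd, np.ones(d + 1), dt, d, 200)
```

On [0, 1] the delayed term is constant, so the exact solution is y(t) = 1 - t there and the Euler scheme reproduces it to rounding error; the lagged lookup `y[k - delay_steps]` is exactly the "AR model with a large lag" structure mentioned above.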


Historically these were the first implementations of DDE solvers, until users needed fancy things like discontinuity propagation.

If the equations aren't efficiently integrated by a low-order scheme, it can be useful to use a good ODE integrator like Stan's to do the time stepping, passing the matrix of states over time as a parameter and referencing the state history from the ODE function.

The main difficulty in Stan is that interpolation of history isn’t going to work (well).


True. It works and I have done it, but it is a hack and we need better solutions here. Maybe a self-made splines approach could work… not sure, but this was a thought of mine recently which I haven't followed up on yet.
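For illustration (in Python with SciPy rather than Stan), the spline idea amounts to fitting an interpolant to stored states and then evaluating it at lagged, off-grid times; the exponential trajectory below is just a stand-in for a solved state history:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Suppose ts/ys hold a state trajectory from a previous integration pass
# (here a stand-in exponential decay on a coarse grid).
ts = np.linspace(0.0, 5.0, 51)
ys = np.exp(-ts)

spline = CubicSpline(ts, ys)      # smooth, differentiable history interpolant
tau = 0.3
y_lagged = spline(2.0 - tau)      # state at the lagged time, between grid points
```

The point of a spline (rather than, say, nearest-grid-point lookup) is that the lagged state stays smooth in the parameters, which matters for gradient-based samplers.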


I think we’ll have some free cycles over the next year or two for various kinds of pharma-related extensions. As usual, we need to start with some kind of functional spec for what is needed.


This is on my to-do list.


What specifically?

One use case from my angle is forcing functions, such as dosing inputs to PK/PD systems, PK functions as inputs to PD models, etc.

I guess you are planning something general rather than a specific pharma thing, and I am curious what your plans are.


Most PK/PD systems enjoy the simplicity of constant delays, so we can just do explicit RK + the method of steps. That'll be the first thing to do. Non-constant delays can be handled in a similar but messier way.

For state-dependent delays we need to calculate the locations of the discontinuities on the fly. The cost is much higher, as each discontinuity would incur the solution of an algebraic system. I haven't thought through the details yet.
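As a sketch of the constant-delay case outside Stan (plain Python with SciPy; the function names are mine, not a proposed interface), the method of steps integrates one delay interval at a time, using the dense output of the previous segment as the lagged input for the next:

```python
import numpy as np
from scipy.integrate import solve_ivp

def method_of_steps(f, phi, tau, t_end):
    # Solve y'(t) = f(y(t), y(t - tau)) with constant delay tau and
    # history y(t) = phi(t) for t <= 0, one delay interval at a time.
    def make_lagged(seg, prev):
        # Chain the new segment's interpolant onto the earlier history.
        return lambda t: seg.sol(t) if t >= seg.t[0] else prev(t)

    lagged = lambda t: np.atleast_1d(phi(t))       # initial history function
    t0, y0 = 0.0, np.atleast_1d(phi(0.0))
    last = None
    while t0 < t_end:
        t1 = min(t0 + tau, t_end)                  # next method-of-steps segment
        last = solve_ivp(lambda t, y: f(y, lagged(t - tau)),
                         (t0, t1), y0, dense_output=True,
                         rtol=1e-9, atol=1e-12)
        lagged = make_lagged(last, lagged)
        t0, y0 = t1, last.y[:, -1]
    return last

# Example: y'(t) = -y(t - 1) with y = 1 for t <= 0; exactly, y(2) = -1/2.
final = method_of_steps(lambda y, yd: -yd, lambda t: 1.0, 1.0, 2.0)
```

Segmenting at multiples of the delay is also what confines the derivative discontinuities to segment boundaries, which is why the constant-delay case is so much easier than the state-dependent one.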


If I understand correctly, one basically solves the ODEs twice: you use the result from the first run to determine the values of the state variables at the lagged time points when solving the ODEs the second time?
Has anyone tried that before, to get an idea of how exactly that would look?

For now, I have luckily found a way to drop the delays and I will be able to apply ODEs to the problem.

I will give @maedoc's suggestion a try though, since I will certainly be working with DDEs in the future.

I meant it in the sense that, if one works mainly with ODEs and potentially SDEs in the future, would you or others recommend a different environment, given potential limitations of the Stan environment when it comes to dynamical systems?

Regardless, I am pretty happy with Stan and I like how intuitive it is to use and how easy it is for me to further and further expand my model.


Not twice, I’d hope. I meant something like

real f(real y, real y_delayed);

real dt;
int delay_steps;                // delay in units of dt
real y[T + delay_steps + 1];    // y[k] is the state at time (k - delay_steps - 1) * dt

y[1:(delay_steps + 1)] = history_func(dt, delay_steps);  // history on [-delay, 0]

for (t in (delay_steps + 1):(T + delay_steps))
  y[t + 1] = y[t] + dt * f(y[t], y[t - delay_steps]);