Forced ODEs, a start for a case study?

Yes, I definitely want to take a closer look! The idea, if I get it right, is to approximate a Dirac delta function.

Correct, that’s the idea … normals approach a Dirac delta as sigma gets small. The trick is to find a sigma which is small enough for the problem, but still manageable for the ODE integrator. Moreover, you have to force the ODE integrator to output a value at the dosing time-point, which ensures that the dose is not stepped over — something that can easily happen otherwise.
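A minimal sketch of the idea, using SciPy rather than CVODES/Stan for illustration: a one-compartment elimination model where an instantaneous bolus dose is replaced by a narrow normal density, and the integrator's maximum step size is tied to sigma so the spike cannot be stepped over. All parameter values (`k`, `dose`, `t_dose`, `sigma`) are made up for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gaussian_pdf(t, mu, sigma):
    # Normal density; approaches a Dirac delta as sigma -> 0
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

k = 0.5        # elimination rate (illustrative value)
dose = 100.0   # dose amount
t_dose = 2.0   # dosing time-point
sigma = 0.01   # width of the delta approximation

def rhs(t, y):
    # Elimination plus the smoothed bolus input
    return [-k * y[0] + dose * gaussian_pdf(t, t_dose, sigma)]

# Bound the step size by sigma so the integrator cannot step over
# the narrow spike (crude but safe; splitting at t_dose is cheaper).
sol = solve_ivp(rhs, (0.0, 10.0), [0.0], max_step=sigma / 2.0,
                rtol=1e-8, atol=1e-10)

# Reference: exact solution for a truly instantaneous bolus at t_dose
exact = dose * np.exp(-k * (10.0 - t_dose))
print(sol.y[0, -1], exact)
```

With sigma this small relative to the elimination time scale, the smoothed solution agrees closely with the instantaneous-bolus solution; the global `max_step` is wasteful, which is exactly why one instead forces an output/restart at the dosing time in practice.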


So… I’ve used this trick where I had an analytical integral representing cumulative risk with discrete event data in continuous time but… could I use the integrator in that situation? … this is sort of mind blowing.

Maybe… too little detail for me to tell, but as I keep saying, it’s possible with some extra care to let ODE integrators handle Dirac delta spikes. However, one has to stress that this is a hack which requires care to make sure the spike is not stepped over (I have seen this happening). The trick of outputting a value at the discontinuity is documented in the CVODES wikis or manuals, so it is something that people seem to do.
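The "output a value at the discontinuity" trick can be sketched by restarting the integration at the dosing window, so the solver is guaranteed to see the spike while still taking large steps elsewhere. Again a SciPy stand-in for what CVODES does with stop times; all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, dose, t_dose, sigma = 0.5, 100.0, 2.0, 0.01

def gaussian_pdf(t, mu, s):
    # Narrow normal density standing in for a Dirac delta dose
    return np.exp(-0.5 * ((t - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def rhs(t, y):
    return [-k * y[0] + dose * gaussian_pdf(t, t_dose, sigma)]

# Bracket the spike; outside this window the forcing is negligible.
a, b = t_dose - 6.0 * sigma, t_dose + 6.0 * sigma

# Phase 1: before the dose — the solver may take large steps.
s1 = solve_ivp(rhs, (0.0, a), [0.0], rtol=1e-8, atol=1e-10)
# Phase 2: through the spike, with the step bound tied to sigma,
# so the dose cannot be stepped over.
s2 = solve_ivp(rhs, (a, b), [s1.y[0, -1]], max_step=sigma / 2.0,
               rtol=1e-8, atol=1e-10)
# Phase 3: after the dose — large steps again.
s3 = solve_ivp(rhs, (b, 10.0), [s2.y[0, -1]], rtol=1e-8, atol=1e-10)

# Reference: exact instantaneous-bolus solution at t = 10
exact = dose * np.exp(-k * (10.0 - t_dose))
print(s3.y[0, -1], exact)
```

The restart at `a` plays the role of the forced output point: the solver state is handed over exactly at the edge of the dosing window, so no adaptive step can skip the dose.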

And yes, I like this approach a lot, as it allows me to

  1. always integrate with a data-only initial condition
  2. introduce delays of the dosing event in a fully gradient-friendly and smooth way — so no more issues with lag times!

Sebastian