Autodiff w.r.t. time lags

In a pharmacometrics model, certain compartments can have an associated lag time. For example, if at time t = 0 I give a patient a drug dose in compartment 1, which has tlag = 1, the drug amount in that compartment increases not at t = 0, but at t = 1.

Issue 1
A bolus dose causes an instantaneous increase, so there is a discontinuity with respect to time, but also with respect to the parameter tlag. The derivative is ill-defined at that point, and the function is only continuous from one side.
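To make Issue 1 concrete, here is a sketch in JAX (not Stan; the unit bolus and the clearance rate of 1 are made-up values) of how the amount at a fixed observation time jumps as tlag crosses that time:

```python
import jax.numpy as jnp

def amount(t, tlag):
    # hypothetical unit bolus given at time tlag, then cleared at rate 1
    return jnp.where(t >= tlag, jnp.exp(-(t - tlag)), 0.0)

# at a fixed observation time t = 1, the amount jumps as tlag crosses t:
# the function is continuous from one side only, so d(amount)/d(tlag)
# is ill-defined there
before = float(amount(1.0, 0.999))  # dose already given: close to 1
after = float(amount(1.0, 1.001))   # dose not yet given: exactly 0
```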

Issue 2
The way lag times are handled is by augmenting the event schedule: a dose at t = 0 with lag time tlag is really a dose at t = tlag with no lag time, so I create another event if (tlag != 0). While the derivative with respect to tlag is ill-defined at the dosing time, it should be defined at other times, i.e. while the body clears the drug. Finite differences handle this fine, but autodiff does not handle the boolean tlag != 0 when differentiating with respect to tlag. The IF statement doesn’t get “auto-diffed”.
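Away from the dosing time the derivative really is well-defined, and a branch-free implementation lets autodiff recover it. A sketch in JAX (not Stan), using a made-up one-compartment profile:

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # double precision for the finite diff

def amount(t, tlag):
    # hypothetical lagged unit bolus, cleared at rate 1 after t = tlag
    return jnp.where(t >= tlag, jnp.exp(-(t - tlag)), 0.0)

# at t = 2 with tlag = 1 (after the dose), autodiff and finite diff agree:
# d/dtlag exp(-(t - tlag)) = exp(-(t - tlag)) = exp(-1)
g_auto = jax.grad(amount, argnums=1)(2.0, 1.0)
h = 1e-6
g_fd = (amount(2.0, 1.0 + h) - amount(2.0, 1.0 - h)) / (2 * h)
```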

Is the derivative mathematically ill-defined when evaluated at tlag = 0? I don’t think this is a major issue. When tlag = 0, it is usually fixed, and rarely a parameter. Then again, who knows what a user may come up with…

Reasons for using the IF statement
I could remove the if statement and create a new dosing event at t + tlag even when tlag = 0. The auto-diff then agrees with the finite diff. However, the new dosing event occurs after the original event, so the predicted amount at the original event is wrong. If I change the order of the events, I create a conditional statement that doesn’t get auto-diffed.

If I define a function with

if (x == 0) return exp(x);
else return x^2;

then its derivative for x == 0 is exp(x), and its derivative for x != 0 is 2 * x. Isn’t that what you want?

  • Bob

@Bob_Carpenter I don’t think the derivative at x = 0 is defined. First off, auto-diff and finite-diff would disagree: evaluated at 0, df/dx = 1 according to auto-diff, and it can be any number (and I mean any) with finite-diff, depending on your step size.

I also think the derivative is plain ill-defined at x = 0. Recall the definition of the derivative:

df/dx = lim(h -> 0) [f(x + h) - f(x)] / h

In your example, f(x + h) - f(x) is never an infinitesimal number (at x = 0 it tends to -1), while h is an infinitesimal number. So df/dx blows up.
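This disagreement is easy to reproduce. A sketch in JAX (not Stan) of the piecewise function above, with the if written as a select:

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

def f(x):
    # the piecewise example: exp(x) when x == 0, x^2 otherwise
    return jnp.where(x == 0.0, jnp.exp(x), x ** 2)

g_auto = jax.grad(f)(0.0)    # autodiff follows the x == 0 branch: exp'(0) = 1
h = 1e-3
g_fd = (f(h) - f(0.0)) / h   # (h^2 - 1) / h, which diverges as h -> 0
```

Autodiff reports 1 no matter what; the one-sided finite difference is roughly -1/h and can be made as large (in magnitude) as you like by shrinking h.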

In the case of lag times, what we really have is:

if (tlag == 0) return f(t);
else return f(t - tlag);

Mathematically speaking, this is simply f(t - tlag), and what I want is df(t - tlag) / dtlag at all times. However, at tlag = 0 I get df(t) / dtlag, which is 0 and wrong.
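In JAX-flavored code (not the actual Stan implementation; f(s) = exp(-s) is a stand-in profile), the branch produces exactly this wrong zero at tlag = 0:

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

def f(s):
    # stand-in for the concentration profile
    return jnp.exp(-s)

def branchy(t, tlag):
    # the lag-time branch from above
    return jnp.where(tlag == 0.0, f(t), f(t - tlag))

def direct(t, tlag):
    # mathematically the same function, written without the branch
    return f(t - tlag)

g_branchy = jax.grad(branchy, argnums=1)(2.0, 0.0)  # 0: the taken branch drops tlag
g_direct = jax.grad(direct, argnums=1)(2.0, 0.0)    # exp(-2), the intended value
```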

See the Reasons for using the IF statement in my original post. I’m happy to clarify if needed.

How about a minimal Stan program that illustrates the problem?

Why would you ever write this:

if (tlag == 0) return f(t);
else return f(t - tlag);

instead of just writing f(t - tlag)? The first clause intentionally drops the tlag component.

  • Bob