ODEs with time dependent parameters

Are there any examples of solving ODEs with time-dependent parameters (e.g. y’(t) = a(t)*y(t) + b(t)) using integrate_ode_rk45(function ode, real[] initial_state, real initial_time, real[] times, real[] theta, real[] x_r, int[] x_i) in Stan?

Assuming a prior a(t) ~ normal(10, 5), how can we adjust the input theta? I am confused, as it is required to be of type real[], but shouldn’t theta[1] = a be an array of length N (corresponding to the N time points)?

If the problem you want to solve has a parametric representation for a(t) and b(t), then just push the parameters through in the theta variables and then evaluate the specific values of a(params, t) and b(params, t) in the ode function you supply.
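For instance, if a(t) and b(t) are both taken to be linear in t, a sketch of the system function might look like the following (the function name `decay` and the linear forms are illustrative choices here, not something from the original question):

```stan
functions {
  // Hypothetical linear forms: a(t) = a0 + a1 * t, b(t) = b0 + b1 * t.
  // The four coefficients are packed into theta by position.
  real[] decay(real t,
               real[] y,
               real[] theta,
               real[] x_r,
               int[] x_i) {
    real dydt[1];
    real a = theta[1] + theta[2] * t;  // evaluate a(t) at the solver's t
    real b = theta[3] + theta[4] * t;  // evaluate b(t) at the solver's t
    dydt[1] = a * y[1] + b;
    return dydt;
  }
}
```

theta then has just four entries (two intercepts, two slopes), declared as ordinary parameters with whatever priors you like, and the call is still integrate_ode_rk45(decay, y0, t0, ts, theta, x_r, x_i).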

If you’re trying to non-parametrically estimate a(t) and b(t), you’ll have to do something different.

The problem with passing in values of a(t) and b(t) evaluated at fixed time points is that the ODE solvers use variable step sizes, so you don’t know in advance which time points they will need to evaluate them at.

Honestly, this will probably be easier with a parametric form of a(t) and b(t). Both are definitely possible, but the non-parametric approach is harder.

Hope that helps!

edit: rather, I didn’t really give any examples here. So maybe that’s not that useful. Do you have a particular example in mind? Maybe I could be more specific then.


You get time as an argument, so you can use it in the system definition.

real[] is the type of real-valued arrays. It’s up to you to pack what you need into theta and then unpack it in the system function.

Thank you both for your answers.
Borrowing the ODE example from the Stan documentation, I have an ODE system with two time-dependent parameters to be inferred. I just assume a normal prior over the parameters theta. I have changed the code as below but cannot make it work:

functions {
  real[] sho(real t,
             real[] y,
             real[] theta,
             real[] x_r,
             int[] x_i) {
    real dydt[2];

    dydt[1] = y[2];
    dydt[2] = -theta[1:t] * y[1] - theta[(t + 1):(2 * t)] * y[2];

    return dydt;
  }
}
data {
  int<lower=1> T;
  real y[T, 2];
  real t0;
  real ts[T];
}
transformed data {
  real x_r[0];
  int x_i[0];
}
parameters {
  real y0[2];
  vector<lower=0>[2] sigma;
  real theta[2, T];
  real a[T];
  real b[T];
}
model {
  real y_hat[T];
  sigma ~ cauchy(0, 2.5);
  a ~ normal(0, 1);
  b ~ normal(0, 1);
  y0 ~ normal(0, 1);
  theta[1:T] = a;
  theta[T + 1:2 * T] = b;

  y_hat = integrate_ode_rk45(sho, y0, t0, ts, theta, x_r, x_i);
  for (t in 1:T)
    y[t] ~ normal(y_hat[t], sigma);
}

I know my syntax is wrong but was hoping that you can point me to the correct solution.

That would involve way too much guesswork given our time constraints. Your system equations aren’t even close to well formed; I’d suggest learning Stan’s syntax by starting with simpler examples and building up to what you’re trying to do.

I don’t know what you expect to happen when multiplying a slice of an array by a scalar and then assigning it to a scalar. There’s a chapter in the manual on arrays vs. vectors.

The following code is syntactically correct. However, I am not sure it is doing what I want, which is to assume a different theta for each time point, with a normal prior. If I try to call it from R, it keeps rejecting the initial value.

functions {
  real[] sho(real t,
             real[] y,
             real[] theta,
             real[] x_r,
             int[] x_i) {
    real dydt[2];

    dydt[1] = y[2];
    dydt[2] = -theta[1] * y[1] - theta[2] * y[2];

    return dydt;
  }
}
data {
  int<lower=1> T;
  real y[T, 2];
  real t0;
  real ts[T];
}
transformed data {
  real x_r[0];
  int x_i[0];
}
parameters {
  real y0[2];
  vector<lower=0>[2] sigma;
  real a[T];
  real b[T];
}
transformed parameters {
  real theta[2];
  real y_hat[T, 2];

  for (i in 1:T) {
    theta[1] = a[i];
    theta[2] = b[i];
    y_hat = integrate_ode_rk45(sho, y0, t0, ts, theta, x_r, x_i);
  }
}
model {
  sigma ~ cauchy(0, 2.5);
  a ~ normal(0, 1);
  b ~ normal(0, 1);
  y0 ~ normal(0, 1);

  for (t in 1:T)
    y[t] ~ normal(y_hat[t], sigma);
}

[edit: escaped code so symbols would show up]

which is assume a different theta for each time point that has a normal prior

You need more than this. Your question is clear, it’s just really hard to answer. Honestly, the type of modeling you’re talking about is hard. Not a Stan starter problem, really.

ODEs in general are a more difficult breed of model to get working right.

With time varying constants it’s going to be harder.

The easiest way to do this is just assume a parametric form for your a(t). Something like:
a(t) = a0 * t + a1 * t^2

maybe? Then try to estimate the parameters a0 and a1 (put normal priors on them or whatever).
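Concretely, a sketch of the thread’s oscillator with that quadratic form could look like the following (everything here is illustrative; theta[1:4] play the roles of a0, a1 and the analogous coefficients for b(t)):

```stan
functions {
  real[] sho(real t,
             real[] y,
             real[] theta,
             real[] x_r,
             int[] x_i) {
    real dydt[2];
    // a(t) = a0 * t + a1 * t^2, and similarly for b(t)
    real a = theta[1] * t + theta[2] * t^2;
    real b = theta[3] * t + theta[4] * t^2;
    dydt[1] = y[2];
    dydt[2] = -a * y[1] - b * y[2];
    return dydt;
  }
}
data {
  int<lower=1> T;
  real y[T, 2];
  real t0;
  real ts[T];
}
transformed data {
  real x_r[0];
  int x_i[0];
}
parameters {
  real y0[2];
  vector<lower=0>[2] sigma;
  real theta[4];  // a0, a1, b0, b1
}
model {
  real y_hat[T, 2];
  sigma ~ cauchy(0, 2.5);
  theta ~ normal(0, 1);
  y0 ~ normal(0, 1);
  y_hat = integrate_ode_rk45(sho, y0, t0, ts, theta, x_r, x_i);
  for (t in 1:T)
    y[t] ~ normal(y_hat[t], sigma);
}
```

The key difference from the per-time-point attempt above is that theta is now a fixed, small set of coefficients, so the solver can evaluate a(t) and b(t) at any t it steps to.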

That function probably doesn’t make sense for your application – but what problem are you working on? Is there any functional form that makes sense here? The time varying component is scary anyway. Usually you want these things to be a function of the state of your system (which itself is time varying).

If you want to work with non-parametrics, you’ll probably end up with something that looks like a Gaussian Process, written out with some sort of Mercer expansion so you can handle the issue that the ODE solver will need to evaluate your function a(t) at times which might not be in your output (edit: decided this bit wasn’t necessary). But you’re going to have a better time doing the parametric thing first.
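For what it’s worth, one purely illustrative version of that expansion idea is to write a(t) as a small weighted sum of fixed basis functions, so the solver can evaluate it at any t; the weights, rather than pointwise values of a, become the parameters (the cosine basis and the names here are my own choices):

```stan
functions {
  // a(t) as a finite cosine series on [0, L]; w holds the basis weights.
  real a_of_t(real t, real[] w, real L) {
    real a = w[1];
    for (k in 2:num_elements(w))
      a = a + w[k] * cos(pi() * (k - 1) * t / L);
    return a;
  }
}
```

Giving the weights priors whose scales shrink with k induces smoothness in a(t), which is one way to avoid the independent-normals discontinuity problem.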

I’m not super experienced with this stuff either, but you’ll just end up stubbing your toes if you try to go too far too fast on this :P.

(edit: for instance, you almost definitely don’t want the values of a(t) to be distributed as independent normals for all t – you probably want some covariance so that things aren’t super discontinuous)

I edited your post to escape the code, as the operators were being suppressed.

There’s no time-dependence in your ODE.

In order to initialize properly, everything needs to be the right size.

I’m not sure why you keep resetting y_hat in a loop—that’s almost certainly a bug.