# Change from global model fit to one-step-ahead prediction

Hi all!

I’m trying to build a fitting model with an ODE solver, wherein the data I’m feeding the model is a stochastic time series that includes oscillations, while the process model is a deterministic time series that begins with damped oscillations before reaching an equilibrium value. For clarity, the stochastic data was generated using the same process model included in my fitting model, but was made stochastic via the inclusion of demographic stochasticity.

Because of the mismatch between the stochastic data and the deterministic model, I’ve had a lot of trouble with model-fitting. It was then suggested to me that I may be able to resolve my fitting issue by going for one-step-ahead prediction, rather than a global model fit. Essentially, I want estimation at a given time step to be dependent on the previous time step, rather than on the model’s estimation history.

My initial thought is that this could be done using a loop in the transformed parameters block, but I’m really not sure.

Conceptually, this all makes a lot of sense to me, but coding it? Another story entirely.

I’ve included my model below, and would desperately appreciate any advice people may have! :)
(P.S. I’m using rstan)

functions{
  real[] dZ_dt(real t,       // Time
               real[] Z,     // System state {Parasite, Host}
               real[] theta, // Parameters
               real[] x_r,   // Real data; below is integer data
               int[] x_i){
    real P = Z[1]; // System state coded as an array, such that Z = (P,H)
    real H = Z[2];

    real r = theta[1]; // Parameters of the system, in the order they appear
    real O = theta[2];
    real h = theta[3];
    real b = theta[4];
    real c = theta[5];
    real u = theta[6];

    real dP_dt = P*r - H*(O*P/(1 + O*P*h)); // Mechanistic model
    real dH_dt = b + H*(c*(O*P/(1 + O*P*h)) - u);
    return {dP_dt, dH_dt}; // Return the state derivatives
  }
}

data{
  int<lower=0> N;      // Number of observations (non-negative integer)
  real ts[N];          // Observation time points
  real y0[2];          // Initial conditions for the ODE
  real t0;             // Initial time point
  int<lower=0> y[N,2]; // Observed counts; non-negative integers
}

parameters{
real<lower=0>r;
real<lower=0,upper=1>O;
real<lower=0>h;
real<lower=0>b;
real<lower=0>c;
real<lower=0,upper=1>u;
}

transformed parameters{
  // Stiff solver (backward differentiation formula) for the ODE
  real Z[N,2]
    = integrate_ode_bdf(dZ_dt, y0, t0, ts, {r, O, h, b, c, u},
                        rep_array(0.0, 2), rep_array(0, 2),
                        1e-10, 1e-10, 2e4);
  for (i in 1:N) {
    Z[i,1] = Z[i,1] + 1e-6; // Nudge solutions away from zero so the Poisson rates stay positive
    Z[i,2] = Z[i,2] + 1e-6;
  }
}

model{ // Ignore the means/variances for now...
r~normal(2.5,1); // r
O~beta(2,2); // O; bounded between 0 and 1
h~normal(0.06,1); // h
b~normal(35,1); // b
c~normal(0.2,1); // c
u~beta(2,2); // u; bounded between 0 and 1
for (k in 1:2){
y[ ,k]~poisson(Z[ ,k]);
}
}

generated quantities {
int y_rep[N, 2];
for (k in 1:2) {
for (n in 1:N)
y_rep[n,k] = poisson_rng(Z[n,k]);
}
}

stanc("Stan_Model_TypeII.stan") # Check that the model file parses

Stan_Model_TypeII <- stan_model("Stan_Model_TypeII.stan")

# Squeezing the data into a form that Stan gets
N <- length(Stoch_Data_TypeII$t)-1 # df is 2923; N is 2922
ts <- 1:N
y_init <- c(10,350) # Initial states, P = 10; H = 350
y0 <- array(y_init) # For the ODE solver
t0 <- 0.1
y <- as.matrix(Stoch_Data_TypeII[2:(N+1),3:4])
y <- cbind(y[ ,1],y[ ,2]); # This worked, sick; where y[,1] is P, and y[,2] is H
Stan_StochData_TypeII <- list(N=N,ts=ts,y0=y0,t0=t0,y=y)

# Fitting the data to the model
Stan_Model_TypeII <- stan_model("Stan_Model_TypeII.stan")
# Note: seed, check_data, etc. are top-level arguments to sampling(), not control options
fit9 <- sampling(Stan_Model_TypeII,
data = Stan_StochData_TypeII,
warmup = 200,
iter = 400,
chains = 2,
cores = 2,
thin = 1,
seed = 123,
check_data = TRUE,
diagnostic_file = "/Users/mjarviscross/Desktop/GitHub/FcnalRespProj/Fit9.csv",
show_messages = TRUE,
verbose = TRUE,
control = list(max_treedepth = 15))


Hi, sorry for letting your question sit there for so long.

To be clear - this is the same model as discussed in Model fitting and sampling issue: Only 1 chain sampling properly? And the “one-step ahead” prediction we speak about is from this suggestion?

Anyway, this looks like quite a model; you’ve chosen a formidable foe to conquer :-) Before going deep into guesses about potential technical solutions, I think it might be useful to take a few steps back and really get to understand what each of the parameters in the ODE is doing (you might have this intuition already). The point is that the problem may be mathematical, and it is easy to spend a lot of time fiddling with the implementation only to discover that the model is not identifiable or has other issues. The priors you’ve chosen suggest that some of the parameters might be problematic for sampling.

Could you generate a bunch of plots on how the trajectories of the ODE (ignoring the Poisson noise for now) change when you change each of the parameters? Do the parameters have biological meaning you could briefly explain? Especially, as @stijn suggested before (if I understood him right) look for degenerate cases where you get a very similar ODE solution with very different parameters (something like what I did for the sole ODE model I actually understand at https://www.martinmodrak.cz/post/2018-04-18-stancon/stancon.html#identifiability )

Best of luck with your model!


Hi @martinmodrak––thanks for the response!

This is the same model as the one in the discussion you linked. By one-step-ahead prediction, I mean making a change to the way the ODE solver generates solutions. As is, the model uses all previous estimates to inform a given current estimate. It was suggested to me that, given the dynamical complexity of my process model, this method may be inappropriate, and that asking the ODE solver to solve one time step at a time may allow the model to make more accurate estimates.

I was thinking of doing this in the transformed parameters block, or by changing this model from a continuous time model to a discrete time model, but haven’t figured out a way of doing either. I’m coming back to this project today, so will hopefully be able to provide an update soon.
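The scheme I have in mind, sketched numerically outside of Stan (Python here just for illustration; the parameter values and the noisy “data” below are invented), is that the prediction for each step is integrated only from the observed state at the previous step:

```python
import numpy as np

# Hypothetical parameter values, purely for illustration
def rhs(z, r=2.5, O=0.5, h=0.06, b=35.0, c=0.2, u=0.3):
    P, H = z
    g = O * P / (1 + O * P * h)  # type II functional response
    return np.array([P * r - H * g, b + H * (c * g - u)])

def step(z, dt, n=100):
    """Integrate one observation interval with n fixed RK4 sub-steps."""
    dh = dt / n
    for _ in range(n):
        k1 = rhs(z)
        k2 = rhs(z + 0.5 * dh * k1)
        k3 = rhs(z + 0.5 * dh * k2)
        k4 = rhs(z + dh * k3)
        z = z + dh / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

ts = np.arange(0.0, 5.0, 0.5)
rng = np.random.default_rng(1)
# Fake noisy observations standing in for the stochastic time series
y = np.column_stack([10 + np.abs(rng.normal(0, 2, len(ts))),
                     350 + np.abs(rng.normal(0, 20, len(ts)))])

pred = np.empty_like(y)
pred[0] = y[0]
for i in range(1, len(ts)):
    # The prediction at step i depends only on the observation at step i-1,
    # not on the model's own estimation history
    pred[i] = step(y[i - 1].copy(), ts[i] - ts[i - 1])
```

So rather than one global trajectory from t0, the solver is re-initialised at every observed data point.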

Thank you again! :)


OK, I think I see what you are aiming for - the “one step ahead” or moving to discrete time would then be a way to introduce stochasticity into the model dynamics (it looks like an approximation to solving a stochastic differential equation). Is that correct? Still, I think getting a good understanding of the base ODE is crucial - if there is a fundamental non-identifiability, it may quite possibly not go away after introducing this additional stochasticity.

Yes––here’s an updated transformed parameters block, which will hopefully show you what I’m trying to do:

transformed parameters{
// Stiff solver; backwards differentiation formula
for (i in 2:N){ // Start at the 2nd time step because the first is given by the data
real Z[i,2]
= integrate_ode_bdf(dZ_dt, // Function
y[i-1], // Initial value (empirical data point at previous time step)
0, // Current time step; keep this at 0
ts[i], // Next time step (time step to be solved/estimated)
{r, O, h, b, c, u},
rep_array(0.0,2),rep_array(0,2),1e-10,1e-10,2e4);
}
}


And yes, I agree with you RE the non-identifiability issue. I’m working through @stijn’s suggestion now!

The ODE describes the population dynamics of a parasite (P) and of the infected host’s immune cells (H), wherein:
• r = the intrinsic growth rate of P, the parasite
• O = the rate at which H, the immune system, correctly identifies an individual P
• h = the time it takes for an individual H to destroy an individual P (the time it takes for an immune cell to destroy a parasite)
• b = the base immigration rate of immune cells to the infection site in the absence of infection
• c = the proliferation rate of the immune system in response to phagocytic signalling
• u = the natural mortality rate of immune cells

I was thinking I could provide the model with a couple of the parameters, e.g. u, whose values would likely be known.

Thanks so much for sticking with me!

So, I’m not sure if this solves any problems, but I wonder if this might be a more appropriate way to structure the for loop, because here you’re defining the Z array outside the loop and then assigning into it inside the loop.

transformed parameters{
// Stiff solver; backwards differentiation formula
real Z[N-1,2];
for (i in 2:N) { // Start at the 2nd time step because the first is given by the data
Z[i-1,] = integrate_ode_bdf(dZ_dt, // Function
y[i-1], // Initial value (empirical data point at previous time step)
0, // Current time step; keep this at 0
ts[i], // Next time step (time step to be solved/estimated)
{r, O, h, b, c, u},
rep_array(0.0,2),rep_array(0,2),1e-10,1e-10,2e4);
}
}


However, I’m unsure how to make the ts[i] argument work, because when I tried running it this way it threw the following error:

SYNTAX ERROR, MESSAGE(S) FROM PARSER:
Fourth argument to integrate_ode_bdf must have type real[ ];  found type = real.
error in 'model36181be04908_OSA_Cont_FAIL_Poisson' at line 49, column 51
-------------------------------------------------
47:     ts[i-1], // Next time step (time step to be solved/estimated)
48:     {r, O, h, b, c, u},
49:     rep_array(0.0,2),rep_array(0,2),1e-10,1e-10,2e4);
^
50:   }
-------------------------------------------------

PARSER EXPECTED: ")"
Error in stanc(filename, allow_undefined = TRUE) :
failed to parse Stan model 'OSA_Cont_FAIL_Poisson' due to the above error.


I’m unsure how to solve that, as ts is defined as a real array, but someone else might have some thoughts on that?
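One guess (a sketch only, not tested): integrate_ode_bdf wants its fourth argument to be an array of times, but a single element ts[i] is a real scalar, so wrapping it in a one-element array expression might satisfy the parser. The integer observations also need to be copied into a real array before being used as initial conditions, and the solver's output (an array of states, one per requested time) has to be indexed. Something like:

```
transformed parameters{
  real Z[N-1,2];
  for (i in 2:N) {
    real z0[2];    // previous observation, promoted from int to real
    real sol[1,2]; // solver output for the single requested time
    z0[1] = y[i-1,1];
    z0[2] = y[i-1,2];
    sol = integrate_ode_bdf(dZ_dt, z0,
                            0,                 // integrate from time 0...
                            {ts[i] - ts[i-1]}, // ...to the next step (one-element array)
                            {r, O, h, b, c, u},
                            rep_array(0.0,2), rep_array(0,2),
                            1e-10, 1e-10, 2e4);
    Z[i-1] = sol[1];
  }
}
```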

Best of luck!

Reposting the model formulation for reference.

So I can already see some possible non-identifiabilities (you should however verify those with simulations, e.g. using the deSolve package, I am just guessing here):

• if \theta is close to 0, h can take almost any value (this should be visible when plotting posterior of \theta against h as some sort of “funnel” or other weird shape). In plain words: if the immune system doesn’t recognize the parasite at all we can’t learn how quickly it kills it.
• if h is large, \theta can take almost any value. In plain words: if the immune system takes forever to kill the parasite we can’t learn how quickly it recognizes it.
• \theta and h only ever influence the results via g(P,\theta,h) = \frac{\theta P}{1 + \theta P h}, so, especially when P ends up not changing much, \theta and h can vary a lot while giving the same g(P,\theta,h). In plain words - if we don’t observe the parasite population changing, we can’t learn whether it is because it is quickly recognised but killed slowly or slowly recognized but killed quickly (or something in between)
• when the observed H stays close to zero for most of the time, c, \theta, h, \mu end up not really influencing the likelihood (and can thus take basically any value). In plain words, if we observe very few immune cells, we can’t learn much about their dynamics.
• when the observed P stays close to zero most of the time, r, \theta, h, c end up not really influencing the likelihood. In plain words, if we observe very few parasites, we can’t learn about their dynamics.
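If it helps, the second bullet is easy to check numerically outside of Stan (deSolve would do the same job in R; here is a rough sketch in Python with a hand-rolled RK4 and invented parameter values): with h large, a ten-fold change in \theta barely moves the trajectory:

```python
import numpy as np

def rhs(z, r, O, h, b, c, u):
    P, H = z
    g = O * P / (1 + O * P * h)  # type II functional response
    return np.array([P * r - H * g, b + H * (c * g - u)])

def simulate(params, z0, t_end=3.0, dt=1e-3):
    """Fixed-step RK4 integration (a crude stand-in for the BDF solver)."""
    z = np.array(z0, dtype=float)
    for _ in range(int(t_end / dt)):
        k1 = rhs(z, *params)
        k2 = rhs(z + 0.5 * dt * k1, *params)
        k3 = rhs(z + 0.5 * dt * k2, *params)
        k4 = rhs(z + dt * k3, *params)
        z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

# Two parameter sets (r, O, h, b, c, u) that differ ten-fold in O
# but share a large h, so O*P*h >> 1 and g(P) ~ 1/h in both cases
z0 = (10.0, 350.0)
traj1 = simulate((2.5, 0.5, 100.0, 35.0, 0.2, 0.3), z0)
traj2 = simulate((2.5, 5.0, 100.0, 35.0, 0.2, 0.3), z0)
max_rel_diff = float(np.max(np.abs(traj1 - traj2) / np.abs(traj1)))
```

If max_rel_diff comes out tiny, the likelihood really can't distinguish the two \theta values in this regime, which is exactly the degeneracy described above.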

If those are indeed the problems, it is still hard to figure out what to do about it. Reparametrizing via q = \theta h might help a bit. Restricting the priors for \theta and h to avoid the degenerate cases might also help (but only if you can be sure those values really don’t occur in your experiment)
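In Stan, such a reparametrization might look something like this (a sketch only; it uses the fact that \theta P / (1 + \theta P h) = \theta P / (1 + qP) when q = \theta h, and leaves the prior for q unspecified):

```
parameters{
  real<lower=0,upper=1> O; // recognition rate, as before
  real<lower=0> q;         // q = O * h, so the response is O*P / (1 + q*P)
}
transformed parameters{
  real h = q / O; // recover h to pass to the ODE right-hand side
}
```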

Also avoiding data that don’t demonstrate all of the dynamics can help: e.g. if P is well fit by an exponential curve, it admits the solution with \theta = 0 and the experiment thus can’t inform the other parameters. Similarly when P is roughly constant, the experiment just can’t inform both \theta and h.

One way to approach this in a bit more systematic way would be to try a set of models of increasing complexity, something like:

a)

\frac{dP}{dt} = Pr

b)

\frac{dP}{dt} = Pr - Hg_{const} \\ \frac{dH}{dt} = b + H(cg_{const})

and c) the original model. (More intermediate steps can also be conceived, e.g. b) but ignoring the c parameter, or the full model but ignoring \mu.)

If one of the simpler models fits the data well, it would mean that you probably can’t really learn much new information by fitting a more complex model, as the more complex models have many ways to emulate the same dynamics as the simpler models.

Does that make sense?


Very sorry for the late response, and thanks very much for the advice!!

I think your idea about O and h is likely part of the problem! I tried estimating O and h together, as you suggested, with no luck so far, but I’m still fiddling.

I’ve also considered simplifying the model, so thank you so much for suggesting a way forward in that regard!

Thank you for all your help with this model––I’ve learned a lot.

I’m still trying to get a one-step-ahead forecasting model working, and will update this thread with that if I figure it out!
