# ODE system unable to solve exclusively one time point past initialization

Hello, all,

I’m fitting a simple ODE system in Stan, where multiple ODEs with different initial conditions (but the same generating parameter value) are fit simultaneously to obtain posteriors for the ODE parameter(s). This works great when there are multiple time points, but I’m getting a dimensionality error when trying to fit to a single time point, i.e., fit to a single iteration of the ODE system (not including the initialization). I’m using data simulated in R, and I’ve posted the R and Stan code below. I’ve simplified the ODE model for this post, and the same error manifests.

Here is the error:

```
Error in mod$fit_ptr() :
  Exception: mismatch in number dimensions declared and found in context; processing stage=data initialization; variable name=th; dims declared=(1); dims found=()  (in 'model395849d81bbc_ExpDecay_StanExample' at line 23)
```

As you can see, the error centers on the dimensionality of `th[Tm]`. `th` is the array of times at which to solve the system, with length `Tm`. The error manifests when `Tm = 1`. I don’t think there’s any mathematical reason why an ODE system can’t be solved for a single time point.

Note that in the R code `R0 <- seq(50, 300, length.out=5)` gives the initial conditions of the ODE system, and `t.list <- seq(1, 50, length.out=2)` is the vector of times. A `t.list` of length > 2 works just fine (with one exception, noted next). `t.list <- c(1, 1.5, 100)` works, but `t.list <- c(1, 1, 100)` does not. In every case, the first time point is extracted to serve as `t0`, the initializing time point for the ODE system.
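To make the failing case concrete, here is what `th` ends up looking like in R when only one time point remains after extracting `t0` (values shown just for illustration):

```
## with t.list <- seq(1, 50, length.out=2), i.e. c(1, 50):
th <- t.list[-1]   ## the single remaining time point, 50
length(th)         ## 1
dim(th)            ## NULL -- a plain length-1 numeric, with no dim attribute
```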

Thank you very much in advance for any insight you all may have!

Here is the full R code to simulate the ODE system:

```
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## ~~~~~~~~~~~ Simulate an ODE to fit in Stan ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

## open ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rm(list=ls())
library(rstan)
library(dplyr)
library(deSolve)
library(ggplot2)
library(reshape)
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

## Simulate ODE ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## params
params <- c(k=.025)

## create a list of initial conditions
R0 <- seq(50, 300, length.out=5)

## specify ODE function
resourceLoss <- function(t, R0, params) {
  with(as.list(c(R0, params)), {
    dR <- R0 * -k
    list(c(dR))
  })
}

## set vector of times
t.list <- seq(1, 50, length.out=2)

## run ODE
out <- ode(y=c(R0), times=t.list, func=resourceLoss, parms=params)

## save as data frame
loss <- round(as.data.frame(out),3)

## save list without Times for later use
dRdt.list <- round(out[,-1],3)

## create and add prefix for col names
loss.dat <- as.data.frame(dRdt.list)
colnames(loss.dat) <- paste("T", colnames(loss.dat), sep="")

## plot all dRdt
all_dRdt <- melt(loss, id.vars="time")

tplot <- ggplot(all_dRdt, aes(time, value, col=variable, group=variable)) +
  geom_point() + geom_line(color="black") +
  xlab("Time") + ylab("Density") + ggtitle("Remaining density = R0 - dRdT")

print(tplot)
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

## set dims for matrix and set sigma
rws=length(t.list)  ## number of rows ie data points to simulate
cls=ncol(dRdt.list)  ## number of columns ie separate FR runs
sd=5

set.seed(10)

## apply rnorm noise to deterministic data
mult_ODE <- function(dat, noise) {
  noise.out <- rnorm(n=length(dat), mean=dat, sd=noise)
  return(matrix(noise.out, nrow=rws, ncol=cls, byrow=FALSE))
}

noisy.dat <- round(mult_ODE(dat=dRdt.list, noise=sd), digits=3)

## replace noisy row 1 with deterministic R0
noisy.dat[1,] <- dRdt.list[1,]

## create data frame for plotting, add time as a column
noise.dat.df <- as.data.frame(noisy.dat)
noise.dat.df <- cbind(loss$time, noise.dat.df)
names(noise.dat.df)[1]<-"time"

## plot
melt_noise <- melt(noise.dat.df, id.vars="time")

noise_plot <- ggplot(melt_noise, aes(time, value, col=variable, group=variable)) +
  geom_point() + geom_line(data=all_dRdt, aes(time, value, group=variable), col="black") +
  xlab("Time") + ylab("Density") + ggtitle("Density remaining = R0 - dRdT + noise")

print(noise_plot)
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

## configure data list for STAN ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## extract deterministic list of initial conditions
R0.list <- as.numeric(loss.dat[1,])

## list for Stan
resourceLoss.dat <- list(Tm = nrow(loss.dat)-1,
                         t0 = t.list[1],
                         th = t.list[-1],
                         nLev = ncol(noisy.dat),
                         R0_obs = R0.list,
                         R_obs = noisy.dat[-1,])
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

## Stan ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#setwd("")

## run STAN
fit1 <- stan(
  file = "ExpDecay_StanExample.stan",
  data = resourceLoss.dat,
  chains = 1,
  warmup = 1000,
  iter = 2000,
  cores = 1,
  refresh = 10)
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

## basic posterior check ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
print(hist <- stan_hist(fit1, pars = c("k")))

## print point estimates for parameters
print(fit1, c("k"))
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

```

And here is the full Stan script:

```
functions {
  real[] resourceLoss(real t,        // time
                      real[] R,      // state
                      real[] theta,  // params
                      real[] x_r,
                      int[] x_i) {
    real dRdt[x_i[1]];
    real k;
    k = theta[1];

    for (i in 1:x_i[1]) {
      dRdt[i] = R[i] * -k;
    }
    return dRdt;
  }
}
data {
  int<lower=1> Tm;            // number of times to solve for, i.e. length(t.list) - 1
  int<lower=1> nLev;          // number of different R0, i.e. # of ODEs
  real R0_obs[nLev];          // initial conditions R0 list
  real<lower=1> t0;           // starting time t = 1
  real<lower=t0> th[Tm];      // array of times for ODE, length Tm
  real R_obs[Tm, nLev];       // simulated data
}
transformed data {
  real x_r[0];
  int x_i[1];
  x_i[1] = nLev;
}
parameters {
  real<lower=0> k;
  real<lower=0> sigma;
}
transformed parameters {
  real theta[1];
  theta[1] = k;
}
model {
  real R[Tm, nLev];

  R = integrate_ode_rk45(resourceLoss, R0_obs, t0, th, theta, x_r, x_i);

  k ~ normal(0, 5);
  sigma ~ normal(0, 5);

  for (t in 1:Tm) {
    for (i in 1:nLev) {
      R_obs[t, i] ~ normal(R[t, i], sigma);
    }
  }
}
```

Thanks, everyone!

Sounds like a data export problem due to R making your vector of length 1 into a scalar?

Thanks @wds15, that seems likely. Can you think of a workaround where a vector of length 1 isn’t considered a scalar? Or is there some other way of approaching this problem that I’m not seeing?

I can sidestep the problem by adding a time point just after the initialization, e.g. `t.list <- c(1, 1.5, 100)`, and treating the state values at `t = 1.5` as virtually identical to those at the initialization at `t = 1`, but that seems like a somewhat inelegant way of approaching this problem.

You could do the splitting into t0 and th in Stan, then you would always pass a vector of at least length two.

A fix of the underlying problem would of course be better…

Edit: Also, for unfathomable reasons, the integration time points have to be strictly larger than the initial time. This may be the reason why the other error occurs.
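For concreteness, a rough, untested sketch of that splitting (the names `T_all` and `ts` are just placeholders; you'd pass the full time vector, always length >= 2, from R):

```
data {
  int<lower=2> T_all;   // total number of time points, including the initial one
  real ts[T_all];       // full time vector from R
}
transformed data {
  real t0 = ts[1];
  real th[T_all - 1];
  for (t in 2:T_all)
    th[t - 1] = ts[t];
}
```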


Make it an "array" of length 1 in R.


Thank you both, @Funko_Unko and @wds15!

Making `th` an array in R did the trick, once I also made `R_obs` an array, such that my data list in R is:

```
## list for Stan
resourceLoss.dat <- list(Tm = nrow(loss.dat)-1,
                         t0 = t.list[1],
                         th = array(t.list[-1], dim=c(1)),
                         nLev = ncol(noisy.dat),
                         R0_obs = R0.list,
                         R_obs = array(noisy.dat[-1,], dim=c(1,5)))
```
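For what it's worth, subsetting with `drop=FALSE` should achieve the same thing for the matrix case (`th` still needs `array()`, since a plain R vector carries no `dim` attribute):

```
resourceLoss.dat <- list(Tm     = nrow(loss.dat) - 1,
                         t0     = t.list[1],
                         th     = array(t.list[-1], dim = 1),
                         nLev   = ncol(noisy.dat),
                         R0_obs = R0.list,
                         R_obs  = noisy.dat[-1, , drop = FALSE])
```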

Thanks again you two, much appreciated :-)


I don’t know R at all, but it transforms a list to a scalar if it has length one? Or only if it’s the result of an operation as above? How counterintuitive…

@wds15 could likely provide a more precise response, but my understanding is that because `th[Tm]` had a length of `1` (since `Tm` was an integer valued at `1`, for the single time point), it defaulted to a scalar in Stan (not in R). I don’t know exactly why transforming `th` into an array in R gets around that, but it did.

No…R is guilty here! Check what `class` in R has to say about your objects…R can be tricky with types.

ahh okay, thank you for the clarification!