Expecting time to be greater than 0


I am working with the following model, and when I run it I get the error "Expecting time to be greater than 0":

functions {
  real[] sldecay(real t, real[] y, real[] theta, real[] x_r, int[] x_i) {
    real dydt[3];
    dydt[1] = -theta[1]*y[1];
    dydt[2] = -theta[2]*y[2];
    dydt[3] = theta[1]*y[1] + theta[2]*y[2] - theta[3]*y[3];
    return dydt;
  }
}
data {
  int<lower=1> N;
  int<lower=1> nsubjects;
  int<lower=1> subj_ids[N];
  real<lower=0> y[N];
  real ts[N];
  real t0;
}
transformed data {
  real x_r[0];
  int x_i[0];
}
parameters {
  real logps;
  real logpl;
  real logmuab;
  real logAB0;
  real logps0;
  real logpl0;
  real<lower=0> sigma;
}
transformed parameters {
  real inits[3];
  real mu[N, 3];
  real theta[3];
  real ps = exp(logps);
  real pl = exp(logpl);
  real muab = exp(logmuab);
  real AB0 = exp(logAB0);
  real ps0 = exp(logps0);
  real pl0 = exp(logpl0);
  inits[1] = AB0;
  inits[2] = inits[1]/ps;
  inits[3] = inits[1]/pl;
  theta[1] = ps;
  theta[2] = pl;
  theta[3] = muab;
  mu = integrate_ode_rk45(sldecay, inits, t0, ts, theta, x_r, x_i);
}
model {
  real new_mu[N, 3];
  real eval_time[1];
  // prior distributions
  logps ~ normal(0, 1);
  logpl ~ normal(0, 1);
  logmuab ~ normal(0, 10);
  logAB0 ~ normal(5, 10);
  logps0 ~ normal(0, 1);
  logpl0 ~ normal(0, 1);
  sigma ~ normal(1, 10);

  for (obs_id in 1:N) {
    new_mu[obs_id, 1] = mu[obs_id, 1];
    new_mu[obs_id, 2] = mu[obs_id, 2];
    new_mu[obs_id, 3] = log(mu[obs_id, 3]) - pow(sigma, 2.0)/2;
    // likelihood function for AB levels
    y[obs_id] ~ lognormal(new_mu[obs_id, 3], sigma);
  }
}
generated quantities {
  real z_pred[N];
  real sigma2 = sigma;
  for (t in 1:N) {
    z_pred[t] = lognormal_rng(log(y[t] + 1e-5), sigma2);
  }
}

I have sorted my data (so my times are always increasing). The times in my data range from 0 to 96. Some values, like 0, are repeated across individuals because they indicate the time of the baseline blood sample. Also, I have set t0 equal to 0, and I do not know whether that could be causing any issues. Any ideas on how I could solve this?

There seem to be two separate issues: there are repeated values in the array of time points, and t0 is the same as the time of the first observation – if the second doesn’t raise an error now, I suspect it will once the first is fixed.

If I understand correctly, you have repeated times in your array because there are multiple observations at the same time (for instance, from different individuals assumed to produce observations under the same model). However, the ODE solver requires a strictly increasing array of unique time points, all greater than t0. What you can do instead is compute the solution once over the set of unique time points covering all your observations, and afterwards just pick out the solution value each observation needs and compute the likelihood from that.
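A minimal sketch of that approach in Stan, assuming the data block is extended with a deduplicated, strictly increasing time array and an index mapping each observation to its position in that array (the names `n_unique`, `ts_unique`, and `time_index` are hypothetical, not part of the original model):

```stan
data {
  int<lower=1> N;
  real<lower=0> y[N];
  real t0;
  int<lower=1> n_unique;                       // number of distinct observation times
  real ts_unique[n_unique];                    // unique times, strictly increasing, all > t0
  int<lower=1, upper=n_unique> time_index[N];  // which unique time each observation was taken at
}
// (functions, transformed data, and parameters blocks as in the original model)
transformed parameters {
  // solve the ODE once, only at the unique time points
  real mu[n_unique, 3] = integrate_ode_rk45(sldecay, inits, t0, ts_unique, theta, x_r, x_i);
}
model {
  // each observation looks up the solution at its own time point
  for (obs_id in 1:N)
    y[obs_id] ~ lognormal(log(mu[time_index[obs_id], 3]) - square(sigma)/2, sigma);
}
```

The `time_index` array would typically be built once in R/Python before passing the data to Stan, e.g. by matching each observation time against the sorted unique times.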

As for the choice of t0, it doesn’t seem to be causing this issue, but you should simply choose a value smaller than your first observation. If you are estimating the initial conditions, this will affect the actual values you obtain, but it is not supposed to affect the other parameters in any meaningful way. If instead you are using fixed initial conditions, you should use values that match your choice of initial time (see this thread for some discussion on that).



Thanks. Very useful: it worked when I created an array of times that does not contain the 0. However, these 0 values, which make up 1/3 of my data, indicate the baseline response, and then the times jump to 50 or 90. Would this be affecting the result of the ODE? Is there a way I can just use the original times? Could I specify that t0 is below 0, for instance?

Sure. Just specify t0 = -1e-6 or any other negative value, making sure your initial conditions still make sense.
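In Stan terms that just means making the initial time negative, for instance by moving it out of the data block and hard-coding it (a sketch, assuming you no longer pass t0 as data):

```stan
transformed data {
  real t0 = -1e-6;  // any value strictly below the first observation time (0) works
}
```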

No, it won’t affect the result; you just need to make the ODE solution match the data at the specific time points, which may require an auxiliary function. Suppose your data are observed at times [0, 50, 90] with multiple observations at each time point. Internally the solver will have to generate a solution approximating the continuous solution y between (at least) 0 and 90; for convenience, solvers let you extract the solution at just the time points you want (i.e. y_0, y_{50}, y_{90}), but not at repeated values. With those values, you can write a function that places them in a vector/array y_{obs} matching the observations, e.g. y_{obs} = [y_0, y_0, y_0, y_0, y_{50}, y_{50}, y_{90}], and put it in a statement like data ~ distribution(y_obs).
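A sketch of such an auxiliary function in Stan (the names `gather` and `time_index` are made up for illustration; for the example above, `time_index` would be {1, 1, 1, 1, 2, 2, 3}):

```stan
functions {
  // copy, for each observation, the solver output at that observation's time point
  real[] gather(real[,] sol, int[] time_index, int state) {
    int n = size(time_index);
    real out[n];
    for (i in 1:n)
      out[i] = sol[time_index[i], state];
    return out;
  }
}
```

With the [0, 50, 90] example, `gather(sol, time_index, 3)` returns the third state of the solution repeated to match the seven observations, which can then go straight into a statement like `y ~ lognormal(log(y_obs), sigma)`.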
