Normal_lpdf: Location parameter[1] is nan, but must be finite! (forced ODE)

Hello everyone!

I am trying to fit a simple forced ODE model of microbial inactivation, as described by Albert and Mafart. The ODE is as follows:

\frac{dN}{dt}=-\frac{\ln(10)}{\delta^p}\,pt^{p-1}\,\left(1-\frac{N_{\text{res}}}{N}\right)\,N

where \delta, p, and N_{\text{res}} are unknown parameters and N is the number of cells.
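For reference, the same right-hand side can be integrated outside Stan, for example with R's deSolve package. This is only a sketch with made-up parameter values (delta = 20, p = 1.2, nres = 1e-4), just to show the shape of the curve:

library(deSolve)

# Albert-Mafart right-hand side; parameter values below are illustrative, not estimates
rhs = function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dN = -log(10) / (delta^p) * p * t^(p - 1) * (1 - nres / N) * N
    list(dN)
  })
}

out = ode(y = c(N = 1),
          times = seq(0, 120, by = 6),
          func = rhs,
          parms = c(delta = 20, p = 1.2, nres = 1e-4))
head(out)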

When trying to estimate these parameters with my small model, I get the following error:

Rejecting initial value:
  Error evaluating the log probability at the initial value.
Exception: normal_lpdf: Location parameter[1] is nan, but must be finite!  (in 'model1738310b51c4_weibull' at line 53)

I have no idea exactly why this is occurring. My first guess was that I had to assign initial values to the x_hat quantities I am trying to estimate, but that seems odd, and it did not work anyway. I found a similar question, but it does not help with my problem.

Below are my Stan code and the toy dataset that I am using for testing the model.

Thanks in advance

PS: This is the first time I am dealing with forced ODEs, so I would be really glad for any comments about my code in that regard.


functions {
  real[] forced_ode(real t, real[] x, real[] theta, real[] y_r, int[] y_i) {
    real dxdt[1];
    real ft;
    ft = t; // the forcing is simply time itself

    // theta[1] = delta, theta[2] = p, theta[3] = N_res
    dxdt[1] = -log(10)/(theta[1]^theta[2]) * theta[2] * ft^(theta[2]-1) * (1-theta[3]/x[1]) * x[1];

    return dxdt;
  }
}

data {
  int<lower=1> T;
  real<lower=0> x[T,1];
  real t0;
  real ts[T];
  real x0;
}

transformed data {
  real y_r[0];
  int y_i[0];
}

parameters {
  real <lower=0> sigma;
  real delta;
  real p;
  real nres;
}

transformed parameters {
  real x_hat[T,1];
  {
    real theta[3];
    theta[1] = delta;
    theta[2] = p;
    theta[3] = nres;

    x_hat = integrate_ode_rk45(forced_ode, rep_array(x0, 1), t0, ts, theta, y_r, y_i, 1.0E-6, 1.0E-6, 1.0E7);
  }
}

model {
  delta ~ normal(0.5, 0.5);
  p ~ normal(0.5, 0.5);
  nres ~ normal(0.5, 0.5);
  sigma ~ cauchy(0, 2.5);

  for (t in 1:T)
    x[t] ~ normal(x_hat[t], sigma);
}
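In case it is useful for debugging, the ODE right-hand side can also be called directly from R via rstan::expose_stan_functions. This is a sketch only: it assumes the program above is saved as weibull.stan, and the theta values are illustrative, not estimates.

library(rstan)

# compile and expose the user-defined Stan functions to the R session
expose_stan_functions("weibull.stan")

# evaluate the derivative at a single time point (theta = delta, p, nres)
forced_ode(6, c(0.25), c(0.5, 0.5, 0.001), numeric(0), integer(0))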


rm(list = ls())

library(rstan)

T = 14
xi_r = 63679552.09 #initial number of cells
t0 = 0
ts = c(6,12,18,24,30,36,42,48,54,60,66,72,96,120)
x = structure(c(15922087.27,3019951.72,680769.3587,4405548.635,550807.6964, #time series
                36982.81798,17864.87575,400866.7176,146892.6278,25061.09253,
                59292.53246,63241.18514,7345.138682,7030.723199),.Dim=c(14,1))

x = x / xi_r # scale the time series by the initial count so that it starts at 1

x0 = 1 #scaled initial value of the time series


ini = function() {
  list(delta = abs(rnorm(1, 0.5, 0.2)),
       p     = abs(rnorm(1, 0.5, 0.2)),
       nres  = abs(rnorm(1, 0.5, 0.2)),
       sigma = abs(rnorm(1, 0.5, 0.5)))
}

fit = stan("weibull.stan",
            data=c("x0", "t0", "ts", "x", 'T'),
            control=list(adapt_delta=0.95,
                         stepsize=0.01,
                         max_treedepth=10),
            warmup = 150,
            init = ini,
            refresh = 2,
            #cores = min(4, parallel::detectCores()),
            chains=1, iter=300, seed=100785)

save(fit, file="test.Rsave")

Starting your model at t0 = 0 makes it evaluate to -Inf. It seems to run OK with t0 = 1.

Thanks for the suggestion; changing t0 = 0 to t0 = 1 makes the program run with no errors. However, given that my data start at t = 0, wouldn't that change the interpretation of the parameter estimates? Do you know if there is a way to tell Stan not to evaluate the log probability at the initial value, or whether that would even be correct?

t0 = 0 only causes a problem when p < 1, because then t^{p-1} at t = 0 is 0^{p-1} = \frac{1}{0^{1-p}}, a division by zero, which is undefined. You could try changing your initial values or the bounds on p. You will have a better idea than I do of what values are sensible.
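A quick illustration in R (p = 0.5 and p = 1.2 are just example values):

p = 0.5
0^(p - 1)                            # Inf: t^(p-1) is not finite at t = 0 when p < 1
-log(10) / (0.5^p) * p * 0^(p - 1)   # -Inf, so the solver returns non-finite states and normal_lpdf sees nan

p = 1.2
0^(p - 1)                            # 0: finite, so t0 = 0 is harmless once p >= 1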