integrate_1d relative tolerance argument error

I am trying to use integrate_1d in RStan, but it fails to run and keeps asking for the seventh argument, which I think is the relative tolerance and, according to the manual, is optional.

y_solve = integrate_1d(integrand_func, 0.0, 1.0, parms, x_r, x_i);

The error message I get for this line of code is:

PARSER EXPECTED: ","
Error in stanc(model_code = mc, model_name = "User-defined functions", :
  failed to parse Stan model 'User-defined functions' due to the above error.

Any comments?


I think this was a bug in some version of Stan that has since been fixed, but the default tolerance should be about 1e-8, so just pass that explicitly.

I have the latest version of rstan (2.19.1) and the issue still exists.
Also, when I pass 1e-8 as the seventh argument to the integrate_1d function, I get the following error:

Exception: Exception: integrate: error estimate of integral 8.22352 exceeds the given relative tolerance times norm of integral

I can integrate this function in R or Python perfectly fine without any error. Not sure why it's an issue here. Is it just because the integrator in R is more robust?

Hi @bbbales2, thanks for your response. Not sure if you read my previous comment, but the method you suggested did not work for me. Upon further digging, I found out that I can bring in a custom function from a C++ file within my code. Is there a nicer way to call adaptive trapezoidal quadrature (or any other quadrature, for that matter) from the Boost library and use it in the user-defined functions block?
Thanks in advance! :)


Aaah, sorry, I missed your last message! Can you post your model/code to run it? I’ll have a look.

@bbbales2
Hi Ben, thanks for the interest! here is the code:
stan_tester.stan (4.5 KB)

For now, I am calling the stan function using the expose_stan_functions in Rstan.
stan_expose.R (278 Bytes)

If the math behind it helps: it is the solution of a 3-dimensional PDE,

U(k, a, t) = \theta(t-a)\, \kappa\big(k\, e^{-\beta a},\; t-a\big)\, e^{\beta a}\, e^{-\int_0^a \alpha(x)\, dx} \\ U(t) = \int_0^t \int_{\bar{k}}^1 U(k, a, t)\, dk\, da

where \bar{k} = 1/e.

\kappa is called kdist in the code, and a = age, t = time, k = ki.

Let me know if I am doing something horribly wrong or if anything is not clear.

The error you’re getting basically means that Boost wasn’t able to do the integral and gave up.

The final error estimate it got was not less than tolerance * norm_of_integral (see: https://www.boost.org/doc/libs/1_73_0/libs/math/doc/html/math_toolkit/double_exponential/de_tol.html).
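In pseudocode form, that acceptance test looks like the following (a minimal Python sketch of the check described in the Boost docs, not Boost's actual implementation):

```python
def passes_tolerance(error_estimate, l1_norm, rel_tol=1e-8):
    """Boost's double-exponential quadrature reports an error estimate and
    the L1 norm of the integrand; the result is accepted only if the error
    estimate is below rel_tol times that norm."""
    return error_estimate <= rel_tol * l1_norm

# The exception in this thread corresponds to this check returning False:
passes_tolerance(8.22352, 1.0, rel_tol=1e-8)  # False
```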

I think what is happening is that there are discontinuities in some of these integrands, and that is blowing up the quadrature (which assumes the integrand has some number of derivatives; I'm not sure how many, but dig around in the Boost docs).
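Here's a toy illustration of that effect: a hand-rolled trapezoid rule with successive doublings, standing in for the real quadrature, where I use the difference between doublings as a rough error estimate (an assumption of this sketch). The estimate shrinks much more slowly for a discontinuous integrand than for a smooth one:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n panels."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

def refine(f, a, b, levels=10):
    """Double the panel count repeatedly; return (estimate, error_estimate),
    where the error estimate is the change from the previous refinement."""
    n, prev = 2, trapezoid(f, a, b, 2)
    for _ in range(levels):
        n *= 2
        cur = trapezoid(f, a, b, n)
        err = abs(cur - prev)
        prev = cur
    return prev, err

smooth = lambda x: x * x                    # exact integral on [0, 1]: 1/3
step   = lambda x: 1.0 if x < 0.5 else 0.0  # exact integral on [0, 1]: 1/2

I_s, e_s = refine(smooth, 0.0, 1.0)
I_d, e_d = refine(step, 0.0, 1.0)
print(e_s, e_d)  # the step function's error estimate is orders of magnitude larger
```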

I added a print statement to U_total_integrand to see which values of age it is failing for:

real U_total_integrand(real age, real ac, real[] parms, real[] rdata, int[] idata){
  real time = rdata[1];
  real value;
  print(age);
  value = U_total_age(age, time, parms);
  return value;
}

The output looked like:

> U_total_time(50, param_vec)
25
48.7842
1.2158
Error in U_total_time(50, param_vec) : 

And then I made a plot of U_total_kat:

x = 0.5
time = 50
age = 1.215850

xs = seq(1 / exp(1.0), 1.0, length = 1000)
ys = sapply(xs, function(x) {
  U_total_kat(x, 1 - x, c(param_vec, time, age), c(0.0), c(1))
})
plot(xs, ys, type = "l")

And got this plot:
[plot of U_total_kat against ki, showing a discontinuity]

You'll need to do the integrals of functions with discontinuities in pieces.
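For instance, if the jump location is known, you can integrate each smooth piece separately. A toy Python sketch with a made-up integrand (not the model's actual functions; the jump location c and the trapezoid helper are assumptions of this sketch):

```python
import math

def trapezoid(f, a, b, n=2048):
    """Composite trapezoid rule with n panels."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

c = 0.7                                         # hypothetical jump location
g = lambda x: math.exp(-x) if x < c else 0.0    # drops to zero at x = c
exact = 1.0 - math.exp(-c)                      # exact value of the integral

whole = trapezoid(g, 0.0, 1.0)                  # quadrature straddles the jump
split = trapezoid(lambda x: math.exp(-x), 0.0, c)  # smooth piece only;
# the second piece [c, 1] is identically zero, so it contributes nothing

print(abs(whole - exact), abs(split - exact))   # split is far more accurate
```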

Also, in some of these functions there are things being passed in x_r that I think need to be appended to parms and passed that way.

Like in the U_total_kat function: unless you can guarantee that time and age are never parameters, you might want to add them to parms and do something like:

real U_total_kat(real ki, real ac, real[] parms, real[] x_r, int[] x_i){
  int t0 = 5;
  real time = parms[5];
  real age = parms[6];
  real value;
  if (age < time - t0){
    value = U_theta_akt(age, ki, time, parms);
  } else {
    value = 0;
  }
  return value;
}

Edit: There were some out of date comments at the bottom here that I wrote a few days ago and didn’t post. Just editing them out since they aren’t so relevant now!

@bbbales2 Hi Ben,
Thanks so much for your response and for taking the time to solve this issue. I will incorporate your suggestions and work on the stan code tomorrow.
But I am not sure why the integration is failing, since I can integrate the U_total function in R and Python perfectly fine.
I also wrote the same code in C++ and could successfully integrate the U_total function using the adaptive trapezoidal quadrature from the Boost library.
Attaching the C++ code here for you to look at.
asm_functions.hpp (4.3 KB)
asm_call.cpp (505 Bytes)

I compile asm_call.cpp against a locally downloaded Boost library (boost_1_73_0) and can integrate at all the 'times' and 'ages' I need.

c++ -I boost_1_73_0 asm_call.cpp -o out_asm
Enter time: 300
The value of U_total_time for time 300 is 305001

The C++ values match with the ones I get from R and Python.

Follow up questions:

  1. Why does the Stan integrate_1d function still ask for the rel_tol input? Am I on the wrong Stan version?
  2. What quadrature method does it use? Is it not adaptive?
  3. Can I just use my C++ code to define the functions in the Stan model?

Best,
Sanket.

We’re just using Boost quadrature.

Can you check the convergence of your C++ implementation? Search for “However, for applications with high-reliability requirements” on Trapezoidal Quadrature - 1.73.0.

When this message gets printed:

error estimate of integral 3.7523e-05 exceeds the given relative tolerance times norm of integral

It comes about because the after-the-fact convergence checks fail. The Boost integral will exit fine either way, so you have to check this manually.
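The check-it-yourself pattern looks roughly like this. It is a Python sketch with a hand-rolled trapezoid standing in for boost::math::quadrature::trapezoidal; using the difference between successive doublings as the error estimate is an assumption of the sketch, not Boost's exact bookkeeping:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n panels."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def integrate_checked(f, a, b, rel_tol=1e-8, max_refinements=12):
    """Refine by doubling, then apply the rel_tol * L1 test ourselves
    instead of trusting whatever value the integrator hands back."""
    n, prev = 2, trapezoid(f, a, b, 2)
    for _ in range(max_refinements):
        n *= 2
        cur = trapezoid(f, a, b, n)
        error_estimate = abs(cur - prev)
        l1 = trapezoid(lambda x: abs(f(x)), a, b, n)  # L1 norm of integrand
        if error_estimate <= rel_tol * l1:
            return cur
        prev = cur
    raise RuntimeError(
        f"error estimate {error_estimate:g} exceeds rel_tol * L1 "
        f"({rel_tol * l1:g}) after {max_refinements} refinements")
```

A smooth integrand passes the check quickly, while a discontinuous one exhausts the refinement budget and raises.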

rstan is a bit behind cmdstan. In cmdstanr the argument is optional, but then you don’t get expose_stan_functions, which is probably pretty important for debugging here.


@bbbales2 That makes so much more sense now. Thanks for clarifying!
I checked the error estimate manually, and indeed it was exceeding tolerance * L1.

I found that increasing the max_refinements option makes evaluation very slow, but with max_refinements = 17 I get integrals whose error estimates stay within tolerance (for all the time points that I require).
It's not ideal, but I am not sure if there is another way at the moment. I am definitely looking into simplifying the model; still working on it.

I checked the Stan manual, and the integrate_1d function currently doesn't have a max_refinements argument. Do you recommend an alternative way of working this into Stan?

Probably the way to do this is to work out why the discontinuities appear and then break the integral into two pieces.

It’s probably caused by one of the if statements.