I just checked: this model throws an error in rstan for me (on R 3.6):

> model = stan_model("stan_tester.stan")
Error in compileCode(f, code, language = language, verbose = verbose) :
Compilation ERROR, function(s)/method(s) not created! file7e7217b0649.cpp:842:37: error: no matching function for call to 'integrate_1d'
stan::math::assign(y_solve, integrate_1d(N_total_integrand_functor__(), 0.0, time, parms, static_cast<std::vector<local_scalar_t__> >(stan::math::array_builder<local_scalar_t__ >().add(0.0).array()), static_cast<std::vector<int> >(stan::math::array_builder<int >().add(0).array()), *pstream__, 1E-4));

It compiles in cmdstanr, though, which makes me think this is a bug that has since been fixed. rstan is a little behind cmdstanr right now. Hopefully it catches up soon, but in the meantime can you use cmdstanr for your analysis: https://mc-stan.org/cmdstanr/ ?

(Also, I suspect this wasn’t a problem in 2.19, but when you upgraded R you had to rebuild rstan, which upgraded it to 2.21, and that is why the problem popped up now.)

You might also try downgrading to rstan 2.19 for the time being if that makes life easier. I imagine expose_stan_functions is quite handy for the model you’re working on.

@bbbales2
Downgrading to rstan 2.19 didn’t solve the issue but I am happy with cmdstan for now.
However, I am running into some serious error messages and warnings while fitting the models in which I use the integrate_1d function, suggesting that the Metropolis proposal is about to be rejected. I checked my code thoroughly and verified that none of my functions are discontinuous or running into singularities.
To troubleshoot further, I wrote a Stan program that numerically integrates a simple exponential decay.
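(For reference, this test case can be checked outside Stan too: the integral has the closed form ∫₀ᵀ e^{−kt} dt = (1 − e^{−kT})/k. A minimal Python sketch with hypothetical values for the rate k and endpoint T, using composite Simpson’s rule:)

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

k, T = 0.5, 10.0  # hypothetical decay rate and upper limit
numeric = simpson(lambda t: math.exp(-k * t), 0.0, T)
analytic = (1 - math.exp(-k * T)) / k
print(numeric, analytic)  # the two agree to ~1e-10
```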

This is the error message I get for every chain, repeated many times (4 long pages of it).

Chain 3 Informational Message: The current Metropolis proposal is about to be rejected because of the following issue:
Chain 3 Exception: Exception: Error in function tanh_sinh<double>::integrate: The tanh_sinh quadrature evaluated your function at a singular point and got inf. Please narrow the bounds of integration or check your function for singularities. (in '/tmp/RtmpuljRnk/model-5e7132d4f9ef.stan', line 7, column 4 to column 17) (in '/tmp/RtmpuljRnk/model-5e7132d4f9ef.stan', line 7, column 4 to column 17)
Chain 3 If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
Chain 3 but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
Chain 3

Can you say more about what makes it hard to troubleshoot in CmdStanR? If you have suggestions we can hopefully work on them to make it easier. Thanks!

Because they happen early in adaptation and go away later, they’re probably fine to ignore if you have to. They do mean something is a little fragile in the model though.

Sometimes it’s not possible to get rid of these and you just have to live with it. For this model though, if you put an upper bound on log_phi the errors go away.

Something that happens a lot in Stan is that exponentiated quantities blow up.

I didn’t dig all the way down to find what was happening exactly, but if phi0 is something crazy like 1e200 (which might happen early in adaptation), maybe the integrals blow up and NaNs appear and the integrator gets confused.

This integral might work better by either modeling phi0 on a linear scale or moving it outside the integral. I realize this is a test though and so the second thing might not be realistic.
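(To see why a wildly large phi0 can poison the integrand, here is a small Python illustration, not the actual model; the value 1e200 is just the kind of extreme draw that can occur early in adaptation. Once two huge terms meet, inf and nan appear, and a quadrature rule evaluating such an integrand has nothing sensible to work with:)

```python
phi0 = 1e200       # an extreme value, as might occur early in adaptation
big = phi0 * phi0  # 1e400 does not fit in a double: overflows to inf
print(big)         # inf
diff = big - big   # inf - inf is nan
print(diff)        # nan
# Once the integrand returns nan at even one quadrature node, the
# integrator's error estimate is nan too and it cannot recover.
```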

Also you can make time data in your function:

real phi_time(data real time, real[] parms) {
  real y_solve = integrate_1d(phi_integrand, 0.0, time, parms, {0.0}, {0});
  return y_solve;
}

It didn’t change anything for me but may as well code it this way.

That makes sense! Irrespective of the warnings, my models ran fine and gave reasonable fits. I was under the impression that the sampler doesn’t wander far from the priors in parameter space.

What’s the difference between data real versus real? Is it explained in the updated manual? I will check.

It might be hard to search for, but it should be in there somewhere. An argument marked data is a special indicator that basically means we’ll never need to take a derivative with respect to it (like things in the data or transformed data blocks).

Arguments not marked data are either things we need to compute a gradient with respect to, or things we need to pass autodiff information through in order to compute gradients of something else. There’s some extra hidden work for those.
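(Stan’s reverse-mode autodiff is more elaborate than this, but a toy forward-mode sketch in Python shows the overhead in miniature: a parameter carries a derivative slot through every operation, while a data argument is a plain number with no bookkeeping.)

```python
class Dual:
    """Toy forward-mode autodiff number: a value plus a derivative slot."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __mul__(self, other):
        if isinstance(other, Dual):  # both operands carry derivatives
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        # other behaves like 'data': a plain float, no derivative to track
        return Dual(self.val * other, self.der * other)

    __rmul__ = __mul__

theta = Dual(3.0, 1.0)  # a parameter: d(theta)/d(theta) = 1
time = 2.0              # 'data': a plain float, no autodiff bookkeeping

y = theta * time        # only theta's derivative is propagated
print(y.val, y.der)     # 6.0 2.0
```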

In my experience, the warning and error messages I got from cmdstanr were a bit ambiguous. For example, I had forgotten to define a parameter, but the program did not point to it and just showed that all chains had failed. In some other cases it did point out what was wrong but associated it with the wrong line in the Stan code. I understand it’s going to improve.

I really like it. It’s quite neat and clean, and allows for customization. I definitely see great potential, especially for integrating custom functions written in C++.

I’ll add that the lack of expose_stan_functions hits hard for stuff like this.

It can be hard to debug stuff like this in R itself; having fancy functions like an integrator nested inside another fairly complicated thing like Stan makes life even harder.

Oh yeah that’s something we need to work on. For now you can get more information for a particular chain using fit$output(chain_id). So fit$output(1) for chain 1.

@jonah
Is it possible to recover the parameter values and initial conditions for an iteration for which the sampler is failing?

Chain 3 Rejecting initial value:
Chain 3 Error evaluating the log probability at the initial value.
Chain 3 Exception: Exception: Exception: Exception: integrate: error estimate of integral above zero 5.00586e-09 exceeds the given relative tolerance times norm of integral above zero (in '/tmp/RtmpAnWvKa/model-3fad4361b76ae.stan', line 254, column 4 to column 19) (in '/tmp/RtmpAnWvKa/model-3fad4361b76ae.stan', line 254, column 4 to column 19) (in '/tmp/RtmpAnWvKa/model-3fad4361b76ae.stan', line 254, column 4 to column 19) (in '/tmp/RtmpAnWvKa/model-3fad4361b76ae.stan', line 254, column 4 to column 19)
Chain 3 Exception: Exception: Exception: Exception: integrate: error estimate of integral above zero 5.00586e-09 exceeds the given relative tolerance times norm of integral above zero (in '/tmp/RtmpAnWvKa/model-3fad4361b76ae.stan', line 254, column 4 to column 19) (in '/tmp/RtmpAnWvKa/model-3fad4361b76ae.stan', line 254, column 4 to column 19) (in '/tmp/RtmpAnWvKa/model-3fad4361b76ae.stan', line 254, column 4 to column 19) (in '/tmp/RtmpAnWvKa/model-3fad4361b76ae.stan', line 254, column 4 to column 19)
Chain 3 Initialization between (-2, 2) failed after 100 attempts.
Chain 3 Try specifying initial values, reducing ranges of constrained values, or reparameterizing the model.
Chain 3 Initialization failed.
Warning: Chain 3 finished unexpectedly!

I am running into a very strange error while sampling in CmdStan.

Exception: Exception: Exception: Exception: gradient_of_f: The gradient of f is nan for parameter 4 (in 'models/asm/asm_working.stan', line 439, column 4 to column 17)

Here is the full model and fake data used for fitting – asm_working.stan (15.2 KB) artf_data.csv (1.4 KB)
Of course, I use a .json file when sampling in CmdStan.

Do you know what may cause this? All the functions evaluate fine when used with expose_stan_functions. In fact, the artf_data.csv file is generated using this model in R by exposing the Stan functions, evaluating them at the given time points, and adding some noise. All the functions work over a wide range of parameter values. I fail to understand where exactly the sampler is encountering the error.

Thanks for all the help so far. I need to nail this down.

var fx = f(x, xc, theta_var, x_r, x_i, msgs);
fx.grad();
gradient = theta_var[n].adj();
if (is_nan(gradient)) {
  if (fx.val() == 0) {
    gradient = 0;
  } else {
    throw_domain_error("gradient_of_f", "The gradient of f", n,
                       "is nan for parameter ", "");
  }
}
return gradient;

So if the function is f(\theta_1, \theta_2, ...), \frac{\partial f}{\partial \theta_4} is evaluating to NaN (probably due to an overflow) and f is evaluating to something non-zero. It’s pretty common for gradients to be NaN at limits of integration, but as long as f is zero we ignore it. My first reaction was to look for integrations with infinite limits, but since this is nested finite integrals I don’t know where to start looking either.
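(The logic of the C++ snippet above, paraphrased in Python; the f and gradient values here are hypothetical, just to show when the exception fires:)

```python
import math

def check_gradient(f_val, grad, n):
    """Paraphrase of integrate_1d's gradient_of_f check from the C++ above:
    a nan gradient is tolerated only where f itself is exactly zero."""
    if math.isnan(grad):
        if f_val == 0.0:
            return 0.0  # e.g. at a limit of integration: ignore the nan
        raise ValueError(f"The gradient of f is nan for parameter {n}")
    return grad

print(check_gradient(0.0, float("nan"), 4))  # 0.0: nan where f vanishes is fine
try:
    check_gradient(1.5, float("nan"), 4)     # f nonzero: the reported error
except ValueError as e:
    print(e)
```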

One of your comments says PDE solution. What PDE is this solving? Is there a way to discretize this spatially and solve it with the ODE solver?

Otherwise, it’s probably rather crude, but just do the integrals with the ODE solver. This problem happened yesterday: Difference in behavior between integrate_1d and integrate. I think we need to do something different for our integration. This quadrature just blows up too much, and it’s nigh impossible to debug.

I think the problem is in the solution to one of the boundary conditions.

Otherwise it’s probably rather crude but just do the integrals with the ODE solver.
I managed to use the ODE solver to do the integration and it works like magic. Everything is working fine now, and quite fast too. Thanks for all the help. The solution to the integrals matches what I get in R and Python.
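(For anyone following along, the trick is to recast y(T) = ∫₀ᵀ f(t) dt as the one-state ODE dy/dt = f(t) with y(0) = 0, and let the ODE solver accumulate the integral. A minimal classical-RK4 sketch in Python; the exponential-decay integrand is just an example, and in Stan itself you would hand the same one-state system to the built-in ODE solver:)

```python
import math

def integrate_via_ode(f, T, steps=1000):
    """Treat y(T) = integral of f(t) from 0 to T as the ODE
    dy/dt = f(t), y(0) = 0, advanced with classical RK4."""
    h = T / steps
    t, y = 0.0, 0.0
    for _ in range(steps):
        k1 = f(t)
        k2 = f(t + h / 2)  # f doesn't depend on y here, so k2 == k3
        k3 = f(t + h / 2)
        k4 = f(t + h)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

k, T = 0.5, 10.0  # hypothetical decay rate and upper limit
numeric = integrate_via_ode(lambda t: math.exp(-k * t), T)
analytic = (1 - math.exp(-k * T)) / k
print(numeric, analytic)  # agree to high accuracy
```

Since this integrand does not depend on y, RK4 degenerates into plain quadrature, but the same recasting works unchanged when the integral appears inside a larger ODE system.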

I think the integrate_1d function in Stan is not correctly interpreting the error messages from the Boost library integrators.

Thanks again @bbbales2 for taking the time and helping me solve this!!!

There’s been a small pile of people reporting problems with integrate_1d. It feels like something isn’t quite right. (Edit: and a small pile, for a feature that isn’t used a lot, seems like a lot.)

We investigated and found a problem with how the tolerances were working in the integrator. So this error:

integrate: error estimate of integral above zero 5.00586e-09 exceeds the given relative tolerance times norm of integral above zero (in '/tmp/RtmpAnWvKa/model-3fad4361b76ae.stan', line 254, column 4 to column 19)

will hopefully go away in the future (there’s a patch in develop now, but it’ll be another couple of months before it goes out in a release; see here for details: https://github.com/boostorg/math/issues/449).
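(Reading the error message literally, the failing condition is of the form error_estimate > rel_tol × norm. A schematic version in Python, not Boost’s actual implementation, just to show why a tiny absolute error estimate can still fail when the integral’s norm is itself tiny:)

```python
def tolerance_check(error_estimate, l1_norm, rel_tol):
    """Schematic version of the check behind the quoted error message
    (NOT Boost's actual code): the absolute error estimate must not
    exceed the requested relative tolerance scaled by the integral's norm."""
    return error_estimate <= rel_tol * l1_norm

# With the error estimate from the quoted message (5.00586e-09):
print(tolerance_check(5.00586e-09, 1e-3, 1e-8))  # False: 5e-9 > 1e-11
print(tolerance_check(5.00586e-09, 1.0, 1e-8))   # True:  5e-9 <= 1e-8
```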