I recently received a private email regarding numerical integration [so it wasn’t just @Bob_Carpenter; my monthly private email count has increased by ∞%: from 0 to 1, but with such high overdispersion I’m unsure there’s a model to decide whether it was caused by Discourse]. Since it’s a recurrent topic, I think it’s worth sharing a summary of the points (I was going to forward the email some days ago, but Discourse messed up the inline threads and I gave up), even if it might not be useful to the OP:
If you want to use the implementation I wrote (and you know how to compile a custom version of Stan for the interface you use), it’s in branch GitHub - stan-dev/stan at feature/integrate-function-parser-doc for stan-dev/stan and branch https://github.com/stan-dev/math/tree/feature/issue-210-numerical-integration for stan-dev/math. However, there are two problems with this implementation: first, the parser doesn’t allow the function at all, and I don’t know why since I know very little about the parser. The function generator works, though, so all you need to do is force the parser to let invalid code proceed to the compiler. The code in the branch already does that for you, but be careful: other invalid code that you mistakenly write will also pass. The second problem is the lack of a relative tolerance (only an absolute tolerance is supported).
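To see why only having an absolute tolerance can hurt, here is a small Python sketch using SciPy’s quadrature (not the branch above; the integrand and tolerances are made up for illustration). When the integral is large, a fixed absolute tolerance implicitly demands many more significant digits than you probably intended, while a relative tolerance keeps the requested precision proportional to the result:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical integrand with values around 1e6: an absolute tolerance of
# 1e-8 would effectively demand ~14 significant digits here, whereas a
# relative tolerance of 1e-8 asks for 8 digits regardless of magnitude.
f = lambda x: 1e6 * np.exp(x)

# Request 8 significant digits via the relative tolerance only.
val, err_est = quad(f, 0.0, 1.0, epsabs=0.0, epsrel=1e-8)

exact = 1e6 * (np.e - 1.0)  # closed form of the integral over [0, 1]
```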
On the other hand, one potential advantage of this implementation is that it lets the user set the gradient function manually, which I believe might make a difference in some cases. I don’t know how its algorithm differs from the ODE solver’s, though (I would love to hear the opinions of experts here).
However, I recently discovered, thanks to @wds15, that it’s possible to do one-dimensional integration using Stan’s ODE solver, and I have since used it in a Python package I’m working on. Here’s the relevant part:
functions {
  real lkk(real x, real[] beta) {
    real result_;
    result_ = 0;
    for (i in 1:num_elements(beta)) {
      if (i % 2 == 0)
        result_ = result_ + beta[i] * cos(i * pi() * x);
      else
        result_ = result_ + beta[i] * sin((i + 1) * pi() * x);
    }
    return exp(result_ * sqrt2());
  }
  // ODE system: the state f is unused, lkk(x, beta) is the derivative
  real[] integ(real x, real[] f, real[] beta, real[] x_r, int[] x_i) {
    real dfdx[1];
    dfdx[1] = lkk(x, beta);
    return dfdx;
  }
}
data {
  int<lower=1> nmaxcomp;
}
transformed data {
  // empty data arrays required by the ODE solver signature
  real x_r[0];
  int x_i[0];
}
parameters {
  vector[nmaxcomp] beta1;
}
transformed parameters {
  real lognormconst1;
  lognormconst1 =
    log(integrate_ode_rk45(integ, {0.0}, 0, {1.0},
                           to_array_1d(beta1),
                           x_r, x_i, 1.49e-08, 1.49e-08, 1e7)[1, 1]);
}
If you want to use the implementation I wrote, then here’s an example:
functions {
  real ftft(real x, vector param) {
    return exp(x) + param[1]^2 + 2 * param[2];
  }
  // gradient of ftft with respect to param[indic]
  real gradff(real x, vector param, int indic) {
    if (indic == 2)
      return 2;
    else
      return 2 * param[1];
  }
}
parameters {
  vector[2] p;
}
model {
  p ~ normal(0, 1);
  // both calls compute the same integral; integrate_1d_grad additionally
  // takes the user-supplied gradient function
  target += integrate_1d(ftft, 0, 1, p);
  target += integrate_1d_grad(ftft, gradff, 0, 1, p);
}
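You can also sanity-check a hand-written gradient function outside Stan, in the same spirit as test_grad: compare the integral of the analytic gradient against a central finite difference of the integral itself. A Python sketch, with ftft/gradff transcribed by hand and an arbitrary test point (both the point and the step size are made up):

```python
import numpy as np
from scipy.integrate import quad

# Hand-transcribed versions of the Stan functions above.
def ftft(x, p):
    return np.exp(x) + p[0] ** 2 + 2.0 * p[1]

def gradff(x, p, indic):
    # derivative of the integrand w.r.t. param[indic] (1-based, as in Stan)
    return 2.0 if indic == 2 else 2.0 * p[0]

p = np.array([0.7, -0.4])  # arbitrary test point
eps = 1e-6
results = {}
for indic in (1, 2):
    # gradient of the integral = integral of the gradient of the integrand
    analytic, _ = quad(lambda x: gradff(x, p, indic), 0.0, 1.0)
    # central finite difference of the integral itself
    dp = np.zeros(2)
    dp[indic - 1] = eps
    hi, _ = quad(lambda x: ftft(x, p + dp), 0.0, 1.0)
    lo, _ = quad(lambda x: ftft(x, p - dp), 0.0, 1.0)
    results[indic] = (analytic, (hi - lo) / (2.0 * eps))
```

If the two numbers disagree beyond finite-difference noise, the gradient function is wrong.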
You should be able to compile and use it with the master (not the develop!) branch of RStan and PyStan.
integrate_1d is very slow and should be used only for preliminary testing; after that you should stick with integrate_1d_grad (you can use Stan’s test_grad to check that you set the gradient function correctly).