The integral is over x and its lower limit goes all the way to zero, so the quadrature will evaluate the integrand at points very close to zero to get the integral.
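To get a feel for how close those evaluation points get: assuming the integrator uses Boost-style double-exponential (tanh-sinh) nodes, here is a Python sketch of the node-to-endpoint distance. The substitution below is the textbook one, not necessarily Stan's exact internals:

```python
import math

# Distance from a tanh-sinh quadrature node to the endpoint of (-1, 1),
# using the textbook substitution x = tanh((pi/2) * sinh(t)).
# The distance to the endpoint shrinks double-exponentially in t.
for t in [1.0, 2.0, 3.0, 4.0]:
    x = math.tanh((math.pi / 2.0) * math.sinh(t))
    print(f"t = {t}: 1 - x = {1.0 - x:.3e}")

# By t = 4 the computed node rounds to exactly 1.0, so 1 - x underflows
# to 0.0 in double precision -- which is why the integrator has to track
# the distance to the endpoint separately.
```

So even a modest number of quadrature levels puts nodes far closer to the endpoint than a double can distinguish from the endpoint itself.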

If the sampler got to values of alpha that small, you might also have problems, but that isn't guaranteed the way the integral evaluations are: your posterior could be a long way from zero, or, more likely, the sampler never lands within something like 1e-17 of alpha = 0.0 (or whatever small number would trigger underflow).

Yes, from that documentation I infer that, if both endpoints are finite (say [a, b]), then xc = x - a for arguments closer to a, and xc = b - x for arguments closer to b.
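As a minimal sketch of that reading (the helper name `xc_for` is mine, not Boost's; the real xc is constructed inside Boost at node-generation time, which is what keeps it accurate when b - x would round off):

```python
def xc_for(x: float, a: float, b: float) -> float:
    """Hypothetical mimic of the xc argument: the complement of x
    relative to the nearest finite endpoint of [a, b]."""
    # nearer to a -> distance from a; nearer to b -> distance from b
    return x - a if (x - a) <= (b - x) else b - x

print(xc_for(0.2, 0.0, 1.0))  # near a: xc = x - a
print(xc_for(0.9, 0.0, 1.0))  # near b: xc = b - x
```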

So the Stan docs example could be rewritten as:

real beta(real x, real xc, real[] theta, real[] x_r, int[] x_i) {
  real alpha = theta[1];
  real beta = theta[2];
  real v;
  if(x > 0.5) {
    // near the upper endpoint: xc carries 1 - x at full precision
    v = x^(alpha - 1.0) * xc^(beta - 1.0);
  } else {
    // near the lower endpoint: xc carries x - 0 = x at full precision
    v = xc^(alpha - 1.0) * (1.0 - x)^(beta - 1.0);
  }
  return v;
}
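Not Stan, but here's a direct Python translation of that integrand, handy for sanity-checking the branch logic against plain evaluation at a point away from the endpoints (parameter values here are arbitrary):

```python
def beta_integrand(x, xc, alpha, beta):
    """Python mirror of the Stan integrand: substitute xc for whichever
    factor is close to its endpoint."""
    if x > 0.5:
        return x ** (alpha - 1.0) * xc ** (beta - 1.0)          # xc stands in for 1 - x
    else:
        return xc ** (alpha - 1.0) * (1.0 - x) ** (beta - 1.0)  # xc stands in for x

# Away from the endpoints, xc and the naive complement coincide, so the
# branchy version must match direct evaluation of x^(a-1) * (1-x)^(b-1).
x, alpha, beta = 0.3, 2.5, 3.5
direct = x ** (alpha - 1.0) * (1.0 - x) ** (beta - 1.0)
assert beta_integrand(x, x - 0.0, alpha, beta) == direct
```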

Although I don’t (yet) know what the behavior of xc is when the distances to the two endpoints are equal. (There must be some tie-breaking logic in there…)
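Whatever the tie-breaking rule turns out to be, for this integrand it can't matter: at x = 0.5 on [0, 1] the distance to either endpoint is 0.5, and the two branches compute the same value. A quick check (shape parameters arbitrary):

```python
alpha, beta = 2.5, 3.5  # arbitrary shape parameters
x = 0.5
xc = 0.5  # distance to either endpoint of [0, 1] at the midpoint
upper = x ** (alpha - 1.0) * xc ** (beta - 1.0)          # the x > 0.5 branch
lower = xc ** (alpha - 1.0) * (1.0 - x) ** (beta - 1.0)  # the x <= 0.5 branch
assert upper == lower  # identical, bit for bit
```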

Yeah, the model you wrote looks right. I think for a = 0 the complement logic isn’t necessary: xc == x for x < 0.5, since the distance to the lower endpoint is just x - 0 = x.
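Concretely: when the lower endpoint is a = 0, the subtraction x - a is a no-op in floating point, so xc is bit-for-bit equal to x on the lower half and the else branch reduces to the plain kernel:

```python
a = 0.0
for x in [1e-300, 1e-17, 0.2, 0.49]:
    xc = x - a  # distance to the lower endpoint
    assert xc == x  # exact: subtracting 0.0 changes nothing
print("xc == x across the lower half when a = 0")
```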

Regarding the switching point, I think it’s pretty arbitrary. The precision difference for this integral only matters close to the endpoints, so you could probably change the condition to if(x > 0.01) and it wouldn’t matter.

Well, never mind lol. That’s definitely wrong, because the definition of xc is controlled by Boost, not by the branch in the integrand. But I think Boost could move that threshold and it wouldn’t matter.
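One way to convince yourself the threshold location is immaterial: by Sterbenz's lemma, for doubles with 0.5 <= x <= 1 the subtraction 1.0 - x is exact, so computing the complement yourself loses nothing as long as the node x is itself representable away from the endpoint. xc only adds information when the node sits closer to the endpoint than the floating-point spacing allows. A quick empirical check using exact rational arithmetic:

```python
from fractions import Fraction
import random

random.seed(0)
# Sterbenz's lemma: if y/2 <= x <= 2y, then x - y is exact in IEEE
# arithmetic. With y = 1 that covers every double x in [0.5, 1].
for _ in range(1000):
    x = random.uniform(0.5, 1.0)
    assert Fraction(1) - Fraction(x) == Fraction(1.0 - x)  # no rounding error
print("1.0 - x was exact for 1000 random x in [0.5, 1]")
```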