Incomplete beta function causing hypergeometric gradient error

I have a normalising factor for a probability distribution that requires evaluating an incomplete beta function. When I run the model I get the error "grad_2F1: k (internal counter) exceeded 100000 iterations, hypergeometric function gradient did not converge". Is this a known error? The parameter bounds are 0 < R < ∞ and 2 < K < 10.

```stan
functions {
  // log of the normalising factor, in terms of R = VOUT / VIN
  real log_funcF0(real VIN, real VOUT, real K) {
    real R = VOUT / VIN;
    return -log(VIN) + log(inc_beta(1.0 / K, 1.0 - 1.0 / K, R^K / (1.0 + R^K))) - log(K * R);
  }
}
```
I managed to work around this bug with some mathematical trickery. You can rewrite the incomplete beta using the identity below, and then you don't get the non-convergence errors in this region of parameter space.

inc_beta(a, b, x) = 1 - inc_beta(b, a, 1 - x)

Edited: I had to adjust the identity to match Stan's definition of the incomplete beta function (`inc_beta` is the regularized incomplete beta).
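As a quick numerical sanity check: the regularized incomplete beta satisfies I_x(a, b) = 1 - I_{1-x}(b, a), which is the form that applies to Stan's `inc_beta` (`scipy.special.betainc` is also regularized), e.g.:

```python
from scipy.special import betainc

# a = 1/K, b = 1 - 1/K for a K value from the model above
a, b = 1.0 / 4.97921, 1.0 - 1.0 / 4.97921
for x in (0.1, 0.5, 0.9, 0.999999):
    lhs = betainc(a, b, x)
    rhs = 1.0 - betainc(b, a, 1.0 - x)  # reflected form
    assert abs(lhs - rhs) < 1e-12
```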

This is something we can fix internally. It would help if you could print out some parameter values (the values you call inc_beta with) in your program that cause the error.

Here are a couple of examples, where R and K refer to the parameterisation above.

R = 10.4216, K = 4.97921
R = 12.2444, K = 6.14438

I can generate more examples if needed!
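For what it's worth, plugging these values into the expression above shows that the third argument of `inc_beta` ends up extremely close to 1 while a = 1/K is small; this is my reading of why the hypergeometric gradient series converges slowly here, not a statement about Stan's internals:

```python
examples = [(10.4216, 4.97921), (12.2444, 6.14438)]  # the reported failure cases
for R, K in examples:
    x = R**K / (1.0 + R**K)  # third argument of inc_beta in the model
    print(f"R={R}, K={K}: a={1.0 / K:.4f}, 1-x={1.0 - x:.3e}")
    assert 1.0 - x < 1e-4  # x is within 1e-4 of 1 for both cases
```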


This is likely good enough, since the rest of the calculation is deterministic. The underlying functions are computed by a few different algorithms depending on the region of parameter space, so it's helpful to have a starting point.