Something related to Gamma functions hangs

I recall an issue popping up where Gamma function calculations would hang Stan, but I can’t recall whether there was any more info than that on it. I can’t find it on the list now, and I’m hoping somebody else remembers better. I have a model that works great except for the occasional hang, and since I rely on the gamma functions quite a bit I’m assuming that’s the problem. Just checking whether anybody knows about this before I dig in?

This one?

I believe I wrote a couple tests on branch:
feature/issue-27-internal_math_tests

It has to be way out of date now. But hopefully you can just copy whatever tests I wrote, if they are useful. (I don’t know if they are – it was an old me that wrote them.)

It’s likely that the hypergeometric functions will have to be rewritten so that their derivatives are more stable, as I did for the regularized incomplete beta function. To test this, simply have the current implementations print out the current value when the loop goes beyond a certain length; you should see that the problem is limited to certain extreme values. Unfortunately it’ll be a mess of math, but I can share the analytic results from the beta case in case anyone wants to try.
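The diagnostic Michael describes can be sketched roughly like this. This is an illustrative 2F1-style power-series loop, not Stan’s actual implementation; the function name, the iteration threshold, and the convergence tolerance are all made up for the example:

```cpp
#include <cmath>
#include <iostream>

// Hypothetical power-series loop for a 2F1-like function with a
// diagnostic guard: once the iteration count passes a threshold, dump
// the current state so the problematic input region can be identified.
double series_2F1_like(double a, double b, double c, double z) {
  double term = 1.0;  // current term of the series (k = 0)
  double sum = 1.0;   // running partial sum
  for (int k = 0; k < 100000; ++k) {
    // term recurrence for 2F1(a, b; c; z)
    term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z;
    sum += term;
    if (k > 1000)  // diagnostic: loop is running far longer than usual
      std::cerr << "k=" << k << " term=" << term
                << " sum=" << sum << '\n';
    if (std::fabs(term) < 1e-15 * std::fabs(sum))
      return sum;  // converged
  }
  return sum;  // never converged: an extreme-input case
}
```

Running this over the failing inputs should show whether the hang is really a non-converging (or very slowly converging) series at extreme values.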

Thanks guys, that’s super helpful. I’ll give it a shot in the next few days. Michael, it would be useful to see what you did for the beta.
Krzysztof

better_incomplete_beta.pdf (73.6 KB)

The important next step is computing the ratio of S and F by pulling out common terms in the two power series.
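A minimal sketch of the general idea, assuming S and F are two power series built from a shared sequence of terms (the actual series for the incomplete beta are in the attached PDF; the names `series_ratio`, `t`, and the log weight here are stand-ins for illustration only):

```cpp
#include <cmath>

// Illustrative only: if S = sum_k w_k * t_k and F = sum_k t_k share the
// common terms t_k, both can be accumulated in a single pass by updating
// t_k incrementally, and the ratio S / F formed at the end. This avoids
// evaluating two nearly-identical series separately and keeps the common
// (possibly huge or tiny) factors from being computed twice.
// Example common term: t_k = z^k / k!; example weight: w_k = log(1 + k).
double series_ratio(double z, int max_k) {
  double t = 1.0;  // common term t_0
  double S = 0.0;  // w_0 = log(1) = 0, so t_0 contributes nothing to S
  double F = t;
  for (int k = 1; k <= max_k; ++k) {
    t *= z / k;                   // update the shared term t_k
    F += t;                       // F accumulates the bare terms
    S += t * std::log(1.0 + k);   // S accumulates the weighted terms
  }
  return S / F;
}
```

The payoff is that any factor common to both series cancels in the ratio, so it never needs to be computed at full magnitude.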

Michael, which functions did you already implement these improvements for?

This test fails (by a very slim margin), and I think it relies on the grad_2F1 function (I tested the exact input values grad_2F1 gets called with, and grad_2F1 is fine at that point), so I think this is just a previously hidden inaccuracy being tickled. I just want to know whether I can rule out some functions to check on a first pass.

TEST(ProbInternalMath, inc_beta_ffv_3rdderiv1) {
  using stan::math::fvar;
  using stan::math::var;
  fvar<fvar<var> > a_ffv = 1.0;
  fvar<fvar<var> > b_ffv = 1.0;
  fvar<fvar<var> > g_ffv = 0.4;
  a_ffv.d_ = 0.0;
  b_ffv.d_ = 1.0;
  b_ffv.val_.d_ = 1.0;
  g_ffv.d_ = 0.0;

  fvar<fvar<var> > z1 = stan::math::inc_beta(a_ffv, b_ffv, g_ffv);

  AVEC y1 = createAVEC(b_ffv.val_.val_);
  VEC grad1;
  z1.d_.d_.grad(y1, grad1);
  EXPECT_NEAR(0.079976746033671442, grad1[0], 1e-6);
}

Oh, never mind: this is autodiffing grad_2F1 more than once. I just adjusted the 1e-6 tolerance to 1.05e-6, since the real fix is to write an analytic gradient for grad_2F1.

Yes, there’s no reason to expect that autodiff’ing through these functions will yield accurate gradients.

I actually also tried making the incomplete gamma approximation work on the log scale, and that makes it fail its higher-order autodiff gradient tests as well. It’s possible that log-scale power series calculations are just less autodiff-friendly in general, at least the way I’m implementing them.
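For reference, a log-scale series accumulation typically looks something like the following. This is a generic sketch (not the incomplete gamma code under discussion), using a running log-sum-exp over the log terms of an example series; the function name and cutoff are invented for the illustration:

```cpp
#include <cmath>

// Sketch: accumulate a positive power series on the log scale with a
// running log-sum-exp, which avoids overflow/underflow of individual
// terms. Example series: sum_k z^k / k! = exp(z), so the log of the sum
// should come out to z itself.
double log_series_sum(double log_z, int max_k) {
  double log_sum = 0.0;  // log of the k = 0 term, which is 1
  for (int k = 1; k <= max_k; ++k) {
    // log term: k * log(z) - log(k!)
    double log_term = k * log_z - std::lgamma(k + 1.0);
    // stable update: log_sum = log(exp(log_sum) + exp(log_term))
    double m = std::max(log_sum, log_term);
    log_sum = m + std::log(std::exp(log_sum - m)
                           + std::exp(log_term - m));
    if (log_term - log_sum < -40.0)
      break;  // remaining tail is negligible
  }
  return log_sum;
}
```

Note the `max`/shift in the update: each step involves a non-smooth `max` and nested `exp`/`log` calls, which may be part of why higher-order autodiff through log-scale series is touchier than through the direct sum.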