I’m using finite differences to automatically test autodiff vs. double-based implementations. I’m running into problems with the functional `stan::math::finite_diff_gradient` (in `stan/math/prim/mat/functor/finite_diff_gradient.hpp`) when inputs are small or large.

Our finite differences algorithm uses a default `epsilon` of `1e-3` and evaluates `f` at `x`, `x +/- epsilon`, `x +/- 2 * epsilon`, and `x +/- 3 * epsilon`. I can configure the `epsilon` per call, but I’d rather have something more automatic so that the tests remain simple.

**QUESTION 1:** Would it make sense to use an `epsilon` for finite differences that is defined relative to the input value `x`, say something like `epsilon * abs(x)`?

**QUESTION 2:** Would it make sense to build that directly into the finite differences functionals (there’s also the Hessian and gradient of Hessian), or should I do it from the test framework? There’s the issue of backward compatibility if the meaning of `epsilon` changes from absolute to relative.

**QUESTION 3:** (Extra credit) Any hints about evaluating at `x = 0`? What I do now is just default to an absolute error test; all the other tests use relative error to compare autodiff gradients and finite difference gradients.