Relative delta in finite difference approximations of derivatives?

I asked and I received. Dan was right. My original version underflows for small x. No wonder all the numerical analysis people are so worked up about subtraction! It was worth dusting off Numerical Recipes as they have a useful decomposition of error into rounding and approximation error. Dan already included all the relevant conclusions.
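
For what it's worth, here's my own sketch of that trade-off for the symmetric difference (f(x + h) - f(x - h)) / (2h); this is the standard argument, not a quote from Numerical Recipes or from Dan:

\mathrm{error}(h) \approx \frac{\epsilon \, |f(x)|}{h} + \frac{h^2 \, |f'''(x)|}{6}

The first term is the rounding error and the second is the approximation (truncation) error. Minimizing over h gives h \approx (3 \epsilon \, |f(x)| / |f'''(x)|)^{1/3}, which is where the \epsilon^{1/3} scaling below comes from; in practice the f-dependent factor gets replaced by a characteristic scale of the argument.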

I wound up using h_n = \epsilon^{1/3} \max(1, |x_n|) as the finite-difference step size in input dimension x_n (where \epsilon is the machine precision, about 10^{-16} for the double-precision floating point we’re using). I took the cube root because that's what's recommended for the symmetric case of evaluating f(x + h) and f(x - h).
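
To make that concrete, here's a minimal Python/NumPy sketch of a central-difference gradient using that step-size rule; the function name, signature, and test function are mine for illustration, not the actual code from the project:

```python
import numpy as np

def central_diff_grad(f, x, eps=np.finfo(float).eps):
    """Gradient of f at x by central differences, with a relative step
    h_n = eps**(1/3) * max(1, |x_n|) in each input dimension."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for n in range(x.size):
        h = eps ** (1.0 / 3.0) * max(1.0, abs(x[n]))
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[n] += h
        x_minus[n] -= h
        # symmetric difference: O(h^2) truncation error
        grad[n] = (f(x_plus) - f(x_minus)) / (2.0 * h)
    return grad

# Example: gradient of f(x) = sum(x^2) is 2x, for tiny and large components alike.
print(central_diff_grad(lambda x: np.sum(x ** 2), [1e-8, 3.0, -200.0]))
```

The \max(1, |x_n|) factor keeps the step from collapsing when x_n is near zero, which is the underflow problem the original version had.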

The de Levie paper has a section on choosing step size that seems to be making the same point as Numerical Recipes, but I’m not a native speaker of applied math(s).