Disclaimer: I only taught myself some numerics in the past few months, so I might easily be very wrong.
I've been thinking about this a bit, and what I would now most like is to make the tolerance depend on the average of the two values:
auto avg = 0.5 * (fabs(x1) + fabs(x2));
auto tol_final = max(tol_min, tol * avg);
EXPECT_NEAR(0, x1 - x2, tol_final);
Here, both tol and tol_min can be set by the user, and the default for tol_min is tol^2.
The proposed approach gives the same relative tolerance as the current approach when avg >= tol, but lets the relative tolerance grow more gradually below that.
Graphically, the relative tolerance implied by the current and the proposed approach with tol = 1e-8 looks like this:
The code I've written for my recent pull request passes this more stringent standard, but some other tests fail; I am not sure whether this is cause for concern or simply means that the tolerances need to be better adjusted.
Does that make sense as a recommended practice? Or am I missing something obvious?
Code for the plot:
library(ggplot2)

x <- 10^(seq(-15, 5, length.out = 100))
tol_current <- ifelse(x < 1e-8, 1e-8, x * 1e-8)
tol_proposed <- pmax(x * 1e-8, 1e-16)
df <- rbind(
  data.frame(x = x, type = "current", tol = tol_current),
  data.frame(x = x, type = "proposed", tol = tol_proposed))
df$relative_tol <- df$tol / df$x
ggplot(df, aes(x = x, y = relative_tol, color = type)) +
  geom_line() +
  scale_x_log10("0.5 * (fabs(x1) + fabs(x2))") +
  scale_y_log10()