There's no MH step in NUTS. There's an MH step in static HMC, but that's not what we're usually doing. The original NUTS paper used slice sampling to draw a state from the Hamiltonian trajectory (biased toward the last doubling of steps); the way it works now is with a categorical draw from the states along the trajectory, with probabilities proportional to their densities (again biased toward the last doubling).
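To make the categorical draw concrete, here's a minimal sketch (not Stan's actual implementation, and ignoring the last-doubling bias): given the Hamiltonian H at each state along a trajectory, pick a state with probability proportional to the unnormalized density exp(-H).

```python
import math
import random

def select_state(trajectory_energies, rng=random.Random(1234)):
    """Draw an index into the trajectory with probability proportional
    to exp(-H), where H is the Hamiltonian at each leapfrog state.
    Subtracting the minimum energy keeps exp() from underflowing."""
    h_min = min(trajectory_energies)
    weights = [math.exp(-(h - h_min)) for h in trajectory_energies]
    total = sum(weights)
    u = rng.random() * total
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if u <= cumulative:
            return i
    return len(weights) - 1

# Hamiltonian values along a hypothetical four-state trajectory.
energies = [10.0, 10.3, 9.8, 10.1]
idx = select_state(energies)
```

If the Hamiltonian were conserved exactly, every weight would be equal and every state along the trajectory would be equally likely.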
If the Hamiltonian were simulated perfectly, MH would always accept for static HMC and we'd always choose a new state from the second half of a trajectory in NUTS.
What happens with the leapfrog integrator, absent divergences, is that the computed Hamiltonian wobbles around the true value, going up and down (you can see this plotted, for example, in Radford Neal's chapter in the Handbook of Markov Chain Monte Carlo, or in some of the books and papers on symplectic integrators).
When we hit a divergence, we spin out and never come back.
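You can see both behaviors in a few lines. This is a sketch for an assumed 1-D standard normal target with unit mass, so H(q, p) = q²/2 + p²/2 and grad U(q) = q. With a small step size the Hamiltonian error oscillates in a bounded band; past the stability limit (step size 2 for this particular problem) it blows up and never comes back.

```python
def hamiltonian(q, p):
    # Potential energy plus kinetic energy for a standard normal target.
    return 0.5 * q * q + 0.5 * p * p

def leapfrog(q, p, eps, n_steps):
    """Run leapfrog steps and return the Hamiltonian after each one."""
    hs = []
    for _ in range(n_steps):
        p -= 0.5 * eps * q      # half step on momentum
        q += eps * p            # full step on position
        p -= 0.5 * eps * q      # half step on momentum
        hs.append(hamiltonian(q, p))
    return hs

h0 = hamiltonian(1.0, 1.0)

# Small step size: the energy error wobbles but stays tiny forever.
stable = leapfrog(1.0, 1.0, eps=0.1, n_steps=1000)
max_err_stable = max(abs(h - h0) for h in stable)

# Step size past the stability limit: the error explodes (a divergence).
diverged = leapfrog(1.0, 1.0, eps=2.1, n_steps=50)
max_err_diverged = max(abs(h - h0) for h in diverged)
```

Real divergences in Stan come from regions of high curvature (like the neck of a funnel) rather than a globally bad step size, but the mechanism is the same: once the integrator leaves the stable regime, the energy error grows without bound.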
Now technically, doing what we do with NUTS would still work in theory: if we ran the sampler forever, we'd get the right answer. What happens in practice is that we wind up with a random walk that has a very, very hard time getting into the neck of a funnel, for example. And when it does get in, it stays there a very long time.
So it's not that there's asymptotic bias so much as bias in any sample drawn in the time we have.
This is the same problem Gibbs and Metropolis-Hastings themselves have. Sure, they converge asymptotically. But we don't have asymptotically long to wait! (I think that'd make a good blog title, by the way.)