Request for Volunteers to Test Adaptation Tweak

Were you up for doing a review or should I look for someone else? Thanks!

The one last thing I wanted to get on this pull request was some idea of how many more divergences a divergent model might produce.

I think the test that would make me happy would be to run 100 eight-schools fits with and without this change and count the divergences. I can get to it in a few days if you don’t want to do it.

Looking at the code real quick, this is the line that doesn’t make sense to me yet:

log_sum_weight = math::log_diff_exp(log_sum_weight, 0);

(https://github.com/stan-dev/stan/pull/2790/files#diff-4eb84e42fa085ef0ffa363cee96699e9R166)

What’s happening here?

I’m teaching through the week and much of next week, but might be able to sneak something in next week if you don’t get to it in time.

The new acceptance statistic doesn’t consider transitions to the initial point, so the weight of 1 = exp(0) has to be removed from the running log_sum_weight aggregator.
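To make the bookkeeping concrete, here’s a minimal stand-alone sketch (not the PR code; the stand-in log_diff_exp and the weights are made up for illustration) of removing the initial point’s weight of exp(0) = 1 from the running aggregator:

```cpp
#include <cmath>
#include <cstdio>

// Stand-in for stan::math::log_diff_exp(a, b) = log(exp(a) - exp(b)),
// computed stably as a + log1p(-exp(b - a)) for a > b.
double log_diff_exp(double a, double b) {
  return a + std::log1p(-std::exp(b - a));
}

int main() {
  // Hypothetical running aggregator: it currently includes the initial
  // point's weight exp(0) = 1 plus two trajectory weights 0.7 and 0.2.
  double log_sum_weight = std::log(1.0 + 0.7 + 0.2);

  // Drop the initial point's weight before using the aggregator in the
  // acceptance statistic.
  log_sum_weight = log_diff_exp(log_sum_weight, 0.0);

  std::printf("%f\n", std::exp(log_sum_weight));  // ~0.9 = 0.7 + 0.2
  return 0;
}
```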

Ooooh, okay, cool

This is a bit off-topic and I understand it might not be relevant for our adaptation, but maybe it could be explained why we don’t need to do this:

Why do we do our adaptation in “one step” rather than in a cyclical manner, as is usually done with DNNs?

Like, the first few cycles get into the correct region and the last cycle gets the correct values:

3x = Adaptation cycle -> Adaptation cycle -> Adaptation cycle

It is done in cycles, kinda. The metric and timestep are done a bit differently. The cmdstan manual has the best graphic of how the metric adaptation is done. It’s Figure 9.3 in the 2.17.1 manual at least.

By default there are ~5 windows of increasing size. The HMC draws from the nth stage are used to build the metric for the (n + 1)th stage.

Timestep is being adapted continuously (from draw to draw). It’s all sorts of funky games, which is why we gotta throw the warmup stuff away in the end.
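For a rough picture, here’s a hedged sketch of that expanding-window schedule (the 75/50/25 buffer and window defaults and the doubling rule are assumptions based on the manual’s description, not code lifted from the sampler):

```cpp
#include <cstdio>

// Sketch of the windowed metric adaptation schedule for 1000 warmup draws,
// assuming a 75-draw initial buffer, a 50-draw terminal buffer, and a base
// window of 25 draws that doubles each time.
int main() {
  int num_warmup = 1000, init_buffer = 75, term_buffer = 50, window = 25;
  int start = init_buffer;
  int last = num_warmup - term_buffer;

  while (start < last) {
    int end = start + window;
    // If the next window would overrun the terminal buffer, stretch the
    // current window to cover the remaining draws instead.
    if (end + 2 * window > last)
      end = last;
    // Draws in this window build the metric used in the next window.
    std::printf("metric window: draws %d-%d\n", start, end - 1);
    start = end;
    window *= 2;
  }
  return 0;
}
```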


But if you have adaptation ideas, find a model and try em out. Hit up Aki on this stuff too. He’s all up to date.

I ran some eight schools tests: 1000 fits of the centered model with the new vs. the old stepsize adaptation.

[Plot: divergences]

As expected, we got some more divergences. For the plots I threw away runs with >150 divergences.

In terms of quantiles:

       0%   25%   50%   75%  100%
New     0    11    19    32   990
Old     0     8    15    28   540

I guess this is roughly what we expected?

Yeah. In terms of divergences the tweak just rescales adapt_delta so it’s equivalent to running the old algorithm with a slightly lower adapt_delta or equivalently a higher step size. When a model is diverging both should lead to more divergent trajectories (modulo statistical variation).

I was writing up a summary of this change this morning, and had a question.

What is changing is the definition of the accept probability that adapt_delta targets. In the old adaptation, every point in the trajectory has an accept probability, and the overall accept probability is just the uniform average of these.

In the new adaptation, this average is reweighted according to the probabilities NUTS uses to sample the next draw from the trajectory.
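In symbols (my notation, just to pin down the claim): if $a_i$ is the hypothetical accept probability for state $i$ in the trajectory and $w_i$ its multinomial weight, then roughly

$$
\bar{a}_{\text{old}} = \frac{1}{L} \sum_{i=1}^{L} a_i,
\qquad
\bar{a}_{\text{new}} = \frac{\sum_{i=1}^{L} w_i \, a_i}{\sum_{i=1}^{L} w_i}.
$$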

Isn’t this sort of double counting? Like, the Metropolis accept probabilities are what we’d have if we didn’t do multinomial NUTS. Now we add multinomial NUTS on top so we don’t need the accept probabilities.

So reweighting the accept probability average with the multinomial probabilities seems like we’re doing the same thing twice?

Notational point – the sampler in Stan is significantly removed from that introduced in the NUTS paper and we’ve moved on from calling it NUTS to avoid confusion (many people assume that it’s still the specification in that paper). Right now “dynamic multinomial” is good, although I’ll introduce a new name soon.

Correct. The idea here is to presume hypothetical proposals from the initial state to each state in the trajectory, and then average over those proposals uniformly.

No, because we actually have two processes going on at once here. First we have the hypothetical proposals, and their corresponding acceptance probabilities. Second we have the process of choosing between those hypothetical proposals. Because multinomial sampling will prefer some states over others we want to emphasize those more common states in the proxy acceptance statistic.

In other words we want to emphasize the high weight states because those will be the most common. If we weight uniformly then we let low probability states dominate the acceptance statistic.
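As a toy numerical illustration (the numbers are invented, not from any real trajectory), here’s a minimal sketch of how the two averages can differ:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Toy illustration (invented numbers): accept[i] is the hypothetical
// acceptance probability for a proposal from the initial point to state i,
// and weight[i] is the (unnormalized) multinomial weight used when sampling
// the next draw from the trajectory.
int main() {
  std::vector<double> accept = {1.0, 0.9, 0.3, 0.05};
  std::vector<double> weight = {1.0, 0.9, 0.3, 0.05};

  double uniform = 0.0, weighted = 0.0, total_weight = 0.0;
  for (std::size_t i = 0; i < accept.size(); ++i) {
    uniform += accept[i] / accept.size();
    weighted += weight[i] * accept[i];
    total_weight += weight[i];
  }
  weighted /= total_weight;

  // The uniform average lets the low-weight, low-acceptance states drag the
  // statistic down; the weighted average emphasizes the states the sampler
  // actually tends to pick.
  std::printf("uniform average:  %.3f\n", uniform);   // ~0.56
  std::printf("weighted average: %.3f\n", weighted);  // ~0.85
  return 0;
}
```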

Can you do a phone call sometime on this? My internet is sketchy where I’m at, but I’d like to talk through this. Still not getting it.

With the termination criterion robustified, the behavior of this tweak is a bit more as expected. The step size increases, which systematically decreases the number of leapfrog steps, although the difference is somewhat small given that the multiplicative expansion limits the distribution of leapfrog steps.

In the normal IID case you can definitely see the increased step sizes “stretching” the nominal behavior, pushing the thresholds to higher dimensions.

stepsize_report.pdf (454.4 KB)

@bbbales2 – any thoughts?

‘improved_alg’ has both the new U-turn and timestep behaviors mixed together though, right? What’s the difference between the new U-turn alone and the new U-turn + new adaptation?

Sorry, stale legend. The dark red labeled “v2.20.0” is actually the current version of develop with the additional termination checks, so this is the comparison between the new U-turn and the new U-turn plus new adaptation.

It looks like there’s a treedepth thing happening at ~250 dimensions with the IID normals, right? Like the N_eff is getting crazy high for the old stuff, but apparently it’s just doing more sampling because the N_eff/gradient is the same?

This is an artifact of the multiplicative expansion and the “resonances” in effective sample size per gradient evaluation. At D = 250 develop jumps to the next treedepth, whereas the new adaptation won’t jump until a much higher dimension.

All the new adaptation does is stretch out the step size versus dimension curve, and hence everything else that depends on the step size (modulo dimensionality effects, so it’s not perfect). For example, consider D = 50 for the develop implementation, which results in a step size of around 0.6. The new adaptation, however, doesn’t get to a step size of ~0.6 until D ~ 250, so the performance of develop at D = 50 and the new adaptation at D = 250 should be similar. At the very least the number of leapfrog steps is the same in those two circumstances.

Yo @ariddell and @ahartikainen . This is a new timestep adaptation @betanalpha came up with and we’re looking at putting in the algorithms.

I’m gonna talk about it at the Stan meeting tomorrow. I assume a pull request will appear in the not so distant future if nobody has any great objections.

The tl;dr is it makes things faster (see the stepsize report PDF posted above) but comes with more divergences (see the eight-schools results above).

If yah have comments, let them be heard!

Thanks! If it really comes with more divergences, will there be a way to get those back down by controlling adaptation accept target?

If a fit is diverging then there are fundamental problems (i.e. a breakdown of the treasured MCMC central limit theorem) no matter how many divergences there are, so in my opinion that’s a bit of a red herring.

The resulting step size will still be monotonic with adapt_delta, so increasing adapt_delta will still decrease the step size and potentially reduce divergences as with the current adaptation criteria.