Proposal for consolidated output

Thanks for the feedback; good point about the GQs decoupling the number of parameters the algorithm deals with from the number of parameters in the output. I’d like to see Bob/Daniel comment when they get to it, and then I’ll move this to the design wiki with some clean-up.

I think the next step is for me to see if there are complications with consolidating output that would make it hard to (temporarily) change the innards of the service methods and algorithms with respect to output while still letting the interfaces produce the same output. This was a challenge with the original refactor that created the service methods.

I checked it out; this is great!

That would be useful. Especially with RNGs.

The plan is to upgrade to 64-bit ints as soon as possible.

Even then, using actual ints may be too difficult with this kind of writer structure. It is easier with just string output.

We also want to look downstream to serialization. We will need good int serialization for things like RNG seeds.
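
Just to make that concrete (a standalone sketch, nothing Stan-specific): a 64-bit seed past 2^53 gets silently corrupted if it is forced through a double-valued writer column, but survives a string column exactly.

```cpp
#include <cstdint>
#include <iostream>
#include <string>

int main() {
  // A 64-bit seed just past 2^53, where double can no longer represent
  // every integer exactly.
  std::uint64_t seed = 9007199254740993ULL;  // 2^53 + 1

  // Forcing it through a double-valued column silently corrupts it.
  double as_double = static_cast<double>(seed);
  std::cout << (static_cast<std::uint64_t>(as_double) == seed) << "\n";  // prints 0

  // Round-tripping through a string column preserves it exactly.
  std::string as_string = std::to_string(seed);
  std::cout << (std::stoull(as_string) == seed) << "\n";  // prints 1
}
```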

Thanks.

Not sure how we do this. Does each service get its own flat config struct, or is there looser typing?

Do the param names come with dimensions or types?

There are different numbers of constrained and unconstrained params for simplex, cov matrix, etc.

The Hessian is an R thing, not in Stan. And is that constrained or not?

The covariance is on the unconstrained scale, and it is either dense or diagonal.

HMC only has one mass matrix after adaptation (other than RHMC). It may be dense or diagonal.

Gradients are always unconstrained.

The log density is unnormalized, on the unconstrained scale, with Jacobians.

Divergence is boolean, despite how it looks and is documented now.
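
To keep the survey concrete, here’s a rough sketch of the per-draw metadata and values these notes describe. The type names are illustrative only, not existing Stan types.

```cpp
#include <string>
#include <vector>

// Parameter names carry their dimensions; constrained and unconstrained sizes
// can differ (e.g. a K-simplex has K constrained values but K - 1 unconstrained).
struct param_info {
  std::string name;
  std::vector<int> dims;    // constrained dimensions
  int unconstrained_size;   // may differ from the product of dims
};

// One mass matrix after adaptation (RHMC aside); it is either dense or diagonal.
enum class metric_type { diagonal, dense };

struct hmc_draw {
  std::vector<double> constrained_values;    // parameters, transformed params, GQs
  std::vector<double> unconstrained_values;  // latent algorithm state
  std::vector<double> gradient;              // always on the unconstrained scale
  double log_density;                        // unnormalized, unconstrained scale, with Jacobian
  bool divergent;                            // boolean, not a double column
  metric_type metric;
};
```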

The notes should definitely be broken out.

Thanks for the survey. It’s super helpful.

I am still not sure what is being proposed to dispatch events sent to the relays to handlers defined by the interfaces. Is the relay generic across interfaces? Do interfaces register handlers with the relay?

I was trying to understand the templating proposal as a way to streamline writing all these.

For text defaults, that can just be an implementation of a handler that an interface could plug in. I think the default implementations should be no-ops to stop stray text from getting through anywhere.
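
Here’s a minimal sketch of what I have in mind (the class and method names are made up, not the existing callbacks API): the interfaces register handlers with the relay, and anything that isn’t registered falls back to a no-op default.

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

class relay {
 public:
  using text_handler = std::function<void(const std::string&)>;
  using draw_handler = std::function<void(const std::vector<double>&)>;

  // Interfaces (CmdStan, RStan, PyStan) plug in their own handlers.
  void on_text(text_handler h) { text_ = std::move(h); }
  void on_draw(draw_handler h) { draw_ = std::move(h); }

  // The algorithm side only sees these dispatch calls.
  void text(const std::string& msg) const { text_(msg); }
  void draw(const std::vector<double>& values) const { draw_(values); }

 private:
  // Defaults are no-ops, so stray text an interface didn't ask for is dropped.
  text_handler text_ = [](const std::string&) {};
  draw_handler draw_ = [](const std::vector<double>&) {};
};
```

An interface would then register something like r.on_draw([&](const std::vector<double>& v) { /* write a CSV row */ });, and text output from anything that didn’t register a handler simply disappears.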

@sakrejda, thanks for taking the huge effort of writing that out. I’ve tried multiple times to write it that clearly, but haven’t.

I think everything you’ve said about going from this current implementation to something workable is great.

I still don’t see the benefit of the templating. Or rather, it’s not the templating per se; it’s the indirection in how the templates are used (structs living somewhere else).

I’d like to mention a future complication: in some cases, we won’t know how many iterations we will have. With that in mind, I’ve started thinking that this is an important distinction: things that are written once per iteration, more frequently than that, less frequently, and ad hoc. (@sakrejda, I think you laid all of that out nicely.)

Having written this out, I think we can do something that lowers the bar for understanding the relay code. There may still be room for templating to avoid code duplication, but I think the confusing application is in the relay, and given that we have three kinds of algorithms, it’s avoidable. I’ll try to write up a specific suggestion this weekend.

3 is a manageable number. Brute force isn’t always so bad.

True, it’s the level below the relay where templating might make sense, to take care of sending heterogeneous tables, but those would be helpers for the interface code rather than an all-encompassing relay class.

Thanks! I’m really looking forward to seeing this. I feel better having the initial survey in hand already.

I’d like to see simple APIs for the RStan, PyStan, and CmdStan interface clients if possible, even if they’re a bit redundant. The API for the algorithms and the back-end code in stan-dev/stan is much less of a concern for me because it’s not being used by external clients.

Just a ping that I am unsure what the current status is: am I supposed to try to write up more of a code proposal, or does someone else have the ball now?

Hey, sorry about that; I should’ve added more here. The next step is for me to write up what the calls will look like from the algorithm side and the interface side. There’s no point in writing more code until that’s clearly laid out. I was going to do it last weekend, but I needed a faster .csv reader for Stan, so I did that instead. I’ll be able to get to it this week, though.

Thanks @sakrejda, that’s my understanding of where we’re at, too.

Thanks! I’m looking at this again. @sakrejda, seriously: thanks! And @martinmodrak for digging too.

I’m starting to think we could have writing done at each iteration. By that, I mean once at the end of each iteration. I can’t think of any algorithm that doesn’t have iterations (and even if it did, we could say it’s 1 iteration).

For each algorithm, I think we can write these things at the end:

  • parameter values
  • generated quantities
  • algorithm output (e.g., treedepth__)
  • unconstrained values / latent algorithm parameters
  • random seed

If we have the random seed(s), we can always generate the next iteration, if there is no other action between the end of the iteration and the start of the next iteration.

I think we also need to report the same information right before the start of the first iteration. If we had that, we would be able to reconstruct each iteration exactly. Am I correct in thinking that’s one useful thing about the output? If we had that, we could generate the information within an iteration (leapfrog steps).
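
As a sketch of what that could look like (all names here are hypothetical, not existing Stan types), the algorithm would hand one consolidated record to the writer before the first iteration and then at the end of each one:

```cpp
#include <cstdint>
#include <vector>

struct iteration_record {
  int iteration;                         // 0 = the state reported before the first iteration
  std::vector<double> parameters;        // constrained parameter values
  std::vector<double> generated_quantities;
  std::vector<double> algorithm_output;  // e.g. treedepth__, stepsize__
  std::vector<double> unconstrained;     // unconstrained values / latent algorithm parameters
  std::uint64_t rng_seed;                // enough to regenerate the next iteration
};

// Writing the pre-first-iteration record (iteration 0) is what makes each
// subsequent iteration reproducible from the previous record's seed.
template <class Algorithm, class Writer>
void run(Algorithm& algo, Writer& write, int num_iterations) {
  write(algo.current_record(0));         // state right before the first iteration
  for (int i = 1; i <= num_iterations; ++i) {
    algo.advance();                      // one full iteration
    write(algo.current_record(i));       // single consolidated write per iteration
  }
}
```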

Anyway, looking forward to @sakrejda’s next suggestions.

The other thing we were looking at is the trajectory within an iteration for HMC. So that’s a big block of information.

The RNG we use now isn’t restartable. The seeds are 8 bytes, but the states are something like 32 bytes. We could probably reseed from state if there’s a constructor.

Can we generalize from the algorithms’ notion of iterations to an output process that somehow comes with a header and streaming? It’s not so much that it’s an iteration, but that it provides a set of parameter values. Could we use that same notion internally with a sequence of leapfrog steps?

Absolutely!

I was thinking there are iterations and things done within iterations, but I’m sure there’s a better abstraction if we think about it.
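
One possible shape for that abstraction (just a sketch, with made-up names): an output process is anything that announces a header once and then streams rows of parameter values, whether those rows are iterations of a chain or leapfrog steps inside one HMC iteration.

```cpp
#include <string>
#include <vector>

// A producer announces a header once and then streams rows; the consumer
// doesn't care whether a row is an iteration or a leapfrog step.
template <class Consumer>
class value_stream {
 public:
  explicit value_stream(Consumer& consumer) : consumer_(consumer) {}

  void header(const std::vector<std::string>& names) { consumer_.names(names); }
  void row(const std::vector<double>& values) { consumer_.values(values); }

 private:
  Consumer& consumer_;
};
```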

There’s a constructor that takes two seeds! It’s also easy to get the state out as two numbers.

That’s good news. I never realized they had those. The PRNG we’re using is here:

https://www.boost.org/doc/libs/1_66_0/doc/html/boost/random/ecuyer1988.html

I see the two-argument constructor, but I don’t see how to get a two-component seed out of the state. Do you have an example somewhere?

I got curious, so I just did std::cout << rng << std::endl at some point, and it put out two numbers. I used those as the seeds and got back exactly the same results. I don’t know whether there’s a more direct way to get the two numbers out, but I’m sure we can manage even if it’s not documented.
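
For reference, here’s a small self-contained version of that round trip with boost::random::ecuyer1988 (the seed value and draw count are arbitrary): stream the state out, then restore it either via operator>> or by feeding the two numbers to the two-argument constructor.

```cpp
#include <boost/random/additive_combine.hpp>
#include <cstdint>
#include <iostream>
#include <sstream>

int main() {
  boost::random::ecuyer1988 rng(20240101);

  // Burn through some draws so the state is no longer the seed.
  for (int i = 0; i < 1000; ++i)
    rng();

  // Stream the state out: two integers, one per base generator.
  std::stringstream state;
  state << rng;
  std::cout << "state: " << state.str() << "\n";

  // Option 1: restore via operator>> into a fresh engine.
  boost::random::ecuyer1988 restored;
  std::stringstream in1(state.str());
  in1 >> restored;

  // Option 2: read the two numbers back and pass them to the
  // two-argument constructor, as described above.
  std::uint32_t s1, s2;
  std::stringstream in2(state.str());
  in2 >> s1 >> s2;
  boost::random::ecuyer1988 reseeded(s1, s2);

  // Both continue the original stream exactly.
  std::uint32_t next = rng();
  std::cout << (restored() == next) << " " << (reseeded() == next) << "\n";  // 1 1
}
```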

Btw, I put together issues for this on GitHub and then went on vacation; it’ll be another week before I pick it up.