No, I am suggesting that we define an I/O convention for how the interfaces wrap the samples and diagnostic info from a fit object into std::vector or POD arrays to be analyzed by the routes, and what they expect in return (probably strings and error codes, if not numerical results like ESS and R-hat).
Bonus points if the convention minimizes excess copying on the C++ side.
None of this would be exposed to users – the interfaces would call these routes internally.
I am hesitant to put this stuff into math for the same reason I don’t think the Welford stuff really belongs there – the math library is essentially an autodiff library, not a general purpose math library. This stuff would be just as well in a separate auxiliary math library.
That’s a nice way to think about it. Effective sample size estimators and R-hat are more like the sampling algorithms—subject to change as we learn more. So it makes sense to leave those with the algorithms. Plus, I think it makes sense to have you manage them.
If we ever get around to adding derivatives to FFT, that’ll go in the math lib.
Welford’s algorithm is one of those oddball things whose API is that of an accumulator rather than one of our ordinary math functions. So I’m OK with that going either way. I could see some of the algorithms that do need to be in the math library, like calculating sample variance, needing it, though. If that happens, then whatever they use in common should be in the math lib.
But how would we get the variable/diagnostic x draw x chain shape into a single vector? If we used vectors of vectors then we couldn’t guarantee memory locality for Eigen::Map. We’ll probably have to do a copy at least once to put everything together, but let’s figure out the signature that will make it easiest for the interfaces to pack everything together and for the C++ to unpack into Eigen objects.
Chains are stored in separate holders in Python and R. And the variable/diagnostic x draw block is stored by the interfaces in memory-contiguous, column-major-ordered holders (e.g., a numpy array in Python), so it can be referenced with a double*.
Is there a place where we operate on a Chain x Draw object any other way than as a vector for each chain? I don’t see a need for Variable x Chain x Draw.
If you have vector<double*>, it’s easy to map each entry to an Eigen vector or row vector. So if the answer to the above is that all the underlying operations use Eigen vectors, this is the most economical way to pass things that doesn’t implicate Eigen on the interface side, which is what Allen was requesting.
For the ESS calculations as they are implemented, we’d need matrices with shape draw x chain – so can the interfaces guarantee that the info from each chain points to contiguous memory? If so then we’d be looking at a signature like
int n_var, int n_draw, writer& writer);
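Spelling out what a full version of that fragment might look like – every name, the writer type, and the out-parameter here are assumptions for discussion, not an agreed API:

```cpp
#include <vector>

// Stand-in for whatever message writer the interfaces supply.
struct writer {
  virtual void operator()(const char* msg) = 0;
  virtual ~writer() = default;
};

// Hypothetical route signature: one contiguous double* per chain,
// return value as an error code, ESS estimates written to ess_out
// (one per variable).
int compute_ess(const std::vector<const double*>& chains,
                int n_var, int n_draw,
                double* ess_out,
                writer& writer);
```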
I’m not sure we need the writers for ess. They add a lot of complexity.
Could we use exceptions instead? Or return a std::pair with the first value being the result and the second value being a string containing the error message (or empty if no error)? (This is a pattern Go uses.)
Yes, the double* for each chain would be contiguous, but they wouldn’t be contiguous with each other.
I’d much prefer exceptions if that’s workable from the interface perspectives.
If there’s an error, the value won’t make sense. Returning both the value and an error message is then clunky, as only one will ever get used. Returning an error code is also the tradition in C, with functions of the form int foo(..., result_t* result), where the return value is a code indicating error or not and the function sets result to the actual value calculated.
Unfortunately the conversation has diverged a bit here. Let me try to bring it back a bit.
We do not throw exceptions to the interfaces for the algorithm API. This thread is not about redoing the API but adding additional routes for common diagnostic calculations. If there is interest in changing the client relationship and how error information is propagated then a new thread should be started.
Diagnostic functions return various statuses depending on the thresholds we set (which could be made user-modifiable). Okay, Warning, and Error messages do not indicate that the function has failed but rather that the input fit information is good or suspect. Exceptions would be more appropriate for failures to calculate the diagnostics.
The question introduced in this thread is designing a uniform interface for all diagnostic functions that is robust enough for current (and near-future) diagnostics and compatible with the algorithm API.
We could handle that a different way and catch all exceptions, but right now, we handle std::domain_error within the methods and allow any other exceptions to propagate.
All good points, but I was thinking about messages in conjunction with exceptions. Of course, I’m assuming that writers can either flush immediately or buffer and flush appropriately even when handling exceptions.
For warnings, I was thinking that there might be non-exceptional cases that don’t warrant failure of computing a value, but should indicate some warning. Of course, we could just say it either works or doesn’t and the only way to communicate output from one of these functions is through the message in the exception.
Just need some clarification… what are you calling the “algorithm API”?
That doesn’t sound like it should be an exception. But it also doesn’t seem like the return should be Error when it’s not an error.
For example, an exception should be thrown if the chains aren’t all the same length passed to R-hat (until we generalize, that is). We shouldn’t throw an exception if R-hat is above some nominal operating threshold like 1.1.
I think this would be OK in a writer or logger if we add typed messages. Otherwise an interface (e.g., ShinyStan) will need to parse text messages in order to decide if it wants to show a different sequence of plots when models don’t converge. No reason typed messages can’t decay to text, but the inverse doesn’t work.
These functions need to return indicators as to whether or not the diagnostic has passed, so that users can programmatically check the diagnostics instead of having to parse through the returned messages. Okay, Warning, and Error return codes refer to the diagnostic output, not anything to do with internal operation. Exceptions are fine for the execution of the diagnostic itself failing.