In some recent Stan Math PRs we were discussing a few code style issues that are not mentioned in any Stan guidelines for developers. The two most recent I remember are:
should we allow or discourage the use of function-try-blocks?
Google’s C++ guidelines, which are linked in the Stan Coding Style Wiki, don’t mention these, since Google discourages the use of exceptions altogether.
should we have a unified style for doxygen special commands? (For the typewriter font we currently use <code></code>, \code \endcode, and \c; there is also the possibility of using @c, which came up in one PR.) Or am I nitpicking too much?
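For concreteness, here is a sketch (the function and its doc comment are invented for illustration, not taken from Stan Math) that puts the competing typewriter-font spellings side by side; doxygen renders \c, @c, and <code></code> the same way inline, while \code/\endcode produces a verbatim block:

```cpp
#include <algorithm>

/**
 * Clamps @c x to the interval [<code>lo</code>, <code>hi</code>].
 * Equivalent to the snippet
 * \code
 * std::min(std::max(x, lo), hi)
 * \endcode
 * and returns \c lo when @c x is below the interval.
 *
 * @param x value to clamp
 * @param lo lower bound
 * @param hi upper bound
 * @return x limited to [lo, hi]
 */
inline double clamp_value(double x, double lo, double hi) {
  return std::min(std::max(x, lo), hi);
}
```

Whichever spelling we pick, the point is that mixing all four in one codebase (as above) is what we’d be avoiding.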
I think it would be great if we can define a rule for these and write them down somewhere. That would make things easier for new developers and reviewers. We could also enforce them with the test-math-dependencies script.
Which brings me to the fact that some (or most) rules that the test-math-dependencies checks enforce are not written anywhere (at least not that I know). Things like: only use #include <stan/math/{prim,rev,fwd}/meta.hpp> to include files from the meta subfolder.
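As a sketch of that particular rule (the flagged path below is hypothetical; I’m only illustrating the pattern the script checks for):

```cpp
// OK: pull metaprograms in through the aggregate header
#include <stan/math/prim/meta.hpp>

// Flagged by test-math-dependencies: including a file from the meta
// subfolder directly (this file name is hypothetical)
// #include <stan/math/prim/meta/some_metaprogram.hpp>
```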
Since these are mostly Stan Math related, I don’t know if they belong in the Stan Coding Style Wiki. Any ideas?
It was in a PR comment that I unfortunately can’t find now. In short, Daniel said he feels we should stick with try-catches inside functions, since function-try-blocks are not used anywhere else in Stan Math. Steve said he likes them, and I suggested we stick with the way the rest of the codebase is written for that PR and make a Discourse thread (this thread) to write this restriction down somewhere if more devs feel the same as Daniel, or come to the opposite conclusion and know that for the future. Just to make things easier for developers/reviewers.
@stevebronder, this is usually the argument for this type of programming. It’s a good opinion. My opinion is the opposite: I dislike it because it moves the logic of the error from one specific part of the code far away from where that happened. These are both opinions, so I don’t think one is more right than the other.
As I think about it, I think this is related to the single vs multiple return style aka single-entry/single-exit (SESE).
So far, the practice has been to scope try catch blocks around expressions we know will throw particular exceptions (at least in the Math library). Maybe this was done because we didn’t know about the function try catch or maybe it’s to catch exceptions and be able to move forward. Regardless, if you look at the end of the Google Style Guide at the Parting Words section, the first sentence there is
Use common sense and BE CONSISTENT.
For the sake of consistency, I think we should go with scoped try-catch blocks unless there’s something that really can’t be handled well this way.
I also don’t exactly remember the code that was written, but it might have been easier to handle that way due to the resource management involved in GPU code. If that was the case, then I could believe it would have been easier to read that way. But then again, if that’s the case, maybe there should be a function that cleans up resources, and that could be put inside the catch block.
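Something like this, a minimal sketch of what I mean (all names here are invented for illustration, not real Stan Math or OpenCL API): a single cleanup helper keeps the catch block short even when several statements inside the try can throw.

```cpp
#include <iostream>
#include <stdexcept>

// Hypothetical cleanup helper: releases whatever the kernel call
// allocated, so the catch block has exactly one job.
void release_gpu_buffers() {
  std::cout << "buffers released\n";
}

// Scoped try-catch that delegates all cleanup to the helper and
// rethrows, leaving error reporting to the caller.
double run_kernel_or_throw(bool fail) {
  try {
    if (fail) {
      throw std::runtime_error("kernel launch failed");
    }
    return 1.0;  // stand-in for the kernel's result
  } catch (const std::exception& e) {
    release_gpu_buffers();  // single cleanup point
    throw;                  // rethrow for the caller to handle
  }
}
```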
+1! Yes, we should be writing things down. I tried to start something, but I think we should be vigilant in writing things down for ourselves and other devs. I started here: Home · stan-dev/math Wiki · GitHub. But maybe there’s a better way to do this?
No, I don’t think you’re nitpicking too much. I think we should be more consistent.
What threw me off about that one example was that it was the first time I saw @c inside our doxygen comments. We’ve used \c before, but not much. See: [stan/math/prim/mat/fun/MatrixExponential.h](https://github.com/stan-dev/math/blob/develop/stan/math/prim/mat/fun/MatrixExponential.h#L164). That’s the only other spot where we use \c.
It’s odd. We use @ all the time, but just for @return, @param, and @tparam. It didn’t even occur to me that the inline use of it was the same as a \ in doxygen. I might have been the only one that would have been confused by that. It’s just another one of those consistency things… I’d rather use doxygen in a consistent manner so devs don’t all have to be experts at doxygen to write Math C++ code.
I’m good with whatever sorts of policies we write down. @rok_cesnovar, what do you suggest we focus on? Is it when we should use @ vs \? Or just \code / \endcode vs <code> / </code> vs \c vs @c? Or some more general principles? @Bob_Carpenter’s written a really nice How to Write Doxygen Doc Comments that we should probably add this advice to.
I first saw @c in the stochastic ODE PR docs and went, “Huh so we can use those inline.” I like the style and adopted it for a bit, though if we want to stay with \c etc. then that’s fine by me.
Your opinion is def reasonable. Though having to switch gears and worry about exceptions causes the wires in my brain to cross a bit and I have to sort of context switch from “the code that does the thing we want” to “the code that handles when the other code does not do what we want”.
Having specific try-catch blocks is, I think, also more error-prone for developers. For instance, in tri_inverse.hpp we actually miss the kernel call, which should be wrapped in a try block with a catch for the OpenCL error.
If we had much more specific catches and handling after errors, I would agree that the specific exception logic needs to be close to the code. At least for OpenCL, if we throw an error it’s either:
A domain exception, which gets passed through our function to a higher scope
A system exception that causes termination, e.g. from a hardware failure
I’d actually argue that for functions like integrate_1d it would make the code a bit easier to read, along the lines of, “If this function throws, then recover memory and continue throwing upwards”:
template <typename F>
inline double gradient_of_f(const F &f, const double &x, const double &xc,
                            const std::vector<double> &theta_vals,
                            const std::vector<double> &x_r,
                            const std::vector<int> &x_i, size_t n,
                            std::ostream &msgs) try {
  double gradient = 0.0;
  start_nested();
  std::vector<var> theta_var(theta_vals.size());
  for (size_t i = 0; i < theta_vals.size(); i++)
    theta_var[i] = theta_vals[i];
  var fx = f(x, xc, theta_var, x_r, x_i, &msgs);
  fx.grad();
  gradient = theta_var[n].adj();
  if (is_nan(gradient)) {
    if (fx.val() == 0) {
      gradient = 0;
    } else {
      domain_error("gradient_of_f", "The gradient of f", n,
                   "is nan for parameter ", "");
    }
  }
  recover_memory_nested();
  return gradient;
} catch (const std::exception &e) {
  recover_memory_nested();
  throw;
}
In the Stan math lib I think it’s also reasonable to try a calculation one way and if it fails try a different method. I guess that could be an exception to the rules even if we generally did function-level try-catch… what do you think about that?
Can you give me an example? Looking over the files that my “try {” grep catches, the only times I see us explicitly use try-catch blocks are for recovering from OpenCL errors or recovering memory in the gradient/ODE code.
I was thinking of things like the solver, where it might make sense to start separate solves from different initial conditions but only collect the first good answer. Maybe futures are a better match for that. Or some of the special functions, where you might not be sure whether a quick algorithm will diverge but a more expensive one is available too. With those it’s sometimes hard to identify which calculation does better in one part of the parameter space than another. There’s no current usage of the try-catch pattern I’m suggesting, which is why I was thinking it would be an exception to the rules rather than an argument for using it broadly.
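The pattern I have in mind would look roughly like this sketch (the functions are made up to illustrate the shape of it, not taken from Stan Math): try the quick algorithm, and fall back to the robust one only when the quick one reports failure by throwing.

```cpp
#include <cmath>
#include <stdexcept>

// Hypothetical fast-but-fragile approximation: signals divergence by
// throwing rather than returning garbage.
double fast_approx(double x) {
  if (std::abs(x) > 1.0) {
    throw std::domain_error("fast series diverges for |x| > 1");
  }
  return 1.0 + x;  // e.g. a truncated series for exp(x)
}

// Robust but more expensive method that always succeeds.
double slow_exact(double x) { return std::exp(x); }

// Try the quick algorithm first; on failure, fall back.
double evaluate(double x) {
  try {
    return fast_approx(x);
  } catch (const std::domain_error&) {
    return slow_exact(x);
  }
}
```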
We mostly use invalid-argument and general “domain” exceptions. We would have to write a specific exception for that sort of thing (“approx_exception”?).
I tend to think of exceptions as, “The computer has done something very bad and we need to clean up and tell the user.” In your example, I think the code executed correctly but the algorithm was wrong. I’d rather resolve that with
if (approx_epsilon > needed_epsilon) {
  // do other algorithm
}
If all those approx methods failed we would throw an actual domain exception saying “we were not able to estimate this thing”
The failure in the inner function can be a “very bad” thing so then the inner function can throw which means you’re not on the normal execution path to begin with. The bar we set for throwing exceptions (the “very bad” thing) is a little arbitrary. We just need to pick a standard approach and be willing to deal with exceptions when they come up.
I doubt Google’s injunction to be consistent was meant to be interpreted as an argument against using new language features.
Can someone write down a simple example where something’s coded the way we used to do it and the way @rok_cesnovar is suggesting we do it going forward?
The example on cppreference just looks like it’s exchanging syntax
void foo(...) {
  try {
    ...
  } catch (...) {
    ...
  }
}
for a syntax with fewer braces and less nesting.
void foo(...) try {
  ...
} catch (...) {
  ...
}
I don’t see an issue with non-locality. Was there a particular usage you were worried about @syclik? The doc also says this:
The primary purpose of function-try-blocks is to respond to an exception thrown from the member initializer list in a constructor by logging and rethrowing, modifying the exception object and rethrowing, throwing a different exception instead, or terminating the program. They are rarely used with destructors or with regular functions.
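To make that constructor use case concrete, here is a minimal runnable sketch (the class names are invented for illustration): the function-try-block is the only place that can observe an exception thrown from the member initializer list, and the exception is implicitly rethrown after the handler runs.

```cpp
#include <iostream>
#include <stdexcept>

// A member whose construction can throw.
struct Member {
  explicit Member(int n) {
    if (n < 0) {
      throw std::invalid_argument("negative size");
    }
    value = n;
  }
  int value = 0;
};

struct Wrapper {
  // Constructor function-try-block: catches the exception thrown while
  // initializing m, logs it, and implicitly rethrows when the handler
  // falls off the end.
  explicit Wrapper(int n) try : m(n) {
    // constructor body
  } catch (const std::exception& e) {
    std::cerr << "Wrapper init failed: " << e.what() << '\n';
    // reaching the end of the handler rethrows automatically
  }
  Member m;
};
```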
Just to clarify, my suggestion was not about whether to use function-try-blocks or not. I just suggested that if there are any such restrictions, they should be written down in a Wiki or somewhere. I have no strong opinion on function-try-blocks.
I am suggesting this to make the developer’s job easier, so that there are fewer cases where the review comment is “well, actually we discourage this and that, please replace”. Any such list would probably also make the job of a newly appointed reviewer easier. Just a personal opinion, and this is not referring to any specific review, just a general observation.
Same goes for make test-math-dependencies. It’s great that we enforce certain rules (don’t use Eigen:: inside stan/math/*/arr; only include */meta.hpp, not specific metaprogram files (this one is relatively new); …). But writing these down would help both new and existing developers whenever we add something. The rules can be explained in more depth than a one-line error warning on Jenkins allows.
I’d rather there not be restrictions on what pieces of core C++ we can use aside from compiler compatibility issues. If there are such restrictions, I completely agree they should be documented in such a way the list of such constraints can be found by new devs.
I agree that the syntax is cleaner. But… it’s much harder to reason about which of these statements threw what exception and what needs to be cleaned up in terms of resources. For example, if this was left by a different developer and you’re now debugging it, you won’t know which statements throw which exceptions and you’ll have to do enough detective work to figure out how these things are failing… especially if this is happening in 3rd party libraries, which is the use case for OpenCL.
This is just opinion and we haven’t written a policy down, so we can decide what we want to do (even if it’s to explicitly say we’ll do what makes sense for the function), write it down, and put it prominently somewhere for developers.