Namespace stan::math::detail



From @seantalts on a GitHub pull request comment:

The stuff in the detail namespace isn’t part of the Math library’s public API, not even for people who might be writing distributions. There was a discussion at some point that led to us deciding that we can safely hide implementation details in there without full doxygen doc.

In my hazy memory, I think I remember a discussion, but I don’t remember a decision taking place. Whatever we do, can we document it somewhere? Preferably in a wiki:

A few points:

  • I’m not in favor of a complete lack of documentation. It encourages bad behavior (and I’ve been guilty of this too). I’m looking at this in terms of future maintenance. It’s really hard to follow code when there’s no real context. And I’ve seen on this project that future contributors often take functions and apply or adjust them to address yet another use case, which can cause major problems. Having some doc as to purpose helps.
  • I’m in favor of testing these functions, even though there is no expectation that it would be called directly by a client to the math library. Even having a test that demonstrates how to call the function is useful for debugging in the future. If it’s code that’s at the heart of everything, I think that’s the least that we should require. Not having that makes it difficult for anyone else coming in to help debug. This comes from first-hand experience where I’ve spent lots and lots and lots of hours on code (some mine) where there wasn’t even a single test to show how to call a function and most of the time is spent figuring out how to instantiate it correctly. After that, it’s easier.

Anyway, I’m ok with the stan::math::detail namespace being something that we treat as not officially part of the API. We should just put that down somewhere.


I’m not in favor of complete lack of documentation. [for internal functions]

I would phrase it like so: functions in detail have no mandatory doxygen documentation requirements and are not part of the math library API. But code is code, and one should document it as one would normally: preferring to make the code self-explanatory, but giving any additional context as necessary to understand it.

I’m in favor of testing these [internal] functions

I much prefer functional testing at API boundaries: these tests are much less fragile, easier to maintain, and help provide documentation. Testing classes internal to operands_and_partials’ current implementation doesn’t seem to serve an additional purpose, given that we are already testing operands_and_partials at its API boundaries.

Not having that makes it difficult for anyone else coming in to help debug. […] there wasn’t even a single test to show how to call a function

Wherever functions are called serves as their canonical invocation examples, no? I can see why more examples could always be helpful (and serve as the aforementioned “additional purpose” to internal tests), but I think the marginal benefit is slim when weighed against the cost of maintaining this type of test. Maintenance costs here are higher than elsewhere because these implementation details may change at any moment, because they aren’t part of the API we are trying to uphold for users or distribution writers.


Good points. Intellectually, I agree, but pragmatically speaking, I’ve seen where it’s bitten us time and time again. The future maintenance cost when there are no tests is really, really, really bad.

Part of the problem is defining API boundaries, especially when we’re doing things generically. Just looking back at Stan, we had originally templated things only expecting double and var. Then we had to take care of int, and we expanded things to std::vector<var> and Eigen::Matrix<var, R, C>. Then we expanded the API boundaries again to replace var with fvar<double>, fvar<fvar<double>>, and fvar<fvar<var>>. So if we’re really careful in defining the API boundaries and testing them all the way through, then I’m ok. But then again, I’m more comfortable breaking down something that generic and testing the individual pieces to make sure they behave properly.

Only if you’re willing to go through the combinatorics and call it with every possible template instantiation crossed with boundary conditions.

Here’s where I’m torn. I completely agree that there’s a cost to maintaining tests. But I don’t think that burden is as large as either of the alternatives: introducing an error through negligence and having to trace through the code, or having a future contributor change an untested piece of code in a way they believe is safe, with unintended side effects that weren’t considered.

The problem is that code intended only for internal use may be safe only for a certain input space. Not documenting that causes problems in the future. Not documenting it also leads to repeated code: see the meta functions. We didn’t document them, and there are definitely duplicate functions in there.

Like I said, I understand the arguments and I’m happy with a case-by-case evaluation depending on the complexity of the code and test coverage of the API boundary. For the pull request in question, I’ll push more on the API boundary and make sure we have it tested. For something that may affect every Stan program, I tend to unit test every component so that I’ve verified the behavior I expect. I also find it easier to change implementation and design when tests are in place, but I can definitely see how this might discourage more novice C++ programmers (there really aren’t good refactoring tools that I’ve found).

Anyway, back to the original question: should we come up with some policy and put that up somewhere?


I can update this page:

What are some good refactoring tools? I definitely haven’t got any good ones under my belt.

Any function(s) you think are particularly valuable to test that I can add?


Thanks. Feel free to completely rewrite the page.

I haven’t found any for C++. That’s why I find having even basic unit tests super helpful.

I’ll give the pull request another pass.

For a concrete example of what happens when you don’t test:

  • ADVI. The code is essentially frozen without anyone being able to help; it’s too fragile. There aren’t enough lower-level tests to let anyone replace different parts, like the stochastic gradient estimation, the adaptation, or anything else. Maybe it’s a bad example, but then again, it’s an example of my own.
  • bin/print or bin/stansummary. The code is awful and @sakrejda tried to help out. There just weren’t enough tests to make it easy to swap out parts of the implementation in hopes to get the whole implementation fixed. (Sorry, @sakrejda. My bad.)
  • gamma derivatives that @sakrejda just went through. That was tough, and it would have been a lot easier if there were some notes on where the implementation was valid and, more importantly, invalid. It was treated as “internal” code and had no tests.


I don’t like detail as the name and was lobbying for following Eigen in using internal. As we’ve discussed, the nice part about doing so is that it signals what the external part of the API is. The problem is that there’s no enforcement of that. The detail/internal stuff is still public and other functions can call it by prefixing internal:: to the calls, so it doesn’t actually enforce a notion of internal interface.

What I find is that I’m disinclined to break small functionality out into functions because of the need to doc and test them. So I write longer functions. I’m guessing I’m not the only one.

I think the API boundary is defined in the doc. That’s harder the further up the inclusion hierarchy you go so that defining CmdStan’s boundary is really hard. But it should be easy for math.

I agree this should be written up somewhere. Is there a testing policy doc somewhere or do we need to start one? @seantalts — want to take a stab at updating or adding such a page?


Oops, sorry! Can change it to internal. I like that better too. I think my operands and partials pull request is the first thing officially using this, but we can clean up the other uses of detail as we come across them.


Didn’t see that thread first time. The code quality page should be updated.

We don’t enforce the same include-what-you-use in the tests as we do in the main code because we want to test in the context of how a user would use it (still don’t know if this is the right answer).

Can I lobby again for internal rather than detail? In my mind, that’s a much clearer name for something not intended to be part of the external API.

We need to break (5), dependencies, out into a graph and also include prim < {fwd, rev} < mix and scal < arr < mat. Though we were talking about getting rid of arr altogether.


I’ve wasted a lot more time for lack of doc than I have writing doc, so IMHO whatever gets written, even in stan::math::internal, should be either apparent from the code or documented. I don’t mind if the doc is the Nomad manual, the Stan autodiff paper on arXiv, or the NIST math formula website, but it should be somewhere.

As an aside, math functions like the gamma derivatives should never, ever end up in this internal namespace without tests/references. Most of them are only accurate over some specified region, and figuring out what that region is requires fine-scale testing or analysis (and testing). They can fail really badly outside that region (orders of magnitude off). I think the incomplete gamma derivatives I did are fine, but the hypergeometric function stuff I did would benefit from the same treatment (testing over a grid, comparing implementations).



I think self-documenting functions (simple behavior with names that match) remove much of the need for doc. In fact, in a lot of cases, I think doc can hurt because it can get out of sync. I’ve wasted a lot of time being misled by doc in the past, so I barely even consult it even when it’s there, because of a fundamental lack of trust.

What should be documented is intent. If you’re doing something that’s odd for the language or that you got from a paper somewhere, doc that. Same if you have reduced some common formula to a more efficient form for computation.