Math library input checks


By other here do you mean ones that are not at the top normal_id_glm level?

I know I am a monorepo lover and hater of submodules, but I might like to see this in a separate repo that includes Stan Math as a submodule, just so it’s easy to run the same performance test on multiple versions of the Stan Math code. When I was doing some of the other performance tests I originally had them in the same repo, but it became too difficult to manage testing different commits if the test itself could change between commits. I might add a Google Benchmark-ified version too, if that’s okay? I might also try the Chandler tricks above instead of the volatile and see if that makes a difference.


I mean those at the top of normal_id_glm()… I just commented out the checks.

That completely makes sense. Or… having a script accept a hash?

Add to the stan-dev repo? I’d be happy with it, but we’d want to check with the TWG first. An alternative place to put it would be under the org.


Cool idea. That’s only order N and the NaN and infinite values are guaranteed to propagate.

Another idea would be to get rid of the checks on the data matrix. There’s nothing wrong in theory about letting NaN propagate. It’s what addition and multiplication do.

I’ve found the same thing whenever I used a profiler. Otherwise, I found that microbenchmarks which make sure the compiler doesn’t throw away the results were reliable.

I really liked Agner Fog’s tutorials on C++ optimization. But they’re not about showing you how to run a profiler.


A different idea came up on the PR that should be repeated here.

We can change the preconditions for our functions in the Math library. Right now, we assume that anything of the valid C++ type can be passed into each of our functions. We can change it so that the functions assume their inputs have already been validated.

In order to pull this off, I think we’d need to do a few things:

  1. Increment the Math library version by a major number
  2. For each function we do this with, create a separate check_* function. For example, normal_lpdf(double, double, double) should have a matching check_normal_lpdf(double, double, double) function. (naming subject to change)
  3. The Stan generator can then generate two function calls instead of one to maintain the existing behavior. Having the two functions makes it really easy to just reuse the same arguments.

If we did this, we could really start working on how to avoid generating checks and it’ll be decoupled from the function call itself. One of the difficulties will be how we handle expressions that are passed into the functions now. I’d think we’d want to store the expressions as temporaries before passing it into both functions, but I’m sure that can be worked out.



I had a different implementation in mind when I proposed separating the preconditions out - I think we’d want to declaratively (rather than imperatively with a check_* function) state what the conditions are for each argument - something like this:

typedef void (*check_fn_ptr)(double *, size_t);
std::map<std::string, std::vector<check_fn_ptr>> function_preconditions;

So that’s a map from function name to a vector of check fns where the index lines up with the arguments to the function in question. This is assuming all check functions could be made to have the signature above, but you get the idea. Then stanc uses this function_preconditions map to generate the appropriate checks at the appropriate times.

If we wanted to provide support to Stan Math library users who aren’t using it through stanc, we can provide a templated function that calls the appropriate checks (and possibly even a version that automatically calls the checks and the actual function together).


That’ll work!


Map is super slow to use in a low-level algorithm like this.

I don’t see what signature the check functions could have—they’d have to take in all the arguments—it’d have to be something like parameter packs or tuples.

We could do what @syclik is suggesting without incrementing a major version number by refactoring as:

  1. T0 foo(T1 x, T2 y) { foo_check(x, y); return foo_checkless(x, y); }

  2. inline void foo_check(T1 x, T2 y);

  3. inline T0 foo_checkless(T1 x, T2 y);

Then we could expose both foo and foo_checkless. This is the usual way to factor a function so that it can be used externally through foo and internally through foo_checkless.

That way we don’t have to cobble together hacks inside the generator. It’s so much easier to develop and test in the math library than in generated code.

My fear, of course, is that users will start preferring the checkless versions of functions.


Maybe we should talk about this on a hangout? I think there’s a misunderstanding about what I wrote above… Not sure where to start but here are some things:

  1. Each check function currently applies to one data structure and would continue to do so. The check functions would likely not change. We would arrange to call them on the appropriate arguments at the compiler level, as that is the only level that has all of the global precondition information for a Stan program. For example, if we run a bunch of functions on X that all require X to be finite, the compiler can generate check_finite(X); in the ctor of the model and never again.
  2. Confused about the hack characterization - I think we want to put much more of this kind of thing in the compiler as that’s the level at which it has global Stan information required for these optimizations (though we can keep track of some of them at runtime at additional cost). I was thinking this would wait for the rewrite, though. What’s the concern around compiler optimizations?
  3. The fact that Maps are slow seems irrelevant, as 1) it’s used just briefly by stanc and 2) we could clearly use some other type like std::vector if we realized that performance was important and we didn’t want to pay for fast associative lookups.
  4. I wouldn’t think we’d expose checkless versions in the Stan language, and Math library direct users are pretty much on their own if they did such a thing, right?


If the map you’re proposing is in stanc, then I definitely missed the point. I thought it was dynamically figuring out when to check.

I don’t see how to do this in a clean way other than by linking it to data or transformed data blocks in the language—hence the hack characterization.

So I’m probably missing something and we should talk in person now that I’m back.


The data structure that associates checks with arguments to functions exists somewhere statically, probably in the Math library. The dynamic thing is something we could really only do with an interpreter, I think.

I have become somewhat convinced that the best way to do this is in the new stanc, along with a bunch of other stuff. I’m hoping to have a roadmap, skeleton structure, and project announcement detailing why a new stanc is important and asking for help in the next few weeks, if I can cut back on some of these meetings!