Hello. I am working on adding a new function to the stan-math library and had a question regarding usage of existing templated Stan functions.

If we are supplying gradients/Hessians for our function, is it better to “unroll” the Stan function when writing them out?

For example, assume our Stan function is created (in the functions block) as:

row_vector get_holiday_lift(
    vector h_skew,
    vector h_shape,
    vector h_scale,
    vector h_loc,
    vector intensity,
    data matrix d_peak,
    data matrix hol_mask
) {
  int num_holidays = dims(d_peak)[1];
  int num_dates = dims(d_peak)[2];
  row_vector[num_dates] z;
  row_vector[num_dates] tdd = zeros_row_vector(num_dates);
  for (h in 1:num_holidays) {
    z = (d_peak[h, :] - h_loc[h]) ./ h_scale[h];
    tdd += (2.0 * intensity[h] * exp(-abs(z) .^ h_shape[h])
            .* inv_logit(h_skew[h] * z))
           .* hol_mask[h, :];
  }
  return tdd;
}

Then when implementing the function (and its Hessians) in stan/math/prim/fun, I could use the inv_logit that is supplied with Stan, or I could “unroll” it as it is defined, e.g. use the 1 / (1 + exp(-x)) formulation.

I guess my question is: if supplying my own derivatives, is it most proficient to get down to std functions?

Also, I still see a LOT of documentation regarding “Adding a new function to the stan-math library”. Is there a “blessed” set of docs to go by?

Sorry this got missed. We’re used to discussing developer issues more through GitHub issues than on the forums, despite suggesting that people comment on the forums.

Sorry this is such a mess. To be honest, I’ve given up on writing new Stan functions for just this reason.

@stevebronder wrote out new doc according to how we do things now (which has been evolving for the last 10 years), but I don’t think it’s merged yet. If that gets merged, it’ll be the gold standard doc. The best advice I can give you at this point is find a function that’s like the one you are going to write and copy its style. Maybe @stevebronder or @rok_cesnovar have better suggestions.

To answer your more concrete questions.

Delegate to our existing functions wherever possible, as we’ve tried to optimize them for efficiency and arithmetic stability. You definitely don’t want to implement inv_logit the way you suggested. Check out our library implementation, which branches on either side of zero and uses a Taylor approximation for very negative inputs so that 1 + exp(a) doesn’t round to 1.
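To make the branching concrete, here’s a minimal sketch of that idea (this is illustrative, not the exact stan-math source; `stable_inv_logit` is a made-up name):

```cpp
#include <cmath>
#include <limits>

// Sketch of a numerically stable inv_logit that branches on the sign
// of the input. For negative a we never compute exp(-a), which would
// overflow; for very negative a, 1 + exp(a) rounds to 1 in double
// precision, so we return the first-order Taylor approximation exp(a).
double stable_inv_logit(double a) {
  if (a < 0) {
    double exp_a = std::exp(a);
    if (a < std::log(std::numeric_limits<double>::epsilon())) {
      return exp_a;  // Taylor: inv_logit(a) ~ exp(a) for a << 0
    }
    return exp_a / (1.0 + exp_a);
  }
  return 1.0 / (1.0 + std::exp(-a));  // safe: exp(-a) <= 1 here
}
```

Compare with the naive 1 / (1 + exp(-x)): at x = -700, exp(700) overflows to infinity and the naive form returns exactly 0, while the branched version still returns a tiny but nonzero probability.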

If you template the primitive implementation, it can do double duty for forward-mode autodiff. Or you can write custom forward-mode, but either way, you don’t need to implement Hessians manually. We compute second-order derivatives, including Hessian matrices, by nesting reverse mode in forward mode.
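To show what “double duty” means, here’s a toy example: `Dual` below is a hand-rolled stand-in for a forward-mode scalar (stan-math’s `fvar` is far more general), and `my_fun` is a hypothetical primitive templated on its scalar type:

```cpp
#include <cmath>

// Minimal forward-mode dual number: carries a value and a tangent.
// Illustrative only; stan-math provides stan::math::fvar<T> for this.
struct Dual {
  double val;  // function value
  double d;    // derivative (tangent)
};

Dual operator-(Dual a) { return {-a.val, -a.d}; }
Dual operator*(Dual a, Dual b) {
  return {a.val * b.val, a.d * b.val + a.val * b.d};  // product rule
}
Dual exp(Dual a) {
  double e = std::exp(a.val);
  return {e, a.d * e};  // chain rule through exp
}

// One templated primitive works for plain double AND autodiff scalars:
// ADL picks up the Dual overloads of exp and the operators when T = Dual.
template <typename T>
T my_fun(const T& x) {
  using std::exp;
  return x * exp(-x);
}
```

Calling `my_fun(1.0)` evaluates the function; calling `my_fun(Dual{1.0, 1.0})` gets the value and the derivative d/dx [x e^{-x}] = (1 - x) e^{-x} in a single pass. Nesting one autodiff type inside another is how second-order derivatives fall out without hand-coded Hessians.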

Assuming you mean “efficient”, then no. It’s most efficient to reduce things to lazy Eigen expression templates. After that, call our built-ins rather than writing your own, unless you can make the combination more stable with algebraic reductions.

Follow-up question about adding to stan-math: if I call my function as external C++, will I get all of the benefits of autodiff that you are alluding to? To restate: if I have an external function that I want to speed up and I write it in templated C++, is adding it via “external C++” roughly the same thing (in terms of efficiency) as adding it to stan-math?