Monthly Math development meeting: 8/15/2019, 10am

Hey all,

The next Math meeting is today, Thursday, Aug 15, 10 am Eastern time. This will last no more than one hour; if we need more time, we will schedule additional time.

This meeting will be open to all those that currently contribute to the Math library or that would like to contribute to the Math library. If you’d like an invite to the Google Calendar event, please DM me with your Google handle. If you’d like to just have a copy of the event on your calendar, please use this link:

Reminder:

  • this is a chance to collaborate and coordinate efforts. We’re expecting everyone to be kind and courteous. And patient. We have a team with diverse backgrounds.
  • impactful decisions will not be made at these meetings. That will continue to be done online on these forums. The meetings are a good time to discuss and get on the same page, but the decision making will be done online where we can include the opinions of others that can’t make the meeting.
  • important discussions that happen at the meeting will be communicated back online on these forums. If they don’t make it back here, it wasn’t important. No one will be penalized for not being present at a meeting.

If there’s anything you know you want to talk about, please post here. This is a good time to request help from others if there’s expertise that you need.


The meeting agenda is still very fluid. I believe we want to talk about Eigen and expanding the function signatures. If there’s anything else, please post. See you all soon.


Minutes

August 15, 2019. 10 am
Posted: August 24, 2019.

Attendees:
@rok_cesnovar @anon79882417 @stevebronder @wds15 @syclik

We discussed some technical things in this meeting:

  • Generalizing the Eigen types in functions.

    We discussed the idea and agreed it would be worth doing if we can get the implementation to work. Steve will continue to work on it; see the sketch after this list.

  • lgamma implementation.

    There are many paths forward. Boost has fixed their implementation on a branch, but it’s unclear when the fix will make it into a Boost release. One option is to include the updated headers in our copy of the Boost distribution and patch it. RStan has done this in the past for single header files, so it’s still reasonable to do. Actions:

    • OK to patch Boost if it’s easy to do
    • Will add tests for digamma to make sure the problem isn’t there.
  • Static initialization problem

    Sebastian will fix it.

  • Rok will write to @andrjohns about the fused multiply-add (fma) function

    We discussed the Discourse thread briefly and didn’t have a good reason for keeping a separate implementation.

  • Preprocessor flag cleanup.

    We can change these at will as long as everything still works.

  • Intel TBB.

    We are waiting for word from the lawyer.
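For those not at the meeting, the “generalizing the Eigen types” item above is about widening function signatures so they accept arbitrary Eigen expressions rather than only concrete matrix types. A minimal sketch of the idea, with made-up function names (this is not the actual Stan Math implementation):

```cpp
#include <Eigen/Dense>

// Concrete signature: expression arguments (blocks, products, ...) are
// first materialized into a temporary MatrixXd at the call boundary.
inline double sum_sq_concrete(const Eigen::MatrixXd& m) {
  return m.squaredNorm();
}

// Generalized signature: accepts any dense Eigen expression directly,
// so no temporary matrix has to be created for the argument.
template <typename Derived>
inline double sum_sq_general(const Eigen::MatrixBase<Derived>& m) {
  return m.squaredNorm();
}

int main() {
  Eigen::MatrixXd a = Eigen::MatrixXd::Random(4, 4);
  sum_sq_concrete(a);  // fine: already a MatrixXd
  sum_sq_general(a);   // fine: a plain Matrix is also an expression
  // Works with an expression without copying it into a MatrixXd first.
  sum_sq_general(a.topRows(2) * a.transpose().leftCols(2));
  return 0;
}
```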


Besides the Eigen auto stuff, I propose we discuss:

In that order of priority.


Direct link: https://meet.google.com/hhm-gnpt-jnp

I’ll post notes here later.

I had to leave after 10:45.

I was unable to improvise a decision on the lgamma vote because improvisation requires a lot of background knowledge and practice. Ultimately, we’re trusting hard workers to make good decisions, but I wanted to say that any 20% performance decrease initially sounds bad to me. Was this 20% in model gradient evaluation, or what exactly?

That said, sampler run time is dominated by MCMC, and the evaluation cost of a single deterministic function may be negligible compared to the overall run time for a well-specified model, so I think we’re OK. Since there’s no numerical side effect, there will be no effect on the sampling run time, is that correct?

I can’t speak on the extra if-defs/directives.

As far as documentation goes, perhaps Doxygen plus in-line comments on a test case would make it even more obvious. Just a dummy inline evaluation to check Boost’s precision against std’s.
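Something along these lines, for instance (a throwaway sketch; the inputs are placeholders rather than the actual problematic arguments from the lgamma thread):

```cpp
#include <boost/math/special_functions/gamma.hpp>
#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
  // Placeholder inputs; a real test would use the arguments where the
  // two implementations are known to disagree.
  const double xs[] = {0.5, 1.0 + 1e-12, 2.5, 1.0e5};
  std::cout << std::setprecision(17);
  for (double x : xs) {
    double b = boost::math::lgamma(x);
    double s = std::lgamma(x);
    // Print both so any precision difference is visible at a glance.
    std::cout << "x = " << x << "  boost = " << b << "  std = " << s
              << "  diff = " << (b - s) << "\n";
  }
  return 0;
}
```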

If you profile the C++ code, almost all the time for running NUTS is devoted to computing log densities and their gradients.

It won’t change behavior in terms of tree depth, number of log density and gradient evals, etc. It also won’t change the time spent in the sampling parts of the sampler, but those are negligible anyway. It will increase the time to fit a model if that function is a large component of the cost.

Sorry if that was all totally obvious.

I’m going to be adding a bunch of tests like this as part of the AD autotesting framework. Currently we have no tests for most of the base C/C++ library functions at the double and int level.
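As a rough illustration of what a double/int-level test could look like with GoogleTest (the include, test names, reference values, and tolerance here are placeholders, not the actual framework code):

```cpp
#include <gtest/gtest.h>
#include <stan/math.hpp>  // broad include used for illustration only
#include <cmath>

// Double-level test: exercises the plain double signature directly,
// independent of any autodiff types. std::lgamma is used as the
// reference value purely for illustration.
TEST(primScalFun, lgammaDouble) {
  EXPECT_NEAR(std::lgamma(2.5), stan::math::lgamma(2.5), 1e-8);
}

// Int-level test: checks that integer arguments are handled/promoted.
TEST(primScalFun, lgammaInt) {
  EXPECT_NEAR(std::lgamma(3.0), stan::math::lgamma(3), 1e-8);
}
```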

I just posted the minutes. Please see the top post.

I think you meant write to Andrew about the fma() we discussed → FMA function: hand-coded vs std::fma
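For anyone who hasn’t read that thread: the question is whether a hand-coded `x * y + z` buys us anything over `std::fma`, which fuses the multiply and add with a single rounding. A quick throwaway illustration of the difference (the inputs are made up to expose the extra rounding step):

```cpp
#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
  // The product x * y lands very close to -z, so rounding the product
  // before the addition loses most of the significant digits.
  double x = 1.0 + 1e-8;
  double y = 1.0 - 1e-8;
  double z = -1.0;

  double hand_coded = x * y + z;     // product rounded, then added
  double fused = std::fma(x, y, z);  // one rounding for the whole expression

  std::cout << std::setprecision(17)
            << "hand-coded: " << hand_coded << "\n"
            << "std::fma:   " << fused << "\n";
  return 0;
}
```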


Thanks! Updated.