We have our monthly Math meetings on the third Thursday of the month. The next Math meeting is Thursday, Jan 16, 2020, 10 am Eastern time. This will last no more than one hour; if we need more time, we will schedule additional time.
This meeting is open to everyone who currently contributes to the Math library or who would like to contribute to it. If you’d like an invite to the Google Calendar event, please DM me with your Google handle. If you’d just like a copy of the event on your calendar, please use this link:
Direct Google Meet link: https://meet.google.com/hhm-gnpt-jnp
- this is a chance to collaborate and coordinate efforts. We expect everyone to be kind, courteous, and patient; we have a team with diverse backgrounds.
- impactful decisions will not be made at these meetings; that will continue to happen online on these forums. The meetings are a good time to discuss and get on the same page, but decision making will be done online, where we can include the opinions of those who can’t make the meeting.
- important discussions that happen at the meeting will be communicated back online on these forums. If they don’t make it back here, it wasn’t important. No one will be penalized for not being present at a meeting.
If there’s anything you know you want to talk about, please post here. This is a good time to request help from others if there’s expertise that you need.
- Anything else needed for the next release?
- Schedule: is this a good time for those who would like to be involved?
I took some quick notes of our open discussion. Please feel free to correct, clarify, or add more information.
- Steve: helping with parallel autodiff, working with Ben Bales and Sebastian. Looks nice. Parallelization of the forward and reverse sweeps.
- Sebastian: parallelization: the problem is solved. The radical part: we can do what we want without refactoring autodiff by restricting the parallel function to a reduce, exactly like how map_rect works. The gradients are chunked: each new thread gets deep copies of the vars on its thread-local storage, so it is separate from the main autodiff tape; it runs the gradient on that thread-local stack and harvests all the gradients, and a precomputed gradient is then created on the main stack. One constraint: before doing anything with vars on a thread, you must call start nested, and before stopping, recover memory (nested). Cool. Crazy speedups and scaling. The user will be allowed a big container as the first argument (a std::vector of whatever, sliced into whatever chunks) plus as many shared arguments as they want. The function will be passed as a function object as a type, which prohibits internal state. Sebastian says “Did I say I am happy about it?”
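The chunked-reduce shape described above can be sketched without any Stan Math machinery. This is a hedged illustration only: it slices the first argument into chunks, reduces each chunk on its own thread with no shared mutable state (where Stan Math would run nested autodiff on deep-copied vars), and combines the partial results on the main thread (where Stan Math would build the precomputed gradient). The function name `parallel_reduce_sum` and the plain-`double` payload are illustrative assumptions, not the actual API.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Hypothetical sketch of the chunked parallel-reduce pattern.
// Each thread touches only its own slice and its own partial slot,
// mirroring the thread-local autodiff stacks described in the notes.
double parallel_reduce_sum(const std::vector<double>& sliced,
                           std::size_t n_chunks) {
  std::vector<double> partial(n_chunks, 0.0);
  std::vector<std::thread> workers;
  const std::size_t chunk = (sliced.size() + n_chunks - 1) / n_chunks;
  for (std::size_t i = 0; i < n_chunks; ++i) {
    workers.emplace_back([&, i] {
      const std::size_t begin = i * chunk;
      const std::size_t end = std::min(begin + chunk, sliced.size());
      for (std::size_t j = begin; j < end; ++j)
        partial[i] += sliced[j];  // per-chunk work; no shared mutable state
    });
  }
  for (auto& w : workers) w.join();
  // Combine the per-thread partials on the main thread.
  return std::accumulate(partial.begin(), partial.end(), 0.0);
}
```

In the real design, the per-chunk body would be the user's function object, and "combine on the main thread" would mean registering one precomputed-gradient node on the main autodiff tape.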
- Martin: needs review on PRs. 1495 is an open issue; no PR yet.
- Daniel will make time to review all open PRs that pass tests.
- Marco: looking into the slowdown of normal_id_glm. Tracking it down required bisecting across 3 repos. Thinks it’s narrowed down to somewhere after 3.0.
- Daniel: status. Working on old issues.
- Sebastian: AMICI. Shared some information about its ODE solver. Sebastian will be at a hackathon sponsored by AMICI. Symbolic analysis of the RHS.
- Martin: discussion about testing and the limitations in our current codebase. If we tighten tolerances, tests start to fail.
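To make the tolerance point concrete, here is a hedged sketch of a relative-tolerance comparison (the helper `near_rel` is illustrative, not the actual test-framework function): a computed value that differs from the reference by a small fixed error passes at a loose tolerance but fails as soon as the tolerance is tightened below that error.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative relative comparison with an absolute floor near zero.
// This is a sketch of the idea, not the codebase's actual helper.
bool near_rel(double a, double b, double tol) {
  const double scale = std::max(std::fabs(a), std::fabs(b));
  return std::fabs(a - b) <= tol * std::max(scale, 1.0);
}
```

For example, a result carrying a 1e-10 error relative to the reference passes at tol = 1e-8 but fails at tol = 1e-12, which is the failure mode seen when tolerances are tightened.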