We have our monthly Math meeting on the third Thursday of each month. The next Math meeting is Thursday, Dec 19, 2019, at 10 am Eastern time. It will last no more than one hour; if we need more time, we will schedule a follow-up.
This meeting is open to everyone who currently contributes to the Math library or would like to contribute to it. If you’d like an invite to the Google Calendar event, please DM me with your Google handle. If you’d just like a copy of the event on your calendar, please use this link:
Direct Google Meet link: https://meet.google.com/hhm-gnpt-jnp
- This is a chance to collaborate and coordinate efforts. We expect everyone to be kind, courteous, and patient; we have a team with diverse backgrounds.
- Impactful decisions will not be made at these meetings; that will continue to happen online on these forums. The meetings are a good time to discuss and get on the same page, but decision making will be done online, where we can include the opinions of those who can’t make the meeting.
- Important discussions that happen at the meeting will be communicated back online on these forums. If something doesn’t make it back here, it wasn’t important. No one will be penalized for missing a meeting.
If there’s anything you know you want to talk about, please post here. This is a good time to request help from others if there’s expertise that you need.
This was a short meeting. We discussed:
Upgrading Boost to 1.72. We talked about the current status; it’s waiting on one test to be fixed.
MPI on Windows. @wds15 believes he knows how to get it working.
- Shared libraries don’t work on Windows the same way they do on Linux and Mac.
- @rok_cesnovar will help with the builds.
We discussed the parallel autodiff discourse thread.
- We went over the current status of the two proposals that are out there now, the differences between them, and what each accomplishes well.
- Next steps are to summarize the thread and submit it as a design document for discussion.
Flattening is still going on. @rok_cesnovar is working on it.
@rok_cesnovar is working on a way to switch to the fastest device available when using OpenCL.