Monthly Math development meeting: 01/16/2020, 10 am EST

Hey all,

We have our monthly Math meetings on the third Thursday of the month. The next Math meeting is Thursday, Jan 16, 2020, 10 am Eastern time. This will last no more than one hour; if we need more time, we will schedule additional time.

This meeting is open to anyone who currently contributes to the Math library or would like to contribute to it. If you’d like an invite to the Google Calendar event, please DM me with your Google handle. If you’d just like a copy of the event on your calendar, please use this link:

Direct Google Meet link:


  • this is a chance to collaborate and coordinate efforts. We’re expecting everyone to be kind and courteous. And patient. We have a team with diverse backgrounds.
  • impactful decisions will not be made at these meetings. That will continue to be done online on these forums. The meetings are a good time to discuss and get on the same page, but the decision making will be done online where we can include the opinions of others that can’t make the meeting.
  • important discussions that happen at the meeting will be communicated back online on these forums. If they don’t make it back here, it wasn’t important. No one will be penalized for not being present at a meeting.

If there’s anything you know you want to talk about, please post here. This is a good time to request help from others if there’s expertise that you need.

cc (sorry, I’m sure I’ll forget people): @rok_cesnovar, @wds15, @seantalts, @stevebronder, @mcol, @tadej, @yizhang, @Bob_Carpenter, @martinmodrak

Agenda items:

  1. Anything else needed for the next release?
  2. Schedule: is this a good time for those that would like to be involved?



@stevebronder, @wds15, @martinmodrak, @mcol, @syclik


I took some quick notes of our open discussion. Please feel free to correct, clarify, or add more information.

  1. Steve: Helping with parallel autodiff. Working with Ben Bales and Sebastian. Looks nice. Parallelization of forward and reverse sweep.
  2. Sebastian: parallelization: problem is solved. The radical part: we get what we want without refactoring autodiff, because the parallel function is restricted to a reduce, exactly like how map_rect works. Chunk the gradients. Deep copies of the vars go onto new thread-local storage, so it’s separate from the main autodiff tape. Run the grad on the thread-local stack, harvest all the gradients, and create a precomputed gradient on the main stack. One constraint: before doing anything with vars on a thread, you must call start_nested; before stopping, recover_memory_nested. Cool. Crazy speedups and scaling. The user will be allowed a big container as the first argument (a std::vector of whatever, sliced into chunks) plus as many shared arguments as you want. The function is passed as a function object via its type, which prohibits internal state. Sebastian says “Did I say I am happy about it?”
  3. Martin: needs reviews on PRs. 1495 is an open issue. No PR yet.
  4. Daniel will make time to review all open PRs that pass tests.
  5. Marco: looking into the slowdown of normal_id_glm. Tracking it down required bisecting 3 repos. Thinks it’s narrowed down to somewhere after 3.0.
  6. Daniel: status. Working on old issues.
  7. Sebastian: AMICI. Some information about ODE solver. Sebastian will be at a hackathon sponsored by AMICI. Symbolic analysis of RHS.
  8. Martin: discussion about testing and the limitations in our current codebase. If we tighten tolerances, tests start to fail.

Ad 1. - I would really like a fix for neg_binomial_2_log returning crazy values when phi is large, which IMHO can negatively affect actual inferences in Stan (because that’s how I discovered it). But this hinges on getting neg_binomial_2 right, and that is currently 3 separate PRs that need review and possibly other adjustments (lbeta, binomial_coefficient_log, and neg_binomial_2 itself). Not sure if it is useful to make a quick fix or wait until the other PRs make it through.

Some ideas for the agenda that are relevant to me; not sure how much space they deserve (could be 0):

The time is OK for me, will join.


I will try to make it and can give an update on the parallel reduce_sum feature which is moving forward quite quickly right now.

This clashes with another meeting starting at the same time. In case I finish earlier than expected I’ll try to join.

needs to be solved as well

The time is fine for me in general, but I’m not sure I will make it today. Also, to follow up on the previous meeting: I have all the info regarding MPI on Windows but haven’t had the time to try it out yet.

Somehow I can’t join: no one is admitting me to the meeting, and then I’m asked for a code which I do not know. Not sure what’s going on.

Same here.

oh. One moment.

@martinmodrak I can review those pulls if it is decided those are things that should be finished for this release. I’m not at the meeting so let me know.

@bbbales2 Thanks for the offer!

We agreed with @syclik that I will make a PR fixing the most notable issues around the neg. binomial distribution with minimal intervention. The rest is not urgent (i.e. unlikely to noticeably influence most models) and could benefit from better scrutiny.

I will however be glad if you double-check my math and tests - I already thought several times that I had it all done, only to uncover further issues, so chances are some stuff is broken.

I posted a few times already, but for the record, here is a link to AMICI:

a scalable ODE solving framework (adjoint method, events, sparse solvers, root solving, steady state calculations, EDIT: also analytical gradients for the ODE RHS, and 2nd order sensitivities).

AMICI is for SBML though. It’s cool that they can do analytic stuff, but that doesn’t really help us much because we write our models in Stan.

That’s correct. You need to write the ODE model in SBML, but the parameter inference can be done in Stan. The trick will be to interface the two packages.

So AMICI should be able to give us a C++ function to which you pass data and parameters, and it will spit out the log-likelihood and the gradients w.r.t. the parameters. At least this is how I envision it.

I’d prefer to continue pushing for analytic gradients. Or maybe Stan autodiff written in Stan is a better way to say that.

The list of features in AMICI is, from my perspective, way too long to ignore. There’s a massive amount of work in that tool, I think, and we should take advantage of it.

The folks who work on AMICI fit systems biology network models. These are huge.

analytic gradients in Stan - of course, I take it.

I’m doubtful of an SBML -> Stan pipeline.

It’s my impression SBML is a gigantic spec with all sorts of weird stuff in it. There’s a little note at the bottom of the AMICI page that says:

“Python-AMICI currently passes 500 out of the 1780 (~28%) test cases”

So hypothetically Stan could support 28% of SBML?

I’m not convinced SBML >= Stan for expressing systems biology models either.

But yeah, the stuff the AMICI people do, we should do those things too as much as possible (some of that will require language things).

I have never worked with SBML myself… but that beast has been around for quite some time and is well established. So there are lots of tools and models out there which are defined in SBML.

I think it’s worth a try.

The things we need in Stan amount to about the entire list of features I wrote above - and that’s a ton.

(but for my problems, I have to say that very likely the reduce_sum thing will just be a kill-it-all-with-hardware hammer - so I am quite happy)

I just posted the minutes of our meeting in the top-level post. Thanks to all that attended!