Article on relative research impact of Bayesian modeling (Stan/PyMC3) vs deep learning (PyTorch, TensorFlow, Keras)

Bayesian grant writers,
I did a citation-count-based impact assessment of deep learning vs Bayesian modeling, hopefully to put an end to reviewers assuming that deep learning is doing all the worthwhile science and research. This has been a problem with grant reviews in the past.

It is peer-reviewed, reproducible, and designed to establish the impact of Bayesian software via an authoritative reference (citations are facts, right?). Sarcasm aside, I think being able to cite research impact in a proposal, rather than argue for it, makes a stronger case and saves space. I have been spending a page or two defending against deep learning lately, so perhaps it will strengthen your proposals too.

The artless title is intended to err on the side of being stunningly obvious about the information contained within.

Thanks to all who helped out with its creation.

The citation is:

Baldwin, Breck. "Deep Learning does not Replace Bayesian Modeling: Comparing research use via citation counting." Applied AI Letters, Wiley Online Library.