Addressing Stan speed claims in general

From the course page:

Despite the promise of big data, inferences are often limited not by the size of data but rather by its systematic structure. Only by carefully modeling this structure can we take full advantage of the data – big data must be complemented with big models and the algorithms that can fit them. Stan is a platform for facilitating this modeling, providing an expressive modeling language for specifying bespoke models and implementing state-of-the-art algorithms to draw subsequent Bayesian inferences.

Is the implication that Stan-based approaches should be used to fit big data problems? In my experience with Stan, data size is perhaps the single biggest contributor to performance degradation, and the relationship often appears exponential rather than linear.
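
For anyone wanting to check this claim on their own models, here is a minimal sketch of one way to measure the scaling empirically: fit the same model at increasing data sizes and record wall-clock sampling time. It assumes CmdStanPy and a working CmdStan toolchain are installed; the model, file name, and data sizes are purely illustrative, not taken from the course.

```python
# Minimal sketch (assumes CmdStanPy + CmdStan are installed):
# time Stan's sampling as the data size N grows, to see how
# runtime actually scales for a given model.
import time
import numpy as np
from cmdstanpy import CmdStanModel

# Illustrative model: simple normal likelihood, vectorized.
STAN_CODE = """
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);
  sigma ~ normal(0, 5);
  y ~ normal(mu, sigma);  // vectorized likelihood
}
"""

with open("scaling_demo.stan", "w") as f:
    f.write(STAN_CODE)

model = CmdStanModel(stan_file="scaling_demo.stan")  # compiles the model

rng = np.random.default_rng(1)
for n in (1_000, 10_000, 100_000, 1_000_000):
    data = {"N": n, "y": rng.normal(0.0, 1.0, size=n).tolist()}
    start = time.perf_counter()
    model.sample(data=data, chains=1, iter_warmup=500,
                 iter_sampling=500, show_progress=False)
    print(f"N = {n:>9}: {time.perf_counter() - start:.1f} s")
```

For a fixed-dimension model like this one, each gradient evaluation costs roughly O(N), so any apparent super-linear blow-up in a real model is worth attributing to something specific (e.g. parameter count growing with N, or changing posterior geometry) rather than to data size alone.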
