Hi all,

The new GLM primitives have been merged into the math library. To give you an idea of their performance, I’ve done some testing on my laptop (4GB memory, 2 cores). These graphs show the speedup in the mean time of 30 gradient computations (in C++). In all cases, the new GLM primitives are compared to the current fastest way of writing a GLM using existing primitives. All inputs are given as parameters, except for the matrix of covariates, which is specified as data. (The primitives allow the covariate matrix to be a parameter as well, in which case the performance increase should be much larger still.)
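For concreteness, here is a sketch of the two formulations being compared, using linear regression as an example (assuming the `normal_id_glm` primitive; names and signatures may differ slightly depending on your version):

```stan
data {
  int<lower=0> N;              // number of observations
  int<lower=0> K;              // number of covariates
  matrix[N, K] x;              // covariate matrix (data in these benchmarks)
  vector[N] y;
}
parameters {
  real alpha;                  // intercept
  vector[K] beta;              // regression coefficients
  real<lower=0> sigma;
}
model {
  // Existing primitives: the fastest formulation without a GLM primitive.
  // y ~ normal(x * beta + alpha, sigma);

  // New GLM primitive: fuses the linear predictor into the density,
  // so the gradient can be computed analytically in one pass.
  y ~ normal_id_glm(x, alpha, beta, sigma);
}
```

The analogous comparison applies to the other GLM densities (e.g. logistic or Poisson regression).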

Observe that the speedup does not depend strongly on the size of the data set, but grows rapidly for models with more parameters. (At the high end of the parameter sizes I tested, my laptop started to run out of memory.)