If I’m doing a regression with a lognormal likelihood, would it be more efficient to do a change of variables and use
normal_id_glm_lpdf(log(y) | ...) (adapting the Jacobian), or should I use lognormal_lpdf(y | X * beta, sigma)?
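For concreteness, here is a minimal sketch of the two parameterizations side by side, assuming an intercept alpha and design matrix X (names are illustrative). Since log(y) is a transform of data, not parameters, the Jacobian term is a constant and may be dropped for MCMC:

```stan
data {
  int<lower=0> N;
  int<lower=0> K;
  matrix[N, K] X;
  vector<lower=0>[N] y;
}
transformed data {
  vector[N] log_y = log(y);  // precompute once; y is data
}
parameters {
  real alpha;
  vector[K] beta;
  real<lower=0> sigma;
}
model {
  // Option 1: lognormal likelihood directly
  // target += lognormal_lpdf(y | alpha + X * beta, sigma);

  // Option 2: normal GLM on log(y). The Jacobian adjustment,
  // -sum(log_y), is constant in the parameters, so it can be
  // omitted for sampling; include it if you need lp__ values
  // comparable across the two parameterizations.
  target += normal_id_glm_lpdf(log_y | X, alpha, beta, sigma);
  // target += -sum(log_y);  // constant Jacobian adjustment
}
```

With the constant dropped, the two model blocks define the same posterior over (alpha, beta, sigma), so the choice is purely one of speed.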
I think normal_id_glm_lpdf(log(y) | ...) will be faster. However, benchmarking your model is the only way to know for sure.
The main advantage of the glm functions lies in their use of analytically simplified gradients.
That is the idea. However, right now the glm functions are also better optimized. I am working on other distributions at the moment. Some may make it into the next release (in ~2 weeks), but most of them will only be ready for the following one.
Maybe the two of them have a better “rule of thumb” for when it makes sense to implement another GLM.
Any model will be faster when implemented in C++. The question is just whether the difference is huge, negligible, or something in between. So far, the glm functions have been made for combinations of distribution and link function that someone thought would be interesting to a wider audience.