Conway-Maxwell-Poisson Distribution



There has been some talk about dealing with either overdispersed or underdispersed count data. I have implemented the Conway-Maxwell generalization of the Poisson PMF, which is attached. It uses a brute-force but numerically stable approach to calculating the normalizing constant, which will be slow if you parameterize the location parameter, since the constant then has to be recomputed at every log-density evaluation.
CMP.stan (2.1 KB)
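
The attached file isn't reproduced here, but a minimal sketch of that kind of brute-force, log-scale accumulation of the normalizing constant might look like the following. The name log_C matches the target statement quoted below; the 1000-term cap and the convergence cutoff are my assumptions, not taken from the attachment.

functions {
  // Sketch: log of the CMP normalizing constant
  //   Z(lambda, nu) = sum_{j=0}^inf lambda^j / (j!)^nu,
  // accumulated on the log scale with log_sum_exp for stability.
  real log_C(real log_lambda, real nu) {
    real log_Z = 0;  // j = 0 term contributes log(1) = 0
    for (j in 1:1000) {
      real log_term = j * log_lambda - nu * lgamma(j + 1.0);
      log_Z = log_sum_exp(log_Z, log_term);
      // once a term is negligible relative to the running sum,
      // the decreasing tail is negligible too
      if (log_term < log_Z - 36) break;
    }
    return log_Z;
  }
}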


target += sum_y * log_lambda - nu + sum_lfac - N * log_C(log_lambda, nu);

shouldn’t that be:

target += sum_y * log_lambda - nu * sum_lfac - N * log_C(log_lambda, nu);
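
Writing out the log-likelihood from the CMP pmf $p(y) = \lambda^y / \big((y!)^\nu \, Z(\lambda, \nu)\big)$ makes the correction clear:

$$\log \mathcal{L} = \sum_{n=1}^{N} \big( y_n \log\lambda - \nu \log y_n! - \log Z(\lambda,\nu) \big) = \Big(\textstyle\sum_n y_n\Big) \log\lambda - \nu \sum_n \log y_n! - N \log Z(\lambda,\nu),$$

so nu multiplies sum_lfac rather than standing alone.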

Can nu = 0 happen (other than by explicitly setting nu = 0)?


Thanks. I edited the OP to multiply rather than add in the target statement. In this case, the only way nu can be zero is with underflow.


Should we be putting this distribution in the math lib?


real sum_lfac = 0;
for (n in 1:N) sum_lfac += lgamma(y[n] + 1);

makes me realize we should really have a sum-of-lgamma-plus-1 function, because it's the log of a product of factorials and so common for combinatorial normalization.

The above could be written as

real sum_lfac = sum(lgamma(to_vector(y) + 1));

but that’d actually be less efficient than the way Ben had it because of the extra copy in to_vector().
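
In the meantime, a user-defined function could stand in for the proposed built-in without the copy (the name sum_lgamma1p is made up here):

functions {
  // Hypothetical stand-in for a built-in sum-of-lgamma-plus-1:
  // returns the sum over n of log(y[n]!) with no intermediate vector copy.
  real sum_lgamma1p(int[] y) {
    real s = 0;
    for (n in 1:num_elements(y)) s += lgamma(y[n] + 1.0);
    return s;
  }
}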


It could. I don’t know a clever way to do the derivative of the normalizing constant, so it would only help so much.
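
For what it's worth, the derivatives of the log normalizing constant have the usual exponential-family closed form in terms of moments,

$$\frac{\partial \log Z}{\partial \log \lambda} = \mathbb{E}[Y], \qquad \frac{\partial \log Z}{\partial \nu} = -\,\mathbb{E}[\log Y!],$$

but both expectations are taken under the CMP distribution itself, so they need the same kind of truncated series as $Z$, which is presumably why it only helps so much.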


The CMP uses a clever variant of the Taylor expansion of the exp function. What could help with lgamma is to precalculate the values of lgamma up to max(y) and store them. max(y) is usually small; if it is large, it may be worth thinking about approximating with some continuous distribution, e.g. the normal.
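
As a sketch of that lookup-table idea (assuming y is declared as int y[N] in the data block), the table can be built once in transformed data and then indexed in place of calling lgamma:

transformed data {
  // Tabulate log(k!) = lgamma(k + 1) once for k = 0 .. max(y);
  // downstream code can index the table instead of recomputing.
  int y_max = max(y);
  vector[y_max + 1] lfac_table;
  for (k in 0:y_max) lfac_table[k + 1] = lgamma(k + 1.0);
}

The accumulation then becomes sum_lfac += lfac_table[y[n] + 1];.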


It would make sense to precompute lgamma for discrete cases, but you have to be careful about memory paging and indexing to ensure fetch times and branching don’t dominate arithmetic times. It’d probably be a win for something intensive like lgamma that gets called a lot if there aren’t too many possible integer values.