N-mixture likelihoods

Sorry to jump in on a tangential part of this reply, but do you have a link or an explanation for this? I’d like to better understand this.

Hi! I’ve split this off to a new topic. Is your question about N-mixture likelihoods in particular, or more generally about re-using discrete parameters with Poisson priors multiple times in the likelihood? The answer is roughly the same either way, but I’d come at it from a somewhat different angle depending on which you’re interested in.

I’m more curious about re-using discrete parameters with Poisson priors multiple times in a likelihood in general.

Thanks

Suppose you have some latent discrete parameter D, with prior

D ~ poisson(lambda)

and then a likelihood like

x ~ binomial(D, p1)
y ~ binomial(D, p2)

It doesn’t work to marginalize this to

x ~ poisson(p1 * lambda);
y ~ poisson(p2 * lambda);

because this version doesn’t capture the dependence between x and y that’s induced by the shared latent discrete parameter D.
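To see why, write out the correct marginal likelihood, which sums the joint density over the latent count:

p(x, y) = sum_{d = 0}^{infinity} poisson(d | lambda) * binomial(x | d, p1) * binomial(y | d, p2)

This sum doesn’t factor into a function of x alone times a function of y alone, so x and y stay dependent after D is marginalized out. (Each marginal on its own is actually correct, by Poisson thinning x ~ poisson(p1 * lambda), but the joint distribution isn’t the product of the marginals.)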

What you can do instead is

int lo = max(x, y);  // binomial terms require i >= x and i >= y
vector[U - lo + 1] lp;
for (i in lo:U) {
  lp[i - lo + 1] = poisson_lpmf(i | lambda) + binomial_lpmf(x | i, p1) + binomial_lpmf(y | i, p2);
}
target += log_sum_exp(lp);

Here U is an upper bound chosen high enough that you’re confident you aren’t missing any important contributions to the posterior mass from values of D greater than U. The loop starts at max(x, y) because smaller values of D are impossible given the data, and the log_sum_exp is what actually performs the marginalization (a plain sum of the log terms would compute a product of probabilities instead of a sum).

There might be some tricks to compute the sum more efficiently (e.g. recursively), and there’s a trick to determine a sufficiently high value of U adaptively (https://arxiv.org/pdf/2202.06121.pdf), but that’s the basic idea.
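To put the pieces together, here’s a minimal sketch of a complete Stan program for this setup. The gamma prior on lambda and the implicit uniform priors on p1 and p2 are placeholder assumptions (not from the discussion above), and it assumes U >= max(x, y):

data {
  int<lower=0> x;
  int<lower=0> y;
  int<lower=1> U;  // truncation point for the latent count D; must satisfy U >= max(x, y)
}
parameters {
  real<lower=0> lambda;
  real<lower=0, upper=1> p1;  // implicitly uniform(0, 1)
  real<lower=0, upper=1> p2;  // implicitly uniform(0, 1)
}
model {
  int lo = max(x, y);         // smallest latent count consistent with the data
  vector[U - lo + 1] lp;
  lambda ~ gamma(2, 0.1);     // placeholder prior; use whatever is appropriate
  for (i in lo:U) {
    lp[i - lo + 1] = poisson_lpmf(i | lambda)
                     + binomial_lpmf(x | i, p1)
                     + binomial_lpmf(y | i, p2);
  }
  target += log_sum_exp(lp);  // marginalizes the latent count D out of the model
}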


Happy to have stumbled upon this thread today; it helps a lot with a project I’m working on.

One follow-up: if lambda is itself a parameter estimated from the data (say we have a Gamma-distributed posterior for it), could the above approximation with the for loop still be used? Maybe by nesting another for loop where we iterate through the posterior draws of lambda?

Thanks in advance, Jacob.

Yeah, it’s fine if lambda is a parameter. To elaborate:

What’s best, if possible, is to estimate the model for lambda and the subsequent data model jointly, in a single step. Don’t use a separate loop over the posterior draws of lambda; just make sure that your upper bound U is large enough to cover the important probability mass at the highest plausible values of lambda (or else use an adaptive truncation scheme, as linked above).

If you have some posterior distribution for lambda that you need to use as a prior and cannot incorporate into a joint model, just fit a parametric approximation to your posterior for lambda and use that as your prior.
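For instance, one common parametric choice (a hypothetical sketch, not from the post above) is a moment-matched gamma: if the earlier fit gives posterior mean m and variance v for lambda, a gamma with shape m^2 / v and rate m / v has the same first two moments, and you can compute those from your earlier draws and pass them in as data:

data {
  real<lower=0> lambda_mean;  // posterior mean of lambda from the earlier fit
  real<lower=0> lambda_var;   // posterior variance of lambda from the earlier fit
}
transformed data {
  real a = square(lambda_mean) / lambda_var;  // gamma shape
  real b = lambda_mean / lambda_var;          // gamma rate
}
parameters {
  real<lower=0> lambda;
}
model {
  lambda ~ gamma(a, b);  // moment-matched stand-in for the old posterior
  // ... marginalized likelihood for x and y as above ...
}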
