Mixture Bayesian Poisson Regression Model

In no particular order:

  • When you have d ~ uniform(0, 10000), you also have to declare d with matching bounds: real<lower=0, upper=10000> d;. Once the bounds are declared, the sampling statement in the model block is no longer necessary, because a bounded parameter with no explicit prior is implicitly uniform over its support.
  • In general, the Stan team discourages hard uniform priors. For d you are probably better off with a gamma- or exponential-style prior (see the first sketch after this list).
  • Your priors are too wide, especially those on alpha, beta, and ci: exp(400) overflows and exp(-400) underflows. See the topic "Scaling of Log-likelihood value in Hamiltonian Monte Carlo" for a related discussion.
  • Irrespective of the numerical issues, I would try to use more informative priors. What scale do you expect alpha, beta, and ci to be on? Can you standardize the x’s so that the betas end up on a scale close to normal(0, 1)?
  • You can work on the log scale with poisson_log_lpmf, which should help with numerical stability because you avoid the mu = exp() step entirely.
  • I don’t fully understand the Poisson specification you are using, but I wonder whether the distributions inside log_mix could be simplified: mu[1] and mu[2] only differ by the term ci, and the Poisson is also relatively easy to reason about on the log scale. The second sketch after this list pulls the last three points together.
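
On the first two points, a minimal sketch of just the d declaration (everything else in your model omitted):

```stan
parameters {
  // With the bounds declared here, d is implicitly uniform(0, 10000);
  // a "d ~ uniform(0, 10000);" statement in the model block is redundant.
  real<lower=0, upper=10000> d;
}
```

If you are willing to drop the hard upper bound, something like real<lower=0> d; together with d ~ exponential(0.01); (prior mean 100; adjust the rate to the scale you actually expect d to be on) is usually better behaved than the hard-edged uniform.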
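
On the last three points, here is a sketch of what the likelihood could look like on the log scale. Since I don't know your exact model, I'm assuming a single predictor x and that the two mixture components differ by an additive offset ci on the log scale; adapt as needed if ci actually enters on the natural scale.

```stan
data {
  int<lower=1> N;
  array[N] int<lower=0> y;
  vector[N] x;
}
transformed data {
  // Standardizing the predictor lets beta live on roughly a unit scale,
  // so a normal(0, 1)-ish prior becomes reasonable.
  vector[N] x_std = (x - mean(x)) / sd(x);
}
parameters {
  real alpha;
  real beta;
  real ci;                       // assumed: additive offset on the log scale
  real<lower=0, upper=1> theta;  // mixing proportion
}
model {
  alpha ~ normal(0, 2);
  beta ~ normal(0, 1);
  ci ~ normal(0, 1);
  theta ~ beta(2, 2);

  for (n in 1:N) {
    real eta = alpha + beta * x_std[n];  // log of mu[1]
    // poisson_log_lpmf takes the log rate directly, so the mu = exp(eta)
    // step (and its overflow risk) never happens.
    target += log_mix(theta,
                      poisson_log_lpmf(y[n] | eta),
                      poisson_log_lpmf(y[n] | eta + ci));
  }
}
```

If ci really is additive on the natural scale (mu[2] = mu[1] + ci), the second component would instead be poisson_lpmf(y[n] | exp(eta) + ci) and the numerical-stability gain is smaller, which is why I would double-check what ci is meant to represent.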