# How to write the poisson_lpmf for the product of two Poisson distributions

Let `Wi = (Xi, Yi), i = 1, 2, ..., n` be a set of two-dimensional
random vectors, and we assume that:

`(Xi, Yi) ~ π0 f(.|λ0, μ0) + π1 f(.|λ1, μ1) + π2 f(.|λ2, μ2)`

This is a three-component mixture model, and `f(.|λ, μ)` is the product of two Poisson distributions. Therefore, is it correct to write the `poisson_lpmf` in this way?

```stan
model {
  // ...
  for (i in 1:N) {
    lp[1] = log(p[i,1]) + poisson_lpmf(y1[i] | mu2[i]*exp(z1))
                        + poisson_lpmf(y2[i] | mu2[i]);
    lp[2] = log(p[i,2]) + poisson_lpmf(y1[i] | mu1[i])
                        + poisson_lpmf(y2[i] | mu1[i]);
    lp[3] = log(p[i,3]) + poisson_lpmf(y1[i] | mu1[i])
                        + poisson_lpmf(y2[i] | mu1[i]*exp(z2));
  }
  target += log_sum_exp(lp);
}
```
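Since the two coordinates are independent Poissons given the component, the log density of their product is just the sum of the two log pmfs, which is what the additions inside each `lp[k]` compute. A quick numeric check of that identity (Python sketch; `lam`, `mu`, and the observation are made-up values for illustration):

```python
import math

def poisson_lpmf(y, rate):
    """Log pmf of Poisson(rate) at y, analogous to Stan's poisson_lpmf."""
    return y * math.log(rate) - rate - math.lgamma(y + 1)

# Hypothetical values: one bivariate observation and two rates.
y1, y2 = 3, 5
lam, mu = 2.0, 4.0

# The joint pmf of two independent Poissons is the product of the marginals...
joint = math.exp(poisson_lpmf(y1, lam)) * math.exp(poisson_lpmf(y2, mu))

# ...so on the log scale it is the sum of the two lpmf terms.
log_joint = poisson_lpmf(y1, lam) + poisson_lpmf(y2, mu)

assert math.isclose(math.log(joint), log_joint)
```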


So far as I can see, the Stan code matches your intentions. Generally, a good way to test is to generate some mock data and see if you can recover the parameters. It's fairly unlikely that your model will do a good job if the code doesn't work.


In this equation, I want to specify the following relationship:

`λ0 > μ0, λ1 = μ1 and λ2 < μ2`.

```stan
transformed parameters {
  vector[N] mu1 = exp(alpha1 + x1*beta1);
  vector[N] mu2 = exp(alpha2 + x2*beta2);
}

// Likelihood
for (i in 1:N) {
  lp[1] = log(p[i,1]) + poisson_lpmf(y1[i] | mu1[i])
                      + poisson_lpmf(y2[i] | mu1[i] - z1);
  lp[2] = log(p[i,2]) + poisson_lpmf(y1[i] | mu1[i])
                      + poisson_lpmf(y2[i] | mu1[i]);
  lp[3] = log(p[i,3]) + poisson_lpmf(y1[i] | mu2[i] - z2)
                      + poisson_lpmf(y2[i] | mu2[i]);
}
target += log_sum_exp(lp);
```

I compiled the code this way, but I am still confused about how to express the likelihood. I have different corresponding covariates for `y1` and `y2`. But when I write it as above, e.g. in `lp[1]`, to express `mu1 > mu2` I substituted `mu2 = mu1 - z1`. Since I then use only `mu1`, I lose the covariates of `y2`. The same is true for the others.

Therefore, is there any technique that keeps the covariates of `y1` and `y2` in each `lp`, conditional on the `λ0 > μ0, λ1 = μ1 and λ2 < μ2` relationship?

The loop overwrites `lp`, so `target += log_sum_exp(lp);` must be inside the loop.
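To see why, here is the same pattern in plain Python (toy numbers, purely hypothetical): with `log_sum_exp(lp)` added to the target only after the loop, just the last observation's `lp` contributes, whereas the mixture likelihood needs one `log_sum_exp` per observation.

```python
import math

def log_sum_exp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Hypothetical per-observation, per-component log terms (N=3 observations, 3 components).
lp_terms = [[-1.0, -2.0, -3.0],
            [-0.5, -1.5, -2.5],
            [-2.0, -1.0, -0.5]]

# Wrong: lp is overwritten on each iteration, so only the last row survives.
lp = None
for row in lp_terms:
    lp = row
wrong_target = log_sum_exp(lp)

# Right: accumulate log_sum_exp(lp) into the target inside the loop, once per observation.
right_target = sum(log_sum_exp(row) for row in lp_terms)

assert wrong_target != right_target
```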

The code youβve written has covariates for both `mu1` and `mu2` and has the relationship between mus and lambdas (no `lambda0` or `mu0`, though?). But you want a relationship between `mu1` and `mu2`, right? I donβt quite understand what your goal is.

YES!

```stan
parameters {
  // ...
  real<lower=0, upper=100000> d; // hyperparameter that gives more flexibility to the Dirichlet distribution
}

transformed parameters {
  vector[N] mu1 = exp(alpha1 + x1*beta1);
  vector[N] mu2 = exp(alpha2 + x2*beta2);
}

model {
  real lp[3];
  beta1 ~ normal(0, 3);
  alpha1 ~ normal(0, 3);
  beta2 ~ normal(0, 3);
  alpha2 ~ normal(0, 3);
  z1 ~ normal(0, 3);
  z2 ~ normal(0, 3);

  for (i in 1:N) {
    p[i] ~ dirichlet(a[i]*d);
  }

  // Likelihood
  for (i in 1:N) {
    lp[1] = log(p[i,1]) + poisson_lpmf(y1[i] | mu1[i])
                        + poisson_lpmf(y2[i] | mu1[i] - z1);
    lp[2] = log(p[i,2]) + poisson_lpmf(y1[i] | mu1[i])
                        + poisson_lpmf(y2[i] | mu1[i]);
    lp[3] = log(p[i,3]) + poisson_lpmf(y1[i] | mu2[i] - z2)
                        + poisson_lpmf(y2[i] | mu2[i]);
  }
  target += log_sum_exp(lp);
}
```

Here, I want to express `mu1 > mu2` in `lp[1]`, `mu1 = mu2` in `lp[2]`, and `mu1 < mu2` in `lp[3]`. To achieve this, I used the expression above. For example, in `lp[1]` I substitute the value of `mu2` with `mu1 - z1`, where `z1` is a positive number.

My question is this: if I substitute `mu2` with `mu1 - z1`, I ignore `mu2 = exp(alpha2 + x2*beta2)` and instead use `exp(alpha1 + x1*beta1) - z1`. But `mu2` should be estimated from `exp(alpha2 + x2*beta2)`, so this substitution may cause some misestimation. Therefore, would you please tell me some techniques that express the relation between the `mu`'s in each `lp` (rather than the technique of adding or subtracting positive numbers)?

And to give more flexibility to the Dirichlet distribution (in `p[i] ~ dirichlet(a[i]*d)`), I gave the hyperparameter a uniform prior. But when I read around, the Stan team discourages uniform priors. Therefore, would you please tell me which priors will give more flexibility to the Dirichlet distribution?
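For intuition on what `d` does in `dirichlet(a[i]*d)`: it is a concentration parameter, so large `d` pulls `p[i]` tightly around the mean proportions `a[i]`, while small `d` spreads it out. A quick illustration of that behavior (Python sketch; the Dirichlet draw via normalized Gamma variates is standard, but the values of `a` and `d` here are assumptions for the demo):

```python
import random
import statistics

def dirichlet(alphas, rng):
    """Draw from Dirichlet(alphas) by normalizing independent Gamma(alpha_k, 1) draws."""
    gs = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(gs)
    return [g / s for g in gs]

rng = random.Random(42)
a = [0.5, 0.3, 0.2]  # hypothetical mean proportions

# Sample the first component's weight under a small and a large concentration d.
small_d = [dirichlet([ai * 2.0 for ai in a], rng)[0] for _ in range(2000)]
large_d = [dirichlet([ai * 200.0 for ai in a], rng)[0] for _ in range(2000)]

# Both are centered near a[0] = 0.5, but larger d gives a much smaller spread.
assert statistics.stdev(small_d) > statistics.stdev(large_d)
```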

This is what I don't understand. Whether `mu1 > mu2` or `mu1 < mu2` is solely a function of whether `x1 < x2`. What is the difference between the mixture components? If all you want is to calculate the probability that `mu1 < mu2`, then it's not a mixture model.

Usually a good guide to model construction is to think how you would simulate fake data.
Let's say someone gives you an algorithm that supposedly solves your inference problem. A simple but effective way to test that the algorithm works is

1. assume values for the parameters `alpha`, `beta`, etc.
2. simulate fake data conditional on those parameter values
3. use the algorithm to estimate the parameters from the fake data
4. compare the estimates to the known "true" parameter values

Can you describe how you would do step 2?
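For reference, step 2 for a mixture like the one above might look like this in Python (a sketch, not Stan; the "true" parameter values, fixed mixture weights, covariate distribution, and Knuth's Poisson sampler are all assumptions for illustration):

```python
import math
import random

rng = random.Random(7)

def rpois(rate):
    """Knuth's algorithm for one Poisson(rate) draw (fine for small rates)."""
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Step 1: assumed "true" parameter values.
alpha1, beta1 = 0.2, 0.5
alpha2, beta2 = 0.1, 0.8
z1, z2 = 0.4, 0.4
p = [0.3, 0.4, 0.3]  # mixture weights, fixed here for simplicity

# Step 2: simulate fake data conditional on those values.
N = 100
y1, y2, component = [], [], []
for i in range(N):
    x1, x2 = rng.gauss(0, 1), rng.gauss(0, 1)   # hypothetical covariates
    mu1 = math.exp(alpha1 + x1 * beta1)
    mu2 = math.exp(alpha2 + x2 * beta2)
    k = rng.choices([0, 1, 2], weights=p)[0]    # pick a mixture component...
    if k == 0:                                  # ...then draw (y1, y2) from it,
        rates = (mu1, mu1 - z1)                 # mirroring lp[1]
    elif k == 1:
        rates = (mu1, mu1)                      # mirroring lp[2]
    else:
        rates = (mu2 - z2, mu2)                 # mirroring lp[3]
    # Guard: the subtractive mu - z parameterization can produce a
    # non-positive Poisson rate, which is itself a warning sign about it.
    y1.append(rpois(max(rates[0], 1e-9)))
    y2.append(rpois(max(rates[1], 1e-9)))
    component.append(k)
```

Steps 3 and 4 would then fit the Stan model to `y1`, `y2` and check whether `alpha1`, `beta1`, etc. are recovered.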
