Hello!

I’m wondering whether the following problem is, in principle, solvable in Stan; it’s not immediately obvious to me how one might do it.

Suppose that I have a series of variables Y_i, i \in [1,n], and that I believe the following hierarchical model holds:

**Problem specification**

- Y_i \sim \text{Poisson}(\lambda_i), where \lambda_i is an unknown parameter
- \lambda_i \sim \text{Gamma}(\alpha, \beta) for i \in [1,n], i.e. a hierarchical prior in which \alpha and \beta are also unknown.

I am trying to understand the posterior distribution of each \lambda_i.
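For reference, if I *did* have observations Y_i, I think the standard version of this model would look something like the sketch below (hyperpriors on \alpha and \beta omitted; variable names are just my choices):

```stan
data {
  int<lower=1> n;
  array[n] int<lower=0> Y;
}
parameters {
  real<lower=0> alpha;
  real<lower=0> beta;
  vector<lower=0>[n] lambda;
}
model {
  // hierarchical prior on the rates; hyperpriors on alpha, beta would go here
  lambda ~ gamma(alpha, beta);
  Y ~ poisson(lambda);
}
```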

Suppose that I don’t actually have any Y_i data, but that instead I have some model from elsewhere that imposes the following constraint:

- \mathbb{E}(\lambda_i) = L_i for i \in [1,n], i.e. I have prior knowledge about what the mean of each of the \lambda_i variables should be.
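One hard-constraint reading of this that I can imagine (not sure it’s what I actually want) is to reparameterize the Gamma so that its mean is pinned to L_i exactly, using the fact that \text{Gamma}(\alpha, \alpha / L_i) has mean L_i. Roughly (sketch, untested):

```stan
data {
  int<lower=1> n;
  vector<lower=0>[n] L;  // prior means from the external model
}
parameters {
  real<lower=0> alpha;          // shared shape; controls spread around each L_i
  vector<lower=0>[n] lambda;
}
model {
  // Gamma(shape = alpha, rate = alpha / L_i) has mean L_i by construction
  lambda ~ gamma(alpha, alpha ./ L);
}
```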

Is this problem solvable in Stan directly?

**One approach - maybe incorrect…**

One thing I can think of is assuming an additional prior as follows:

- \lambda_i \sim \text{Normal}(L_i, \sigma_i^2) for i \in [1,n], where the \sigma_i^2 are treated as unknown parameters and allowed to vary so as to remain consistent with everything else in the model.

This type of prior could help encode my knowledge about how good my other models are.
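Concretely, I think this approach would look roughly like the following in Stan (note that Stan’s normal() takes a standard deviation, so I pass \sigma_i rather than \sigma_i^2; again a sketch, with names my own):

```stan
data {
  int<lower=1> n;
  vector<lower=0>[n] L;      // means supplied by the external model
}
parameters {
  real<lower=0> alpha;
  real<lower=0> beta;
  vector<lower=0>[n] lambda;
  vector<lower=0>[n] sigma;  // how much I trust each L_i
}
model {
  lambda ~ gamma(alpha, beta);  // original hierarchical prior
  lambda ~ normal(L, sigma);    // soft mean constraint (a second prior on lambda)
  // sigma (and alpha, beta) would presumably need their own priors
  // for this to be identified, since there is no Y data
}
```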

But this is a prior-mean assumption, whereas what I’m expressing above is, I think, the desire for some sort of quasi-posterior mean constraint.

I’m not sure if this problem is adequately/fully specified… any thoughts welcome!

Thanks in advance,

Julian