Hi!

I have here a small test program just to help me understand how Stan works. I need something akin to a recursive function, except it is a recursive application of distributions. I tried writing this with recursive functions and failed, so instead I thought one could just overwrite a target variable (as one would in C++), which seemed the best fit.

The problem is that for inp=500, I get lambda ≈ 1.98. This is not an intuitive value at all; I had expected something closer to sqrt(10). And lambda barely changes for inp=50000.

So can I just confirm that this is ok to do?

```
data {
  int inp;
}
parameters {
  real<lower=0> lambda;
}
model {
  lambda ~ normal(0.0, 5000.0);
  inp ~ poisson(lambda * 5.0);
  inp ~ poisson(lambda * inp);
}
```

This would later develop into e.g.

```
model {
  lambda ~ normal(0.0, 5000.0);
  inp ~ poisson(lambda * 5.0);
  for (i in 1:10) {
    inp ~ poisson(lambda * inp);
  }
}
```

Stan overwrites assignments, but sampling statements work differently. When I write

```
y ~ normal(alpha + beta * x, sigma);
```

in Stan, it is *not* an assignment statement, but rather just shorthand for

```
target += normal_lupdf(y | alpha + beta * x, sigma);
```

This statement adds the unnormalized log pdf for normal with those arguments to the target density.
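As a rough analogue (a Python sketch using scipy, purely for illustration, with made-up values), a sampling statement just increments a running log-density accumulator; it never assigns anything:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical values, just to make the snippet runnable.
y = np.array([1.1, 0.9, 1.4])
x = np.array([1.0, 1.0, 2.0])
alpha, beta, sigma = 0.0, 1.0, 0.5

target = 0.0
# Analogue of: target += normal_lupdf(y | alpha + beta * x, sigma);
# (scipy's logpdf keeps the normalizing constant; Stan's *_lupdf drops it,
#  which only shifts target by a constant)
target += norm.logpdf(y, loc=alpha + beta * x, scale=sigma).sum()
print(target)
```

Writing the same statement twice therefore adds its contribution twice rather than replacing the first one.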

So when you do this, it just adds the log pmf for both Poisson distributions to the target density, which is the same as taking an unnormalized likelihood

```
p(inp | lambda) ∝ p(inp | 5 * lambda) * p(inp | inp * lambda)
```
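In fact, this combined likelihood explains the 1.98 you observed. Maximizing the sum of the two Poisson log pmfs (a quick numerical check in Python with scipy, not Stan) gives a mode at lambda = 2 * inp / (inp + 5), which is about 1.98 for inp = 500 and approaches 2 as inp grows, which is also why it barely moves for inp = 50000:

```python
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

def fitted_lambda(inp):
    # Joint log likelihood that the two sampling statements add to target.
    nll = lambda lam: -(poisson.logpmf(inp, 5.0 * lam)
                        + poisson.logpmf(inp, inp * lam))
    return minimize_scalar(nll, bounds=(0.01, 10.0), method="bounded").x

print(round(fitted_lambda(500), 2))    # 1.98  (= 2 * 500 / 505)
print(round(fitted_lambda(50000), 2))  # 2.0   (= 2 * 50000 / 50005)
```

With a nearly flat normal(0, 5000) prior, the posterior mode sits essentially at this likelihood mode.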

I’m guessing that’s not what you want. Including your actual data on the right-hand side also doesn’t make sense. If I try to fit `poisson(inp | lambda * inp)`, the maximum likelihood estimate for `lambda` will just be 1.
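You can see this numerically (again a scipy sketch): the Poisson log pmf of `inp` with mean `lambda * inp` peaks exactly where the mean equals the observation, i.e., at `lambda = 1`, regardless of the data:

```python
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

inp = 500
nll = lambda lam: -poisson.logpmf(inp, lam * inp)
mle = minimize_scalar(nll, bounds=(0.01, 10.0), method="bounded").x
print(round(mle, 3))  # 1.0 -- lambda tells you nothing about the data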

Setting that aside, it’s also unlikely you intended the behavior the second example produces. It adds `poisson(inp | lambda * inp)` ten times, each of which is vectorized to account for all of the input. So your loop is equivalent to this:

```
for (i in 1:10)
  for (n in 1:10)
    inp[n] ~ poisson(inp[n] * lambda);
```

So you probably just want to get rid of the `for` loop. But then it still doesn’t make sense with `inp` on both the left and the right.
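Note that repeating the statement doesn’t move the likelihood peak; it just scales the log pmf by ten, which artificially sharpens the (still meaningless) posterior. A quick scipy check:

```python
from scipy.stats import poisson

inp, lam = 500, 1.2  # hypothetical values for illustration
one = poisson.logpmf(inp, lam * inp)
ten = sum(poisson.logpmf(inp, lam * inp) for _ in range(10))
print(abs(ten - 10 * one) < 1e-8)  # ten copies = one copy scaled by 10
```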

Thanks!

I realize I have also been confused by needing a discrete latent parameter, which definitely seems like a pain for the full model I was hoping to implement!

For others: see “7.2 Change point models” in the Stan User’s Guide.

It’s a bit of a pain to marginalize, but sampling discrete parameters can be even more painful. In cases where marginalization is possible, it’s usually much better behaved than discrete sampling. In cases where it’s not, discrete sampling is often intractable (e.g., variable selection).
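To illustrate what marginalizing a discrete parameter looks like (a generic Python sketch with hypothetical values, not the change-point model itself): for a latent z taking K values, you replace sampling z with a log-sum-exp over its values inside the log density:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

# Hypothetical two-component mixture: z in {0, 1} selects the mean.
y = np.array([0.1, -0.2, 2.1])
log_prior_z = np.log([0.5, 0.5])
means = np.array([0.0, 2.0])

# log p(y) = sum_n logsumexp_k [ log p(z=k) + log p(y_n | z=k) ]
lp = sum(logsumexp(log_prior_z + norm.logpdf(yn, means, 1.0)) for yn in y)
print(np.isfinite(lp))  # a single marginal log density; no discrete sampling
```

This is the same trick the change-point chapter uses, just with the change point playing the role of z.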