Yes, when you have `beta(a, b)` (included implicitly in `beta_binomial(n, a, b)`), you're going to have issues with `a < 1` and `b < 1`, because the resulting densities have very heavy tails when transformed to the unconstrained scale (it's a logit transform).
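To make the boundary issue concrete, here's a minimal sketch (not from the original post; the prior values are just illustrative) of a model that runs into exactly this problem:

```
parameters {
  // theta is logit-transformed internally by Stan; with a, b < 1
  // the beta density is U-shaped, piling mass at 0 and 1, which
  // maps to +/- infinity on the unconstrained scale
  real<lower=0, upper=1> theta;
}
model {
  theta ~ beta(0.5, 0.5);  // heavy tails after the logit transform
}
```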

I often find `alpha > 100` or `beta > 100` because they're like prior counts; for example, I'm over 100 in the repeated binary trial case study (on the Stan web site). When you have a hard constraint at the boundary, we use a logit transform again, and if mass piles up near the boundary, it's going to be near plus or minus infinity on the unconstrained scale, which can lead to instability.

If you really want a `uniform(0, 100)` prior, then you need to constrain `alpha` and `beta` to have `<lower = 0, upper = 100>`; otherwise, the sampler will just reject proposals above 100 and sample inefficiently. We instead recommend no upper bound and a weakly informative prior (like a Pareto, exponential, gamma, or lognormal) that's also unconstrained.
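The recommended setup looks like this in Stan (a sketch; the lognormal hyperparameters are my own illustrative choices, not a recommendation from the original post):

```
parameters {
  real<lower=0> alpha;  // no upper bound, so no rejections above 100
  real<lower=0> beta;
}
model {
  // weakly informative priors in place of uniform(0, 100)
  alpha ~ lognormal(0, 2);
  beta ~ lognormal(0, 2);
}
```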

I also find it easier to reparameterize, as described in the manual, using `alpha / (alpha + beta)`, which is the prior mean, and `alpha + beta`, which is like the overall precision or prior count.
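That reparameterization can be sketched as follows (variable names `phi` and `kappa` and the Pareto hyperparameters are my own, loosely following the hierarchical-model treatment in the Stan user's guide):

```
parameters {
  real<lower=0, upper=1> phi;  // prior mean, alpha / (alpha + beta)
  real<lower=0> kappa;         // prior count, alpha + beta
}
transformed parameters {
  real<lower=0> alpha = kappa * phi;
  real<lower=0> beta = kappa * (1 - phi);
}
model {
  kappa ~ pareto(1, 1.5);  // weakly informative prior on the count
  y ~ beta_binomial(n, alpha, beta);
}
```

Sampling in terms of `phi` and `kappa` keeps the two parameters closer to independent in the posterior, which tends to be easier for the sampler than the strongly correlated `(alpha, beta)` parameterization.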

You can write that beta-binomial more efficiently and reliably as

```
y ~ beta_binomial(n, alpha, beta);
```