Can I express that the lower bound on a parameter should be greater than zero?

If not, is there a best practice workaround?

There are many models that require parameters greater than zero, so this must be a common problem?

Here is an example where I want to do predictive checks for a gamma regression.

parameters {
  real<lower=0> x;
  real<lower=0> shape;
  real<lower=0> scale;
}

model {
  shape ~ normal(10, 2.5);
  scale ~ normal(1, 1);

  x ~ gamma(shape, scale);
}

I can only express that shape and scale have a lower bound of 0, but I would like to say

  real<lower > 0> scale;

i.e., the bounds on shape and scale should not include zero.

Good question. The math is easy. With a lower bound constraint in Stan,

parameters {
  real<lower=0> scale;
}

we take an unconstrained parameter in \mathbb{R} and then apply \exp() to it. Mathematically, the result is always in (0, \infty), since the exponential of any finite value is strictly positive.
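As a sketch, this is what the constraint is equivalent to if you hand-coded the transform yourself (scale_raw is just an illustrative name):

parameters {
  real scale_raw;                // unconstrained, lives on the whole real line
}
transformed parameters {
  real scale = exp(scale_raw);   // mathematically always in (0, infinity)
}
model {
  // log Jacobian of the exp transform: log |d scale / d scale_raw| = scale_raw
  target += scale_raw;
  scale ~ normal(1, 1);          // same prior as before, stated on the constrained scale
}

Sampling happens on scale_raw; scale is just a deterministic function of it.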

The problem is floating point arithmetic. With the double-precision (64-bit) arithmetic Stan uses, exp(x) underflows to exactly 0 once x drops below roughly -745 (the log of the smallest subnormal double). If you then try to use that scale in something like

y ~ normal(0, scale);

then Stan will report a rejection because the normal requires scale > 0. So you won’t wind up getting sampled values where scale is 0 if that’s what you were worried about.
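If you want to see the underflow concretely, here is a tiny standalone sketch (the thresholds are a property of 64-bit doubles, not of Stan):

generated quantities {
  real still_positive = exp(-700);   // about 1e-304: tiny, but still a positive double
  real underflowed = exp(-746);      // smaller than the smallest subnormal double, so exactly 0
  print("exp(-700) = ", still_positive, ", exp(-746) = ", underflowed);
}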

You can even be very explicit about it if you want and put this in the model:

if (scale == 0) reject("scale cannot be 0");
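In the original model that could look like this (just a sketch of placement):

model {
  shape ~ normal(10, 2.5);
  scale ~ normal(1, 1);
  // make the (very rare) underflow case explicit rather than relying on the gamma's own check
  if (scale == 0) {
    reject("scale cannot be 0");
  }
  x ~ gamma(shape, scale);
}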

If this happens a lot in your fits, then you may want to think about more informative priors.
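For example (one option among many, with purely illustrative values), a lognormal prior keeps essentially all of its mass away from zero, so the unconstrained value rarely gets anywhere near the underflow region:

model {
  shape ~ lognormal(2, 0.5);   // median exp(2), roughly 7.4; negligible mass near zero
  scale ~ lognormal(0, 1);     // median 1; negligible mass near zero
}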