Using gamma

I have a standard deviation modeled as asd ~ gamma(2, 0.5);

the model section contains these lines

a0 ~ normal(0, asd );
asd ~ gamma(2,0.5);

but when I run it, asd takes negative values, which is surprising for a gamma

so I have declared asd as

real<lower=0> asd;

instead of the original

real asd;

but is this the right way to do it?


Edit: Ignore me, I’m wrong; see Betancourt’s response.

a0 ~ normal(0, asd);
asd ~ gamma(2, 0.5);

If that’s the only thing in the model, it should work. Do you have the complete model and the full error you got?

The sampling statement does not impose a constraint on the associated parameter. You always have to impose the constraint directly to avoid conflicts with the density evaluation (and anywhere else you might use it).
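Putting that together, a minimal sketch of the corrected program (parameter names taken from the thread; no data block assumed):

```stan
parameters {
  // The <lower=0> constraint keeps the sampler inside the gamma's support;
  // the sampling statement below does NOT impose this by itself.
  real<lower=0> asd;
  real a0;
}
model {
  asd ~ gamma(2, 0.5);
  a0 ~ normal(0, asd);
}
```

Under the hood Stan samples an unconstrained variable and maps it to (0, ∞) with an exp transform, adding the log Jacobian automatically.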

ok thank you

does this mean that in

asd ~ gamma(2,0.5);

asd is computed by MCMC and not evaluated directly with the analytic formula?

No, the density is evaluated with the analytic formula. The problem is that the algorithms in Stan will update the value of asd at which the density is evaluated, and you have to tell Stan to keep those values within their intended domain.

It may help to remember that

x ~ pi(alpha, beta);

is just shorthand for
log_target_density += pi_lpdf(x | alpha, beta);

Your Stan program defines `log_target_density` and then the algorithms in Stan adjust the parameters as needed (to find the MLE, to generate samples, etc).
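In actual Stan syntax, the two equivalent forms look like this (a sketch; the explicit form is commented out so the density isn't added twice):

```stan
model {
  // Sampling-statement form: drops additive constants from the log density
  asd ~ gamma(2, 0.5);
  // Equivalent explicit form: keeps all terms, including constants
  // target += gamma_lpdf(asd | 2, 0.5);
}
```

Either way, the algorithms only ever see the accumulated target; the tilde is purely notational sugar.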

At this point I think the tilde notation should just go away. It’s not worth the ongoing confusion.

Yeah, I think I’m actually starting to agree.

There is strong support on both sides here.

The main downside will be that it’ll make Stan programs slower, at least until there’s a way to write a log density that drops constant terms.

For this specific thread I think getting rid of tilde would help.

I think the solution to the start of this thread would be a requirement that the support of the lhs match the support of the rhs. Stan already checks real vs. integer, so why not also check the range? The current, very commonly used way of setting half-normal and half-Cauchy priors seems to cause a lot of confusion.
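For example, the common half-normal idiom relies entirely on the declared constraint, not on the sampling statement (a sketch; `tau` is a hypothetical scale parameter):

```stan
parameters {
  // Without <lower=0> this prior would be a full normal, not a half-normal
  real<lower=0> tau;
}
model {
  // Half-normal(0, 1) prior: the constraint truncates it to tau > 0
  tau ~ normal(0, 1);
}
```

Nothing in `tau ~ normal(0, 1)` itself says the support is positive, which is exactly the mismatch a range check on lhs vs. rhs would catch.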

Another thing which would help would be compound declarations

real<lower=0> asd ~ gamma(2, 0.5);

The problem is that all of these things are technically undecidable. We could introduce heuristics, but there is no way to compute it in general. As a simple example of why we can't do this statically, something like

real<lower=f(x)> z;

is impossible to check without knowing how f(x) evaluates, which we can't know statically. In fact, the bounds can change every iteration because they can depend on parameters.

It's also not included in the runtime type information: the variables don't carry their ranges around at run time; that's all handled in the transforms.
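A concrete instance where the bound changes every iteration (a sketch with hypothetical parameter names):

```stan
parameters {
  real a;
  // The lower bound of b depends on the parameter a, so it takes a
  // different value each iteration and cannot be verified statically
  real<lower=a> b;
}
model {
  a ~ normal(0, 1);
  b ~ normal(a, 1);  // support of b shifts along with a
}
```

Stan handles this by re-evaluating the bound inside the constraining transform at every leapfrog step, which is why the range lives in the transform rather than in the type.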

Yes, you are right.