# Choices of bounded parameters and truncations

Hello,

I have an equation for population growth that has three rate parameters a, b, and c.
For the equation to make physical sense (e.g. no negative populations), we must have 0 < a < b < c.

Should I write:

```stan
data {
  int<lower=0> N;
  array[N] real<lower=0> y;
  array[N] real t;
  array[N] real x;
}
parameters {
  real<lower=0> c;
  real<lower=0, upper=c> b;
  real<lower=0, upper=b> a;
  real<lower=0> sigma;
}
model {
  a ~ gamma(2, 2) T[, b];
  b ~ gamma(2, 2) T[a, c];
  c ~ gamma(2, 2) T[b, ];
  for (i in 1:N) {
    y[i] ~ lognormal(log(population_function(a, b, c, t[i], x[i])), sigma);
  }
}
```

Or can I write:

```stan
parameters {
  real<lower=0> a;
  real<lower=a> b;
  real<lower=b> c;
  ...
}
model {
  a ~ gamma(2, 2);
  b ~ gamma(2, 2) T[a, ];
  c ~ gamma(2, 2) T[b, ];
  ...
}
```

Would these constraint declarations, and the various permutations of them, yield equivalent MCMC samples, apart from differences in computational efficiency or numerical stability? Or does the math not make sense?

I can follow the examples in the Stan manual and in the tutorial below where the bounds are constant, but I don't understand how truncations (which, as I understand it, just subtract the appropriate `lcdf`/`lccdf` normalization terms from the target) affect the sampler or gradient calculations when the bound is itself a parameter.

https://betanalpha.github.io/assets/case_studies/stan_intro.html#62_constraint_implementation_in_the_parameters_block
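For concreteness, my understanding is that a doubly truncated statement expands to something like the following (I may well be wrong, which is part of what I'm asking). When the bounds are parameters, the normalization term depends on `a` and `c` and so would contribute to their gradients too:

```stan
// My understanding: b ~ gamma(2, 2) T[a, c]; expands to roughly
model {
  target += gamma_lpdf(b | 2, 2)
            - log_diff_exp(gamma_lcdf(c | 2, 2),
                           gamma_lcdf(a | 2, 2));
}
```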

I know this doesn’t address the questions about your approach directly.

But another approach, which would avoid the bounds/truncation subtleties altogether, is to view your problem as one of order statistics. First derive the joint pdf of the order statistics [a, b, c] of three i.i.d. draws from your gamma prior, then derive the pdf of the transformed variables [x1, x2, x3] = [a, b - a, c - b]. Note that the only constraints on the x_i are that they are all positive. Declare X = [x1, x2, x3] as the parameters and increment the target with the log of that pdf (e.g. via a function defined in the `functions` block), then reconstruct [a, b, c] = [x1, x1 + x2, x1 + x2 + x3] in the `transformed parameters` block.

Using first differences like this avoids having to use any parameters as bounds on other parameters. It also avoids the inefficiency of rejecting proposals that violate the ordering; if you generalize to many more than three parameters, those rejection costs might become prohibitive.
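A minimal sketch of that idea, assuming independent gamma(2, 2) draws: the joint order-statistic density is just 3! times the product of the individual densities on the ordered region, and the map from the differences to [a, b, c] is linear with unit Jacobian determinant, so placing the gamma log densities on a, b, c directly gives the right distribution up to a constant.

```stan
parameters {
  vector<lower=0>[3] x;  // first differences: x[1] = a, x[2] = b - a, x[3] = c - b
}
transformed parameters {
  real a = x[1];
  real b = x[1] + x[2];
  real c = x[1] + x[2] + x[3];
}
model {
  // Joint order-statistic density of three i.i.d. gamma(2, 2) draws,
  // up to the constant 3!. The linear map x -> (a, b, c) has unit
  // Jacobian determinant, so no Jacobian adjustment is needed.
  target += gamma_lpdf(a | 2, 2)
          + gamma_lpdf(b | 2, 2)
          + gamma_lpdf(c | 2, 2);
}
```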

The simplest approach here is to use the `positive_ordered` data type:

```stan
parameters {
  positive_ordered[3] abc;
}
model {
  abc ~ gamma(2, 2);
}
```

This will ensure that `0 < abc[1] < abc[2] < abc[3]`.
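If you prefer named parameters in the rest of the model, you can unpack the ordered vector in `transformed parameters`. A sketch, reusing the likelihood from the original post (`population_function` is the user's own function, and the `data` block is as in the first model above, so it is omitted here):

```stan
parameters {
  positive_ordered[3] abc;
  real<lower=0> sigma;
}
transformed parameters {
  real a = abc[1];
  real b = abc[2];
  real c = abc[3];
}
model {
  abc ~ gamma(2, 2);  // independent gamma priors, restricted to the ordered region
  for (i in 1:N) {
    y[i] ~ lognormal(log(population_function(a, b, c, t[i], x[i])), sigma);
  }
}
```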