Hello,
I have an equation for population growth that has three rate parameters a, b, and c.
For the equation to make physical sense (e.g. no negative populations), we must have 0 < a < b < c.
Should I write:
data {
  int<lower=0> N;
  array[N] real<lower=0> y;
  array[N] real t;
  array[N] real x;
}
parameters {
  real<lower=0> c;
  real<lower=0, upper=c> b;
  real<lower=0, upper=b> a;
  real<lower=0> sigma;
}
model {
  a ~ gamma(2, 2) T[, b];
  b ~ gamma(2, 2) T[a, c];
  c ~ gamma(2, 2) T[b, ];
  for (i in 1:N) {
    y[i] ~ lognormal(log(population_function(a, b, c, t[i], x[i])), sigma);
  }
}
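To make sure I have the semantics right, here is a minimal sketch (in Python, not Stan, and assuming Stan's documented expansion of T[...] into log-normalization terms, with the Gamma(2, 2) CDF written in closed form) of the log target that I believe the sampling statements in this first model produce:

```python
import math

def gamma22_lpdf(x):
    # log density of Gamma(shape=2, rate=2): log(4) + log(x) - 2x
    return math.log(4.0) + math.log(x) - 2.0 * x

def gamma22_cdf(x):
    # closed form for shape 2: P(X <= x) = 1 - exp(-2x) * (1 + 2x)
    return 1.0 - math.exp(-2.0 * x) * (1.0 + 2.0 * x)

def log_prior_model1(a, b, c):
    # a ~ gamma(2,2) T[, b]  ->  lpdf(a) - log CDF(b)
    # b ~ gamma(2,2) T[a, c] ->  lpdf(b) - log(CDF(c) - CDF(a))
    # c ~ gamma(2,2) T[b, ]  ->  lpdf(c) - log(1 - CDF(b))
    lp = gamma22_lpdf(a) - math.log(gamma22_cdf(b))
    lp += gamma22_lpdf(b) - math.log(gamma22_cdf(c) - gamma22_cdf(a))
    lp += gamma22_lpdf(c) - math.log(1.0 - gamma22_cdf(b))
    return lp

print(log_prior_model1(0.2, 0.5, 1.0))
```

Note that every normalization term here is a function of the other parameters, not a constant, which is exactly what I am unsure about below.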
Or can I write:
parameters {
  real<lower=0> a;
  real<lower=a> b;
  real<lower=b> c;
  ...
}
model {
  a ~ gamma(2, 2);
  b ~ gamma(2, 2) T[a, ];
  c ~ gamma(2, 2) T[b, ];
  ...
}
Would these constraint declarations, and their various permutations, yield equivalent MCMC samples, perhaps differing only in computational efficiency or in which numerical-stability issues they run into? Or does the math not work out to the same posterior?
I can follow the examples in the Stan manual and tutorial below where the bounds are constant, but I don't understand how truncations (which, as I understand it, just subtract lcdf/lccdf normalization terms from the target, right?) affect the sampler or the gradient calculations when the bound is itself a parameter.
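To illustrate what I mean by the gradient question: if the lower bound in c ~ gamma(2, 2) T[b, ] is the parameter b, then the subtracted lccdf term is a function of b, so (I assume) autodiff picks up an extra gradient contribution with respect to the bound. A finite-difference sketch in Python (again assuming the Gamma(2, 2) prior, with the lccdf written in closed form):

```python
import math

def gamma22_lccdf(x):
    # log complementary CDF of Gamma(shape=2, rate=2):
    # log P(X > x) = -2x + log(1 + 2x)
    return -2.0 * x + math.log(1.0 + 2.0 * x)

def trunc_term(b):
    # the b-dependent piece that "c ~ gamma(2, 2) T[b, ]" adds:
    # target += -gamma22_lccdf(b)
    return -gamma22_lccdf(b)

# central finite-difference gradient of the truncation term w.r.t. the bound b
b, h = 0.5, 1e-6
grad = (trunc_term(b + h) - trunc_term(b - h)) / (2.0 * h)
# analytic: d/db [2b - log(1 + 2b)] = 2 - 2/(1 + 2b) = 4b/(1 + 2b)
print(grad)  # nonzero, so the sampler "feels" the moving bound
```

Is this extra gradient contribution the whole story, or is there more going on in how Stan handles parameter-dependent truncation bounds?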