Data-driven bounds (vectorizing lower/upper bounds)

Hi Stan people

I have a parameter vector for which I know hard limits on each element's value. Its true value may be anywhere within that range; it would be incorrect to assume it is towards the centre of this range.

I understand that for a variable with a uniform prior, its declared bounds must be equal to or less permissive than the support of the prior distribution.
Unfortunately, my bounds change for each element of my parameter vector.

I found this exchange from the mailing list in 2014 https://groups.google.com/forum/#!topic/stan-dev/c73kgSpQHrM
It seems you can’t have vectorised bounds?

Does anyone have any suggestions for how I can get around this?

Many thanks

Declare the vector with bounds of 0 and 1 in the parameters block, map each element to the appropriate bounds in the transformed parameters block, and don’t forget the Jacobian adjustments if you are putting a prior on the thing in the transformed parameters block.
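
In code, that pattern looks roughly like this (a minimal sketch; lb, ub, theta_raw, and theta are illustrative names, not anything from your model):

data {
  int<lower=0> N;
  real lb[N];                            // per-element lower bounds
  real ub[N];                            // per-element upper bounds
}
parameters {
  real<lower=0, upper=1> theta_raw[N];   // sampled on (0, 1)
}
transformed parameters {
  real theta[N];
  for (n in 1:N)
    theta[n] = lb[n] + (ub[n] - lb[n]) * theta_raw[n];   // element n lives on (lb[n], ub[n])
}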

Great,

So something like this, where h_min and h_max are my bounds:

data {
  int<lower=0> N;
  real<lower=0> C[N];
  real h_min[N]; 
  real h_max[N];
}

parameters {
  real<lower=0> sigma;
  real<lower=0,upper=1> h_sample[N-1];
}

transformed parameters {
  real C_hat[N-1];
  real h[N-1];
  for (t in 1:(N-1)) {
    // map each h_sample onto its own (h_min, h_max) interval
    h[t] = h_sample[t] * (h_max[t] - h_min[t]) + h_min[t];
    C_hat[t] = C[t] / h[t];
  }
}

model {
  sigma ~ cauchy(0, 1);
  h_sample ~ uniform(0, 1);
  for(n in 2:N){
    C[n] ~ normal(C_hat[n-1], sigma);
  }
}

I have not quite got my head around these Jacobian adjustments. I understand we need to correct for the change of scale. What exactly should my target += statement look like?

The way you have it written currently, no Jacobian adjustment is needed, because you are putting the prior on the parameter h_sample rather than on the transformed parameter h. This implies h is distributed uniformly between h_min[t] and h_max[t]. Also, you do not need the line h_sample ~ uniform(0, 1); omitting it results in the same log-kernel, since the density of a uniform(0, 1) variable is constant.
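
For reference, if you did put a prior on the transformed parameter h, the adjustment would be the log absolute derivative of the transform. Because h[t] = h_sample[t] * (h_max[t] - h_min[t]) + h_min[t] is affine in h_sample[t], that derivative is just h_max[t] - h_min[t], a constant given the data, so even then the target += term could be dropped. A sketch of what the explicit statements would look like (the normal(2, 1) prior is purely illustrative, not from the thread):

model {
  sigma ~ cauchy(0, 1);
  for (t in 1:(N-1)) {
    h[t] ~ normal(2, 1);                  // hypothetical prior directly on h
    target += log(h_max[t] - h_min[t]);   // log |dh/dh_sample|; constant here
  }
  for (n in 2:N)
    C[n] ~ normal(C_hat[n-1], sigma);
}

Note that because h[t] is confined to (h_min[t], h_max[t]) by the transform, any prior placed directly on it is implicitly truncated to that interval.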
