Constraining the sum of a parameter vector

Good evening,

I have a situation where \sum_{j=1}^N t_j = a, where both the t_j's and a are parameters of interest. I have three questions:

  1. Can I impose the constraint \sum_{j=1}^N t_j = a even though both the t_j's and a are parameters? The prior for each t_j is Gamma(1,1), and the prior for a is Gamma(1,2).

  2. If the answer to (1) is yes, how exactly is this accomplished?

  3. Assuming (1) and (2), is there a reference (i.e. journal article, book, etc.) that you can point me to? I searched for articles related to this topic and found “Spherical Hamiltonian Monte Carlo for Constrained Target Distributions” by Lan, Zhou, and Shahbaba (2014; Proceedings of Machine Learning Research). The No-U-Turn Sampler paper by Hoffman and Gelman mentions constraints briefly, as does Bayesian Data Analysis by Gelman et al. My goal is to understand this aspect of HMC in detail.

Thank you for your time.

The sum of N independent Gamma(1,1)-distributed variables is Gamma(N,1)-distributed, so the constraint is going to distort those priors a lot.
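If it helps to see this numerically, here is a quick check (a sketch in Python with numpy/scipy, not part of the model itself) comparing the empirical distribution of the sum of N iid Gamma(1,1) draws against Gamma(N,1):

# Check that the sum of N iid Gamma(1,1) draws is Gamma(N,1)-distributed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 5
draws = rng.gamma(shape=1.0, scale=1.0, size=(100_000, N))
sums = draws.sum(axis=1)

# Empirical quantiles of the sums vs. theoretical Gamma(N, 1) quantiles.
qs = [0.1, 0.5, 0.9]
print(np.quantile(sums, qs))      # empirical
print(stats.gamma.ppf(qs, a=N))   # theoretical Gamma(N, 1)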

One way of implementing the constraint in Stan is:

data {
  int<lower=1> N;  // number of t_j's
}
parameters {
  simplex[N] tn;    // proportions; tn sums to 1
  real<lower=0> a;  // the total
}
transformed parameters {
  vector[N] t = a * tn;  // by construction, sum(t) == a
}
model {
  tn ~ dirichlet(rep_vector(1, N));
  a ~ gamma(1, 2);
}

I used a Dirichlet prior because that’s the distribution you get if you take N independent Gamma(1,1)-distributed variables and divide them by their sum.
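You can verify that Gamma-to-Dirichlet fact by simulation as well (again a Python sketch, assuming numpy/scipy): normalizing N iid Gamma(1,1) draws by their sum gives Dirichlet(1,...,1), i.e. a uniform distribution on the simplex, whose marginals are Beta(1, N-1).

# Normalized iid Gamma(1,1) draws vs. direct Dirichlet(1,...,1) draws:
# both should have Beta(1, N-1) marginals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 3
g = rng.gamma(shape=1.0, scale=1.0, size=(100_000, N))
normalized = g / g.sum(axis=1, keepdims=True)  # rows live on the simplex

direct = rng.dirichlet(np.ones(N), size=100_000)

qs = [0.25, 0.5, 0.75]
print(np.quantile(normalized[:, 0], qs))  # Gamma construction
print(np.quantile(direct[:, 0], qs))      # direct Dirichlet draws
print(stats.beta.ppf(qs, 1, N - 1))       # theoretical Beta(1, N-1)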

In general, Stan handles constraints like this by a change of variables: constrained parameters are mapped to an unconstrained space, and the log-Jacobian of the transform is added to the target log density.
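For instance, a parameter declared real<lower=0> is represented internally as an unconstrained value u with a = exp(u), and log|da/du| = u is added to the log density. Here is a minimal Python sketch of that idea for the Gamma(1,2) prior on a (the function names are illustrative, not Stan internals):

# Change-of-variables sketch for a lower-bounded parameter:
# sample u on the real line, set a = exp(u), add the log-Jacobian u.
import math

def log_gamma_pdf(a, alpha=1.0, beta=2.0):
    # Log density of Gamma(alpha, beta) at a > 0 (shape/rate parameterization).
    return (alpha * math.log(beta) - math.lgamma(alpha)
            + (alpha - 1.0) * math.log(a) - beta * a)

def unconstrained_logp(u):
    a = math.exp(u)                # inverse transform: a > 0 automatically
    return log_gamma_pdf(a) + u    # + log-Jacobian of a = exp(u)

HMC can then run on u over the whole real line with no constraint to enforce; the same idea, with a more involved transform, underlies the simplex type used above.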
