Hi all, please suggest how to put a bound on a set of equations of parameters in Stan. For example, I need to impose the constraint $A\beta > 0$, where $A$ is a known $n \times n$ matrix and $\beta \in \mathbb{R}^n$ are the parameters. I need the parameters $\beta$ in the `parameters` block and $\log(A\beta)$ in the `transformed parameters` block.
For example, if I have the parameters `beta_1` and `beta_2` in my `parameters` block and I need to impose the constraint `beta_1 + beta_2 > 0`, is there a way to write this in Stan? Thanks!
Unfortunately, I don’t think what you want is supported in Stan out of the box. For the simple case of `beta_1 + beta_2 > 0`, you can do something like:
parameters {
  real beta_1;
  real<lower=0> beta_2_raw;
}
transformed parameters {
  // beta_1 + beta_2 = beta_2_raw > 0, so the constraint holds by construction
  real beta_2 = -beta_1 + beta_2_raw;
}
But this approach of setting bounds may become problematic, because now `beta_1` and `beta_2` are not fully symmetric: any prior on `beta_1` will imply some weird prior on `beta_2`.
Generally you want to find a one-to-one mapping between the solutions to your inequality and $\mathbb{R}^n$ that is invariant under permuting the dimensions. I believe it should exist, but my linear algebra is too weak to give a good answer. I noticed @bgoodri answered some similar inquiries in the past, so maybe he has time for this one as well?
Also note that you can use $ signs to mark math (via LaTeX) and backticks to mark code in your posts (I edited your post to show this already).
IIUC this would require the Stan sampler to work in unconstrained space and then transform the betas such that the constraint is respected, but there seems to be some sort of degeneracy in such a matrix inequality that prevents such an isomorphic transform.
Apart from simply rejecting proposals which don’t satisfy the constraint, would it make sense to push the sampler toward the part of parameter space satisfying the constraint with a non-Bayesian term in the construction of the log probability, such as
target += sum(tanh(A * beta));  // no loop needed: tanh() is vectorized over A * beta
which, from an optimization perspective, would help keep your constraint satisfied.
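To see what this soft penalty does numerically, here is a small NumPy check with a made-up $2 \times 2$ matrix and two candidate $\beta$ vectors (my example, not from the thread): each term of $\tanh(A\beta)$ approaches $+1$ where the constraint is satisfied and $-1$ where it is strongly violated, so the sum rewards constraint-satisfying regions.

```python
import numpy as np

# Hypothetical example matrix and two candidate beta vectors:
# one satisfying A @ beta > 0, one violating it.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
beta_ok = np.array([2.0, 1.0])    # A @ beta_ok  = [ 3,  1] -> constraint holds
beta_bad = np.array([-2.0, 1.0])  # A @ beta_bad = [-1, -3] -> constraint violated

def soft_penalty(A, beta):
    # Sum of tanh(A @ beta): bounded between -rows(A) and +rows(A).
    return np.sum(np.tanh(A @ beta))

print(soft_penalty(A, beta_ok))   # ≈ +1.76 (rewarded)
print(soft_penalty(A, beta_bad))  # ≈ -1.76 (penalized)
```

Note the penalty is smooth, so the sampler's gradient information still works, but it does not strictly enforce the constraint.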
Let $p \ge 0$ be an $n$-vector. Since $A$ is square, I assume it is invertible; could you sample $p$ and then parameterize $\beta = A^{-1}p$? That would guarantee $A\beta \ge 0$ on every sample of $p$:
parameters {
  vector<lower=0>[n] p;
}
transformed parameters {
  vector[n] b = inverse(A) * p;
}
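A quick NumPy verification of this idea, using an arbitrary invertible matrix of my own choosing (not from the thread): solving $A\beta = p$ for sampled $p \ge 0$ recovers $A\beta = p$ exactly, so the constraint holds componentwise by construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical invertible A: adding n*I to random entries makes it
# (almost surely) well-conditioned for this demonstration.
n = 4
A = rng.normal(size=(n, n)) + n * np.eye(n)

# Sample p >= 0 and set beta = A^{-1} p, mirroring the Stan snippet above
# (where beta is called b). solve() is preferred over forming inverse(A).
p = rng.exponential(size=n)
beta = np.linalg.solve(A, p)

# A @ beta recovers p, so every component of A @ beta is nonnegative.
assert np.allclose(A @ beta, p)
assert np.all(A @ beta >= -1e-12)
```

The same numerical advice applies in Stan: a left division such as `mdivide_left(A, p)` is generally preferable to `inverse(A) * p` when $A$ is large.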