- Hi, I am writing a multivariate time series model and the matrix for theta needs to have eigenvalues between -1 and 1. Is there any way I could achieve this?

That’s an interesting question and I don’t know the answer. An important sub-question is whether the matrix is already guaranteed to have real eigenvalues (this would be ensured, for example, if the matrix has real entries and is symmetric). If real eigenvalues are not already structurally guaranteed, do you need the constraint you are asking for to guarantee that all eigenvalues are real? Or if not, do you want the constraint to apply to the real part of the eigenvalue, or the absolute value?

So the eigenvalues do not need to be real; the constraint should apply just to the absolute value of the complex eigenvalues.
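For concreteness (this example is not from the original thread), the quantity being constrained is the spectral radius: the largest modulus among the, in general complex, eigenvalues. A minimal numpy sketch with a hypothetical theta matrix:

```python
import numpy as np

# A hypothetical 2x2 theta matrix whose eigenvalues are complex
theta = np.array([[0.5, -0.8],
                  [0.8,  0.5]])

eigvals = np.linalg.eigvals(theta)   # complex in general: 0.5 +/- 0.8j
spectral_radius = max(abs(eigvals))  # largest eigenvalue modulus

# Here the moduli are sqrt(0.5^2 + 0.8^2) ~= 0.943, so the
# stationarity-style constraint |eigenvalue| < 1 is satisfied
print(spectral_radius)
```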

There might be a slick way to implement the constraint you need, but I don’t know what it is. There is also a hackish strategy that might work.

- If you fit the model without the constraint, and find that at every iteration the constraint is satisfied (and the diagnostics all look good), then adding the constraint would not change the posterior.
- If you find that the constraint is satisfied at most (but not all) iterations, you can recover the posterior that respects the constraint by rejecting post hoc all iterations that fail it. If only some iterations satisfy the constraint, you can run your chains for longer until you have a good sample size of iterations that do respect the constraint.
- If you find that the constraint is rarely satisfied, you could see if you can convince the model to land more regularly within the region that satisfies the constraint by implementing a penalty term in the log-likelihood that is zero everywhere that satisfies the constraint and negative otherwise. The trick is that this penalty must be differentiable everywhere. I don’t even know whether this is possible: is the absolute value of the eigenvalue everywhere differentiable with respect to the matrix elements? If it is possible, you could consider a penalty like the following (note, I have not tested this Stan code):

```
data {
  int<lower=1> N;
  real<lower=0> z;
}
parameters {
  matrix[N, N] M;
}
transformed parameters {
  // largest eigenvalue modulus (spectral radius) of M;
  // eigenvalues() returns a complex_vector, abs() takes its moduli
  real E = max(abs(eigenvalues(M)));
}
model {
  // penalize draws whose spectral radius exceeds 1
  if (E > 1) {
    target += -(E - 1)^z;
  }
}
```

This imposes increasingly abrupt penalties for larger `z`, which will tend to force smaller stepsizes to avoid divergences but will also tend to encourage more iterations to land within the region that satisfies the constraint.
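To see how the choice of `z` shapes the penalty, here is an illustrative Python sketch (not from the thread) of the same `-(E - 1)^z` term, along with the post hoc rejection idea from the earlier bullet:

```python
import numpy as np

def penalty(E, z):
    """Log-density penalty from the Stan sketch: zero inside the
    constraint region, -(E - 1)^z when the spectral radius E > 1."""
    return -(E - 1)**z if E > 1 else 0.0

# Larger z keeps the penalty near zero just past E = 1
# but makes it climb more abruptly further out
for z in (1.0, 2.0, 8.0):
    print(z, [round(penalty(E, z), 4) for E in (0.9, 1.05, 1.5, 2.0)])

# Post hoc rejection: keep only draws whose spectral radius is < 1
# (two hypothetical posterior draws of a 2x2 theta matrix)
draws = [np.array([[0.5, -0.8], [0.8, 0.5]]),   # moduli ~0.943: kept
         np.array([[1.2,  0.0], [0.0, 0.3]])]   # eigenvalue 1.2: rejected
kept = [M for M in draws if max(abs(np.linalg.eigvals(M))) < 1]
print(len(kept))  # 1
```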