I should have paid attention to the details! I think you will need some math to help you out with this.

- By parametrise, do you mean filling out the elements of `B` and `K`, and maybe making them depend on observed data or other parameters? For the standard deviations that is relatively easy: you can do something like `k[1] = exp(f(data, parameters))`.
- For `B`, it's harder. The biggest problem is that the eventual covariance matrix needs to be positive definite, which leads to non-linear restrictions on b_{12}, b_{13}, b_{23}. It might be possible to write out all the restrictions that ensure positive definiteness and positive correlations, but it's going to be unwieldy for large correlation matrices.
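To see why these restrictions are non-linear, here is a minimal sketch in Python/NumPy (the function name `is_positive_definite` and the example values are mine, not from your model) that checks whether a 3x3 correlation matrix with given off-diagonals is positive definite:

```python
import numpy as np

def is_positive_definite(b12, b13, b23):
    """Hypothetical helper: is the 3x3 correlation matrix with these
    off-diagonal entries positive definite?"""
    R = np.array([[1.0, b12, b13],
                  [b12, 1.0, b23],
                  [b13, b23, 1.0]])
    # Positive definite iff all eigenvalues are strictly positive.
    return bool(np.all(np.linalg.eigvalsh(R) > 0))

# Each entry is individually a valid correlation, but they interact:
print(is_positive_definite(0.9, 0.9, 0.9))    # fine
print(is_positive_definite(0.9, 0.9, -0.9))   # not positive definite
```

The joint constraint (determinant and leading minors positive) couples all the b's, which is why writing it out by hand gets unwieldy quickly.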

*I* would try to reformulate your problem in terms of linear regressions if that is possible.

For instance, does it make sense to write the following,

```
x2 ~ normal(a02 + a21 * x1, sd2)
x3 ~ normal(a03 + a31 * x1 + a32 * x2, sd3)
```

with `a21, a31, a32 > 0`? This would be equivalent to your original description, and the b's in your formulation will have the same sign as the a's in mine.
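As a quick sanity check, here is a simulation sketch in Python/NumPy (the coefficient and intercept values are made up for illustration) showing that regressions with positive coefficients produce an all-positive, automatically positive-definite correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical positive regression coefficients and intercepts.
a21, a31, a32 = 0.5, 0.3, 0.7

x1 = rng.normal(size=n)
# x2 ~ normal(a02 + a21 * x1, sd2)
x2 = 0.1 + a21 * x1 + rng.normal(size=n)
# x3 ~ normal(a03 + a31 * x1 + a32 * x2, sd3)
x3 = 0.2 + a31 * x1 + a32 * x2 + rng.normal(size=n)

R = np.corrcoef(np.stack([x1, x2, x3]))
# All off-diagonal correlations are positive, and R is positive
# definite by construction (it is the correlation matrix of real data).
print(R)
```

The advantage of this formulation is that the positivity constraints on the a's are simple box constraints, while positive definiteness comes for free.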

Or you could make use of the fact that if R is a correlation matrix, then the off-diagonal elements of R^{-1} are the partial correlations scaled by a *negative* factor: partial_corr_{ij} = -[R^{-1}]_{ij} / sqrt([R^{-1}]_{ii} [R^{-1}]_{jj}). (Linear regression coefficients are also scaled partial correlations, with the same sign as the partial correlation.)
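Here is that relationship in a short Python/NumPy sketch (the example matrix R is arbitrary but valid); note the sign flip between the off-diagonals of R^{-1} and the partial correlations:

```python
import numpy as np

# An arbitrary valid 3x3 correlation matrix.
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
P = np.linalg.inv(R)

# Partial correlation of i and j given the rest:
#   rho_ij = -P_ij / sqrt(P_ii * P_jj)
# i.e. the off-diagonal of R^{-1} is the partial correlation
# times a negative factor.
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)
np.fill_diagonal(partial, 1.0)
print(partial)
```

So constraining the signs of the partial correlations amounts to constraining the signs of the off-diagonal elements of the precision matrix, with the signs flipped.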