Time-varying covariance matrix in RStan

I am looking for urgent help with this problem and would be indebted to anyone who can help me out.
I am trying to build a time-varying VAR model in Stan:
1- I modelled the constants + coefficients as a random walk.
2- I modelled the variances as a stochastic-volatility random walk.

But I have failed to find a way to model the correlation between the variables as a random walk. Here is the Stan model:


```stan
data {
    int<lower=1> T;                     // number of time points
    int<lower=1> N;                     // number of series (here N = 2)
    int<lower=1> CC;                    // number of coefficients, CC = N * (N + 1)
    matrix[T, N] Y;                     // observations
    matrix[N, N * (N + 1)] X[T - 1];    // lagged design matrices
}
parameters {
    vector[CC] B_raw[T - 1];                   // coefficient innovations
    vector[CC] B1;                             // initial coefficients
    vector<lower=-1, upper=1>[CC] b;           // AR(1) coefficients for B
    real<lower=0> tau;
    vector[N] z_sigma[T - 1];                  // log-volatility innovations
    vector[N] Z1;                              // initial log volatilities
    real<lower=0> tau_sig;
    real<lower=-1, upper=1> s;
    real<lower=-1, upper=1> rho_raw[T - 1];    // correlation innovations
    real<lower=0> tau_rho;
}
transformed parameters {
    vector[CC] B[T - 1];
    vector[N] log_sigma[T - 1];
    real<lower=-1, upper=1> rho[T - 1];
    B[1] = B1;
    log_sigma[1] = Z1;
    rho[1] = 0;
    for (t in 2:(T - 1)) {
        B[t] = b .* B[t - 1] + tau * B_raw[t];                      // elementwise AR(1)
        log_sigma[t] = s * log_sigma[t - 1] + tau_sig * z_sigma[t];
        // Problem line: the walk can step outside [-1, 1], which
        // violates the declared bounds on rho.
        rho[t] = rho[t - 1] + tau_rho * rho_raw[t];
    }
}
model {
    vector[N] Mean;
    matrix[2, 2] Sigma;
    B1 ~ normal(0.1, 0.5);
    Z1 ~ normal(-3, 1);
    b[1] ~ normal(0.5, 1);
    b[2] ~ normal(0.5, 1);
    s ~ normal(0.5, 1);
    for (t in 1:(T - 1)) {
        B_raw[t] ~ normal(0, 1);
        z_sigma[t] ~ normal(0, 1);
        rho_raw[t] ~ normal(0, 1);
    }
    tau ~ normal(0, 0.3);
    tau_sig ~ normal(0, 0.3);
    tau_rho ~ normal(0, 0.3);
    for (t in 2:T) {
        Mean = X[t - 1] * B[t - 1];
        Sigma[1, 1] = exp(2 * log_sigma[t - 1, 1]);
        Sigma[2, 2] = exp(2 * log_sigma[t - 1, 2]);
        Sigma[1, 2] = rho[t - 1] * exp(log_sigma[t - 1, 1]) * exp(log_sigma[t - 1, 2]);
        Sigma[2, 1] = Sigma[1, 2];
        Y[t] ~ multi_normal(Mean, Sigma);
    }
}
```

The issue is that Stan rejects the initial values for rho, since it is bounded to [-1, 1] (it is the correlation between y1 and y2). I have tried several approaches but always end up with rejected initial values.
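One standard way around the rejections is to put the random walk on an unconstrained (Fisher-z) scale and map it back through `tanh`, so every draw of the correlation automatically lands inside (-1, 1). A minimal sketch of the idea, with names like `z_rho` being my own (not from the model above):

```stan
parameters {
   real z_rho1;              // initial state on the unconstrained scale
   vector[T - 2] z_rho_raw;  // standard-normal innovations
   real<lower=0> tau_rho;
}
transformed parameters {
   vector[T - 1] z_rho;
   vector[T - 1] rho;        // implied correlations, always in (-1, 1)
   z_rho[1] = z_rho1;
   for (t in 2:(T - 1))
      z_rho[t] = z_rho[t - 1] + tau_rho * z_rho_raw[t - 1];
   rho = tanh(z_rho);        // equivalently, atanh(rho) follows the random walk
}
model {
   z_rho1 ~ normal(0, 1);
   z_rho_raw ~ std_normal();
   tau_rho ~ normal(0, 0.3);
}
```

Because the walk lives on the whole real line, no initial value can violate a bound, and the `tanh` transform is smooth, so HMC has no trouble with it; the price is that the prior on rho itself becomes implicit.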

It’s complicated, yeah. I implemented this for the ctsem package; there is some discussion (be sure to see the updates in later posts) here: Partial-pooling of correlation (or covariance) matrices? - #9 by rmac
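For more than two series the same trick generalizes: let unconstrained quantities evolve freely and map them to a valid correlation matrix at each step. One standard construction (canonical partial correlations, analogous to what Stan uses internally for correlation-matrix types) is sketched below; the function name is my own:

```stan
functions {
  // Map an unconstrained vector z (length N * (N - 1) / 2) to the
  // Cholesky factor of a valid N x N correlation matrix, using
  // tanh-transformed canonical partial correlations.
  matrix z_to_corr_chol(vector z, int N) {
    matrix[N, N] L = rep_matrix(0, N, N);
    int k = 1;
    L[1, 1] = 1;
    for (i in 2:N) {
      real rem = 1;               // remaining squared norm of row i
      for (j in 1:(i - 1)) {
        L[i, j] = tanh(z[k]) * sqrt(rem);
        rem -= square(L[i, j]);
        k += 1;
      }
      L[i, i] = sqrt(rem);        // rows have unit norm, so diag(L * L') = 1
    }
    return L;
  }
}
```

With this, a random walk on `z` yields a positive-definite correlation matrix `L * L'` at every time point, and `multi_normal_cholesky` can consume `diag_pre_multiply(sigma, L)` directly.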