Why does my hierarchical model get a lot of warnings?

I want to construct a hierarchical ordered logit model. I have tried several approaches but get a lot of warnings. I use an induced Dirichlet prior for the thresholds and reduce_sum to speed up sampling. My Stan code looks like this:

functions {
  // Partial log-likelihood over a slice of observations, for reduce_sum
  real partial_sum(array[] int slice_y,
                   int start, int end,
                   matrix x1, matrix x2, array[] vector thresh,
                   vector beta1, array[] vector beta2, array[] int g) {
    real lp = 0;
    for (i in start:end)
      lp += ordered_logistic_lpmf(slice_y[i - start + 1] | x1[i] * beta1 + x2[i] * beta2[g[i]], thresh[g[i]]);
    return lp;
  }
  real induced_dirichlet_lpdf(vector c, vector alpha, real phi) {
    int K = num_elements(c) + 1;
    vector[K - 1] sigma = inv_logit(phi - c);
    vector[K] p;
    matrix[K, K] J = rep_matrix(0, K, K);
    // Induced ordinal probabilities
    p[1] = 1 - sigma[1];
    for (k in 2:(K - 1))
      p[k] = sigma[k - 1] - sigma[k];
    p[K] = sigma[K - 1];
    // Baseline column of Jacobian
    for (k in 1:K) J[k, 1] = 1;
    // Diagonal entries of Jacobian
    for (k in 2:K) {
      real rho = sigma[k - 1] * (1 - sigma[k - 1]);
      J[k, k] = - rho;
      J[k - 1, k] = rho;
    }
    return   dirichlet_lpdf(p | alpha)
           + log_determinant(J);
  }
}

data {
  int<lower=1> N;                    // Number of observations
  int<lower=1> K;                    // Number of ordinal categories
  int<lower=1> D;                    // Number of covariates
  int<lower=1> DN;                   // Number of covariates with time-varying coefficients
  array[N] int<lower=1, upper=K> y;  // Observed ordinal outcomes
  matrix[N, D - DN] x1;              // Covariates with constant coefficients
  matrix[N, DN] x2;                  // Covariates with time-varying coefficients
  array[N] int g;                    // Time indicator
  int<lower=1> P;                    // Number of distinct time periods
}
parameters {
  vector[D - DN] beta1;                 // Constant coefficients
  array[P] vector[DN] beta2;            // Time-varying coefficients
  array[P] ordered[K - 1] thresh;       // Time-varying thresholds
  real<lower=0> sigma;                  // Random-walk scale for the thresholds

  vector<lower=0, upper=15>[DN] Omega;  // Random-walk scales for beta2
}

model {
  vector[DN] Zero = rep_vector(0, DN);
  beta1 ~ normal(0, 10);
  beta2[1] ~ normal(0, 10);
  //tau ~ cauchy(0, 2.5);
  //Omega ~ lkj_corr(2);
  thresh[1] ~ induced_dirichlet(rep_vector(1, K), 0);
  // Random walks over time for the thresholds and the time-varying coefficients
  for (i in 1:(P - 1)) {
    (thresh[i + 1] - thresh[i]) ~ normal(0, sigma);
    //(beta2[i + 1] - beta2[i]) ~ normal(0, omega[i]);
    (beta2[i + 1] - beta2[i]) ~ normal(Zero, Omega);
  }
  // grainsize = 1 lets reduce_sum choose slice sizes automatically
  target += reduce_sum(partial_sum, y, 1, x1, x2, thresh, beta1, beta2, g);
}
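
For reference, the induced Dirichlet prior is a change of variables from the cutpoints to the ordinal probabilities at an anchor point $\phi$; this is what induced_dirichlet_lpdf above computes:

$$p_1 = 1 - \sigma(\phi - c_1), \qquad p_k = \sigma(\phi - c_{k-1}) - \sigma(\phi - c_k), \qquad p_K = \sigma(\phi - c_{K-1}),$$

where $\sigma$ is the inverse logit. Putting a Dirichlet density on $p$ then requires adding the log Jacobian determinant $\log\lvert\det J\rvert$ of the map $c \mapsto p$.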


I got warnings like these:

Warning messages:
1: There were 152 divergent transitions after warmup. See
https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them. 
2: Examine the pairs() plot to diagnose sampling problems
 
3: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#bulk-ess 
4: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#tail-ess 

And my estimation results are:

               mean se_mean    sd    2.5%     25%     50%     75%   97.5% n_eff Rhat
beta1[1]     -17.71    0.20  9.61  -36.27  -24.02  -17.88  -11.39    1.90  2418 1.00
beta1[2]      -4.67    0.18  9.95  -23.83  -11.40   -4.60    2.07   15.01  2919 1.00
beta1[3]     -19.19    0.24  9.22  -36.77  -25.41  -19.40  -13.25   -0.33  1427 1.00
beta1[4]      -0.56    0.02  0.44   -1.53   -0.83   -0.54   -0.24    0.21   824 1.00
beta2[1,1]     7.89    0.17  5.44   -3.72    4.40    8.47   11.79   17.12  1016 1.00
beta2[1,2]    -3.45    0.03  1.44   -6.06   -4.47   -3.54   -2.51   -0.47  2296 1.00
beta2[1,3]    -0.28    0.00  0.10   -0.46   -0.34   -0.28   -0.22   -0.07  2083 1.00
beta2[1,4]     2.73    0.01  0.25    2.29    2.55    2.71    2.88    3.25  1564 1.00
beta2[1,5]     2.17    0.03  0.83    0.96    1.49    2.05    2.72    3.99   815 1.01
beta2[1,6]     1.33    0.01  0.52    0.35    0.99    1.30    1.64    2.43  2979 1.00
beta2[1,7]    -0.13    0.01  0.28   -0.66   -0.31   -0.14    0.03    0.50  2876 1.00
beta2[2,1]    12.75    0.08  4.64    3.59    9.82   12.51   15.34   23.07  3000 1.00
beta2[2,2]    -5.68    0.05  1.84   -9.82   -6.78   -5.39   -4.39   -2.67  1286 1.00
beta2[2,3]    -0.30    0.00  0.11   -0.54   -0.36   -0.30   -0.24   -0.09  2335 1.00
beta2[2,4]     2.62    0.00  0.20    2.24    2.49    2.61    2.74    3.06  1898 1.00
beta2[2,5]     1.56    0.01  0.51    0.61    1.23    1.52    1.88    2.64  2063 1.00
beta2[2,6]     0.82    0.01  0.49   -0.23    0.50    0.87    1.17    1.67  1735 1.00
beta2[2,7]    -0.24    0.01  0.30   -0.95   -0.39   -0.22   -0.07    0.32  2532 1.00
beta2[3,1]    14.35    0.08  3.71    7.68   11.81   14.12   16.58   22.22  2279 1.00
beta2[3,2]    -3.61    0.02  1.07   -5.56   -4.39   -3.64   -2.90   -1.40  1852 1.00
beta2[3,3]    -0.30    0.00  0.07   -0.43   -0.34   -0.30   -0.25   -0.16  2156 1.00
beta2[3,4]     2.61    0.00  0.15    2.33    2.51    2.61    2.71    2.93  2004 1.00
beta2[3,5]     1.19    0.01  0.31    0.55    0.99    1.20    1.39    1.78  1862 1.00
beta2[3,6]     1.45    0.01  0.34    0.84    1.22    1.42    1.68    2.16  2620 1.00
beta2[3,7]    -0.15    0.00  0.21   -0.53   -0.29   -0.16   -0.02    0.30  2377 1.00
beta2[4,1]    12.17    0.07  3.79    4.77    9.72   12.11   14.64   19.68  3236 1.00
beta2[4,2]    -6.45    0.06  1.72  -10.21   -7.59   -6.28   -5.16   -3.69   851 1.00
beta2[4,3]    -0.30    0.00  0.09   -0.47   -0.35   -0.30   -0.25   -0.10  2597 1.00
beta2[4,4]     2.69    0.01  0.20    2.36    2.54    2.66    2.81    3.16  1135 1.00
beta2[4,5]     1.33    0.01  0.34    0.66    1.12    1.30    1.53    2.06  2198 1.00
beta2[4,6]     0.86    0.01  0.37    0.11    0.61    0.89    1.13    1.51  1726 1.00
beta2[4,7]    -0.17    0.00  0.23   -0.59   -0.33   -0.19   -0.04    0.34  2767 1.00
beta2[5,1]    12.68    0.08  3.48    5.81   10.41   12.62   14.93   19.54  1941 1.00
beta2[5,2]    -4.76    0.02  0.90   -6.56   -5.37   -4.76   -4.15   -3.01  3032 1.00
beta2[5,3]    -0.32    0.00  0.07   -0.46   -0.37   -0.32   -0.28   -0.20  2368 1.00
beta2[5,4]     2.41    0.00  0.16    2.10    2.30    2.41    2.51    2.72  1462 1.00
beta2[5,5]     1.17    0.01  0.27    0.62    0.98    1.18    1.35    1.67  1938 1.00
beta2[5,6]     1.44    0.01  0.28    0.93    1.25    1.43    1.63    2.00  1908 1.00
beta2[5,7]    -0.35    0.00  0.20   -0.77   -0.48   -0.34   -0.22    0.00  2124 1.00
thresh[1,1]   -1.91    0.03  1.21   -4.33   -2.72   -1.86   -1.09    0.39  1695 1.00
thresh[1,2]    1.03    0.04  1.28   -1.57    0.16    1.03    1.91    3.46  1148 1.00
thresh[1,3]    4.45    0.04  1.38    1.68    3.54    4.44    5.40    7.17  1094 1.00
thresh[1,4]    7.26    0.04  1.41    4.42    6.35    7.26    8.25    9.89  1142 1.00
thresh[2,1]   -2.03    0.03  1.39   -4.98   -2.93   -1.95   -1.11    0.50  1747 1.00
thresh[2,2]    1.79    0.03  1.22   -0.62    0.96    1.80    2.59    4.16  1798 1.00
thresh[2,3]    5.72    0.03  1.22    3.42    4.89    5.69    6.52    8.16  1718 1.00
thresh[2,4]    8.30    0.03  1.25    5.91    7.43    8.30    9.12   10.80  1696 1.00
thresh[3,1]   -1.75    0.04  1.30   -4.28   -2.63   -1.75   -0.92    0.80  1324 1.01
thresh[3,2]    2.75    0.03  1.22    0.46    1.91    2.72    3.53    5.20  1262 1.01
thresh[3,3]    6.07    0.03  1.23    3.79    5.21    6.05    6.88    8.62  1359 1.01
thresh[3,4]    8.55    0.03  1.26    6.20    7.68    8.53    9.36   11.14  1376 1.00
thresh[4,1]   -2.42    0.04  1.52   -5.54   -3.37   -2.36   -1.44    0.50  1622 1.00
thresh[4,2]    2.71    0.04  1.37    0.30    1.76    2.63    3.51    5.61  1189 1.01
thresh[4,3]    6.92    0.05  1.45    4.40    5.93    6.82    7.81    9.95  1034 1.01
thresh[4,4]    9.19    0.05  1.47    6.66    8.19    9.10   10.07   12.27  1051 1.01
thresh[5,1]   -2.88    0.04  1.72   -6.66   -3.82   -2.78   -1.76    0.21  1704 1.00
thresh[5,2]    3.50    0.06  1.69    0.68    2.36    3.31    4.48    7.29   778 1.01
thresh[5,3]    7.42    0.06  1.75    4.51    6.22    7.23    8.47   11.43   758 1.01
thresh[5,4]   10.01    0.07  1.83    6.97    8.74    9.79   11.08   14.14   734 1.01
sigma          1.05    0.03  0.73    0.17    0.54    0.88    1.36    2.89   566 1.01
Omega[1]       5.65    0.16  3.87    0.33    2.46    4.99    8.27   14.10   621 1.00
Omega[2]       3.73    0.10  2.90    0.19    1.50    3.10    5.15   11.28   890 1.00
Omega[3]       0.11    0.00  0.14    0.00    0.03    0.07    0.14    0.48   921 1.00
Omega[4]       0.35    0.01  0.32    0.02    0.15    0.27    0.45    1.15  1350 1.00
Omega[5]       0.83    0.03  0.83    0.03    0.28    0.61    1.12    2.96   738 1.01
Omega[6]       1.08    0.03  1.02    0.06    0.42    0.82    1.42    3.81  1313 1.00
Omega[7]       0.39    0.02  0.57    0.02    0.12    0.25    0.48    1.53   859 1.00
lp__        -920.76    0.60 10.95 -940.60 -928.41 -921.10 -914.03 -896.96   330 1.01

I don't know why I get these errors. Could my model have some error that causes an identification problem?

If I use a random walk with fixed scales I may get correct results…
so I would not estimate sigma and Omega.
But I still do not understand what causes this problem.

The Stan team has some great suggestions in the warning messages and the linked document on warnings. I've estimated some similar models, and everything looks OK to me from a quick review. My suggestion would be to look at your priors and adapt_delta. How many samples are you running? 152 divergent transitions is a bit high, but it may be possible to (at least) reduce them by increasing adapt_delta.

Hi, thank you for your reply!
My priors are:

  beta1 ~ normal(0, 10);
  beta2[1] ~ normal(0, 10);
  thresh[1] ~ induced_dirichlet(rep_vector(1, K), 0);

For sigma and Omega I use the default (implicit) uniform priors; they are the scales in the hierarchical priors:

for (i in 1:(P - 1)) {
  (thresh[i + 1] - thresh[i]) ~ normal(0, sigma);
  (beta2[i + 1] - beta2[i]) ~ normal(Zero, Omega);
}

If I set sigma and Omega to 1 (which may be a strong assumption), I do not get any errors.

And adapt_delta was the default 0.95.
I tried adapt_delta = 0.999 but still get the errors.

And I find my model implies a strong marginal relationship between the first and last thresholds:

thresh[P] ~ normal(thresh[1], sqrt((P - 1) * sigma^2))
...
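
This marginal follows from summing the random-walk increments (a minimal derivation, treating the increments as independent normals given $\sigma$):

$$\mathrm{thresh}_P = \mathrm{thresh}_1 + \sum_{i=1}^{P-1} \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2) \;\Longrightarrow\; \mathrm{thresh}_P \mid \mathrm{thresh}_1, \sigma \sim \mathcal{N}\big(\mathrm{thresh}_1,\, (P - 1)\,\sigma^2\big).$$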

Could this cause identification problems?

I am considering a non-centered parameterization,
but thresh is an ordered vector, and I don't know how to transform it in the transformed parameters block.

A non-centered parameterization is not feasible for thresh due to the ordering constraint, but it might work for beta2.

Either way, I see that Omega and sigma have no priors, and their relationship to the observed data is also quite indirect. That could make them difficult to estimate. You should try it and see whether the divergences disappear with:

transformed parameters {
  // Fix the scales at 1 (and remove sigma and Omega from the parameters block)
  real sigma = 1.0;
  vector[DN] Omega = rep_vector(1.0, DN);
}
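
If the divergences disappear with the scales fixed, that points to the funnel-like geometry induced by the free scale parameters rather than to the rest of the model.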

Non-centering beta2 would look like this:

parameters {
  array[P] vector[DN] beta2_raw;
  vector<lower=0>[DN] Omega;
}
transformed parameters {
  array[P] vector[DN] beta2;
  beta2[1] = 10 * beta2_raw[1];
  for (i in 2:P) {
    beta2[i] = beta2[i - 1] + Omega .* beta2_raw[i];
  }
}
model {
  //Omega ~ prior()
  for (i in 1:P) {
    beta2_raw[i] ~ std_normal();
  }
}

or with a full covariance matrix

parameters {
  array[P] vector[DN] beta2_raw;
  vector<lower=0>[DN] Omega;
  cholesky_factor_corr[DN] L_Omega;
}
transformed parameters {
  array[P] vector[DN] beta2;
  beta2[1] = 10 * beta2_raw[1];
  for (i in 2:P) {
    beta2[i] = beta2[i - 1] + Omega .* (L_Omega * beta2_raw[i]);
  }
}
model {
  //Omega ~ prior()
  L_Omega ~ lkj_corr_cholesky(2.0);
  for (i in 1:P) {
    beta2_raw[i] ~ std_normal();
  }
}
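
Note that with either version, the centered statements like (beta2[i+1] - beta2[i]) ~ normal(Zero, Omega) should be removed from the model block; the std_normal() prior on beta2_raw already implies that random-walk prior.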

Hi nhuurre,

Many thanks for your reply!
If I fix sigma and Omega to 1 as you suggested, everything is OK! No errors, and it runs quickly.
I tried the NCP for beta2, but I still get the same errors as before. Maybe there is some problem with my dataset? I have uploaded it along with my R code. Hope it is useful!
new_brmdata.csv (5.1 MB)
tv_iv_ncp_ologit.stan (2.0 KB)

library(rstan)
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
SEED <- 16001
data <- read.csv("new_brmdata.csv", stringsAsFactors = FALSE, header = TRUE)
g <- factor(data$kk, labels = 1:26)
data$y <- as.numeric(data$y)
data$Industry <- as.factor(data$Industry)
data$Industry <- relevel(data$Industry, ref = "Z")
mf <- model.frame(y ~ x3 + x10 + x7 + Assets + SOE + islist + CRA_F + Industry + Gdpcum01 + rf + M2_ratio + PMI, data = data)
y <- mf$y
x <- model.matrix(y ~ x3 + x10 + x7 + Assets + SOE + islist + CRA_F + Industry + Gdpcum01 + rf + M2_ratio + PMI, data = mf)[, -1]
N <- nrow(data)
D <- ncol(x)
K <- 5
x1 <- x[, 20:23]
x2 <- x[, -20:-23]

g <- as.numeric(g)
P <- 26
DN <- 19
dat <- list(N = N, D = D, K = K, y = y, x = x, x1 = x1, x2 = x2, g = g, P = P, DN = DN)
fit <- stan("tv_iv_ncp_ologit.stan", data = dat,
            control = list(adapt_delta = 0.999, max_treedepth = 12))

Then you should experiment with narrow priors, like

Omega ~ lognormal(0, 0.1);
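
A lognormal(0, 0.1) prior has its median at 1 and most of its mass roughly between 0.8 and 1.2, so it keeps the scales away from both zero and the large values that the implicit flat prior allowed.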

Hi nhuurre, I'm really curious about this block: why use beta2[1] = 10*beta2_raw[1]? It may increase the running time.
Can I use beta2[1] = beta2_raw[1] instead?
Thanks!

Your original model had

beta2[1] ~ normal(0,10);

and I reparametrized it as

beta2_raw[1] ~ normal(0,1);
beta2[1] = 10*beta2_raw[1];

That probably didn’t make much difference. It’s the beta2[2:] ones that needed reparametrization most.
So yes, you can use beta2[1] = beta2_raw[1] here.
Indeed, your latest model has

  beta21 ~ normal(0, 10);

So you should use

  beta2[1] = beta21;

if you want to keep the original prior.
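
Putting the pieces together, the beta2 part of the model could then look like this (just a sketch; the narrow prior on Omega follows the lognormal suggestion above):

parameters {
  vector[DN] beta21;                  // first-period coefficients, keeps the normal(0, 10) prior
  array[P - 1] vector[DN] beta2_raw;  // standardized random-walk increments
  vector<lower=0>[DN] Omega;          // random-walk scales
}
transformed parameters {
  array[P] vector[DN] beta2;
  beta2[1] = beta21;
  for (i in 2:P)
    beta2[i] = beta2[i - 1] + Omega .* beta2_raw[i - 1];
}
model {
  beta21 ~ normal(0, 10);
  Omega ~ lognormal(0, 0.1);          // narrow scale prior, as suggested above
  for (i in 1:(P - 1))
    beta2_raw[i] ~ std_normal();
}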

Thanks, nhuurre, your suggestions really helped me!
But the NCP really increased the running time. The former code took about 3 hours; the NCP model may take 24 hours (I have not gotten the result yet). Are there any methods to improve the speed?

Yikes!
Keep in mind that increasing max_treedepth and adapt_delta always increases runtime. If NCP helps, it's because NCP allows lowering adapt_delta without causing divergent transitions.
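
With the non-centered version, it may be worth dropping back toward the default adapt_delta and max_treedepth and raising them again only if divergences return.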

Yeah! I completely forgot that I had increased max_treedepth and adapt_delta. Hoping the result converges.

Hi nhuurre,
I got the result, and there are two warnings:

Warning messages:
1: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#bulk-ess 
2: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
Running the chains for more iterations may help. See
https://mc-stan.org/misc/warnings.html#tail-ess 

Can I ignore these warnings, since there are no divergences here?
I also find that my beta1 and thresh parameters have lower n_eff compared with the other parameters.


                  mean se_mean    sd      2.5%       25%       50%       75%     97.5% n_eff Rhat
beta1[1]         -4.74    0.08  2.14     -9.07     -6.16     -4.72     -3.29     -0.57   724 1.00
beta1[2]        -11.12    0.25  7.24    -25.52    -15.85    -11.05     -6.28      2.76   858 1.00
beta1[3]        -10.78    0.25  3.98    -19.27    -13.29    -10.59     -8.00     -3.61   251 1.01
beta1[4]          0.04    0.00  0.07     -0.09     -0.01      0.04      0.09      0.18   953 1.00
beta21[1]        15.30    0.06  3.36      8.04     13.19     15.46     17.58     21.45  3367 1.00
beta21[2]        -3.76    0.01  0.51     -4.69     -4.09     -3.80     -3.47     -2.64  2173 1.00
beta21[3]        -0.20    0.00  0.06     -0.32     -0.24     -0.20     -0.16     -0.08  3185 1.00
beta21[4]         2.68    0.00  0.07      2.55      2.64      2.67      2.71      2.82  1464 1.00
beta21[5]         1.77    0.00  0.22      1.30      1.63      1.78      1.90      2.22  2662 1.00
beta21[6]         1.17    0.01  0.35      0.46      0.94      1.18      1.41      1.84  3960 1.00
beta21[7]         0.00    0.00  0.26     -0.50     -0.19     -0.01      0.18      0.53  3692 1.00
beta21[8]         0.11    0.01  0.27     -0.49     -0.03      0.13      0.28      0.62  2731 1.00
beta21[9]         0.81    0.00  0.21      0.44      0.68      0.79      0.90      1.33  2366 1.00
beta21[10]       -0.41    0.00  0.24     -0.93     -0.53     -0.41     -0.29      0.11  2723 1.00
beta21[11]       -0.77    0.00  0.18     -1.22     -0.86     -0.73     -0.64     -0.49  1970 1.00
beta21[12]        0.60    0.01  0.24      0.04      0.45      0.63      0.77      1.00  2137 1.00
beta21[13]        0.40    0.02  0.98     -1.35     -0.08      0.31      0.78      2.78  2348 1.00
beta21[14]       -0.19    0.00  0.22     -0.70     -0.31     -0.17     -0.06      0.22  2784 1.00
beta21[15]        0.71    0.02  0.71     -0.43      0.28      0.58      1.04      2.43  1879 1.00
beta21[16]        1.78    0.02  0.96     -0.49      1.40      1.80      2.22      3.64  1738 1.00
beta21[17]        0.52    0.01  0.74     -1.12      0.09      0.57      0.99      1.90  3187 1.00
beta21[18]       -0.12    0.00  0.16     -0.47     -0.21     -0.12     -0.02      0.17  3060 1.00
beta21[19]        0.08    0.01  0.42     -0.85     -0.16      0.13      0.34      0.84  2546 1.00
thresh[1,1]       4.09    0.06  0.90      2.23      3.52      4.15      4.68      5.76   256 1.01
thresh[1,2]       8.00    0.06  0.89      6.12      7.42      8.05      8.61      9.63   236 1.01
thresh[1,3]      11.63    0.06  0.90      9.73     11.05     11.68     12.23     13.27   240 1.01
thresh[1,4]      14.39    0.06  0.89     12.53     13.83     14.44     15.00     16.07   244 1.01
thresh[2,1]       4.05    0.06  0.87      2.28      3.49      4.10      4.63      5.66   244 1.01
thresh[2,2]       8.15    0.06  0.85      6.36      7.60      8.19      8.73      9.72   233 1.01
thresh[2,3]      11.87    0.06  0.84     10.12     11.33     11.90     12.45     13.42   234 1.01
thresh[2,4]      14.50    0.06  0.86     12.72     13.96     14.55     15.08     16.13   236 1.01
thresh[3,1]       4.01    0.06  0.85      2.27      3.47      4.05      4.59      5.56   232 1.01
thresh[3,2]       8.39    0.05  0.80      6.74      7.86      8.42      8.93      9.86   224 1.01
thresh[3,3]      12.02    0.05  0.82     10.31     11.48     12.06     12.58     13.50   224 1.01
thresh[3,4]      14.57    0.05  0.84     12.84     14.04     14.62     15.14     16.11   232 1.01
thresh[4,1]       3.92    0.06  0.84      2.20      3.38      3.97      4.49      5.48   224 1.01
thresh[4,2]       8.17    0.05  0.79      6.55      7.67      8.23      8.70      9.65   224 1.01
thresh[4,3]      12.29    0.05  0.79     10.69     11.79     12.33     12.84     13.76   220 1.01
thresh[4,4]      14.76    0.05  0.81     13.13     14.25     14.81     15.32     16.25   233 1.01
thresh[5,1]       3.85    0.06  0.82      2.16      3.34      3.91      4.41      5.35   220 1.01
thresh[5,2]       8.11    0.05  0.75      6.60      7.63      8.15      8.61      9.50   219 1.01
thresh[5,3]      12.38    0.05  0.76     10.86     11.89     12.42     12.90     13.77   221 1.01
thresh[5,4]      15.19    0.05  0.77     13.64     14.71     15.23     15.72     16.62   226 1.01
thresh[6,1]       3.97    0.05  0.79      2.34      3.47      4.02      4.51      5.43   212 1.01
thresh[6,2]       7.86    0.05  0.74      6.32      7.38      7.90      8.36      9.25   222 1.01
thresh[6,3]      12.42    0.05  0.75     10.88     11.94     12.47     12.94     13.80   218 1.01
thresh[6,4]      15.41    0.05  0.75     13.92     14.93     15.44     15.92     16.81   226 1.01
thresh[7,1]       4.12    0.05  0.76      2.52      3.65      4.15      4.63      5.49   208 1.01
thresh[7,2]       7.73    0.05  0.73      6.24      7.27      7.77      8.23      9.10   218 1.01
thresh[7,3]      12.68    0.05  0.74     11.19     12.21     12.72     13.17     14.05   218 1.01
thresh[7,4]      15.37    0.05  0.75     13.87     14.89     15.41     15.88     16.78   227 1.01
thresh[8,1]       4.09    0.05  0.76      2.50      3.61      4.13      4.61      5.46   214 1.01
thresh[8,2]       7.71    0.05  0.73      6.22      7.23      7.74      8.20      9.07   216 1.01
thresh[8,3]      12.99    0.05  0.73     11.52     12.52     13.01     13.48     14.34   226 1.01
thresh[8,4]      15.39    0.05  0.75     13.88     14.90     15.44     15.90     16.82   236 1.01
thresh[9,1]       4.17    0.05  0.74      2.60      3.70      4.22      4.66      5.51   217 1.01
thresh[9,2]       7.65    0.05  0.71      6.20      7.19      7.68      8.13      8.95   215 1.01
thresh[9,3]      13.05    0.05  0.71     11.61     12.60     13.09     13.52     14.37   223 1.01
thresh[9,4]      15.72    0.05  0.73     14.24     15.26     15.75     16.21     17.05   230 1.01
thresh[10,1]      4.38    0.05  0.73      2.87      3.92      4.43      4.88      5.75   219 1.01
thresh[10,2]      7.63    0.05  0.70      6.20      7.19      7.68      8.11      8.94   223 1.01
thresh[10,3]     13.01    0.05  0.71     11.59     12.54     13.05     13.48     14.31   223 1.01
thresh[10,4]     15.73    0.05  0.72     14.31     15.28     15.76     16.21     17.08   228 1.01
thresh[11,1]      4.51    0.05  0.71      3.03      4.05      4.56      4.99      5.84   210 1.01
thresh[11,2]      7.66    0.05  0.67      6.29      7.23      7.69      8.11      8.93   214 1.01
thresh[11,3]     13.01    0.05  0.68     11.64     12.57     13.06     13.47     14.28   221 1.01
thresh[11,4]     15.90    0.05  0.70     14.51     15.46     15.94     16.37     17.22   224 1.01
thresh[12,1]      4.78    0.05  0.69      3.38      4.34      4.81      5.25      6.07   205 1.01
thresh[12,2]      7.49    0.05  0.66      6.11      7.08      7.53      7.93      8.73   217 1.01
thresh[12,3]     13.00    0.04  0.66     11.65     12.58     13.03     13.45     14.23   219 1.01
thresh[12,4]     15.92    0.04  0.67     14.52     15.49     15.96     16.37     17.16   222 1.01
thresh[13,1]      4.91    0.05  0.67      3.50      4.47      4.94      5.37      6.16   203 1.01
thresh[13,2]      7.43    0.05  0.66      6.05      7.00      7.46      7.87      8.68   211 1.01
thresh[13,3]     12.85    0.05  0.67     11.46     12.43     12.89     13.30     14.12   213 1.01
thresh[13,4]     15.82    0.05  0.67     14.44     15.38     15.87     16.27     17.08   214 1.01
thresh[14,1]      5.04    0.05  0.66      3.68      4.62      5.07      5.50      6.29   213 1.01
thresh[14,2]      7.50    0.04  0.64      6.17      7.10      7.53      7.93      8.71   213 1.01
thresh[14,3]     12.88    0.04  0.65     11.55     12.47     12.92     13.32     14.10   214 1.01
thresh[14,4]     15.78    0.04  0.66     14.42     15.35     15.82     16.21     17.03   215 1.01
thresh[15,1]      5.10    0.04  0.64      3.79      4.69      5.13      5.54      6.34   218 1.01
thresh[15,2]      7.61    0.04  0.61      6.34      7.22      7.63      8.04      8.78   216 1.01
thresh[15,3]     12.84    0.04  0.63     11.56     12.43     12.87     13.25     14.01   216 1.01
thresh[15,4]     15.89    0.04  0.63     14.57     15.48     15.94     16.30     17.08   222 1.01
thresh[16,1]      5.24    0.04  0.63      3.92      4.84      5.27      5.65      6.43   221 1.01
thresh[16,2]      7.51    0.04  0.61      6.26      7.13      7.54      7.92      8.68   220 1.01
thresh[16,3]     12.78    0.04  0.60     11.54     12.39     12.80     13.19     13.94   217 1.01
thresh[16,4]     15.92    0.04  0.61     14.68     15.54     15.95     16.33     17.10   219 1.01
thresh[17,1]      5.29    0.04  0.62      4.02      4.89      5.31      5.71      6.46   219 1.01
thresh[17,2]      7.78    0.04  0.59      6.56      7.40      7.80      8.18      8.90   221 1.01
thresh[17,3]     12.57    0.04  0.59     11.34     12.18     12.59     12.96     13.72   212 1.01
thresh[17,4]     15.99    0.04  0.60     14.76     15.61     16.00     16.38     17.14   216 1.01
thresh[18,1]      5.36    0.04  0.62      4.08      4.96      5.38      5.77      6.54   229 1.01
thresh[18,2]      7.70    0.04  0.59      6.52      7.32      7.72      8.10      8.84   222 1.01
thresh[18,3]     12.71    0.04  0.57     11.53     12.33     12.73     13.09     13.81   216 1.01
thresh[18,4]     15.95    0.04  0.59     14.75     15.58     15.97     16.35     17.11   218 1.01
thresh[19,1]      5.19    0.04  0.62      3.89      4.80      5.20      5.60      6.39   217 1.01
thresh[19,2]      7.73    0.04  0.58      6.55      7.36      7.75      8.12      8.82   224 1.01
thresh[19,3]     12.88    0.04  0.58     11.71     12.50     12.90     13.26     13.99   215 1.01
thresh[19,4]     16.25    0.04  0.59     15.07     15.87     16.28     16.65     17.38   224 1.01
thresh[20,1]      5.12    0.04  0.63      3.78      4.73      5.14      5.54      6.32   219 1.01
thresh[20,2]      7.61    0.04  0.58      6.45      7.23      7.63      8.00      8.69   232 1.01
thresh[20,3]     13.04    0.04  0.58     11.89     12.67     13.07     13.43     14.16   222 1.01
thresh[20,4]     16.38    0.04  0.59     15.19     16.01     16.40     16.77     17.50   237 1.01
thresh[21,1]      4.98    0.04  0.64      3.64      4.58      5.00      5.40      6.20   222 1.01
thresh[21,2]      7.64    0.04  0.57      6.49      7.27      7.66      8.02      8.72   247 1.01
thresh[21,3]     13.35    0.04  0.56     12.20     12.99     13.37     13.73     14.43   235 1.01
thresh[21,4]     16.67    0.04  0.57     15.53     16.31     16.70     17.05     17.79   247 1.01
thresh[22,1]      4.92    0.04  0.66      3.52      4.50      4.95      5.35      6.20   223 1.01
thresh[22,2]      7.57    0.04  0.57      6.41      7.19      7.59      7.94      8.66   262 1.01
thresh[22,3]     13.53    0.03  0.56     12.42     13.16     13.55     13.90     14.61   258 1.01
thresh[22,4]     16.82    0.03  0.57     15.66     16.44     16.85     17.20     17.95   266 1.01
thresh[23,1]      4.84    0.04  0.68      3.43      4.42      4.87      5.29      6.15   228 1.01
thresh[23,2]      7.59    0.04  0.57      6.44      7.22      7.62      7.98      8.69   264 1.01
thresh[23,3]     13.65    0.03  0.55     12.51     13.29     13.67     14.03     14.72   260 1.01
thresh[23,4]     16.98    0.03  0.57     15.83     16.60     17.00     17.36     18.09   275 1.01
thresh[24,1]      4.73    0.05  0.71      3.23      4.29      4.76      5.20      6.11   242 1.01
thresh[24,2]      7.73    0.04  0.58      6.54      7.36      7.74      8.13      8.86   277 1.01
thresh[24,3]     13.58    0.04  0.58     12.41     13.21     13.60     13.97     14.73   268 1.01
thresh[24,4]     17.04    0.04  0.59     15.84     16.65     17.06     17.44     18.19   286 1.01
thresh[25,1]      4.65    0.05  0.73      3.11      4.20      4.67      5.14      6.10   254 1.01
thresh[25,2]      7.97    0.03  0.59      6.76      7.59      7.98      8.37      9.11   292 1.01
thresh[25,3]     13.85    0.03  0.59     12.68     13.48     13.87     14.25     14.99   285 1.01
thresh[25,4]     16.85    0.04  0.60     15.64     16.44     16.87     17.24     18.00   290 1.01
thresh[26,1]      4.63    0.05  0.77      3.01      4.13      4.65      5.15      6.09   272 1.01
thresh[26,2]      8.04    0.04  0.62      6.81      7.63      8.05      8.45      9.22   308 1.01
thresh[26,3]     13.93    0.04  0.61     12.71     13.52     13.95     14.34     15.09   294 1.01
thresh[26,4]     16.77    0.04  0.62     15.53     16.35     16.80     17.20     17.96   303 1.01
sigma             0.25    0.00  0.04      0.17      0.22      0.25      0.28      0.34   614 1.01
yita[1,1]         0.53    0.01  0.94     -1.35     -0.09      0.54      1.17      2.34  5745 1.00
yita[1,2]        -0.45    0.02  0.97     -2.39     -1.11     -0.45      0.21      1.43  4024 1.00
yita[1,3]        -0.18    0.01  0.98     -2.11     -0.83     -0.20      0.48      1.71  6901 1.00
yita[1,4]        -0.38    0.02  1.01     -2.46     -1.05     -0.36      0.31      1.58  4251 1.00
yita[1,5]        -0.47    0.02  0.99     -2.36     -1.15     -0.47      0.23      1.46  4360 1.00
yita[1,6]         0.06    0.01  0.92     -1.77     -0.55      0.05      0.69      1.81  4913 1.00
...

And could there still be some identification problems?