Same model and data, but different results at different times: why?

Hi all, I’m a newcomer to RStan.
I’m currently analyzing some ecological data on species coexistence; specifically, I’m fitting a Lotka-Volterra model to time-series biomass data. The results look sound statistically. However, I find that the n_eff values differ between runs made at different times (though all look sound), and I can’t understand why. I’m wondering if anyone can give me a hand. Thank you!
Here is my model and some results:

```stan
data {
  int n;                     // length of time series
  int nmix;                  // number of mixes
  int sr;                    // species richness
  int rep;                   // number of replicates
  real N[n, sr, rep, nmix];  // observations: d1 = time, d2 = species, d3 = replicate, d4 = mix
  real year[n];
  int sp[nmix, sr];          // species id for each mix
}

transformed data {
  real x_r[0];
  int x_i[0];
  real logN[n, sr, rep, nmix]; // log-transformed observations
  logN = log(N);
}

parameters {
  vector<lower=0>[sr] r;
  matrix<lower=0>[sr,sr] a;
  real<lower=0> sdev;
}

model {
  // priors
  for (i in 1:sr) {
    r[i] ~ normal(0, 1);
    for (j in 1:sr) {
      a[i, j] ~ normal(0, 1);
    }
  }
  sdev ~ normal(0, 1);

  for (l in 1:nmix) { // mixes
    // intermediate quantities
    matrix[n, sr] Nsim; // simulated values: rows = time, cols = species

    // simulation
    for (k in 1:rep) { // 60 replicates
      // initialize the trajectory at the observed first time point
      for (m in 1:sr) {
        Nsim[1, m] = N[1, m, k, l];
      }
      // discrete-time Lotka-Volterra map:
      // N[t+1, m] = N[t, m] * exp(r[m] * (1 - sum_j a[m, j] * N[t, j]))
      for (t in 1:(n - 1)) {
        for (m in 1:sr) {
          Nsim[t + 1, m] = Nsim[t, m]
                           * exp(r[sp[l, m]]
                                 * (1.0 - sum(a[sp[l, 1:sr], sp[l, 1:sr]][m, ] .* Nsim[t, 1:sr])));
        }
      }

      // lognormal residuals
      for (j in 1:sr) { // species
        for (t in 1:n) {
          logN[t, j, k, l] ~ normal(log(Nsim[t, j]), sdev);
        }
      }
    } // k
  } // l
}
```
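For reference, here is a minimal sketch of how a model like this can be fit with rstan, matching the settings shown in the output below (4 chains, iter = 7000, warmup = 2000); the file name and data-list object are hypothetical:

```r
library(rstan)

# Hypothetical names: "lv_model.stan" contains the model above, and
# stan_data is a list supplying n, nmix, sr, rep, N, year, and sp.
fit <- stan(file = "lv_model.stan", data = stan_data,
            chains = 4, iter = 7000, warmup = 2000)
print(fit)  # prints a summary table like the ones below
```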
And this is my first result:
```stan
Inference for Stan model: 888f4e1eaae93dd631debb29714e7e61.
4 chains, each with iter=7000; warmup=2000; thin=1; 
post-warmup draws per chain=5000, total post-warmup draws=20000.

            mean se_mean     sd      2.5%       25%       50%       75%     97.5% n_eff   Rhat
r[1]      0.0435  0.0011 0.0937    0.0022    0.0086    0.0172    0.0374    0.2739  6916 1.0003
r[2]      0.1488  0.0020 0.1518    0.0192    0.0467    0.0936    0.1948    0.5689  5616 1.0007
r[3]      0.3817  0.0144 0.5419    0.0003    0.0036    0.0207    0.7107    1.7576  1425 1.0028
r[4]      0.7363  0.0047 0.3069    0.3459    0.5266    0.6663    0.8675    1.5279  4287 1.0004
r[5]      1.7550  0.0039 0.3817    1.0899    1.4858    1.7297    1.9914    2.5775  9409 1.0003
r[6]      2.0720  0.0078 0.4762    1.2164    1.7538    2.0313    2.3379    3.1487  3717 1.0006
a[1,1]    0.7433  0.0054 0.5645    0.0594    0.2992    0.6105    1.0620    2.1208 10982 0.9999
a[1,2]    0.7460  0.0040 0.5741    0.0316    0.2913    0.6176    1.0735    2.1357 20195 1.0000
a[1,3]    0.2307  0.0027 0.3041    0.0033    0.0394    0.1159    0.2974    1.1082 12334 1.0000
a[1,4]    0.7964  0.0042 0.6030    0.0323    0.3195    0.6750    1.1480    2.2471 20444 1.0000
a[1,5]    0.7306  0.0044 0.5795    0.0258    0.2734    0.5943    1.0578    2.1492 16982 0.9999
a[1,6]    0.7447  0.0044 0.5790    0.0269    0.2852    0.6170    1.0788    2.1263 17528 1.0001
a[2,1]    0.6044  0.0068 0.5359    0.0410    0.1881    0.4348    0.8785    1.9668  6288 1.0002
a[2,2]    1.0673  0.0062 0.7028    0.0557    0.5056    0.9714    1.5204    2.6410 13036 1.0002
a[2,3]    0.1524  0.0018 0.1639    0.0156    0.0519    0.0962    0.1895    0.6057  8204 1.0009
a[2,4]    0.8012  0.0047 0.6073    0.0290    0.3206    0.6831    1.1545    2.2628 16907 1.0000
a[2,5]    0.9287  0.0047 0.6455    0.0438    0.4129    0.8255    1.3249    2.4197 18746 1.0001
a[2,6]    0.7608  0.0043 0.5591    0.0330    0.3194    0.6500    1.0926    2.0863 17005 1.0002
a[3,1]    0.2164  0.0059 0.3644    0.0007    0.0094    0.0375    0.2623    1.3069  3856 1.0006
a[3,2]    0.5612  0.0082 0.5361    0.0147    0.1583    0.3835    0.8067    1.9562  4291 1.0003
a[3,3]    0.3942  0.0097 0.5088    0.0204    0.0459    0.1230    0.5974    1.7998  2769 1.0011
a[3,4]    0.7965  0.0045 0.6091    0.0315    0.3118    0.6659    1.1502    2.2473 17996 0.9999
a[3,5]    0.5306  0.0093 0.5300    0.0128    0.1353    0.3405    0.7753    1.9292  3266 1.0008
a[3,6]    0.5372  0.0083 0.5109    0.0129    0.1601    0.3753    0.7624    1.8988  3798 1.0009
a[4,1]    0.0070  0.0000 0.0063    0.0002    0.0023    0.0053    0.0101    0.0236 18029 0.9999
a[4,2]    0.2204  0.0015 0.1971    0.0066    0.0738    0.1675    0.3099    0.7375 18251 0.9999
a[4,3]    0.0109  0.0001 0.0070    0.0006    0.0055    0.0101    0.0150    0.0273  8433 1.0007
a[4,4]    0.8145  0.0044 0.6158    0.0349    0.3199    0.6837    1.1836    2.2779 19226 1.0001
a[4,5]    0.1891  0.0012 0.1674    0.0057    0.0619    0.1426    0.2705    0.6172 20308 1.0000
a[4,6]    0.4184  0.0040 0.2971    0.0168    0.1758    0.3683    0.6065    1.0963  5583 1.0003
a[5,1]    0.0159  0.0001 0.0116    0.0007    0.0068    0.0138    0.0227    0.0433 12045 0.9999
a[5,2]    0.4304  0.0031 0.3160    0.0171    0.1714    0.3710    0.6353    1.1508 10566 1.0001
a[5,3]    0.0035  0.0000 0.0031    0.0001    0.0012    0.0027    0.0049    0.0113 10576 1.0001
a[5,4]    0.7650  0.0042 0.5964    0.0293    0.2923    0.6290    1.1095    2.2008 19718 1.0003
a[5,5]    0.6937  0.0038 0.3467    0.0986    0.4418    0.6702    0.9134    1.4472  8483 1.0000
a[5,6]    1.2060  0.0038 0.3644    0.5566    0.9466    1.1848    1.4393    1.9767  9042 1.0002
a[6,1]    0.0129  0.0001 0.0094    0.0005    0.0054    0.0111    0.0186    0.0350 11291 1.0000
a[6,2]    0.1784  0.0012 0.1520    0.0056    0.0616    0.1381    0.2567    0.5627 16748 1.0000
a[6,3]    0.0040  0.0000 0.0032    0.0001    0.0015    0.0032    0.0057    0.0118 12514 0.9999
a[6,4]    0.8030  0.0043 0.6045    0.0356    0.3202    0.6783    1.1555    2.2465 19870 1.0003
a[6,5]    0.1828  0.0011 0.1360    0.0080    0.0747    0.1568    0.2609    0.5102 15853 1.0002
a[6,6]    1.2168  0.0033 0.3408    0.6044    0.9747    1.2022    1.4401    1.9270 10890 1.0003
sdev      2.6496  0.0009 0.1014    2.4628    2.5785    2.6470    2.7162    2.8564 12257 1.0001
lp__   -624.8780  0.0812 5.6176 -636.9850 -628.3700 -624.5245 -620.9423 -614.8194  4788 1.0005

Samples were drawn using NUTS(diag_e) at Thu May 06 11:25:54 2021.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).
```

And this is the second result:
```stan
Inference for Stan model: 888f4e1eaae93dd631debb29714e7e61.
4 chains, each with iter=7000; warmup=2000; thin=1; 
post-warmup draws per chain=5000, total post-warmup draws=20000.

            mean se_mean     sd      2.5%       25%       50%       75%     97.5% n_eff   Rhat
r[1]      0.0461  0.0013 0.1007    0.0023    0.0087    0.0173    0.0383    0.3092  6367 1.0006
r[2]      0.1503  0.0021 0.1568    0.0189    0.0466    0.0923    0.1984    0.5844  5520 1.0008
r[3]      0.3839  0.0146 0.5409    0.0003    0.0037    0.0226    0.7111    1.7524  1368 1.0022
r[4]      0.7377  0.0049 0.3049    0.3485    0.5287    0.6653    0.8701    1.5387  3869 1.0005
r[5]      1.7580  0.0036 0.3795    1.0960    1.4875    1.7333    1.9961    2.5821 11107 1.0002
r[6]      2.0802  0.0097 0.4782    1.2174    1.7609    2.0425    2.3489    3.1825  2445 1.0009
a[1,1]    0.7348  0.0051 0.5663    0.0592    0.2872    0.6031    1.0485    2.1366 12285 1.0002
a[1,2]    0.7332  0.0044 0.5865    0.0235    0.2742    0.5967    1.0586    2.1762 17501 0.9999
a[1,3]    0.2307  0.0033 0.3002    0.0035    0.0385    0.1170    0.3009    1.0832  8323 1.0001
a[1,4]    0.7952  0.0052 0.6044    0.0316    0.3131    0.6718    1.1466    2.2602 13519 1.0004
a[1,5]    0.7259  0.0041 0.5705    0.0276    0.2716    0.5980    1.0515    2.1194 19534 0.9999
a[1,6]    0.7513  0.0041 0.5750    0.0322    0.2953    0.6270    1.0869    2.1398 19268 1.0001
a[2,1]    0.6117  0.0065 0.5513    0.0378    0.1876    0.4380    0.8851    2.0498  7158 1.0005
a[2,2]    1.0718  0.0062 0.7002    0.0589    0.5218    0.9710    1.5251    2.6263 12815 1.0004
a[2,3]    0.1527  0.0019 0.1663    0.0145    0.0519    0.0952    0.1901    0.6238  8073 1.0002
a[2,4]    0.7983  0.0044 0.6079    0.0329    0.3130    0.6730    1.1516    2.2538 19359 0.9999
a[2,5]    0.9299  0.0053 0.6529    0.0430    0.4085    0.8181    1.3312    2.4406 15442 1.0000
a[2,6]    0.7582  0.0041 0.5672    0.0300    0.3101    0.6459    1.0862    2.1082 18697 1.0000
a[3,1]    0.2159  0.0062 0.3654    0.0007    0.0089    0.0344    0.2690    1.3105  3463 1.0006
a[3,2]    0.5624  0.0083 0.5262    0.0157    0.1648    0.3969    0.8083    1.9499  4016 1.0006
a[3,3]    0.3959  0.0113 0.5134    0.0195    0.0453    0.1216    0.6035    1.8185  2067 1.0014
a[3,4]    0.7864  0.0045 0.6004    0.0310    0.3076    0.6641    1.1386    2.2165 17695 1.0004
a[3,5]    0.5215  0.0090 0.5279    0.0116    0.1318    0.3359    0.7510    1.9355  3445 1.0008
a[3,6]    0.5391  0.0086 0.5258    0.0149    0.1568    0.3665    0.7542    1.9781  3731 1.0011
a[4,1]    0.0070  0.0000 0.0063    0.0002    0.0023    0.0052    0.0099    0.0231 17416 1.0000
a[4,2]    0.2234  0.0013 0.1956    0.0073    0.0749    0.1703    0.3177    0.7251 21179 1.0001
a[4,3]    0.0108  0.0001 0.0070    0.0007    0.0055    0.0100    0.0149    0.0269  8322 1.0000
a[4,4]    0.8132  0.0044 0.6143    0.0303    0.3256    0.6866    1.1705    2.2791 19193 0.9999
a[4,5]    0.1848  0.0012 0.1639    0.0052    0.0590    0.1389    0.2656    0.6059 17291 0.9999
a[4,6]    0.4256  0.0037 0.2964    0.0184    0.1880    0.3781    0.6107    1.1069  6349 1.0001
a[5,1]    0.0159  0.0001 0.0115    0.0008    0.0069    0.0137    0.0225    0.0431 14440 1.0006
a[5,2]    0.4350  0.0035 0.3132    0.0167    0.1772    0.3806    0.6391    1.1476  8103 1.0001
a[5,3]    0.0035  0.0000 0.0031    0.0001    0.0011    0.0027    0.0049    0.0115 11758 1.0000
a[5,4]    0.7690  0.0040 0.5877    0.0307    0.3037    0.6467    1.1079    2.1831 21366 0.9999
a[5,5]    0.6863  0.0038 0.3478    0.0892    0.4347    0.6619    0.9090    1.4389  8579 1.0000
a[5,6]    1.2098  0.0037 0.3602    0.5665    0.9552    1.1881    1.4394    1.9814  9416 1.0006
a[6,1]    0.0130  0.0001 0.0095    0.0006    0.0054    0.0111    0.0187    0.0351 10955 1.0001
a[6,2]    0.1795  0.0012 0.1508    0.0057    0.0625    0.1406    0.2587    0.5549 16700 1.0002
a[6,3]    0.0040  0.0000 0.0031    0.0002    0.0015    0.0033    0.0057    0.0116 12683 1.0000
a[6,4]    0.8113  0.0044 0.6139    0.0302    0.3245    0.6857    1.1732    2.2772 19054 1.0002
a[6,5]    0.1798  0.0011 0.1339    0.0069    0.0737    0.1550    0.2573    0.5010 15967 1.0001
a[6,6]    1.2171  0.0033 0.3404    0.6146    0.9715    1.1966    1.4408    1.9284 10778 1.0001
sdev      2.6492  0.0008 0.1014    2.4616    2.5794    2.6455    2.7150    2.8579 16539 1.0000
lp__   -624.9184  0.0796 5.5632 -636.8079 -628.4315 -624.5551 -621.0053 -615.0941  4889 1.0009

Samples were drawn using NUTS(diag_e) at Thu May 06 11:34:47 2021.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).
```

This is due to different seeds being passed to the chains: HMC, being a probabilistic sampler, gives slightly different results on each run. Take a look at the Reproducibility section of Stan’s Reference Manual here: Redirecting…

Edit: the link says “Redirecting…” because I linked the root URL, which redirects to the most up-to-date version of the documentation. That way this post stays useful in the future.
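For example, fixing the seed (while keeping the Stan version, compiler, and hardware unchanged) makes runs reproducible. A minimal rstan sketch, assuming a compiled model object `mod` (from `stan_model()`) and a data list `stan_data`, both hypothetical names:

```r
library(rstan)

# With an identical seed, data, inits, and environment, the draws are
# reproducible, so n_eff and Rhat match exactly between runs.
fit1 <- sampling(mod, data = stan_data, chains = 4,
                 iter = 7000, warmup = 2000, seed = 2021)
fit2 <- sampling(mod, data = stan_data, chains = 4,
                 iter = 7000, warmup = 2000, seed = 2021)

# Without a fixed seed, each run draws a fresh seed, and summaries such as
# n_eff differ slightly (but validly) between runs, as you observed.
summary(fit1)$summary[, c("n_eff", "Rhat")]
```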


Hi storopoli, thanks for your nice answer; I’ll read it. And one more question: if the results from different runs differ only slightly, does that imply the model is sound? Thank you!


It means it converges. Whether it is a sound model is a theoretical, domain-specific issue…
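If it helps, one concrete way to start that domain-level assessment is to forward-simulate trajectories from the posterior and compare them with the observed series. A minimal R sketch, assuming a stanfit object `fit` plus the `n`, `sr`, `N`, and `sp` objects from your data block, and inspecting mix 1, replicate 1 (object names other than those in the model are hypothetical):

```r
post  <- rstan::extract(fit)              # posterior draws
r_hat <- apply(post$r, 2, median)         # posterior median growth rates
a_hat <- apply(post$a, c(2, 3), median)   # posterior median interaction matrix

l <- 1; k <- 1                            # mix and replicate to inspect
ids <- sp[l, ]                            # species ids in this mix
sim <- matrix(NA, n, sr)
sim[1, ] <- N[1, , k, l]                  # start at the first observation
for (t in 1:(n - 1)) {
  # the same discrete Lotka-Volterra map used in the model block
  sim[t + 1, ] <- sim[t, ] *
    exp(r_hat[ids] * (1 - as.vector(a_hat[ids, ids] %*% sim[t, ])))
}
matplot(log(sim), type = "l")             # simulated log-biomass
matpoints(log(N[, , k, l]), pch = 1)      # observed log-biomass
```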


Thank you once again. I take your point to be that judging a model requires not only checking its convergence technically, but also checking whether it is reasonable in my own field.