No response during sampling with rstan 2.21 on R 4.0.5

I have a model set to run 20,000 iterations with 2 chains. But after the first 1000 iterations, the program seems to stop responding. I am currently on rstan 2.21 and R 4.0.5.
The strange thing is that an identical model ran normally on R 3.6.3.
It seems I cannot go back to R 3.6.3 now.
I don't know where the problem is.


Is it possible for you to share more information about the issue? It would help if you could provide the model code, the type of computer you’re running the model on, and your operating system. Can you also confirm what version of Rtools you have installed on your computer? I believe that with the move to R4.0+ there was a new Rtools that resulted in some problems for people using rstan who didn’t update.

Also, as a quick check, are you running the model on the same data and on the same machine as before? How long have you waited for updates to the console after the first 1000 iterations? Are you able to run other Stan models without the same issue occurring?


Thank you!

My computer: Dell desktop, Intel(R) Core™ i5-4570 CPU @ 3.20GHz; RAM: 8.0 GB

System is Windows 10-64bit (20H2),

R: 4.0.5

RStudio: 1.3.1093 and 1.4.1717 (I have tried both versions)

Rtools: Rtools4 (installed from rtools40v2-x86_64.exe)

Rstan: 2.21.2

Thank you.pdf (60.3 KB)

I noticed the following error message from the pdf that you attached:

Error in open.connection(con, open = mode) :
 Could not resolve host:

I recalled seeing this error on this forum before. A quick check later, and I found this discussion post. I wonder whether it might help you.

Thank you so much.
This error message has appeared often since upgrading to rstan 2.21.2, but I found that it does not affect the sampling itself.

Thank you very much for your good suggestions!

I reinstalled StanHeaders and rstan according to the suggested method:

remove.packages(c("StanHeaders", "rstan"))
install.packages("StanHeaders", repos = c("", getOption("repos")))
install.packages("rstan", repos = c("", getOption("repos")))

and found that no error warning was displayed.

More interestingly, some of the information displayed during sampling has also changed, for example:

Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 35.41 seconds.

Before, the time reported in this message was usually an integer number of seconds (such as 40 seconds), and sometimes even 0 seconds.

Is the model now running again with those reinstalls done?

Thanks a lot for your kind reply.

The program is running.

The problem now is that the running time is simply too long. As I write this reply, the program has been running for more than 23 hours, but has not yet completed 5000 of the 50,000 iterations.

In fact, the model I use is intuitively simple in finance terms, but it is quite troublesome to estimate: there seems to be no simple estimation method.

Also, the model has:
number of parameters: 51
number of latent processes: 7
panel data: 184 × 22.

Is there any reason that you need 50,000 iterations? Unlike other common Bayesian software, Stan doesn’t usually need so many iterations to produce good results. Have you tried just running it for the default 2000 iterations (1000 being warmup)? Also, how many of those iterations are warmup (the default is 1/2 of iterations are warmup)? Again, your model may not need a bunch of warmup samples, and warmup usually takes a little longer than the remaining samples. So, as you adjust your number of iterations, you might try increasing just the total iterations and keeping warmups fairly stable. That obviously depends, however, on how the sampling actually goes.
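For instance, a first pass at the strategy above might look like this (a sketch only; `my_model_code` and `mydata` are placeholders for your model code and data):

```r
# Hypothetical sketch: start from rstan's defaults, then add post-warmup
# iterations only if diagnostics (ESS, R-hat) say you need more draws.
library(rstan)
fit <- stan(model_code = my_model_code, data = mydata,
            chains = 2, iter = 2000, warmup = 1000)   # the defaults
# If ESS is too low, grow the total while keeping warmup stable:
fit2 <- stan(model_code = my_model_code, data = mydata,
             chains = 2, iter = 5000, warmup = 1000)
```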

I could see how with model complexity you might get warnings about low ESS, but it may be easier to try building up to the needed number of iterations as opposed to starting with a really high number and waiting multiple days to get a result. Unfortunately, I’m not familiar with this model or what you’re trying to do, so I don’t really have any recommendations for what you may do differently to improve the speed of your model. One thing I did notice by looking at your model is that you have several for loops that aren’t needed (or at least, I don’t think they are). Stan supports vectorization in most sampling statements, meaning that

for (p in 1:3) {
  target += beta_lpdf(0.5*phi_h[p] + 1 | 20, 1.5);
  target += inv_gamma_lpdf(sigma_h[p]^2 | 2.5, 0.5);
}

and

target += beta_lpdf(0.5*phi_h + 1 | 20, 1.5);
target += inv_gamma_lpdf(sigma_h^2 | 2.5, 0.5);

will give the same result. While for loops in Stan are much faster than they are in R, cutting down any unnecessary loops should improve modeling efficiency. You can read a little more about vectorization support in Stan here and here.


Thank you very much for your patient reply.
I tried to reduce the use of loops and converted them to vector or matrix form.
Following your suggestion, I used the following settings for a slightly more complicated model:

# set initial values
init0 = function(rnd = 1) {
  list(sigma_Y = sigma_Y0*rnd,   # 11
       sigma_V = sigma_V0*rnd,   # 11
       alpha_V = alpha_V0*rnd,   # 11
       lambda = lambda0*rnd,     # 1
       mu_beta = mu_beta0*rnd,   # 3
       phi_beta = phi_beta0*rnd, # 3
       beta1 = beta10*rnd,       # N
       beta2 = beta20*rnd,       # N
       beta3 = beta30*rnd,       # N
       h1 = h10*rnd,             # N
       h2 = h20*rnd,             # N
       h3 = h30*rnd,             # N
       mu_h = mu_h0*rnd,         # 3
       phi_h = phi_h0*rnd,       # 3
       sigma_h = sigma_h0*rnd)   # 3
}

n_chains = 2
init1 = lapply(1:n_chains, function(id) init0(rnd = runif(1, 0.4, 0.5))) # this range is the result of many tries

n.iter = 2000
n.warmup = n.iter/2
DNS_RV = stan(model_code = DNS_RV_code,
              data = mydata,
              iter = n.iter,
              warmup = n.warmup,
              thin = 1,
              chains = n_chains,
              init = init1,
              cores = getOption("mc.cores", 2L),
              control = list(adapt_delta = 0.998, max_treedepth = 18))

The elapsed sampling times are as follows:
Chain 1: Elapsed Time: 23.344 seconds (Warm-up)
Chain 1: 107.943 seconds (Sampling)
Chain 1: 131.287 seconds (Total)
Chain 2: Elapsed Time: 21.656 seconds (Warm-up)
Chain 2: 243.35 seconds (Sampling)
Chain 2: 265.006 seconds (Total)

With 2000 iterations (warmup = 1000), there are still many divergent transitions.
Warning messages:
1: There were 1257 divergent transitions after warmup. See
to find out why this is a problem and how to eliminate them.
2: There were 2 chains where the estimated Bayesian Fraction of Missing Information was low. See
3: Examine the pairs() plot to diagnose sampling problems
4: The largest R-hat is 3.04, indicating chains have not mixed.
Running the chains for more iterations may help. See
5: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
Running the chains for more iterations may help. See
6: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
Running the chains for more iterations may help. See


1257 of 2000 iterations ended with a divergence (62.85%).
Try increasing ‘adapt_delta’ to remove the divergences.

There are a few questions I would like to consult:

  1. How can I find a better way to set random initial values for the parameters? And how can I specify different initial values for different chains?
  2. Are there any better suggestions for the sampling settings?

If I'm understanding you correctly, it sounds like you're having to hunt for some very specific initial values. That shouldn't be an issue in a well-specified model. The fact that your model has so many pathological fitting warnings also suggests to me that the model is not well specified. There are some things that you could do to improve those model results (e.g., increasing adapt_delta, increasing iterations), but those also add time to fitting. Those things are sometimes needed for complex models, but my experience is that most of the time good model specification can alleviate those needs.
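On the initial-values question specifically, one common pattern (sketched here with hypothetical parameter names and placeholder data) is to pass `init` a function; when that function has a `chain_id` argument, rstan calls it once per chain, so each chain gets its own reproducible random starting values:

```r
# Sketch: per-chain randomized initial values (parameter names are examples).
init_fun <- function(chain_id) {
  set.seed(1000 + chain_id)            # different but reproducible per chain
  list(mu_h    = rnorm(3, 0, 0.1),     # small jitter around zero
       phi_h   = runif(3, 0.3, 0.6),   # stay inside the prior's support
       sigma_h = runif(3, 0.1, 0.3))   # positive scales
}
fit <- stan(model_code = my_model_code, data = mydata,
            chains = 2, init = init_fun)
```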

One of the things that you could do to improve your model performance is reparameterize your parameter estimation to be based on the normal distribution. You can read more about Stan’s reparameterization recommendations and some examples in the manual here.
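As a generic sketch (not your model), the non-centered form draws a standard-normal "raw" parameter and shifts/scales it in transformed parameters, which typically removes the funnel geometry that causes divergences:

```stan
data {
  int<lower=1> N;
}
parameters {
  real mu;
  real<lower=0> sigma;
  vector[N] h_raw;                     // standard-normal innovations
}
transformed parameters {
  vector[N] h = mu + sigma * h_raw;    // implies h ~ normal(mu, sigma)
}
model {
  h_raw ~ std_normal();                // easy geometry for the sampler
}
```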

I also noticed a lot of math going on in the sampling statements. I don't actually know whether this affects sampling speed or the stability of your results, but there is a transformed parameters block in Stan that could be used to compute those things. There's some additional information on transformed parameters and reparameterization in the Stan manual here.


Thank you very much for your patience, professionalism, and advice. I will try to optimize the model.

Hi, I added a transformed parameters block according to the suggestions, but I get this error:


Error in stanc(file = file, model_code = model_code, model_name = model_name, :


Syntax error in ‘string’, line 64, column 85 to column 86, parsing error:

Found a expression where we expected a statement. Is there a missing semi-colon here?

Or did you mean to use the preceding expression in:

  • a function call

  • a sampling statement

  • the conditional in a for, while, or if statement

  • assignment to a variable?

My code

transformed parameters {
  vector[N] h1;
  vector[N] h2;
  vector[N] h3;
  h1[1] = mu_h[1]/(1-phi_h[1]) + sigma_h[1]/sqrt(1-phi_h[1]^2)*h1_raw[1];
  h2[1] = mu_h[2]/(1-phi_h[2]) + sigma_h[2]/sqrt(1-phi_h[2]^2)*h2_raw[1];
  h3[1] = mu_h[3]/(1-phi_h[3]) + sigma_h[3]/sqrt(1-phi_h[3]^2)*h3_raw[1];
  for (n in 2:N) {
    h1[n] = mu_h[1] + phi_h[1]*h1[n-1] + sigma_h[1]*h1_raw[n];
    h2[n] = mu_h[2] + phi_h[2]*h2[n-1] + sigma_h[2]*h2_raw[n];
    h3[n] = mu_h[3] + phi_h[3]*h3[n-1] + sigma_h[3]*h3_raw[n];
  }
  //h1[2:N] = mu_h[1] + phi_h[1]*h1[1:(N-1)] + sigma_h[1]*h1_raw[2:N];
  //h2[2:N] = mu_h[2] + phi_h[2]*h2[1:(N-1)] + sigma_h[2]*h2_raw[2:N];
  //h3[2:N] = mu_h[3] + phi_h[3]*h3[1:(N-1)] + sigma_h[3]*h3_raw[2:N];
}

In the transformed parameters block, can we use h1[2:N] like this?
And how do I locate the "line 64" mentioned in the error message in my code?

Although the model has been reparameterized, the problem I found at the very beginning still exists.

That is, if the program runs only 2000 iterations, it completes within a few hundred seconds, although the result is very unsatisfactory. But if I set 10,000 iterations, the first 1000 iterations still finish within several minutes, and then the program no longer responds. I have run it for more than 24 hours with no further response. Of course, the two red circles are still there, showing that the code is running.

With even more iterations, even the first thousand iterations no longer complete.

More importantly, even when I used the code and data that ran successfully under R 3.6.3 + rstan 2.19.3, I encountered similar problems under R 4.0.5 + rstan (>2.19.3).


Two additional ideas: could you try running just a single chain? This can result in more information being shown in the console, so maybe you'll see some additional hints of what's wrong.

Alternatively, you could try using cmdstanr instead (Getting started with CmdStanR • cmdstanr), which tends to be better behaved than rstan, although it is still in beta.
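For reference, a minimal cmdstanr run might look like this (a sketch assuming cmdstanr and CmdStan are installed; `model.stan` and `mydata` are placeholders):

```r
library(cmdstanr)
mod <- cmdstan_model("model.stan")     # compiled by CmdStan, outside R's toolchain
fit <- mod$sample(data = mydata, chains = 1,
                  iter_warmup = 1000, iter_sampling = 1000)
fit$summary()                          # posterior summaries plus diagnostics
```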

Best of luck with your model!

Thank you very much for your suggestions. I will try cmdstanr, and I will try to adjust the model specification, for example by simplifying the parameters and their distributions.
Although there has been no substantial progress on the model so far, I have gained a deeper understanding of the characteristics of the model, the links between the financial variables, and their relationship to the software, and in particular of the many difficulties of turning ideas into practice.
I also thank the developers for their patient answers and their contributions, which give us such an excellent platform for studying financial time-series problems.
