Error after chains complete: "Supplied csv file is corrupt"

Hi guys,

I’ve got an RL model I’m trying to fit. Initially I ran it on a desktop computer: it got to the end of the chains but then crashed the R session. I then tried it on my university’s cluster computing service, and I can see it again gets to the end of the four chains but then crashes with the following error messages:
“grep: write error”
“Error: supplied CSV file is corrupt!”

I’ve tried it with the VB method and that runs fine, so it seems to be something to do with the MCMC method. I’ve also tried running it with a smaller dataset (N=10) and that works fine; however, if I try it on the full dataset with, say, 500 samples it crashes again.
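
(For reference, the two fitting calls are along these lines; "mod" and "d.gng" here stand in for the compiled model and data list in the R code further down.)

# Illustrative only: names are placeholders for the objects defined later
fit_vb <- mod$variational(data = d.gng, seed = 42)  # completes without issue
fit_mc <- mod$sample(data = d.gng, seed = 42)       # crashes after the chains finish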

The model is below and the full data is attached. For context, I’m trying to model performance on a task at two timepoints, baseline (before a training intervention) and follow-up (after the intervention), and I also have two groups (active vs. sham training) that I want to allow to perform differently during the follow-up session. I have other, simpler models that built up to this point, if that helps with debugging, but this is the model I’m primarily interested in fitting with MCMC.

// = basic RW with two learning rates + lapse + separate sensitivities + action bias + two Pav biases

functions {
  
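  // Partial log-likelihood for one slice of trials; reduce_sum() in the model
  // block partitions pressed_arr into such slices and sums them in parallel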
  real partial_sum(array[] int pressed_slice,
                   int start, int end,
                   vector pGo_vector) {
    return bernoulli_lpmf(pressed_slice | pGo_vector[start:end]);
  }
}

data{
  int<lower=1> Nsubs;   //number of subjects
  int<lower=1> Ntrials; //number of trials per session
  int<lower=1, upper=4> cue[Nsubs, 2*Ntrials]; //cue presented for each subject on each trial - 1=gw,2=ga,3=ngw,4=nga
  int<lower=0, upper=1> pressed[Nsubs, 2*Ntrials]; //response for each subject on each trial
  int<lower=-1, upper=1> outcome[Nsubs, 2*Ntrials]; //outcome for each subject on each trial - 1=reward,0=neutral,-1=punishment
  int<lower=1, upper=2> training_cond[Nsubs]; //training condition - 1 = sham, 2=active
}

transformed data{
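  // Flatten responses row-major; ordering must match pGo_vector below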
  int pressed_arr[Nsubs*2*Ntrials] = to_array_1d(pressed);
}

parameters {
  vector[12] mu;
  vector<lower=0>[12] sigma;
  vector[Nsubs] lapse_z;     // lapse probability
  vector[Nsubs] Rew_ep_z;        // learning rate
  vector[Nsubs] Pun_ep_z;        // learning rate
  vector[Nsubs] gobias_z;    // go bias
  vector[Nsubs] pre_Rew_pavbias_z;   // baseline pavlovian bias
  vector[Nsubs] pre_Pun_pavbias_z;   // baseline pavlovian bias
  vector[Nsubs] post_Rew_pavbias_z;   // followup pavlovian bias
  vector[Nsubs] post_Pun_pavbias_z;   // followup pavlovian bias
  vector[Nsubs] Rewsens_z;   // reward sensitivity
  vector[Nsubs] Punsens_z;   // punishment sensitivity
}

transformed parameters {
  real<lower=0,upper=1> pGo[Nsubs,2*Ntrials];   // prob of go (press)
  vector<lower=0,upper=1>[Nsubs*2*Ntrials] pGo_vector;
  
  vector<lower=0, upper=1>[Nsubs] lapse;
  vector<lower=0, upper=1>[Nsubs] Rew_ep;
  vector<lower=0, upper=1>[Nsubs] Pun_ep;
  vector[Nsubs] gobias;
  vector[Nsubs] pre_Rew_pavbias;
  vector[Nsubs] pre_Pun_pavbias;
  vector[Nsubs] post_Rew_pavbias; 
  vector[Nsubs] post_Pun_pavbias; 
  vector<lower=0>[Nsubs] Rewsens;
  vector<lower=0>[Nsubs] Punsens;
  
  // Compute non-centred parametrisation
  for (sub in 1:Nsubs) {
    lapse[sub]  = Phi_approx(mu[1] + sigma[1] * lapse_z[sub]);
    Rew_ep[sub]  = Phi_approx(mu[2] + sigma[2] * Rew_ep_z[sub]);
    Pun_ep[sub]  = Phi_approx(mu[3] + sigma[3] * Pun_ep_z[sub]);

    if(training_cond[sub] == 1){//if sham
      post_Rew_pavbias[sub] = mu[7] + sigma[7] * post_Rew_pavbias_z[sub];
      post_Pun_pavbias[sub] = mu[8] + sigma[8] * post_Pun_pavbias_z[sub];
    } else {//else if active
      post_Rew_pavbias[sub] = mu[9] + sigma[9] * post_Rew_pavbias_z[sub];
      post_Pun_pavbias[sub] = mu[10] + sigma[10] * post_Pun_pavbias_z[sub];

    }
  }
  
  gobias  = mu[4] + sigma[4] * gobias_z;
  pre_Rew_pavbias = mu[5] + sigma[5] * pre_Rew_pavbias_z;
  pre_Pun_pavbias = mu[6] + sigma[6] * pre_Pun_pavbias_z;
  Rewsens = exp(mu[11] + sigma[11] * Rewsens_z);
  Punsens = exp(mu[12] + sigma[12] * Punsens_z);
  
  //Fit the model
  for (sub in 1:Nsubs){
    //Declare and initialise the trial-by-trial parameters
    vector[4] pre_qv_g  = rep_vector(0, 4);  // initial Q value for go
    vector[4] pre_qv_ng = rep_vector(0, 4);  // initial Q value for nogo
    vector[4] pre_wv_g  = rep_vector(0, 4);  // initial action weight for go
    vector[4] pre_wv_ng = rep_vector(0, 4);  // initial action weight for nogo
    vector[4] pre_sv    = rep_vector(0, 4);  // (pavlovian) stimulus value
    vector[4] post_qv_g  = rep_vector(0, 4);  // initial Q value for go
    vector[4] post_qv_ng = rep_vector(0, 4);  // initial Q value for nogo
    vector[4] post_wv_g  = rep_vector(0, 4);  // initial action weight for go
    vector[4] post_wv_ng = rep_vector(0, 4);  // initial action weight for nogo
    vector[4] post_sv    = rep_vector(0, 4);  // (pavlovian) stimulus value

    for (trial in 1:Ntrials){
      
      //Action part - baseline session
      if(cue[sub,trial] == 1 || cue[sub,trial] == 3){ //if reward trial
        pre_wv_g[cue[sub, trial]]  = pre_qv_g[cue[sub,trial]] + gobias[sub] + pre_Rew_pavbias[sub] * pre_sv[cue[sub,trial]];
        pre_wv_ng[cue[sub, trial]] = pre_qv_ng[cue[sub,trial]];  // qv_ng is always equal to wv_ng (regardless of action)
        pGo[sub, trial]   = inv_logit(pre_wv_g[cue[sub,trial]] - pre_wv_ng[cue[sub,trial]]);
        // noise
        pGo[sub, trial]   *= (1 - lapse[sub]);
        pGo[sub, trial]   += lapse[sub]/2;
      } else {
        pre_wv_g[cue[sub, trial]]  = pre_qv_g[cue[sub,trial]] + gobias[sub] + pre_Pun_pavbias[sub] * pre_sv[cue[sub,trial]];
        pre_wv_ng[cue[sub, trial]] = pre_qv_ng[cue[sub,trial]];  // qv_ng is always equal to wv_ng (regardless of action)
        pGo[sub, trial]   = inv_logit(pre_wv_g[cue[sub,trial]] - pre_wv_ng[cue[sub,trial]]);
        // noise
        pGo[sub, trial]   *= (1 - lapse[sub]);
        pGo[sub, trial]   += lapse[sub]/2;
      }
      
      
      //Action part - followup session
      if(cue[sub,Ntrials+trial] == 1 || cue[sub,Ntrials+trial] == 3){ //if reward trial
        post_wv_g[cue[sub, Ntrials+trial]]  = post_qv_g[cue[sub,Ntrials+trial]] + gobias[sub] + post_Rew_pavbias[sub] * post_sv[cue[sub,Ntrials+trial]];
        post_wv_ng[cue[sub, Ntrials+trial]] = post_qv_ng[cue[sub,Ntrials+trial]];  // qv_ng is always equal to wv_ng (regardless of action)
        pGo[sub, Ntrials+trial]   = inv_logit(post_wv_g[cue[sub,Ntrials+trial]] - post_wv_ng[cue[sub,Ntrials+trial]]);
        // noise
        pGo[sub, Ntrials+trial]   *= (1 - lapse[sub]);
        pGo[sub, Ntrials+trial]   += lapse[sub]/2;
        
      } else {
        post_wv_g[cue[sub, Ntrials+trial]]  = post_qv_g[cue[sub,Ntrials+trial]] + gobias[sub] + post_Pun_pavbias[sub] * post_sv[cue[sub,Ntrials+trial]];
        post_wv_ng[cue[sub, Ntrials+trial]] = post_qv_ng[cue[sub,Ntrials+trial]];  // qv_ng is always equal to wv_ng (regardless of action)
        pGo[sub, Ntrials+trial]   = inv_logit(post_wv_g[cue[sub,Ntrials+trial]] - post_wv_ng[cue[sub,Ntrials+trial]]);
        // noise
        pGo[sub, Ntrials+trial]   *= (1 - lapse[sub]);
        pGo[sub, Ntrials+trial]   += lapse[sub]/2;
      }
      
      //Pavlovian learning - baseline session
      if (outcome[sub, trial] >= 0) {
        pre_sv[cue[sub, trial]] += Rew_ep[sub] * (Rewsens[sub] * outcome[sub, trial] - pre_sv[cue[sub, trial]]);
      } else {
        pre_sv[cue[sub, trial]] += Pun_ep[sub] * (Punsens[sub] * outcome[sub, trial] - pre_sv[cue[sub, trial]]);
      }
      
      //Pavlovian learning - followup session
      if (outcome[sub, Ntrials+trial] >= 0) {
        post_sv[cue[sub, Ntrials+trial]] += Rew_ep[sub] * (Rewsens[sub] * outcome[sub, Ntrials+trial] - post_sv[cue[sub, Ntrials+trial]]);
      } else {
        post_sv[cue[sub, Ntrials+trial]] += Pun_ep[sub] * (Punsens[sub] * outcome[sub, Ntrials+trial] - post_sv[cue[sub, Ntrials+trial]]);
      }
    
      //Instrumental learning - baseline session
      if (pressed[sub, trial]) { // update go value
        if (outcome[sub, trial] >=0) {
          pre_qv_g[cue[sub, trial]] += Rew_ep[sub] * (Rewsens[sub] * outcome[sub, trial] - pre_qv_g[cue[sub, trial]]);
        } else {
          pre_qv_g[cue[sub, trial]] += Pun_ep[sub] * (Punsens[sub] * outcome[sub, trial] - pre_qv_g[cue[sub, trial]]);
        }
      } else { // update no-go value
        if (outcome[sub, trial] >=0) {
          pre_qv_ng[cue[sub, trial]] += Rew_ep[sub] * (Rewsens[sub] * outcome[sub, trial] - pre_qv_ng[cue[sub, trial]]);
        } else {
          pre_qv_ng[cue[sub, trial]] += Pun_ep[sub] * (Punsens[sub] * outcome[sub, trial] - pre_qv_ng[cue[sub, trial]]);
        }
      }
      
      //Instrumental learning - followup session
      if (pressed[sub, Ntrials+trial]) { // update go value
        if (outcome[sub, Ntrials+trial] >=0) {
          post_qv_g[cue[sub, Ntrials+trial]] += Rew_ep[sub] * (Rewsens[sub] * outcome[sub, Ntrials+trial] - post_qv_g[cue[sub, Ntrials+trial]]);
        } else {
          post_qv_g[cue[sub, Ntrials+trial]] += Pun_ep[sub] * (Punsens[sub] * outcome[sub, Ntrials+trial] - post_qv_g[cue[sub, Ntrials+trial]]);
        }
      } else { // update no-go value
        if (outcome[sub, Ntrials+trial] >=0) {
          post_qv_ng[cue[sub, Ntrials+trial]] += Rew_ep[sub] * (Rewsens[sub] * outcome[sub, Ntrials+trial] - post_qv_ng[cue[sub, Ntrials+trial]]);
        } else {
          post_qv_ng[cue[sub, Ntrials+trial]] += Pun_ep[sub] * (Punsens[sub] * outcome[sub, Ntrials+trial] - post_qv_ng[cue[sub, Ntrials+trial]]);
        }
      }
    }
  }
  
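  // Same row-major flattening as pressed_arr in transformed data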
  pGo_vector = to_vector(to_array_1d(pGo));
}
  
model {
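  // grain_size = 1 lets reduce_sum's scheduler choose slice sizes automatically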
  int grain_size = 1;
  
  // population parameters
  mu[1]  ~ normal(0, 1.0); //lapse
  mu[2]  ~ normal(0, 1.0); //ep  
  mu[3]  ~ normal(0, 1.0); //ep
  mu[4]  ~ normal(0, 10.0); //go
  mu[5]  ~ normal(0, 10.0); //pre_Rew_pav
  mu[6]  ~ normal(0, 10.0); //pre_Pun_pav
  mu[7]  ~ normal(0, 10.0); //post_Rew_pav - sham condition
  mu[8]  ~ normal(0, 10.0); //post_Pun_pav - sham condition
  mu[9]  ~ normal(0, 10.0); //post_Rew_pav - active condition
  mu[10]  ~ normal(0, 10.0); //post_Pun_pav - active condition
  mu[11]  ~ normal(0, 1.0); //rew sens
  mu[12]  ~ normal(0, 1.0); //pun sens
  sigma[1:3] ~ normal(0, 0.2);
  sigma[4:10] ~ cauchy(0, 1.0);
  sigma[11:12] ~ normal(0, 0.2);
  
  // subject level parameters
  lapse_z   ~ std_normal();
  Rew_ep_z  ~ std_normal();
  Pun_ep_z  ~ std_normal();
  gobias_z  ~ std_normal();
  pre_Rew_pavbias_z ~ std_normal();
  pre_Pun_pavbias_z ~ std_normal();
  post_Rew_pavbias_z ~ std_normal();
  post_Pun_pavbias_z ~ std_normal();
  Rewsens_z ~ std_normal();
  Punsens_z ~ std_normal();
  
  //trial level
  target += reduce_sum(partial_sum, pressed_arr, grain_size, pGo_vector);

}
        
generated quantities {
  matrix[Nsubs, 2*Ntrials] log_lik;
  
  for (sub in 1:Nsubs){
    for (trial in 1:(2*Ntrials)){
      log_lik[sub,trial] = bernoulli_lpmf(pressed[sub,trial] | pGo[sub,trial]);
    }
  }

}

d.gng.Rdata (139.0 KB)
Frankly, I’m at a bit of a loss for what to do next! Would really appreciate any thoughts on what I can try. Thanks in advance.

  • Operating System: Windows (on desktop); unsure for cluster
  • CmdStan Version: 0.4.0
  • Compiler/toolkit: unsure

Further thought – there’s quite a lot of data involved, though not tonnes: ~600 subjects doing 160 trials each, with 10 subject-level parameters in the model. Could it be a memory issue?
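
As a rough back-of-envelope check (assuming ~20 characters per stored value, which is a guess), the output CSVs themselves look like a plausible culprit: everything declared in transformed parameters, including the large pGo and pGo_vector containers, plus log_lik from generated quantities, is written out on every iteration.

# Rough size of one chain's CSV with Nsubs = 600, Ntrials = 80 (so 2*Ntrials = 160)
Nsubs <- 600; Ntrials <- 80
per_draw <- 12 + 12 +         # mu, sigma
  10 * Nsubs +                # the ten *_z vectors
  10 * Nsubs +                # their transformed counterparts
  2 * Nsubs * 2 * Ntrials +   # pGo and pGo_vector
  Nsubs * 2 * Ntrials         # log_lik
per_draw * 1000 * 20 / 1e9    # roughly 6 GB per chain for 1000 draws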

@rok_cesnovar are you able to check this out?

@hugo_fleming are you able to post the R code you’re using to do this? Errors like this are sometimes generated when the files being written out are somehow being overwritten.
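
In the meantime, a quick sanity check: list what actually lands in your output directory after a run. Duplicate basenames, or files with suspicious sizes or timestamps, would support the overwriting theory. Something like (path is a placeholder):

# Size and modification time of the CSVs in the run's output directory
out_dir <- "path/to/output_dir"  # hypothetical; use your sample() output_dir
file.info(list.files(out_dir, pattern = "\\.csv$", full.names = TRUE))[, c("size", "mtime")]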

Hi Brian – sure, R code below.

Cheers!

# Inputs:
# - data in gng_alltogether
# - stan model in parallel_2pav2lr_timepointxcondition.stan

# Outputs:
# - waic, postpreds (pGo) and posterior parameters

library(tidyverse)
library(cmdstanr)
options(mc.cores=parallel::detectCores())
library(loo)

gng_alltogether <- read_rds('/lustre/scratch/scratch/ucbtha5/second_cbm/main_gng_alltogether.rds')
m.parallel.standard2pav2lr_timepointxcondition <- cmdstan_model(stan_file='/lustre/scratch/scratch/ucbtha5/second_cbm/model_for_cluster.stan')

# > Compile data ####

# Compute single dataset across all conditions
gng_for_model <- gng_alltogether %>%
  group_by(subjn) %>% 
  nest() %>% 
  ungroup() %>% 
  mutate(data = map(data,function(df){
    df %>% mutate(combined_trial_idx=row_number())
  })) %>%
  unnest(data) %>%
  select(subjn,training_condition,combined_trial_idx,cue,keyPressed,outcome)%>%
  pivot_wider(names_from="combined_trial_idx",
              values_from=c("cue","keyPressed","outcome")) %>%
  mutate(training_condition = if_else(training_condition=="Sham",1,2))

d.gng <- list(
  Nsubs = nrow(gng_for_model) %>% as.integer(),
  Ntrials = gng_for_model %>% select(starts_with("keyPressed")) %>% ncol() / 2, #divide by two because half baseline half followup
  cue     = gng_for_model %>% select(starts_with("cue_")) %>% as.matrix(),
  pressed = gng_for_model %>% select(starts_with("keyPressed_"))%>% as.matrix(),
  outcome = gng_for_model %>% select(starts_with("outcome_")) %>% as.matrix(),
  training_cond = gng_for_model %>% pull(training_condition) %>% as.integer()
)
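
# For the N=10 troubleshooting runs mentioned in the original post, the same
# list can simply be subset (a sketch, not part of the original script):
d.gng.small <- list(
  Nsubs = 10L,
  Ntrials = d.gng$Ntrials,
  cue = d.gng$cue[1:10, ],
  pressed = d.gng$pressed[1:10, ],
  outcome = d.gng$outcome[1:10, ],
  training_cond = d.gng$training_cond[1:10]
)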

# > Fit model ####

fit.mc.standard2pav2lr_timepointxcondition <- m.parallel.standard2pav2lr_timepointxcondition$sample(
  data=d.gng,
  seed=42,
  refresh=10,
  iter_warmup=1000,
  iter_sampling=1000,
  adapt_delta=0.9,
  output_dir="/lustre/scratch/scratch/ucbtha5/second_cbm/data_output/"
)
#fit.mc.standard2pav2lr_timepointxcondition$save_output_files(dir=here::here("data output/Stan fits"),basename = experiment_name)

# > Compute other quantities ####
#waic.mc.standard2pav2lr_timepointxcondition <- fit.mc.standard2pav2lr_timepointxcondition$draws("log_lik",format="df") %>% 
#  as.matrix() %>%
#  waic()

pp.mc.standard2pav2lr_timepointxcondition <- fit.mc.standard2pav2lr_timepointxcondition$draws("pGo",format="df") %>% 
  slice_sample(n=40) %>%
  as_tibble() %>% 
  select(c(.draw,starts_with("pGo"))) %>%
  pivot_longer(cols=starts_with("pGo"),
               names_to=c("subj_idx","trial_idx"),
               names_pattern="pGo\\[(.*),(.*)\\]",
               values_to="pGo")%>%
  mutate(subj_idx = as.double(subj_idx),
         trial_idx = as.double(trial_idx))%>%
  mutate(
    timepoint = if_else(trial_idx <= 80,1,2),
    trial_no = if_else(timepoint==1, trial_idx,trial_idx-80)
  ) %>%
  left_join(gng_for_model %>% 
              select(subjn,training_condition) %>% 
              mutate(subj_idx = 1:n(),
                     training_condition = if_else(training_condition==1,"Sham","Active"))) %>%
  select(-c(subj_idx,trial_idx)) %>%
  left_join(gng_alltogether %>% 
              filter(subjn %in% gng_for_model$subjn) %>% #necessary when using a smaller dataset, e.g. troubleshooting
              select(subjn,trial_no,timepoint,timepoint_asfact,
                     training_condition,cond_asfact)) %>%
  mutate(Accuracy = if_else(
    cond_asfact == "Go to win reward" | cond_asfact == "Go to avoid punishment",
    pGo,1-pGo
  )) %>%
  group_by(.draw,subjn,training_condition,timepoint_asfact,cond_asfact,trial_no) %>% 
  summarise(Accuracy = mean(Accuracy),.groups="drop")

write_rds(pp.mc.standard2pav2lr_timepointxcondition, file = '/lustre/scratch/scratch/ucbtha5/second_cbm/data_output/pp_mc_standard2pav2lr_timepointxcondition.rds')

params.mc.standard2pav2lr_timepointxcondition <- fit.mc.standard2pav2lr_timepointxcondition$draws(c("pre_Rew_pavbias","pre_Pun_pavbias",
                                                                                                    "post_Rew_pavbias","post_Pun_pavbias"),
                                                                                                  format="df") %>% 
  as_tibble() %>% 
  select(-c(".chain",".iteration",".draw")) %>% 
  pivot_longer(cols=everything(), 
               names_pattern="(.*)_(.*)_pavbias\\[(.*)\\]",
               names_to=c("timepoint","valence","subj_idx"),
               values_to="pavbias") %>% 
  group_by(timepoint, valence,subj_idx) %>% 
  summarise(pavbias=mean(pavbias),.groups="drop") %>%
  mutate(subj_idx = as.numeric(subj_idx)) %>% 
  arrange(subj_idx) %>%
  left_join(
    gng_for_model %>% 
      select(subjn,training_condition) %>% 
      mutate(subj_idx = 1:n(),
             training_condition = if_else(training_condition==1,"Sham","Active"))) %>%
  mutate(timepoint_asfact = if_else(timepoint=="pre",
                                    factor("Baseline",levels=c("Baseline","Follow-up"),ordered=TRUE),
                                    factor("Follow-up",levels=c("Baseline","Follow-up"),ordered=TRUE)),
         timepoint_asnum = if_else(timepoint=="pre",1,2))

write_rds(params.mc.standard2pav2lr_timepointxcondition, file = '/lustre/scratch/scratch/ucbtha5/second_cbm/data_output/params_mc_standard2pav2lr_timepointxcondition.rds')

params_for_test.mc.standard2pav2lr_timepointxcondition <- params.mc.standard2pav2lr_timepointxcondition %>%
  pivot_wider(id_cols=c("subjn","training_condition"),
              names_from=c("timepoint","valence"),
              values_from="pavbias") %>%
  mutate(rew_change = post_Rew-pre_Rew,
         pun_change = post_Pun-pre_Pun) %>%
  ungroup()

write_rds(params_for_test.mc.standard2pav2lr_timepointxcondition, file = '/lustre/scratch/scratch/ucbtha5/second_cbm/data_output/paramsfortest_mc_standard2pav2lr_timepointxcondition.rds')

EDIT: Never mind! I updated to R 4.2.1 and Rtools42 and I am no longer getting the error.

I am getting this error message as well, using CmdStan 2.29.2 (and 2.30.0-rc1).

  • Operating System: Windows 11
  • R version: 4.1.3
  • cmdstanr version: 0.5.2
  • Toolchain: Rtools40

The model is from the examples folder:

cmdstanr::set_cmdstan_path(file.path("C:/Users/gscot/.cmdstan", "cmdstan-2.29.2"))
file <- file.path(cmdstan_path(), "examples", "bernoulli", "bernoulli.stan")
mod <- cmdstan_model(file)
data_list <- list(N = 10, y = c(0,1,0,0,0,0,0,0,0,1))
fit <- mod$sample(
  data = data_list,
  seed = 123,
  chains = 4,
  parallel_chains = 4,
  refresh = 0
)

Output:

Model executable is up to date!
Running MCMC with 4 parallel chains...

Chain 1 finished in 0.0 seconds.
Chain 2 finished in 0.0 seconds.
Chain 3 finished in 0.0 seconds.
Chain 4 finished in 0.0 seconds.

All 4 chains finished successfully.
Mean chain execution time: 0.0 seconds.
Total execution time: 0.2 seconds.

'""' is not recognized as an internal or external command,
operable program or batch file.
 Error: Supplied CSV file is corrupt!

cmdstanr::check_cmdstan_toolchain() returns: The C++ toolchain required for CmdStan is setup properly!

Furthermore, when I run mod$diagnose(), I get this output:

method = diagnose
  diagnose
    test = gradient (Default)
      gradient
        epsilon = 9.9999999999999995e-07 (Default)
        error = 9.9999999999999995e-07 (Default)
id = 1 (Default)
data
  file =  (Default)
init = 2 (Default)
random
  seed = 475791669
output
  file = C:/Users/gscot/AppData/Local/Temp/RtmpwHz61Q/bernoulli-202206251311-1-2485e3.csv
  diagnostic_file =  (Default)
  refresh = 100 (Default)
  sig_figs = -1 (Default)
  profile_file = C:/Users/gscot/AppData/Local/Temp/RtmpwHz61Q/bernoulli-profile-202206251311-1-863e3c.csv
num_threads = 1 (Default)

However, in the R temp folder RtmpwHz61Q, I see these files being created:
bernoulli-202206251311-1-402047.csv
bernoulli-202206251311-2-402047.csv
bernoulli-202206251311-3-402047.csv
bernoulli-202206251311-4-402047.csv

This does not match the output file reported by mod$diagnose(). I also do not see a profile CSV file like “bernoulli-profile-202206251311-1-863e3c.csv”.

If I try running read_cmdstan_csv("C:/Users/gscot/AppData/Local/Temp/RtmpwHz61Q/bernoulli-202206251311-1-402047.csv"), I get the same error as before:

'""' is not recognized as an internal or external command,
operable program or batch file.
 Error: Supplied CSV file is corrupt!
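
The '""' is not recognized line comes from the Windows shell complaining about an empty command, so my guess is that a shell call made while reading the CSV is failing, rather than the file itself being malformed (speculation on my part). To check the file by eye, something like this works; a healthy CmdStan CSV starts with # comment lines, then the header row, then the draws:

# Inspect the raw CSV directly, bypassing cmdstanr's reader
readLines("C:/Users/gscot/AppData/Local/Temp/RtmpwHz61Q/bernoulli-202206251311-1-402047.csv", n = 60)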

Here is the csv file
bernoulli-202206251311-1-402047.csv (48.3 KB)

I would appreciate any help resolving this error.

Thanks,
Scott

Is this with the GitHub (development) version of cmdstanr?

Install using remotes::install_github("stan-dev/cmdstanr")

No it was with

install.packages("cmdstanr", repos = c("https://mc-stan.org/r-packages/", getOption("repos")))

Please note the EDIT at the top of my post (in case you missed it): I updated to R 4.2.1 and Rtools42 and I am no longer getting the error.
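
For anyone else who lands here, I believe the relevant cmdstanr steps after updating are roughly these (untested beyond my own machine):

# After installing R 4.2.x and Rtools42:
cmdstanr::check_cmdstan_toolchain(fix = TRUE)  # verify (and try to repair) the toolchain
cmdstanr::install_cmdstan(overwrite = TRUE)    # rebuild CmdStan with the new toolchain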

Thanks,
Scott