Error with multiple cores for leave-one-group-out kfold

I'm using the rstanarm and loo packages to fit a logistic regression with four different intercepts (one per group) and a slope that is hierarchical by individual, then using leave-one-group-out K-fold cross-validation for model selection (kfold_split_grouped() followed by kfold()). I get the same error from the kfold() call regardless of model complexity.
Error message:
Fitting K = 60 models distributed over 3 cores
Error in checkForRemoteErrors(val) :
3 nodes produced errors; first error: object ‘n_chains’ not found
I think it has to do with the cores argument in the kfold() call (the error above occurred with cores = 3). If I run it with one core it works, but impossibly slowly; with more than one core, it fails.

I updated R and all packages yesterday.

Below is some sample R code with simulated data that should reproduce the error. The problem occurs at the kfold() calls near the end (marked "# Error message here.").
########################################################################

# Test of hierarchical analysis using rstanarm.

library(rstanarm)
library(MCMCvis)
library(loo)

########################################################################

# Simulate data.
# Logistic linear regression with four different intercepts by group and
# a slope that differs for each individual, with the individual slopes
# distributed normally with defined mean and variance.

N_indiv <- 60      # Number of individuals.
N_data <- 20       # Number of datapoints per individual.
Interc4 <- c(0, -0.5, 0.5, -0.25)  # One of four intercepts, depending on group.
Slope_mean <- 0.05 # Known, defined hyperprior mean of slope.
Slope_std <- 0.1   # Known, defined hyperprior standard deviation of slope.

set.seed(45820)
Tst_Bayes <- vector(length = 0)
slope_samp <- rnorm(n = N_indiv, mean = Slope_mean, sd = Slope_std)  # Each individual has its own slope.
Period_samp <- sample(1:4, replace = TRUE, size = N_indiv)  # Define group for determining intercept.
for(i in 1:N_indiv){
  # Sample slope and intercept for each individual.
  slope_s_i <- slope_samp[i]
  Interc_p <- Interc4[Period_samp[i]]
  # Sample predictor variable. For simplicity, random draw of x from uniform distribution.
  x_samp <- runif(n = N_data, min = 0, max = 20)
  logit_y_samp <- Interc_p + (slope_s_i * x_samp)
  y_samp <- exp(logit_y_samp) / (1.0 + exp(logit_y_samp))
  YN_samp <- runif(n = N_data)
  GoNoGo <- vector(length = N_data)
  GoNoGo[YN_samp <= y_samp] <- 1
  GoNoGo[YN_samp > y_samp] <- 0
  Tst_Bayes <- rbind(Tst_Bayes, cbind(rep(i, times = N_data), rep(Period_samp[i], times = N_data), x_samp, y_samp, YN_samp, GoNoGo))
}

Tst_Bayes2 <- data.frame(
  ID = Tst_Bayes[, 1],
  Period = as.factor(Tst_Bayes[, 2]),
  x_samp = Tst_Bayes[, 3],
  GoNoGo = Tst_Bayes[, 6])
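
# Optional quick check of the simulated data: 60 individuals x 20 rows each
# should give 1200 rows, with Period taking one of four levels.
str(Tst_Bayes2)
table(Tst_Bayes2$Period)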

########################################################################

# Analysis.

n_chains <- 3
n_thin <- 10
n_finalsampout <- 2000  # This is per chain!
n_warmup <- n_finalsampout/2
n_iter <- n_warmup + (n_finalsampout*n_thin)
set.seed(6379)
seed_samp <- sample(x = 1:99999, size = 7)
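
# Sanity check on the sampler settings: each chain keeps
# (n_iter - n_warmup) / n_thin = (21000 - 1000) / 10 = 2000 post-warmup draws,
# i.e. n_finalsampout draws per chain.
c(n_iter = n_iter, n_warmup = n_warmup, kept_per_chain = (n_iter - n_warmup) / n_thin)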

tstA <- stan_glm(GoNoGo ~ 1, data = Tst_Bayes2, family = binomial(link = "logit"), chains = n_chains, iter = n_iter, warmup = n_warmup, thin = n_thin, cores = n_chains, seed = seed_samp[04])

# Can run these if you want:
tstB <- stan_glm(GoNoGo ~ Period, data = Tst_Bayes2, family = binomial(link = "logit"), chains = n_chains, iter = n_iter, warmup = n_warmup, thin = n_thin, cores = n_chains, seed = seed_samp[07])

tstC <- stan_glmer(GoNoGo ~ Period + (0 + x_samp | ID), data = Tst_Bayes2, family = binomial(link = "logit"), chains = n_chains, iter = n_iter, warmup = n_warmup, thin = n_thin, cores = n_chains, seed = seed_samp[06])

tstD <- stan_glmer(GoNoGo ~ Period + (0 + x_samp | ID) + x_samp, data = Tst_Bayes2, family = binomial(link = "logit"), chains = n_chains, iter = n_iter, warmup = n_warmup, thin = n_thin, cores = n_chains, seed = seed_samp[05])
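
# Optional quick look at one of the fits (print() works on any stanreg object):
print(tstD, digits = 3)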

#################

# Leave one out.

loo_tstA <- loo(tstA, cores = 2)
loo_tstB <- loo(tstB, cores = 2)
loo_tstC <- loo(tstC, cores = 2)
loo_tstD <- loo(tstD, cores = 2)

loo_compare(loo_tstA, loo_tstB, loo_tstC, loo_tstD)

#        elpd_diff se_diff
# tstD      0.0      0.0
# tstC     -1.0      1.4
# tstB    -97.5     13.3
# tstA   -116.3     14.4

#################

# Leave one group out.

K_grp <- kfold_split_grouped(K = max(Tst_Bayes2$ID), x = Tst_Bayes2$ID)
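
# Optional sanity check (my understanding of kfold_split_grouped() here): with
# K equal to the number of individuals, each fold should hold exactly one
# individual's N_data rows.
table(K_grp)           # expect 60 folds of 20 observations each
length(unique(K_grp))  # expect 60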

logo_tstA <- kfold(tstA, K = max(Tst_Bayes2$ID), folds = K_grp, cores = 3)  # Error message here.
logo_tstB <- kfold(tstB, K = max(Tst_Bayes2$ID), folds = K_grp, cores = 3)  # Error message here.
logo_tstC <- kfold(tstC, K = max(Tst_Bayes2$ID), folds = K_grp, cores = 3)  # Error message here.
logo_tstD <- kfold(tstD, K = max(Tst_Bayes2$ID), folds = K_grp, cores = 3)  # Error message here.
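
# For reference, the single-core version does run, just impossibly slowly
# (this is the workaround I'd like to avoid), e.g.:
# logo_tstA_1core <- kfold(tstA, K = max(Tst_Bayes2$ID), folds = K_grp, cores = 1)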

########

# elpd compare.

loo_compare(logo_tstA, logo_tstB, logo_tstC, logo_tstD)

########

# Model stacking.

lpd_point <- cbind(logo_tstA$pointwise[, "elpd_kfold"],
                   logo_tstB$pointwise[, "elpd_kfold"],
                   logo_tstC$pointwise[, "elpd_kfold"],
                   logo_tstD$pointwise[, "elpd_kfold"])
stacking_weights(lpd_point)

Ping @jonah