R 3.5.1 complains about slot "mode"

It doesn’t look like there have been any updates to the r-base packages since 3.5.1 came out, so this is possibly linked to another dependency or something particular to your installation. It doesn’t seem to be an isolated incident, but it is not entirely reproducible either.

I commented on that issue, because I have been running into it too.
So far, I haven’t seen the error when loading shinystan before running.
I also don’t recall encountering it within RStudio. My recent incidents have all come from launching R from the command line (when I don’t want to keep RStudio’s console occupied).

Have you tried running garbage collection (gc()) immediately before running Stan?

No, I had not.
But I just reran the eight-schools example, calling gc() before fit:

> gc()
          used (Mb) gc trigger  (Mb) max used (Mb)
Ncells 1277091 68.3    2552130 136.3  1381792 73.8
Vcells 3175057 24.3    8388608  64.0  5594504 42.7
> fit <- sampling(tstmod, data = schools_dat, iter = 1000, chains = 4, verbose = TRUE)
[snip]
Error in FUN(X[[i]], ...) : 
  trying to get slot "mode" from an object of a basic class ("NULL") with no slots
> system("free -m")
              total        used        free      shared  buff/cache   available
Mem:          64338        5880       56445         319        2012       57453
Swap:         22895           0       22895

and still got the error.
FWIW, the computer has 64 GB of RAM, so it takes a rather big model to start running into memory problems. Then again, a garbage collector acting up would explain the sporadic nature of the problem.

Also worth noting that I’m on StanHeaders 2.18.0 and rstan 2.17.4. I could probably downgrade StanHeaders (or when is rstan 2.18 likely to be released?), but I haven’t run into obvious issues other than warnings.

Have you tried it with just one core? You might see a better error message.

I tried not setting options(mc.cores = ...) at all, and I tried setting it but running only 1 chain. I could not reproduce the error either way.

My guess is that it will work with multiple chains but one core and that there is some issue on your machine with the multicore mechanism.
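
For concreteness, here is a minimal sketch of that test (reusing the tstmod and schools_dat objects from the run above; the cores argument of sampling() overrides options(mc.cores) for that call):

# Suggested isolation test: same model and data, four chains, but run
# serially in the main R process, so no stanfit has to be shipped back
# from a worker process.
fit_serial <- sampling(tstmod, data = schools_dat,
                       iter = 1000, chains = 4, cores = 1)

# The failing configuration, for comparison: chains run in forked workers
# and the resulting stanfit pieces must be serialized back to the parent.
fit_parallel <- sampling(tstmod, data = schools_dat,
                         iter = 1000, chains = 4, cores = 4)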

Have you tried using Ubuntu backports?

I couldn’t reproduce the error when running with a single core, which supports the idea that it’s a multicore problem.

I have not tried Ubuntu backports. Antergos is not in the Debian family (it is Arch based and uses pacman rather than apt as its package manager).
Does it have patches applied to its R? Have they been upstreamed?

I didn’t see any links in your github issue where I could read more.

I don’t think there is anything more to read on the rstan side. Basically, when you do multicore, the stanfit objects are not coming back to the main process properly. Does example(parLapply, package = "parallel") work?
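
As an illustration of what “not coming back properly” can look like (a made-up toy S4 class, not the actual rstan code path; on Linux rstan parallelizes chains with a forking mechanism along these lines):

library(parallel)
library(methods)

# Toy S4 class standing in for stanfit; the slot name is chosen to match the error.
setClass("toy", slots = c(mode = "numeric"))

# Build one object per "chain" in forked worker processes.
res <- mclapply(1:4, function(i) new("toy", mode = as.numeric(i)), mc.cores = 4)

# If a worker dies or serialization fails, its element comes back as NULL,
# and asking for a slot then reproduces the message from sampling():
slot(NULL, "mode")
# Error: trying to get slot "mode" from an object of a basic class ("NULL") with no slots
sapply(res, slot, "mode")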

I missed your GitHub comment. I’ve reopened the issue. Since it was reproduced on an upstream Debian, my assumption is that this is dependency related. I’m still learning what this might imply.

For example, this coincided with the issue being resolved on Debian 10:
https://tracker.debian.org/news/985566/openmpi-312-1-migrated-to-testing/

I ran replicate(1e2, example(parLapply, package = "parallel")). No errors.

Any reason why Stan handles parallelism on R’s side rather than on the C++ side, where it could, for example, use OpenMP?
One user-side advantage would be that you wouldn’t have to manually hunt down and kill all the extra R processes whenever you need to cancel a running Stan model.

@increasechief Thank you for reopening the issue. I’m commenting more there.
I do not actually have a system openmpi installed. However, were I to install the default:

$ pacman -Ss openmpi
extra/openmpi 3.1.2-1
    High performance message passing library (MPI)

I would also get openmpi 3.1.2-1, like Debian 10. But since I don’t have a system MPI at all, I don’t think R relies on system MPI libraries, so I’d have thought this should be relatively OS agnostic.
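
That matches my understanding: the chains are farmed out by base R’s parallel package (fork or socket workers), which does not link against MPI; MPI only enters through add-on packages such as Rmpi. A rough way to check what the session has actually loaded:

library(parallel)
# Shared libraries loaded into the current R session; no MPI library should
# appear here unless something like Rmpi has been attached.
names(getLoadedDLLs())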

Did anyone ever find a “solution” for this? I keep running into it when trying to run models on the cloud, and when I try to debug locally, R crashes every time it gets to the sampling command. I have loaded shinystan and tried increasing memory on the cloud.