Operating System: Windows 2016 Server
rstanarm Version: 2.18.2
I have estimated a stan_lm object.
There are 1.1 million observations and 41 predictors.
The posterior sample size is 16,000 draws, run on 4 chains.
I had to increase iter to get n_eff > 1000 for the log-posterior.
In the RStudio environment, the stan_lm object is shown with a size of 1.5 GB.
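For reference, a hypothetical sketch of the kind of call described above; the formula, data, and prior here are placeholders, not my actual code:

library(rstanarm)

fit <- stan_lm(
  y ~ .,                       # 41 predictors
  data = d,                    # ~1.1 million rows
  prior = R2(location = 0.5),  # stan_lm requires a prior on R^2
  chains = 4,
  iter = 8000                  # 4 chains x 4000 post-warmup draws = 16000
)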
I attempted to launch shinystan:

launch_shinystan(fit)  # fit is the stan_lm object

which printed:

Hang on... preparing graphical posterior predictive checks for rstanarm model.
See help('shinystan', 'rstanarm') for how to disable this feature.
Error: cannot allocate vector of size 130.9 Gb
Is it expected behavior for shinystan to require a vector of this size just to launch?
If you need 16,000 nominal draws to get an effective sample size of 1000 for stan_lm, something is wrong. But yes, if you have 16,000 posterior predictions for each of 1.1 million observations, that is going to consume some RAM.
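For a rough sense of scale, the posterior predictive matrix is draws x observations, stored as 8-byte doubles (figures taken from your post):

draws <- 16000
n_obs <- 1.1e6
draws * n_obs * 8 / 2^30  # ~131 GiB, consistent with "cannot allocate vector of size 130.9 Gb"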
If you want to use the rest of shinystan's functionality and are okay without the PPCs (you can do those separately with pp_check() or the bayesplot package; see the sketch at the end), then you can use:
launch_shinystan(fit, ppd = FALSE)
and it should require much less memory, although it will still be a decent amount given the number of posterior draws.
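A minimal sketch of doing the PPCs separately while keeping memory in check, assuming fit is your stan_lm object; the draw counts here are just illustrative:

library(bayesplot)

# rstanarm's pp_check method: nreps caps how many replicated datasets are drawn
pp_check(fit, plotfun = "dens_overlay", nreps = 50)

# Or build yrep yourself with a limited number of draws:
yrep <- posterior_predict(fit, draws = 50)  # 50 x 1.1e6 instead of 16000 x 1.1e6
ppc_dens_overlay(y = fit$y, yrep = yrep)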