LOO-CV: run more iterations to get more posterior draws?

I’m getting a suggestion to “run more iterations to get at least about 2200 posterior draws to improve LOO-CV approximation accuracy”, but I don’t know how to do that. Sorry!

Is it asking for something simple that just takes more time, like passing some ndraws parameter, or increasing the value of iter when fitting the model? I’m using brms::brm(..., backend = "cmdstanr") if that makes a difference.
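
Something like this, maybe? A minimal sketch with a placeholder formula, data, and family (not my actual model); as I understand it, the number of post-warmup draws is chains * (iter - warmup):

```r
library(brms)

# Placeholder model: 4 chains * (3000 - 1000) = 8000 post-warmup draws in total
fit <- brm(
  y ~ 1 + x,                  # placeholder formula
  data    = mydata,           # placeholder data
  family  = gaussian(),       # placeholder family
  chains  = 4,
  iter    = 3000,             # total iterations per chain, including warmup
  warmup  = 1000,             # warmup iterations per chain (discarded)
  backend = "cmdstanr"
)
```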

I did find a relevant GitHub issue by @paul.buerkner and @avehtari but am still searching for practical hints/instructions.

Thanks in advance for your time and attention!

```
> loo(amount_wb_model)

Computed from 2000 by 3993 log-likelihood matrix.

         Estimate    SE
elpd_loo  -8887.6 139.2
p_loo      1551.2  31.2
looic     17775.1 278.3
------
MCSE of elpd_loo is NA.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.3, 2.2]).

Pareto k diagnostic values:
                         Count Pct.    Min. ESS
(-Inf, 0.7]   (good)     2830  70.9%   65
   (0.7, 1]   (bad)       910  22.8%   <NA>
   (1, Inf)   (very bad)  253   6.3%   <NA>
See help('pareto-k-diagnostic') for details.
Warning message:
Found 1163 observations with a pareto_k > 0.7 in model 'amount_wb_model'. We recommend to run more iterations to get at least about 2200 posterior draws to improve LOO-CV approximation accuracy.
```

It would help if you showed all the arguments.

From that output I’m guessing you are running 2 chains, which would give the 2000 posterior draws.

Although it’s a good idea to run at least 4 chains, with this many very bad Pareto-k values it’s unlikely to help on its own. Could you show the full brms call with the model formula and family, and say a bit about your data? Then I can suggest other ways to improve your model or the cross-validation computation.
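
If the model itself turns out to be reasonable, there are also ways to improve the cross-validation computation directly. A sketch of the usual options in brms (your model name assumed; moment matching additionally requires the model to have been fitted with save_pars = save_pars(all = TRUE)):

```r
library(brms)

# Moment matching can rescue moderately high Pareto-k values without refitting
loo_mm <- loo(amount_wb_model, moment_match = TRUE)

# Exact refits are reliable but expensive: one refit per high-k observation
loo_exact <- loo(amount_wb_model, reloo = TRUE)

# With this many very bad k values, K-fold CV is often the most practical option
kf <- kfold(amount_wb_model, K = 10)
```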

It was my mistake! I was running 4 chains, but an upstream typo meant my code was fitting hurdle_negbinomial() instead of zero_inflated_negbinomial() to zero-inflated data, and with only 500 post-warmup iterations per chain. A long story. Sorry for the false alarm.
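
For posterity, the corrected call looks roughly like this (formula and data names are placeholders, not the real ones):

```r
library(brms)

amount_wb_model <- brm(
  amount ~ 1 + x,                          # placeholder formula
  data    = wb_data,                       # placeholder data name
  family  = zero_inflated_negbinomial(),   # was hurdle_negbinomial() by mistake
  chains  = 4,
  iter    = 2000,
  warmup  = 1000,                          # 1000 post-warmup iterations per chain
  backend = "cmdstanr"
)
```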
