It’s a categorical model with 4 outcome categories, 2 group-level effects, and 46 × 3 = 138 population-level effects, fit using brms. When I calculate elpd_loo using loo(mymodel, reloo = TRUE), the model is re-fit once because of a single problematic observation. The re-fitting finishes fine and a loo object is returned. However, the following warning is printed:
UNRELIABLE VALUE: Future (‘’) unexpectedly generated random numbers without specifying argument ‘[future.]seed’. There is a risk that those random numbers are not statistically sound and the overall results might be invalid. To fix this, specify argument ‘[future.]seed’, e.g. ‘seed=TRUE’. This ensures that proper, parallel-safe random numbers are produced via the L’Ecuyer-CMRG method. To disable this check, use [future].seed=NULL, or set option ‘future.rng.onMisuse’ to “ignore”.
However, as far as I can tell, loo() doesn’t have an argument called seed. Nonetheless, just to make sure, I re-ran loo() on my model with reloo = TRUE and seed = TRUE. After another 90 minutes of re-fitting the model without the problematic observation, I got the same warning again.
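For what it’s worth, the warning text itself names two workarounds. Below is a minimal sketch; the option name comes straight from the warning, while whether loo() forwards a future.seed argument to its workers is my assumption, not documented API:

```r
# The warning comes from the 'future' framework, which brms uses to
# parallelize the model re-fits triggered by reloo = TRUE.

# Workaround named in the warning itself: tell 'future' to ignore the
# RNG-misuse check. Set this before calling loo().
options(future.rng.onMisuse = "ignore")

# Then re-run the reloo call as before (mymodel is the fitted brmsfit):
# fit_loo <- loo(mymodel, reloo = TRUE)

# Alternatively, for reproducible parallel-safe RNG streams one would
# pass future.seed = TRUE to the future-based calls; whether loo()
# forwards such an argument is an assumption on my part.
```

This silences the check rather than fixing the underlying RNG bookkeeping, which is exactly why I’m asking whether the diagnostics can be trusted.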
It might also be worth pointing out that the new reloo object is not identical to the first: elpd_loo and the other diagnostics differ by a tiny amount (about 0.015 points). But I guess that is attributable to the re-fitting being done with different random seeds.
The question is: given that I’m still getting this warning, can I trust the reloo diagnostics or not?