Comparing complex multivariate models with loo_compare

I have run two complex multivariate models following this tutorial. The models follow this format:

f1 = as.formula('y1 ~ 1 + var1 + var2 + var3 + (1 + var1 + var2 + var3 | GroupID)')
f2 = as.formula('y2 ~ 1 + var1 + var2 + var3 + (1 + var1 + var2 + var3 | GroupID)')
f3 = as.formula('y1 ~ 1 + var1 + var2 + (1 + var1 + var2 | GroupID)')
f4 = as.formula('y2 ~ 1 + var1 + var2 + (1 + var1 + var2 | GroupID)')
model_of_interest = brm(bf(f1) + bf(f2), data = df, iter = 6000, chains = 4, cores = 4,
                        family = "gaussian", control = list(adapt_delta = 0.9),
                        save_all_pars = TRUE)
null_model = brm(bf(f3) + bf(f4), data = df, iter = 6000, chains = 4, cores = 4,
                 family = "gaussian", control = list(adapt_delta = 0.9),
                 save_all_pars = TRUE)

and I now want to compute loo() for each model and compare them using loo_compare(). Using this tutorial as a guide, I compute loo as follows:

loo1 <- loo(model_of_interest, moment_match = TRUE)
loo2 <- loo(null_model, moment_match = TRUE)
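For reference, the comparison step this is building towards would look like the following sketch (assuming both loo objects compute successfully):

```r
# Compare the two models on expected log predictive density (ELPD).
# loo_compare() is provided by the loo package and re-exported by brms;
# the first row of the output is the best-fitting model.
comp <- loo_compare(loo1, loo2)
print(comp)
```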

However, before I can pass these to loo_compare(), I get this error for both loo1 and loo2:

Error in .update_pars(x, upars = upars, …) :
length(new_samples) == nrow(pars) is not TRUE
Error: Moment matching failed. Perhaps you did not set ‘save_all_pars’ to TRUE when fitting your model?

I am confused because I have definitely set save_all_pars=TRUE in both model fits, and I can’t seem to find helpful guidance online about how to solve this. I am using R version 4.0.2. I appreciate any advice, thanks very much.


Hmm, this seems to be an error coming from brms and not loo itself (although it could be a loo error that brms is just catching). @paul.buerkner any ideas here? It sounds like save_all_pars is TRUE but they still get the error.

The error message just offers a suggestion that "perhaps" this is related to save_all_pars (because that is the most common cause), but in this case it is not. This is an error that occurs with some 3.x versions of R but should be gone in R 4.x.


I have the same error with a distributional model (lognormal family with estimated sigma), but I'm not able to upgrade R to 4.x (older Mac which doesn't run Catalina).
Is there anything I could do?

Can you provide a minimal reproducible example?

The error happens when I truncate the outcome variable, which is also the case in my original model…

model.test <- brm(
      bf(hp | trunc(lb = 52, ub = 335) ~ mpg + (1 | cyl), sigma ~ (1 | cyl)),
      family = lognormal(),
      data = mtcars,
      save_all_pars = TRUE
)
model.test <- add_criterion(
    model.test,
    criterion = "loo",
    moment_match = TRUE
)

Error in validate_ll(log_ratios) : All input values must be finite.
Error: Moment matching failed. Perhaps you did not set 'save_all_pars' to TRUE when fitting your model?

In your case, there seems to be a numerical problem with the truncated lognormal distribution that is unlikely to be related to moment matching directly.

Could it be that the error was fixed by

@Pascal_J: if you do traceback() right after you get the error (running on 1 core), you may compare that with what I reported in

This is the result of traceback():

31: stop(..., call. = FALSE)
30: stop2("Moment matching failed. Perhaps you did not set ", "'save_all_pars' to TRUE when fitting your model?")
29: loo_moment_match.brmsfit(x = .x1, loo = .x2, newdata = .x3, resp = .x4, 
        k_threshold = .x5, check = .x6)
28: loo_moment_match(x = .x1, loo = .x2, newdata = .x3, resp = .x4, 
        k_threshold = .x5, check = .x6)
27: eval(expr, envir, ...)
26: eval(expr, envir, ...)
25: eval2(call, envir = args, enclos = parent.frame())
24: do_call("loo_moment_match", moment_match_args)
23: .loo(x = .x1, newdata = .x2, resp = .x3, model_name = .x4, pointwise = .x5, 
        k_threshold = .x6, moment_match = .x7, reloo = .x8, moment_match_args = .x9, 
        reloo_args = .x10)
22: eval(expr, envir, ...)
21: eval(expr, envir, ...)
20: eval2(call, envir = args, enclos = parent.frame())
19: do_call(paste0(".", criterion), args)
18: .fun(criterion = .x1, pointwise = .x2, resp = .x3, k_threshold = .x4, 
        moment_match = .x5, reloo = .x6, moment_match_args = .x7, 
        reloo_args = .x8, x = .x9, model_name = .x10, use_stored = .x11)
17: eval(expr, envir, ...)
16: eval(expr, envir, ...)
15: eval2(call, envir = args, enclos = parent.frame())
14: do_call(compute_loo, args)
13: .fun(models = .x1, criterion = .x2, pointwise = .x3, compare = .x4, 
        resp = .x5, k_threshold = .x6, moment_match = .x7, reloo = .x8, 
        moment_match_args = .x9, reloo_args = .x10)
12: eval(expr, envir, ...)
11: eval(expr, envir, ...)
10: eval2(call, envir = args, enclos = parent.frame())
9: do_call(compute_loolist, args)
8: loo.brmsfit(.x1, moment_match = .x2, model_names = .x3)
7: loo(.x1, moment_match = .x2, model_names = .x3)
6: eval(expr, envir, ...)
5: eval(expr, envir, ...)
4: eval2(call, envir = args, enclos = parent.frame())
3: do_call(fun, args)
2: add_criterion.brmsfit(model.test, criterion = "loo", moment_match = TRUE)
1: add_criterion(model.test, criterion = "loo", moment_match = TRUE)

Updating to the master branch of stan-dev/loo had no effect.

Yes, the traceback looks quite different (it also doesn't show the failure in validate_ll that made me think of that possibility). I had set some breakpoints using debug() in inner functions such as loo_moment_match, then stepped line by line to see what the data looked like right before the failure.

Following up on the initial problem, I have managed to run brms::loo_subsample() instead of loo() without errors, but I receive lots of Pareto k diagnostic warnings. I have tried to run brms::loo_moment_match() on the loo_subsample() result as follows:

loo1 <- brms::loo_subsample(fit1)
loo1_updated <- brms::loo_moment_match(fit1, loo1)

but I receive this very confusing error:

Error in prep_call_sampler(object) :
the compiled object from C++ code for this model is invalid, possible reasons:

  • compiled with save_dso=FALSE;
  • compiled on a different platform;
  • does not exist (created from reading csv files).

Does brms::loo_subsample() not work with brms::loo_moment_match()? Please let me know if I should start a new topic; I'm not sure if this is related to the above issue. I have updated to the latest version of the loo package (loo_2.3.1.9000) via GitHub.

Perhaps @topipa has some ideas for both this issue and the one that @Pascal_J described a bit further up.
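In the meantime, one workaround worth sketching (an assumption, not confirmed in this thread): skip subsampling and moment matching entirely and use reloo = TRUE, which performs an exact refit for each problematic observation. It is slow but avoids the moment-matching code path; reloo appears as an argument in the traceback above.

```r
# Exact LOO: refit the model once for each observation flagged by a
# high Pareto k, instead of adjusting the draws via moment matching.
# Can be very slow if many observations are flagged.
loo1 <- loo(fit1, reloo = TRUE)
```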

Update: this issue has been solved here