m1 <- brm(bform1,
          prior = prior(normal(0, 5), class = "b"),
          data = d2,
          warmup = 1000,
          iter = 3500,
          chains = 4,
          cores = 4,
          file = "m1_brms")

and then I get this error message:

Error: At least two response categories are required.
In addition: Warning messages:
1: Rows containing NAs were excluded from the model.
2: In max(out$Y) : no non-missing arguments to max; returning -Inf

Both y1 and y2 are copies of the same integer-valued variable y (I was not allowed to use the same response variable twice, even though I use it on different subsets). On both subsets sub1 and sub2 they take values in {1,2,3}, so it seems that there are at least two response categories.
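For context, the model described above presumably looks something like the following sketch (the formula `bform1` was not shown, so the predictors, subset indicators, and the ordinal family are assumptions):

```r
# Hypothetical reconstruction of bform1: two sub-models on the duplicated
# response, each restricted to its own subset via the subset() addition term.
bform1 <- bf(y1 | subset(sub1) ~ x1, family = cumulative()) +
  bf(y2 | subset(sub2) ~ x2, family = cumulative()) +
  set_rescor(FALSE)
```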

What am I doing wrong?


The problem is that brms still omits all rows with NAs in any variable used anywhere in the model.
In other words, your predictor variables must not be NA even for observations not actually used in the sub-model. I will add an informative error message for this.
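One workaround, sketched below under the assumption that `x2` is a predictor used only in the second sub-model and `sub2` is its logical subset indicator (both names are assumptions): fill the predictor NAs on rows outside the subset with an arbitrary placeholder. Those values are never used, because the rows are excluded via `subset()`, but they keep brms from dropping the rows.

```r
# Replace NAs in x2 with a dummy value for rows outside sub2; these rows
# are excluded from the second sub-model anyway, so the value is inert.
d2$x2[is.na(d2$x2) & !d2$sub2] <- 0
```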

The error message is pretty informative about what the problem is. For models including subsets, you need to compute loo/waic per response variable and can’t do it jointly (because they are not based on the same observations).

Thanks for the swift response! I would have thought that a “joint” loo/waic could be computed anyway by simply adding the log-likelihoods of the two sub-models. If not, how would I compute loo/waic per response from the multivariate model, without having to run separate models again?

Thank you. Yes, I could figure out what to do from the error message and consulting brms.pdf, so there was no need for me to take up your time. Sorry!

Having looked up the definitions again (in Gelman, Hwang & Vehtari 2013), waics are just point-wise sums of log-“average-likelihoods”, where the averages are taken over the posterior. Then an “effective number of parameters” is subtracted and the whole thing is multiplied by -2.
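In symbols (following Gelman, Hwang & Vehtari 2013, with posterior draws $\theta^{(s)}$, $s = 1, \dots, S$), this reads:

```latex
\mathrm{lppd} = \sum_{i=1}^{n} \log\!\Bigg(\frac{1}{S}\sum_{s=1}^{S} p\big(y_i \mid \theta^{(s)}\big)\Bigg),
\qquad
p_{\mathrm{WAIC}} = \sum_{i=1}^{n} \operatorname{Var}_{s}\!\Big(\log p\big(y_i \mid \theta^{(s)}\big)\Big),
\qquad
\mathrm{WAIC} = -2\,\big(\mathrm{lppd} - p_{\mathrm{WAIC}}\big).
```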

It then seems reasonable to me to simply add the waics of the two sub-models and call the result the “joint” waic and use that to compare multivariate models with different sets of predictors (but the same y-values). Wouldn’t you agree?

If you are careful with adding the right things, it can be meaningful, but I don’t want to automate this in brms (yet). You will have to work on the (pointwise) results returned by loo/waic etc. yourself to get the desired results.
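A minimal sketch of that manual combination, assuming a fitted multivariate model `m1` with responses `y1` and `y2` and using the `resp` argument of brms’s `loo()` method (the elpd scale is used here; multiply by −2 for the deviance scale):

```r
# Per-response loo, computed from the single multivariate fit.
loo1 <- loo(m1, resp = "y1")
loo2 <- loo(m1, resp = "y2")

# Because the two sub-models are based on disjoint observations, a
# "joint" elpd is just the sum of the two estimates, and the standard
# errors combine in quadrature rather than pointwise.
elpd_joint <- loo1$estimates["elpd_loo", "Estimate"] +
  loo2$estimates["elpd_loo", "Estimate"]
se_joint <- sqrt(loo1$estimates["elpd_loo", "SE"]^2 +
                 loo2$estimates["elpd_loo", "SE"]^2)
```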