Testing an "omnibus" effect of a variable across the output categories

I think mu2_Injury1 + mu3_Injury1 + mu4_Injury1 = 0 “tests” the null hypothesis that the *sum* of these 3 parameters is zero. This is not the same as the joint null hypothesis mu2_Injury1 = 0 *and* mu3_Injury1 = 0 *and* mu4_Injury1 = 0, which I guess is what you mean by an omnibus test. In principle, you could inspect the marginal posteriors of these 3 parameters (summarized in your output above under “Population-Level Effects”), which basically means using

hypothesis(modelname, c(
  "mu2_Injury1 = 0",
  "mu3_Injury1 = 0",
  "mu4_Injury1 = 0"
))

but that doesn’t include a multiplicity adjustment, so it doesn’t take into account that you’re running 3 separate “tests” and want to control the “family-wise error rate”, i.e., the probability of at least one false-positive “test” result among them. As far as I know, brms doesn’t offer a multiplicity adjustment. I’m not sure, but I think a very conservative multiplicity adjustment could be achieved by constructing simultaneous 95% posterior intervals based on a 95% highest posterior density (HPD) region. Basically, the idea is as follows (here illustrated with the “Eight schools” example from here, so the content of schools.stan has to be copied over from there):

library(rstan)
options(mc.cores = parallel::detectCores())
rstan_options(auto_write = TRUE)

schools_dat <- list(J = 8,
                    y = c(28,  8, -3,  7, -1,  1, 18, 12),
                    sigma = c(15, 10, 16, 11,  9, 11, 10, 18))

fit <- stan(file = "schools.stan", data = schools_dat, seed = 2098606L)
lvl <- 0.95
mat <- as.matrix(fit)
# Threshold the log posterior density ("lp__") at its 5% quantile, so that
# the draws above the threshold approximate a 95% HPD region:
lp_q <- quantile(mat[, "lp__"], probs = 1 - lvl)
mat <- cbind(mat, "in_itvl" = mat[, "lp__"] >= lp_q)
# Names of the parameters to adjust:
pars_adj <- paste0("eta[", seq_len(4), "]")
# Adjusted (simultaneous) 95% posterior intervals: per parameter, the range
# of the draws falling inside the HPD region:
for (pars_adj_i in pars_adj) {
  message(pars_adj_i)
  print(range(mat[as.logical(mat[, "in_itvl"]), pars_adj_i]))
}

Note that these simultaneous 95% posterior intervals, which are based on the lp__ values, are very conservative: they take *all* parameters into account and they “frame” the HPD region with an axis-aligned box. But they might be helpful for a sensitivity analysis nonetheless. To focus on a subset of the parameters (i.e., on a multivariate marginal posterior), a generic multivariate kernel density estimation (as implemented, e.g., in Compositional::mkde()) would have to be applied to the draws of that subset of parameters, but that might introduce unstable density estimates, especially in the tails, which is exactly where you are interested in (see this post for the univariate case). Furthermore, HPD intervals (in general) are not invariant with respect to parameter transformations (see this post and this thread for the univariate case). Finally, for the intervals I calculated above, a multimodal posterior would add even more conservativeness, as the resulting intervals would “bridge” the gap between the modes.
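To make the subset idea concrete, here is a self-contained sketch of what I mean, with synthetic draws standing in for the posterior draws of the subset and a hand-rolled Gaussian product-kernel density estimate instead of Compositional::mkde() (just to keep it runnable; the parameter names par1 and par2 are made up):

```r
set.seed(42)
# Synthetic "posterior draws" for a 2-parameter subset:
draws <- cbind(par1 = rnorm(1000, mean = 1, sd = 0.5),
               par2 = rnorm(1000, mean = -0.5, sd = 0.8))

# Gaussian product-kernel density estimate, evaluated at the draws themselves
# (per-dimension bandwidths via Silverman's rule of thumb):
kde_at_draws <- function(x) {
  n <- nrow(x)
  d <- ncol(x)
  h <- apply(x, 2, sd) * (4 / ((d + 2) * n))^(1 / (d + 4))
  sapply(seq_len(n), function(i) {
    z <- sweep(sweep(x, 2, x[i, ], "-"), 2, h, "/")
    mean(exp(-0.5 * rowSums(z^2))) / (prod(h) * (2 * pi)^(d / 2))
  })
}

dens <- kde_at_draws(draws)
lvl <- 0.95
# Draws inside the (estimated) 95% HPD region of the subset:
in_hpd <- dens >= quantile(dens, probs = 1 - lvl)
# Simultaneous 95% posterior intervals for the subset: per parameter,
# the range of the draws falling inside the HPD region:
apply(draws[in_hpd, ], 2, range)
```

In practice, draws would be the posterior draws of the parameters of interest (e.g., selected columns of as.matrix(fit)), and the KDE step is exactly where the tail instability mentioned above can come in.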

Some acknowledgments, even though I don’t know whether my suggestion above is actually helpful: I got the idea from an answer on this site.