Per Esteban’s original question: I doubt there is an implementation of design-adjusted, model-based estimation in brms that is as simple to use as the design-based survey models. My understanding is that design-based estimators do not assume an underlying distribution of values across design variables and so are “automatic” in a sense. Per Roderick Little, “Design weighting… has the virtue of simplicity, and by avoiding an explicit model it has an aura of robustness to model misspecification” (“Comment: Struggles with Survey Weighting and Regression Modeling,” 2007, p. 1).
By contrast, model-based analyses require assumptions about the relationship between the outcome and the design variables: you must specify a model for the design variables. Currently, there is no single, automatically applicable model that gives the “best” estimates in all cases. See “Bayesian Nonparametric Weighted Sampling Inference” by Si, Pillai, and Gelman (2015) for a comparison of different models for the sampling weights.
Guido stated that weighting the likelihood (yielding a pseudolikelihood) is “not considered fully Bayesian.” This is because it is not “generative,” i.e. it does not correspond to “a proper probability distribution for the parameters” (Bayesian Data Analysis, Gelman et al., 3rd ed., p. 336). Bob Carpenter wrote up a nice illustration of this issue. One solution is to estimate parameters within cells (ideally with shrinkage) and post-stratify on known cell counts (MRP). The alternative is to model the relationship between the design variables and the outcome (see again the article by Si, Pillai, and Gelman, above; or Chapter 8 of BDA).
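To make the post-stratification step concrete, here is a minimal sketch of the arithmetic with made-up numbers: given per-cell estimates (which would come from the multilevel model in MRP) and known population cell counts, the population estimate is the count-weighted average of the cell estimates. The cell labels, estimates, and counts below are all hypothetical.

```python
# Toy post-stratification (the "P" in MRP). The per-cell estimates would in
# practice be posterior summaries from a multilevel model; here they are
# made-up numbers, as are the population cell counts.

cell_estimates = {"18-29": 0.62, "30-49": 0.55, "50+": 0.41}  # hypothetical outcome per cell
cell_counts = {"18-29": 2000, "30-49": 3500, "50+": 4500}     # hypothetical known population counts

N = sum(cell_counts.values())
poststratified = sum(cell_estimates[c] * cell_counts[c] / N for c in cell_counts)
print(round(poststratified, 4))  # count-weighted average of the cell estimates
```

The point is that the weighting happens through known cell counts at the aggregation stage, not through a weighted likelihood.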
And, as Corey’s RPubs example shows, weighting the likelihood doesn’t produce the same standard errors as the design-based survey estimates. Instead, it produces essentially the same standard errors as the model-based frequentist analysis in lmer. In that sense, weighting alone is not sufficient to get standard errors wide enough for accurate coverage.
Note that the scale of the weights matters when estimating models in Stan. This is not true of frequentist analyses other than population totals (see page 100 of “Applied Survey Data Analysis” by Heeringa, West, and Berglund, 2017). This is important to know in general, since weights oftentimes represent the number of units in the population. In Corey’s example, the weights are scaled to sum to N prior to estimation in brms, so the results are comparable to those from lmer.
One “non-Bayesian” workaround may be to estimate a design effect using the survey package and then scale the weights down accordingly (e.g. if the design effect is 2, have the weights sum to N/2). This is, in theory, similar to the post-estimation adjustment approach in frequentist methods, where you scale the standard errors according to the design effect. From experience, this produces results similar to estimates from the survey package, but it never perfectly replicates them. I haven’t seen a mathematical proof that this works, nor have I seen it suggested elsewhere, so proceed with caution.
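For what it’s worth, here is the arithmetic of that workaround with toy numbers. In practice the design effect would be estimated with the survey package; here both the weights and the design effect are made up.

```python
# Sketch of the design-effect workaround: shrink the weights so they sum to
# N / deff instead of N. Both the weights and deff are hypothetical; in
# practice deff would be estimated (e.g. with the survey package in R).

weights = [150.0, 300.0, 50.0, 500.0]  # hypothetical weights summing to N = 1000
deff = 2.0                             # hypothetical design effect

N = sum(weights)
adjusted = [w / deff for w in weights]  # new weights sum to N / deff
print(sum(adjusted))  # 500.0, i.e. N / deff
```

Dividing each weight by the design effect mirrors the frequentist post-estimation adjustment, where standard errors are inflated by the square root of the design effect; here the same correction is pushed into the weights before estimation instead.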