I’m aware of several threads on this forum discussing complex survey designs and what the weights argument is doing in brms (e.g., some insightful discussion here: Are complex surveys feasible in brms?).

I have some projects in which I would like to obtain population-level estimates for certain parameters. In frequentist approaches this can be achieved using survey weights and packages such as survey, with the svyglm function. I am fully aware that survey weights are not considered ‘fully Bayesian’, and I will *also* be running assessments using full Bayesian MRP. However, there are limits to what I can do with MRP, including correcting for sample biases on variables that I cannot combine into a workable post-stratification table. I also like to compute some follow-up quantities related to effect sizes that I cannot obtain with svyglm or with MRP, but could compute very nicely with some form of ‘weighted regression’ in brms.

The weights argument in brm is essentially a frequency weight: each observation’s log-likelihood is multiplied by its weight. Parameters estimated using survey weights (e.g., inverse probability weights) therefore have accurate point estimates, but their uncertainty is overly optimistic, because it is not corrected for the reduction in effective sample size that the weighting procedure implies.
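To make the effective-sample-size point concrete, here is a small sketch (names are illustrative, not from my simulation code) computing Kish’s approximate design effect for a set of weights. The gap between nominal and effective n is exactly what brm’s frequency weighting ignores:

```r
# Kish's approximate design effect: deff = n * sum(w^2) / sum(w)^2.
# It equals 1 for equal weights and grows as weights become more variable.
kish_deff <- function(w) {
  length(w) * sum(w^2) / sum(w)^2
}

set.seed(1)
w <- runif(1000, 0.2, 3)   # toy inverse-probability-style weights

deff  <- kish_deff(w)
n_eff <- length(w) / deff  # effective sample size, smaller than nominal n
```

With any unequal weights, deff > 1 and n_eff < n, so intervals computed as if we had the full nominal n will be too narrow.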

Based on a comment from @simonbrauer, I have experimented with ‘correcting’ the weights I feed into brms: I calculate a general design effect and then penalise the sample size by dividing the weights by that design effect. I present some simulations below comparing this approach with svyglm (which I take to be ‘correct’ from a frequentist standpoint) and with typical weighting in brms (which does not behave as desired). It seems to work quite nicely: the errors using the modified weights appropriately track those from proper survey-weighted analyses in svyglm. I also show that the approach ‘correctly retrieves’ the true population value just as often as svyglm, and far more often than typical brm weighting, as the sample diverges from the population by increasing amounts.
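For clarity, a minimal sketch of the kind of correction I mean (this is an assumed implementation for illustration; the exact code is in the GitHub repo linked below). The weights are normalized to mean 1 and then divided by Kish’s design effect, so they sum to the effective rather than the nominal sample size:

```r
# Shrink survey weights by the design effect so that sum(weights) ~= n_eff.
# adjust_weights() is an illustrative name, not from the original post.
adjust_weights <- function(w) {
  w_norm <- w / mean(w)  # normalize so the weights average 1 and sum to n
  deff   <- length(w_norm) * sum(w_norm^2) / sum(w_norm)^2
  w_norm / deff          # now sum(result) = n / deff = effective sample size
}
```

These adjusted weights can then be passed to brm in the usual way, e.g. `brm(y | weights(w_adj) ~ x, data = d)`, with the interval widths penalised accordingly.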

Although I understand this is not ‘fully Bayesian’, I am wondering to what extent it can be considered at least a reasonable approach. Note that I will often supplement these analyses with MRP estimates, so I won’t usually draw conclusions from these estimates in isolation - more of a multipronged approach to see how estimates converge across different methods.

Here is a plot showing how often the 95% intercept intervals contain the true parameter value across many simulations with different levels of bias (and hence weighting correction) in the data:

Here is a further plot showing that the 95% intervals under typical weighting are far too narrow, whereas the modified weighting approach closely tracks the width of the 95% confidence intervals from svyglm:

I’ve also uploaded all the code for these simulations, along with RData files so that you don’t have to spend time running the sims yourself, in case you would like to check how I have done this: GitHub - Jimbilben/Survey-Weighting-Simulation: This project looks at ways to calibrate brms weights to enable something similar to survey weights in a Bayesian or 'pseudo-Bayesian' regression model

I would really appreciate people’s thoughts on this. In particular, I know there are lots of people who are interested in or working on surveys at the moment, e.g., @Corey_Sparks @Guido_Biele @bgoodri @jonah @lauren @maxbiostat @maurosc3ner (apologies if this is egregious tagging, but your comments were all informative in previous posts about weighting/MRP!).

The key goal here is really to have access to the full Bayesian suite of posterior predictions, priors, and so on, without generating estimates that are excessively optimistic about the precision we can reach, by incorporating penalties based on more typical survey-weighting procedures - and from my simulations this does seem to work.