Hierarchical model causing too much shrinkage?

I am using population-wide, 12-year data to examine regional differences in rehabilitation use (a zero-inflated, lognormally distributed outcome variable).

I analysed the outcome variable with two models:

library(brms)
# m1: probability of any rehabilitation use (all observations); m2: amount of use among positive values
m1 = brm(y ~ 1 + (1 | region), data = all_data, family = bernoulli())
m2 = brm(y ~ 1 + (1 | region), data = positive_values_data, family = lognormal())

I then compared these models’ findings with estimates calculated from the raw data.

As you can see, shrinkage is clearly visible, especially for region A on the y-axis. Interestingly, this is one of the largest regions in the data. The same happens when I use a wider prior on the SD of the intercept.

As I have population-wide data and background knowledge that this region does better than the others, should I avoid using a hierarchical model for the analysis? Without shrinkage (raw data) there are 3-fold regional disparities, but with shrinkage only 2-fold disparities.

Any thoughts? Any recommendations?

If you can operationalize your knowledge about the quality of the measurement in the different regions as a statistical distribution, then you can fit a measurement error model. You can retain the hierarchical component, but estimates will tend to shrink towards the more certain measurements. If your knowledge about measurement quality can be operationalized as a Gaussian, then you can do this entirely within brms! The preferred way is with mi() (see the mi help page in brms), though it remains somewhat more straightforward via me() (see the me help page in brms).

Alternatively, it is possible that you really do have too much shrinkage in your random effect. For example, if the true distribution over groups is a Student-t distribution, then a Gaussian random effect provides too much shrinkage in the tails. t-distributed random effects are also achievable in brms; see @paul.buerkner's comment from 13 May 2018 in the GitHub issue "Specify the RE distribution?" (paul-buerkner/brms #231).
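
A minimal sketch of that alternative, assuming your lognormal model m2 (untested against your data): brms exposes Student-t group-level effects through the dist argument of gr().

library(brms)

# Student-t distributed region intercepts shrink extreme regions less
# than the default Gaussian; `dist = "student"` is the gr() option that
# grew out of issue #231.
m2_t <- brm(
  y ~ 1 + (1 | gr(region, dist = "student")),
  data   = positive_values_data,
  family = lognormal()
)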


Oops, sorry! Brain fart in the above post! Known measurement error in the response is handled in brms via resp_se, not me or mi (those are for measurement error in the predictors!).
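
For what it's worth, a minimal sketch of what that could look like with resp_se, assuming the regional estimates can be summarized as one value per region with a known standard error (the data frame and column names here are hypothetical):

library(brms)

# Hedged sketch: se() (the resp_se addition term) attaches a known standard
# error to each response. It requires a family with an additive error term,
# so the outcome here is a per-region summary (e.g. a log-scale mean and its
# standard error), not the raw individual-level data.
# `region_summaries`, `log_mean_use`, and `se_log_mean` are hypothetical names.
m_me <- brm(
  log_mean_use | se(se_log_mean, sigma = TRUE) ~ 1 + (1 | region),
  data   = region_summaries,
  family = gaussian()
)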

Thanks! But how could I specify a prior for only one particular level of the grouping variable, namely region A? This would correct the shrinkage. What would the code for this look like?

The random effect is the prior on the regions, so unless I am misunderstanding you, you are asking about taking region A out of the random effect? Then it won’t be informed by the other groups, but it also won’t be able to inform the other groups. Is that what you intend? I’m pretty sure you can achieve this in a principled way in brms, but I’ll need to think about how. The hackish way to do this, if you want a very weak prior on A, is to add a fixed effect for membership in group A with a very weak prior (see the sketch below). This will prevent A from sharing information with the other regions via the random-effect term, but requires that you use a (potentially unreasonably) weak prior on A.
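
A minimal sketch of that hackish approach for the lognormal part of your model (the indicator column is_A and the prior scale of 10 are placeholders, not recommendations):

library(brms)

# Indicator for membership in region A (hypothetical column name)
positive_values_data$is_A <- as.integer(positive_values_data$region == "A")

# Fixed effect for region A with a very weak prior, alongside the usual
# random intercept; A's deviation is mostly absorbed by the fixed effect
# rather than being pooled towards the other regions.
m2_fixedA <- brm(
  y ~ 1 + is_A + (1 | region),
  data   = positive_values_data,
  family = lognormal(),
  prior  = set_prior("normal(0, 10)", class = "b", coef = "is_A")
)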
