Posterior random effects distribution

I’m interested in plotting the density of the random-effects distribution. A very similar topic (sampling from the random-effects distribution) is discussed in some depth here:

Basically, there would be two approaches: (1) draw from a normal distribution with the estimated standard deviation sigma, or (2) draw random samples from the empirical Bayes estimates of the random effects in your data.
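To make the two approaches concrete, here is a toy sketch in Python with NumPy (a stand-in for the R/brms workflow); `sigma_hat` and `eb_estimates` are placeholder values for what you would extract from a fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma_hat = 0.7                        # placeholder: estimated SD of the random effects
eb_estimates = rng.normal(0, 0.5, 30)  # placeholder: per-group effect estimates

# Approach (1): draw from the assumed normal random-effects distribution
draws_normal = rng.normal(0.0, sigma_hat, size=1000)

# Approach (2): resample (bootstrap) the per-group estimates themselves
draws_eb = rng.choice(eb_estimates, size=1000, replace=True)

print(draws_normal.std(), draws_eb.std())
```

The spread of `draws_eb` is governed entirely by the spread of the per-group estimates, which is why the two approaches can disagree.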

However, both of these have downsides: (1) sampling from the normal distribution does not incorporate any information from the data about whether the random effects might follow a somewhat different distribution; (2) random samples from the EB estimates are affected by shrinkage and therefore less variable than sigma would suggest.

Is there a way to sample from the latent random-effects distribution (i.e., not shrunk) that still represents a posterior (i.e., incorporates information from the data about the actual distribution)? This would be ideal for visualizing posterior random-effects distributions.

To clarify: let’s say the “real” random intercept has a bimodal distribution with a variance of 0.5 in the population. The first approach would return a normal distribution with var = 0.5 (i.e., miss the bimodality), and the existing RE estimates would return a bimodal distribution shrunk towards the average with var < 0.5 (due to shrinkage). Is there a way to visualize an estimate of the latent distribution (bimodal, var = 0.5)?
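This scenario can be simulated directly. The following toy example (plain NumPy, not brms; the classic normal–normal shrinkage estimator stands in for fitted random effects) builds a bimodal latent distribution with variance ≈ 0.5 and shows that the shrunken per-group estimates end up with smaller variance:

```python
import numpy as np

rng = np.random.default_rng(1)

n_groups, n_obs, noise_sd = 200, 5, 1.0

# Bimodal "real" random intercepts: two modes at ±0.68, each with SD 0.2
# => total variance ≈ 0.68**2 + 0.2**2 ≈ 0.5
b_true = rng.normal(0, 0.2, n_groups) + rng.choice([-0.68, 0.68], n_groups)

# Observed group means (each group averages n_obs noisy observations)
y_bar = b_true + rng.normal(0, noise_sd / np.sqrt(n_obs), n_groups)

# Normal-normal shrinkage estimator: pull group means toward zero
tau2 = b_true.var()          # between-group variance
s2 = noise_sd**2 / n_obs     # sampling variance of a group mean
b_shrunk = y_bar * tau2 / (tau2 + s2)

print(b_true.var(), b_shrunk.var())  # shrunken variance is smaller
```

Plotting `b_shrunk` would still show two modes here, but pulled toward zero and with var < 0.5, matching the shrinkage concern above.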

I think there might be some confusion about how brms, and Stan in general, work.

“Empirical Bayes” is the name of a family of computational methods that are, however, not used in brms. I guess you meant using the fitted model coefficients?

I don’t think there is a method that would fulfill your requirements directly. The main problem, IMHO, is that the standard random-effects formulation used in brms assumes the distribution of random effects is normal. If that assumption is wrong, your inferences might be problematic. I think the general pattern you are implicitly using here, i.e. “fit the data assuming P, then analyze the posterior as if P is false”, is incoherent for most use cases. It is possible that in specific cases this would yield sensible answers, but I don’t think there are any guarantees it would do so generally.

A better approach would IMHO be to check (e.g. via posterior predictive checks) that the assumption of normality of the random effects is not grossly violated. If it is not, you can use the fitted sigma of the random effects as a good summary. However, if the assumption is violated, you probably should change your model to accommodate this rather than try to salvage the situation by analysing the fitted random effects.
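One cheap, rough check along these lines (again a Python sketch, not the brms API; `re_hat` is a placeholder for the fitted random effects you would extract from the model) is to look at the sample skewness and excess kurtosis of the fitted effects, both of which should be near zero under normality:

```python
import numpy as np

rng = np.random.default_rng(2)
re_hat = rng.normal(0, 0.5, 200)  # placeholder: fitted random effects

# Standardize, then compute moment-based normality diagnostics
z = (re_hat - re_hat.mean()) / re_hat.std()
skew = (z**3).mean()                # ~0 for symmetric distributions
excess_kurt = (z**4).mean() - 3.0   # ~0 for normal; markedly negative for bimodal

print(skew, excess_kurt)
```

A strongly bimodal distribution would typically show up as clearly negative excess kurtosis; a full graphical posterior predictive check (e.g. with bayesplot in R) is of course more informative than two summary numbers.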

Additionally, I don’t think there is any guarantee that if the “real” distribution is bimodal, the fitted random effects will also show bimodality or, more broadly, represent the “real” distribution well. That depends both on how much data you have for each level of the random effect (with little data, the hyperprior might smooth out all the “real” structure) and on how many levels the random effect has (with few levels, the fitted random effects will not provide much information about the “real” distribution).

Does that make sense?

Best of luck with your model!