Unbiased variance of fixed effect parameters

Hi,

I have a very general question that is more about Bayesian statistics than Stan per se, but this is the best forum I can think of with skilled people who might answer it.

Imagine that we have a situation where we want to estimate the variance (in the response) arising from the fixed effect parameters (i.e. not the “explained variance”, which would be the variance arising from the estimators).

To be more precise, this is in a biological setting where we are analysing a phenotypic trait \mathbf{y} using linear modelling (technically, we are using a random-slope model, but my question stands for this simpler case), such that:

\mathbf{y} = \mathbf{X}\mathbf{\beta} + \mathbf{e},

where \mathbf{X} is the design matrix, \mathbf{\beta} is the vector of parameters of the model and \mathbf{e} is the vector of residuals. So, nothing out of the ordinary.
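For concreteness, here is a minimal simulated instance of this setting in R (all values are hypothetical, just to fix notation):

```r
# Minimal simulated example of the setting (values are hypothetical)
set.seed(42)
n <- 200
x <- rnorm(n)                                  # a single covariate
X <- cbind(1, x)                               # design matrix with intercept
beta <- c(0.5, 0.3)                            # "true" fixed effects
y <- as.vector(X %*% beta) + rnorm(n, sd = 1)  # add residuals
var(as.vector(X %*% beta))                     # the target, V(X beta)
```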

As I mentioned, we are interested in the variance arising from the fixed effect parameters, i.e. \text{V}(\mathbf{X}\mathbf{\beta}). Of course, we do not have access to the real \mathbf{\beta}, only an estimate of it, \hat{\mathbf{\beta}}. So, since:

\hat{\mathbf{\beta}} = \mathbf{\beta} + \tilde{\mathbf{\beta}},

where \tilde{\mathbf{\beta}} is the error in estimating \mathbf{\beta}, we know that \text{V}(\mathbf{X}\hat{\mathbf{\beta}}) is a biased measure of \text{V}(\mathbf{X}\mathbf{\beta}). This is a long-standing problem in, e.g., quantitative genetics, when relying on an ANOVA model to estimate a between-group variance component.
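Spelling the bias out (a standard expansion, assuming the estimation error \tilde{\mathbf{\beta}} has zero mean and is independent of \mathbf{X}):

\text{E}[\text{V}(\mathbf{X}\hat{\mathbf{\beta}})] = \text{V}(\mathbf{X}\mathbf{\beta}) + \text{tr}(\mathbf{\Sigma_{X}} \mathbf{S}_{\beta}),

with \mathbf{\Sigma_{X}} and \mathbf{S}_{\beta} defined just below, so the naive quantity is biased upward by exactly the trace term that the fix below subtracts.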

In a frequentist setting, there is a fix for that. Using \mathbf{\Sigma_{X}}, the variance-covariance matrix of \mathbf{X}, and \mathbf{S}_{\beta}, the sampling variance-covariance matrix of \hat{\mathbf{\beta}} (i.e. the matrix that contains the squared standard errors of, and covariances between, the elements of \hat{\mathbf{\beta}}), an unbiased estimator of \text{V}(\mathbf{X}\mathbf{\beta}) is:

\text{V}(\mathbf{X}\hat{\mathbf{\beta}}) - \text{tr}(\mathbf{\Sigma_{X}} \mathbf{S}_{\beta}).
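As a quick sketch of this correction in R (continuing the simulated example above, with lm()'s vcov() playing the role of \mathbf{S}_{\beta}):

```r
# Frequentist fix, continuing the simulated example above
fit <- lm(y ~ x)
beta_hat <- coef(fit)
S_beta  <- vcov(fit)                    # sampling (co)variance of beta_hat
Sigma_X <- cov(X)                       # (co)variance of the columns of X
v_naive <- var(as.vector(X %*% beta_hat))          # biased upward
v_fixed <- v_naive - sum(diag(Sigma_X %*% S_beta)) # corrected estimate
```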

In a Bayesian setting, the problem is that the link between the “standard error” (i.e. the posterior standard deviation) and the sampling variance is lost (or rather weakened), due to the influence of the prior. The above “fix” should hold, at least approximately, for non-informative, flat priors, but it is expected to over-correct as the prior becomes more and more informative. I have checked this using simulated data and a simple linear model in brms.
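For reference, here is a sketch of that check (the model, default priors, and variable names are assumptions for illustration; it reuses the simulated data and \mathbf{\Sigma_{X}} from above):

```r
# Bayesian analogue of the naive estimate and the "fix", using brms
library(brms)
dat <- data.frame(y = y, x = x)
fit_b <- brm(y ~ x, data = dat, refresh = 0)
B <- as_draws_matrix(fit_b)[, c("b_Intercept", "b_x")]
beta_bar <- colMeans(B)          # posterior mean of beta
S_post <- cov(B)                 # posterior (co)variance, stand-in for S_beta
v_naive_b <- var(as.vector(X %*% beta_bar))
v_fixed_b <- v_naive_b - sum(diag(Sigma_X %*% S_post))
# With near-flat priors this tracks the frequentist correction; as the
# prior tightens, the subtraction over-corrects (see discussion above).
```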

So, my question is: is it possible to obtain an unbiased estimate of \text{V}(\mathbf{X}\mathbf{\beta}) in a Bayesian context, accounting for the impact of the prior distribution? If that estimate could come with a posterior distribution for \text{V}(\mathbf{X}\mathbf{\beta}), that would be ideal!

One more thing: I do realise that \text{V}(\mathbf{X}\hat{\mathbf{\beta}}) is a consistent estimator of \text{V}(\mathbf{X}\mathbf{\beta}) and that we should expect the bias to be relatively small at common sample sizes. However, this is for a general methodological paper, and I have no control over the sample sizes people out there will use, so I would like to provide as robust an estimator as possible. Also, since the “fix” above exists in a frequentist setting, it would be a shame not to provide something as robust in a Bayesian setting.

Anyhow, sorry for the long post, happy to hear any thoughts on the subject!

We’re pretty busy just answering Stan-related questions.

In an MCMC context, the standard error is the posterior standard deviation divided by the square root of the effective sample size; in practice we need to estimate both terms. The standard error goes to zero as the MCMC sample size increases.
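For example, with the posterior package (using stand-in draws for a single parameter):

```r
# Monte Carlo standard error of a posterior mean (posterior package)
library(posterior)
theta <- rnorm(4000)               # stand-in draws for one parameter
sd(theta) / sqrt(ess_basic(theta)) # posterior sd over sqrt(effective size)
mcse_mean(theta)                   # packaged estimate of the same quantity
```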

As with regularized estimators in frequentist stats (e.g., empirical Bayes or even just simple shrinkage), Bayesians are often OK trading bias for reduced variance. We’re usually focused on calibrated probabilistic prediction, specifically posterior predictive inference.