I am trying to compute weighted residuals for my hierarchical Bayesian model, and for that I think I need to calculate the covariance between data points within an individual's observations. I am considering calculating the standard deviation of the residuals by subtracting each posterior draw of my predicted value from y, the dependent variable, and taking the standard deviation, doing this for every y value (I think this was recommended in another post, though I am not sure it applies here), and then dividing the residuals by these standard deviations. Would this be an acceptable way to compute weighted residuals? I have come across little information on Bayesian residual diagnostics and would like to implement them in RStan.

There is a lot about diagnostics in **shinystan** and **bayesplot**, none of which addresses what you are talking about. I would emphasize that if you are subtracting a posterior predictive draw from an observed value, that is considered an error rather than a residual (a residual being the difference between an observed value and its conditional mean). I have a hard time imagining a situation where the weighting scheme you propose makes much sense from a Bayesian perspective, but you can do it.
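To make the distinction concrete, here is a minimal numpy sketch (in Python rather than R, with simulated stand-ins for the posterior draws; all names and data are made up, not from the original model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for RStan output: posterior draws of the conditional mean mu
# for each observation, shaped (draws, observations). Illustrative only.
n_draws, n_obs = 4000, 50
y = rng.normal(0.0, 1.0, size=n_obs)                    # observed data
mu_draws = rng.normal(0.0, 0.1, size=(n_draws, n_obs))  # posterior draws of the mean

# Residuals: each observation minus its (estimated) conditional mean,
# so one residual per observation.
residuals = y - mu_draws.mean(axis=0)

# Errors: each observation minus a single posterior draw of the mean,
# so one error per draw per observation.
errors = y - mu_draws

print(residuals.shape)  # (50,)
print(errors.shape)     # (4000, 50)
```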

Thanks for your reply, it was most helpful.

I have tried plotting the weighted residuals (computed the way I suggested) against the model predictions (the expected mean rather than a posterior draw), but I am not sure whether the plot can be interpreted the same way as in a frequentist setting, or whether this is the 'correct' way to compute weighted residuals. I have seen weighted residuals used in the literature, but it is not clear to me how the weights were constructed. What would you suggest is the best way to compute them? I am also having some trouble understanding their utility in a Bayesian setting. Are there any other residual plots that can be used to check for violations of model assumptions, such as normality of the residuals or the constant-variance assumption?
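For reference, the computation I described amounts to something like the following (an illustrative numpy sketch with fake data and hypothetical names, not my actual RStan code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for posterior draws of the fitted values from RStan.
n_draws, n_obs = 4000, 50
y = rng.normal(0.0, 1.0, size=n_obs)
y_pred_draws = rng.normal(0.0, 0.5, size=(n_draws, n_obs))

# The scheme described above: form the per-observation errors across draws,
# take their standard deviation, then scale each residual by that SD.
errors = y - y_pred_draws              # (n_draws, n_obs)
sd_per_obs = errors.std(axis=0)        # one SD per observation
weighted_resid = (y - y_pred_draws.mean(axis=0)) / sd_per_obs

print(weighted_resid.shape)  # (50,)
```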

I would do diagnostics with the unweighted errors.
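For example, two simple checks on the unweighted errors can be sketched like this (numpy, with simulated stand-ins for the posterior draws; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical posterior draws of the conditional mean, standing in for RStan output.
n_draws, n_obs = 4000, 50
y = rng.normal(0.0, 1.0, size=n_obs)
mu_draws = rng.normal(0.0, 0.1, size=(n_draws, n_obs))

errors = y - mu_draws  # unweighted errors, one set per posterior draw

# 1) Center: the mean error per observation should scatter around zero.
# 2) Spread: the SD of the errors should be roughly constant across
#    observations if a constant-variance assumption is reasonable.
mean_err = errors.mean(axis=0)  # plot these against fitted values
sd_err = errors.std(axis=0)     # look for systematic trends here

print(mean_err.shape)  # (50,)
print(sd_err.shape)    # (50,)
```

These summaries are what you would then plot against the posterior mean predictions, analogously to a frequentist residual-versus-fitted plot.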