I would like to calculate the uncertainty of the posterior mean at an arbitrary quantile, if that makes sense.

(the goal is to check whether two posteriors have different means, similar to a t-test, if you will)

I understand that mean_se communicates the uncertainty of the posterior mean, but I would like to calculate it to arbitrary precision.

If you’re modeling the two things separately I guess you’ve already assumed the two posteriors are independent.

I don’t know t-tests and differences of means and whatnot, but you could just do comparisons between the posterior samples to see what’s going on.

Maybe like:

```
# proportion of draws where a from fit 1 exceeds a from fit 2
sum(fit1_draws[, "a"] > fit2_draws[, "a"]) / nrow(fit1_draws)
```

But wouldn’t it be clearer to just report intervals for both estimates? This greater-than comparison seems like it might be harder to interpret.
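For concreteness, here's a minimal sketch of that sample-based comparison, written in Python/NumPy rather than R, with simulated normal draws standing in for the two models' posterior draws (the means and scales below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for posterior draws of parameter "a" from two separately fitted
# models (assumed independent, per the discussion above).
a1 = rng.normal(loc=1.0, scale=0.5, size=4000)
a2 = rng.normal(loc=0.8, scale=0.5, size=4000)

# Monte Carlo estimate of Pr(a1 > a2): the fraction of paired draws
# where the first model's parameter exceeds the second's.
p_greater = np.mean(a1 > a2)
print(p_greater)
```

Because the draws are paired at random, this is just the proportion of comparisons that come out true, which is what the R one-liner above computes.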

The posteriors will probably have different means. Assuming the posteriors have means, that’s something we can compute to infinite precision (and it’s basically guaranteed one will be greater than the other). I don’t think this comparison is what you want.

What do you mean by arbitrary quantile?

mcse_mean is

```
sd(sample) / sqrt(ess(sample))
```

Are you asking about the uncertainty of that value?
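To see what that formula estimates, here's a quick simulation sketch (in Python/NumPy for illustration). It assumes independent draws, so the effective sample size is just the number of draws; the MCSE of the mean should then match the empirical spread of the posterior mean across many replicate samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 2000

# One set of independent "posterior" draws; with no autocorrelation,
# ess(sample) reduces to n_draws.
sample = rng.normal(loc=0.0, scale=1.0, size=n_draws)
mcse = sample.std(ddof=1) / np.sqrt(n_draws)

# Empirical check: the standard deviation of the mean over many
# replicate samples of the same size.
rep_means = rng.normal(loc=0.0, scale=1.0, size=(2000, n_draws)).mean(axis=1)
print(mcse, rep_means.std(ddof=1))  # both ≈ 1/sqrt(2000) ≈ 0.0224
```

With correlated MCMC draws, ess(sample) would be smaller than n_draws and the MCSE correspondingly larger; that's the only place autocorrelation enters the formula.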

Thanks a lot for your answers. I realised that comparing the means of the posteriors, rather than the posteriors themselves, did not make a lot of sense.

Still, counting the greater-than samples could be a good idea. The reason we sometimes have to explicitly report the probability of one element being greater than another (or greater than zero) is to summarize the data enough for non-statisticians, who would not easily be able to make that judgment themselves across many elements.