Hi

I hope my response will be of help to you. Please see the table of parameters from a multiple regression fitted with brms:

```
          Estimate Error L-95% CI U-95% CI Eff.Sample Rhat
Intercept     5.89  1.15     3.70     8.12        400    1
X1            0.26  0.04     0.19     0.34        400    1
X2            0.05  0.02     0.02     0.08        400    1
```

The next point, about the underlying probability distribution, is important to keep in mind. In my model, the posterior distributions of X1, X2, and the Intercept are assumed to be Normal, and I verified (through the parameter plots) that they meet that assumption.

The ratio of the Estimate to its Error (for the Intercept it is 5.89 / 1.15 = 5.12; for X1 it is 6.5, and for X2 it is 2.5) gives an approximate Z-score for the parameter. If you look at the parameter plots in your output, you will hopefully notice (assuming an underlying Normal distribution) an approximate bell / normal curve. I do not know how to upload a picture of my parameter plots. For a perfect Normal distribution, the two-sided cut-off at the 95% level (p = 0.05) is 1.96. In my example, you can see that the ratio of Estimate / Error exceeds that cut-off for all three parameters.
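The manual Z-score computation above can be sketched in plain Python with only the standard library (the numbers are the illustrative ones from my table; brms itself does not report these p-values, this is just the hand calculation):

```python
import math

# Illustrative estimates and errors taken from the table above
params = {"Intercept": (5.89, 1.15), "X1": (0.26, 0.04), "X2": (0.05, 0.02)}

def normal_sf(z):
    """Upper-tail probability of a standard Normal (stdlib only)."""
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

for name, (estimate, error) in params.items():
    z = estimate / error                 # approximate Z-score
    p = 2.0 * normal_sf(abs(z))          # two-sided tail probability
    print(f"{name}: z = {z:.2f}, p = {p:.4f}")
```

All three Z-scores come out above 1.96, matching the eyeball check against the 95% cut-off.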

In addition to these two metrics, brms also gives you the L-95% CI and U-95% CI, the lower (L) and upper (U) bounds of the 95% credible interval. In the table above you can see that none of the three intervals includes 0 (zero). This is further evidence, based on the posterior samples drawn, that the combination of each estimate and its error yields a coefficient that is significant at the p = 0.05 level.
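The interval check above is just a quantile computation on the posterior draws. Here is a minimal sketch using hypothetical draws simulated from X2's estimate and error (with a real fit you would pull the actual draws out of the brms model object instead of simulating them):

```python
import random

random.seed(1)

# Hypothetical posterior draws for X2: Normal with the table's estimate and error
draws = sorted(random.gauss(0.05, 0.02) for _ in range(4000))

lower = draws[int(0.025 * len(draws))]   # 2.5% quantile -> L-95% CI
upper = draws[int(0.975 * len(draws))]   # 97.5% quantile -> U-95% CI
zero_excluded = not (lower <= 0.0 <= upper)
print(f"95% interval: [{lower:.3f}, {upper:.3f}]; excludes zero: {zero_excluded}")
```

Because the draws are centred at 0.05 with a standard deviation of 0.02, the lower bound lands near 0.01 and zero falls outside the interval, just as in the table.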

The key aspects are that the distributions are assumed to be Normal and that the posterior draws of each parameter are close to Normal, so that the Z-score can be computed manually. As you gain more understanding of Bayesian approaches, you will hopefully see that the frequentist notion of a p-value has only limited value. However, there is such a concept as a Bayesian p-value, as explained in this paper by Andrew Gelman: Bayesian p-value

Sree