Hypothesis Tests with brms

Hey everyone!

What is the difference between manually computing some quantities from my model summary and doing the same thing through hypothesis testing? Since the sigmas of a Normal single-level distributional model are on the log scale, I have to compute exp(sigma_Intercept) and exp(sigma_Intercept + sigma_treatment1) to obtain the residual standard deviations of my two treatment groups. However, when I do the same thing via hypothesis testing, I get slightly different values. The hypothesis is the following: c("exp(sigma_Intercept) = 0", "exp(sigma_Intercept + sigma_treatment1) = 0").

Thank you in advance.

You should get the same results in both cases. If not, please provide a minimal reproducible example.

Let’s take the first simple example from the vignette Estimating Distributional Models with brms. Given the summary of the model, when I compute the two estimated residual standard deviations manually, I get exp(sigma_Intercept) = exp(-0.10) = 0.9048 and exp(sigma_Intercept + sigma_grouptreat) = exp(-0.10 + 0.79) = 1.993. On the other hand, when I run the corresponding hypothesis test, I get 0.92 and 2.02, respectively.
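For reference, the hypothesis() call I ran is roughly the following (using fit1 from that vignette):

hypothesis(fit1, c("exp(sigma_Intercept) = 0",
                   "exp(sigma_Intercept + sigma_grouptreat) = 0"))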

Any transformation should always happen on the posterior draws directly, not on a summary statistic. Only after the transformation should you summarize the results. For instance, the mean does not commute with non-linear transformations: in general, mean(exp(x)) is not equal to exp(mean(x)). Hence the results you see. The result of hypothesis() is correct.
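A quick illustration with simulated draws (not the model output), where x stands in for draws of sigma_Intercept on the log scale:

set.seed(1)
x <- rnorm(4000, mean = -0.1, sd = 0.2)  # stand-in draws on the log scale
exp(mean(x))  # transform the summary: about 0.90
mean(exp(x))  # summarise the transformed draws: about 0.92 (Jensen's inequality)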


In that case, how could we manually compute these two sigmas from the posterior draws without using the hypothesis test? I tried to do it manually by taking the mean of the posterior_samples for these two sigmas and then transforming them to their actual scale, and again I got 0.90 and 1.997.

That's the right way to go. You need to show the exact code you used, otherwise I can't tell you where your mistake is.

psamples <- posterior_samples(fit1, "^b")
sigmaINT <- psamples[, 2]                 # b_sigma_Intercept
mean(sigmaINT)                            # -0.097
exp(mean(sigmaINT))                       # exp(-0.097) = 0.90

sigmaGROUP <- psamples[, 4]               # b_sigma_grouptreat
mean(sigmaGROUP)                          # 0.789
exp(mean(sigmaINT) + mean(sigmaGROUP))    # exp(-0.097 + 0.789) = 1.99

First transform (via exp), then summarise (via mean), not the other way round.
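A minimal sketch of that order, assuming fit1 from the vignette and the usual brms parameter names b_sigma_Intercept and b_sigma_grouptreat:

psamples <- posterior_samples(fit1, "^b_sigma")
# control group: transform each draw, then summarise
mean(exp(psamples[, "b_sigma_Intercept"]))                                     # ~0.92, matches hypothesis()
# treatment group: transform the sum of the draws, then summarise
mean(exp(psamples[, "b_sigma_Intercept"] + psamples[, "b_sigma_grouptreat"]))  # ~2.02, matches hypothesis()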

Thank you for your help! Really appreciate it.

When we perform the hypothesis test for the two sigmas, we obtain the posterior estimates after the non-linear transformation, so the standard deviation of each posterior distribution (Est.Error) changes compared to the log-scale values in the summary. However, if we use stanplot to visualize these parameters, the plotted posteriors still correspond to the log scale. Can we obtain a new stanplot with the updated means and standard deviations after the transformation?

After transforming the posterior samples, I recommend passing them directly to the mcmc_* functions of the bayesplot package.

We can use the following: mcmc_areas(as.data.frame(posterior_samples(fit1, "^b_sigma")), transformations = "exp", prob = 0.95).
However, this does not correspond to the actual sigmas on their original scale, but to exp(sigma_Intercept) and exp(sigma_grouptreat) separately. When we instead extract the exponentially transformed samples manually from the posterior samples, we can visualize each parameter separately as follows:
For sigma_Intercept (original scale):
mcmc_areas(data.frame(sigma_control = exp(posterior_samples(fit1, "^b")[, 2])))
For sigma_grouptreat (original scale):
mcmc_areas(data.frame(sigma_treat = exp(posterior_samples(fit1, "^b")[, 2] + posterior_samples(fit1, "^b")[, 4])))
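Or, to show both residual standard deviations in a single plot (a sketch, again assuming fit1 from the vignette and the column names b_sigma_Intercept and b_sigma_grouptreat):

ps <- posterior_samples(fit1, "^b_sigma")
sigmas <- data.frame(
  sigma_control = exp(ps[, "b_sigma_Intercept"]),
  sigma_treat   = exp(ps[, "b_sigma_Intercept"] + ps[, "b_sigma_grouptreat"])
)
mcmc_areas(sigmas, prob = 0.95)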

Thank you again for your help.