Output for family() in brms

Please also provide the following information in addition to your question:

  • Operating System: Windows 10
  • brms Version: 2.8

I am a bit confused about the family-related output generated by the summary() function in brms.
For example, if I choose

family = Gamma(link = "log")

in the summary() output I get:

Family Specific Parameters:
      Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
shape    10.52      0.89     8.83    12.34      13619 1.00

Now, what is still confusing to me is: why is it sufficient to get an estimate for the shape only? What about the scale? Why is it not given explicitly?

In addition, if we consider

family = gaussian()

I understand this as a way to model the residuals, which is why, when I get the following output for family = gaussian():

Family Specific Parameters:
      Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sigma    28.56      1.33    26.09    31.31     112625 1.00

I interpret it as saying that the residuals are normally distributed with mean 0 and standard deviation sigma,
and, unsurprisingly, when I print the output of the predict() function, I get estimates with
Est.Error approximately equal to sigma.
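A quick simulation illustrates what I mean (a sketch only: the value of mu below is made up, while sigma is the posterior mean from the summary above):

```r
# Sketch: posterior-predictive draws from a Gaussian model are roughly
# N(mu, sigma), so their spread is roughly sigma (ignoring the extra
# uncertainty in mu). mu here is an arbitrary illustrative fitted value.
set.seed(42)
sigma <- 28.56
mu    <- 100
yrep  <- rnorm(1e6, mean = mu, sd = sigma)
sd(yrep)  # close to sigma = 28.56
```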
But when it comes to interpreting the output for family = Gamma(link = "log"),
I am not quite sure how to formulate it. What can we say about the Est.Error of the predictions, given that the estimated shape of the response is 10.52?

I would really appreciate it if someone can give me some insight.

The Gamma distribution is a two-parameter distribution, usually parametrized by shape and scale or by shape and rate. In a Gamma regression model (a Gamma GLM), the distribution is re-parametrized so that you directly estimate the expected value of the outcome y, which (using a "log" link!) is E[y] = \mu = \exp(\alpha + \beta x). This translates to glm(y ~ x, family = Gamma(link = "log")).

The variance of a Gamma GLM is given by \text{Var}(y) = \phi \mu^2, where \phi is often called the dispersion parameter. It turns out that after this re-parametrization, the shape parameter is equal to the inverse dispersion, 1/\phi. In your case, this means \text{Var}(y) = \mu^2 / 10.52. You can see that the variance is different for each of the different \mu estimated by the model, i.e. the errors are heteroskedastic: the error spread is not even.

Compare this to the Normal regression model, where E[y] = \mu = \alpha + \beta x (note the "identity" link!) and \text{Var}(y) = \sigma^2. There the variance does not depend on the different values of \mu estimated by the model, i.e. the model is homoskedastic: the error spread is even.
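As a quick check of this mean/shape parametrization (a sketch only: the shape is taken from your summary, but \mu is an arbitrary illustrative value), you can simulate Gamma draws and compare the empirical variance with \mu^2/\text{shape}:

```r
# Sketch: a Gamma variable with mean mu and shape k has rate = k / mu,
# so Var(y) = mu^2 / k and the implied predictive SD is mu / sqrt(k).
set.seed(123)
k  <- 10.52   # shape estimate from the summary in the question
mu <- 5       # arbitrary illustrative mean; any positive value works
y  <- rgamma(1e6, shape = k, rate = k / mu)
mean(y)       # close to mu
var(y)        # close to mu^2 / k
mu / sqrt(k)  # implied predictive SD at this mu
```

So for your Gamma model, the Est.Error reported by predict() at a fitted value \mu should be roughly \mu / \sqrt{\text{shape}} (plus the uncertainty in the parameters), rather than one constant value as in the Gaussian case.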
