First post here, and a newbie to this space. Apologies up front for what are likely elementary questions. Would appreciate your input.

I am using brms and rstan to fit models that estimate a response variable from one continuous predictor. There is one grouping variable for group-level (random) effects, with both random intercepts and random slopes.

I used the brms::get_prior call to evaluate the priors being used, and I was surprised to see a b class that was separate from the "Intercept" class.

Can someone clarify the difference between the b and Intercept classes in the get_prior output? (Screenshot below.) Why does the intercept get its own class, given that it is also a population-level effect?

Using p <- get_prior(...) with your model formula and data, you can retrieve the default priors and then change them.

Intercept is the \alpha in your model, and you can set its prior separately, e.g., p$prior[5] <- "normal(0, 10)" (indexing the row of the prior data frame that corresponds to the Intercept class). If you don't want an \alpha, i.e., a global intercept, then I think you can use y ~ 0 + x1 + ... instead of the common convention y ~ 1 + x1 + ...
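A minimal sketch of how this looks in practice (the names y, x1, group, and myData are placeholders for your own model):

```r
library(brms)

# Inspect the default priors for the model
p <- get_prior(y ~ 1 + x1 + (1 + x1 | group),
               data = myData, family = gaussian())

# Change priors by selecting rows of the prior data frame:
p$prior[p$class == "Intercept"] <- "normal(0, 10)"
p$prior[p$class == "b" & p$coef == "x1"] <- "normal(0, 5)"

# Equivalently, build custom priors with set_prior():
priors <- c(
  set_prior("normal(0, 10)", class = "Intercept"),
  set_prior("normal(0, 5)",  class = "b", coef = "x1")
)
```

Selecting rows by class and coef is safer than hard-coding an index like p[5], since the row order can change with the model formula.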

Paul has set some default priors and very often they work nicely. However, as always, you should do prior predictive checks to see what they look like on the outcome scale. Simply edit your p and then pass it to brm() with prior = p and sample_prior = "only"; this will sample from your custom priors alone, ignoring the likelihood. Then run the result through pp_check() (see the help page for pp_check).
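For example, a prior predictive check might look like this (assuming p holds your edited priors; y, x1, group, and myData are placeholders):

```r
# Sample from the priors only, ignoring the likelihood
fit_prior <- brm(y ~ 1 + x1 + (1 + x1 | group),
                 data = myData, family = gaussian(),
                 prior = p, sample_prior = "only")

# Compare draws from the prior predictive distribution
# against the observed outcome scale
pp_check(fit_prior, ndraws = 100)
```

If the prior predictive draws are wildly off the plausible range of your outcome, that is a sign the priors are too diffuse (or too tight) on the scale that matters.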

Thanks for your reply. I understand that I can change the priors, and have tinkered around with that. I guess I was trying to understand the theory behind the separate Intercept & beta classes.

Are you suggesting that the "Intercept" class is for fixed effects, but b refers to group-specific random effects only? I do see that the b class is used to produce both the population-level and group-specific parameters in my output, so I am not sure how that would be true.

Not quite: both the Intercept and b classes are population-level (fixed-effect) parameters. The Intercept class covers the overall intercept \alpha, while the b class covers the population-level regression slopes. The group-specific varying intercepts and slopes are governed by separate classes: sd for their standard deviations and cor for their correlations. The intercept gets its own class because brms internally centers the predictors when it builds the Stan code, so the Intercept parameter is the expected response at the mean of the predictors; a separate class lets you put a prior on that centered intercept directly.
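You can see all of the classes brms assigns at once (a sketch; y, x1, group, and myData are placeholders):

```r
library(brms)

p <- get_prior(y ~ 1 + x1 + (1 + x1 | group),
               data = myData, family = gaussian())
unique(p$class)
# For a model like this, the classes typically include:
#   "b"         - population-level slopes
#   "Intercept" - the overall (centered) intercept
#   "sd"        - group-level standard deviations
#   "cor"       - group-level correlations
#   "sigma"     - residual standard deviation
```

This is why the Intercept shows up as its own row separate from b in the get_prior output.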

In response to your question 2 above, the fact that you are using a Gaussian likelihood doesn’t imply that the prior on the variance parameter also must be Gaussian. In a simple linear regression model, for example:

y \sim \textbf{Normal}(\alpha + X\beta, \sigma)

you can use whatever priors you think are warranted for \alpha, \beta, and \sigma, using any distribution that seems sensible given the data generating process you are modeling. I understand that Student's t is used as a default for these parameters in brms in order to be "weakly informative", that is, to provide some central tendency while having fatter tails than the Gaussian, thereby allowing for the possibility of extreme values.
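Making those choices explicit might look like this (a sketch mirroring the spirit of the brms student-t defaults; the exact default scales depend on your data, so these numbers are illustrative):

```r
library(brms)

priors <- c(
  set_prior("student_t(3, 0, 10)", class = "Intercept"),  # alpha
  set_prior("normal(0, 5)",        class = "b"),          # beta
  set_prior("student_t(3, 0, 10)", class = "sigma")       # residual SD
)
```

Note that brms automatically constrains sigma to be positive, so the student-t prior on it is effectively half-t.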