Setting a scaled beta prior

Hi all,

I have a dataset with multiple continuous variables that range from 0 to 5.
How can I specify a beta regression with such a bound in brms?
Specifically, how can I set scaled beta priors and use a scaled beta distribution as the family?


Is there a reason for you not to rescale the data to lie between 0 and 1? You would need to be careful with the extremes, though, as described in the Stan documentation:

Warning: If θ = 0 or θ = 1, then the probability is 0 and the log probability is −∞. Similarly, the distribution requires strictly positive parameters, α, β > 0.

So a small value (e.g. 1e-9) may have to be added or subtracted to handle the edge cases.
Also, is there a particular reason you assume the data to be beta distributed?


@danielparthier thank you for your quick response!

The dataset consists of responses to a Schwartz-type values questionnaire. For some values the responses are very skewed toward the bounds of the scale (usually the power values), while for others they look quite similar to a Student's t or a normal distribution (see attached figures). I therefore thought the beta distribution would be flexible enough to capture such a variety of shapes, and that it is a proper distribution for restricted-range variables.


Do you know of an alternative distribution that would be suited to a dataset like this?

Thank you very much,

I have to admit that questionnaire data is vastly outside my comfort zone, but from what I recall an ordered logistic regression might be closer to what you are after. I am sure someone with a background in this field will be able to back up that statement, or to reject it :)
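A minimal brms sketch of that suggestion, with made-up data, variable names, and an example prior (none of these are recommendations for the dataset at hand):

```r
library(brms)

# Hypothetical data: one ordinal item (levels 0-5) and one predictor
d <- data.frame(
  item = ordered(sample(0:5, 100, replace = TRUE), levels = 0:5),
  pred = rnorm(100)
)

# Cumulative ("ordered logistic") model with a weakly informative
# example prior on the regression coefficients
fit <- brm(
  item ~ pred,
  data   = d,
  family = cumulative("logit"),
  prior  = prior(normal(0, 1), class = "b")
)
```

The cumulative family treats each item response as a coarsened latent continuous variable, which sidesteps the bounded-support issue entirely.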

I strongly recommend you read this @Shai_Shachar :)

@torkar Thank you, I went through the paper and it seems applicable and relevant to my dataset. However, my DV consists of several items which are all ordinal, so I can't pool them together by averaging.
Moreover, the prior specification for the predictors is still an open issue.

How would you suggest dealing with that?

Thanks, Shai