Prior predictive simulation gives unreasonable results for lognormal() brms model

I am struggling to set reasonable weak priors for my lognormal brms model. I read that prior predictive simulation is a good way to experiment with different prior values and settle on reasonable ones for publication.

Model

model <- brm(
  bf(received_treatment_hours ~ treatment_method + offer + (1 | region)),
  data = data_no_zeros,
  family = lognormal(),
  cores = 3, chains = 3,
  sample_prior = "only",
  prior = prior
)

Prior knowledge about the outcome variable: it ranges from 1 to 130 hours, and 98% of patients have values under 30 hours. Thus, a weakly informative prior should exclude differences over roughly 50 hours. I also know that my priors are on the log() scale.
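To make that outcome-scale knowledge usable on the log scale, one can simply take logs; here is a rough sketch (the log(10) centre and sd of 1 are illustrative assumptions, not recommendations):

```r
# The lognormal() family models log(y), so translate the outcome-scale
# knowledge (range 1-130 h, 98% under 30 h) to the log scale:
round(log(c(1, 30, 130)), 2)   # 0.00  3.40  4.87

# An intercept prior centred near log(10) with sd ~ 1 keeps most prior
# mass inside the observed range once exponentiated:
qs <- exp(qnorm(c(0.01, 0.5, 0.99), mean = log(10), sd = 1))
round(qs, 1)                   # roughly 1, 10, and 102 hours
```

This suggests the default intercept location of 2 is not unreasonable here, while the scale of 10 is very wide once exponentiated.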

Model default priors from the get_prior() function

##                  prior     class              coef  group resp dpar nlpar bound
## 1                              b                                               
## 2                              b             offer                             
## 3                              b treatment_methodB                             
## 4                              b treatment_methodC                             
## 5                              b treatment_methodD                             
## 6                              b treatment_methodE                             
## 7                              b treatment_methodF                             
## 8  student_t(3, 2, 10) Intercept                                               
## 9  student_t(3, 0, 10)        sd                                               
## 10                            sd                   region                      
## 11                            sd         Intercept region                      
## 12 student_t(3, 0, 10)     sigma

I tried different values for the class b prior, ending up with a very low scale value

prior <- c(
  prior(student_t(3, 0, 0.01), class = b),
  prior(student_t(3, 0, 10), class = sd),
  prior(student_t(3, 8, 10), class = Intercept)
)
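One quick way to see what such priors imply, before running brm() at all, is to draw from them directly and exponentiate. A hand-rolled sketch (it ignores the group-level term and sigma for simplicity, and shows only one slope):

```r
set.seed(1)
n <- 1e4
# Draws from the priors above: student_t(3, 8, 10) intercept and a
# student_t(3, 0, 0.01) slope
intercept <- 8 + 10 * rt(n, df = 3)
b         <- 0.01 * rt(n, df = 3)
y_prior   <- exp(intercept + b)   # implied outcome in hours

median(y_prior)
# The intercept location alone puts the prior median near
# exp(8) ~ 2981 hours, far above the 1-130 h range, and the scale of
# 10 on the log scale makes the tails astronomically wide.
```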

But the result still looks unreasonable, with extremely wide CIs.

plot(conditional_effects(model, method = "predict"), points = TRUE, ask = FALSE)

What am I doing wrong? How to get prior predictive simulation plots with reasonable CI widths?

Data and Rmd file: https://www.dropbox.com/sh/2kwpmxip99hvkes/AAAn90mmKmiPdUzJrF2zUE0sa?dl=0

With independent priors it is almost impossible to get reasonable prior predictive checks for models that specify the predictor term on the log scale, because of the resulting exponentiation. We are actively working on joint priors for such models, so that you eventually get reasonable priors and prior predictive checks, but this will take some more time.
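The problem can be illustrated without brms: even modest independent priors on several coefficients add up on the log scale, and the sum blows up once exponentiated. A standalone sketch (the six coefficients mirror the model above; the normal(0, 0.5) scale is an arbitrary assumption):

```r
set.seed(2)
n <- 1e4
p <- 6   # six population-level coefficients, as in the model above

# Linear predictor contribution from p independent normal(0, 0.5) priors:
eta <- rowSums(matrix(rnorm(n * p, mean = 0, sd = 0.5), n, p))

sd(eta)                           # grows like sqrt(p) * 0.5 ~ 1.22
quantile(exp(eta), c(0.5, 0.99))  # after exponentiation the 99th
                                  # percentile is over 10x the median
```

So tightening each coefficient prior individually helps less than one might expect; the implied prior on the outcome is the product of all the exponentiated terms.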


Thank you! Would it be OK to use the brms defaults for publishing? Model comparison with the loo() function clearly preferred the lognormal() model, and it also had the best fit according to pp_check().


I cannot tell you what is OK for publishing your data and analysis, but it should be fine to go with the default priors if the posterior and the posterior predictive checks look sensible.


Also, it is IMHO often reasonable to fit the model with a set of different priors and see how the inferences change. If they do not, there is no need to worry about the prior choice.
