Slower and freezing machine when running brm model with brms package

The total time with brms will be longer, since it first has to compile the model and then run it, whereas rstanarm has several models (like this one) precompiled that can be run immediately.
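For comparison, a rough sketch of the precompiled rstanarm equivalent (assuming the same mydata, with a binary death outcome and a treat predictor, as used later in this thread) would be:

library(rstanarm)

# stan_glm uses a precompiled Stan program, so sampling starts without a compile step
fit_rstanarm <- stan_glm(death ~ treat, data = mydata,
                         family = binomial(link = "logit"),
                         chains = 1, iter = 500)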

However, the running time of the model itself should be about the same. By running time, I mean the information in the output that looks like this:

Chain 1:  Elapsed Time: 0.012 seconds (Warm-up)
Chain 1:                0.014 seconds (Sampling)
Chain 1:                0.026 seconds (Total)
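If you want to pull those per-chain times out programmatically (a sketch, assuming fit is your brmsfit object), you can call rstan::get_elapsed_time() on the underlying stanfit:

# matrix of warmup and sampling seconds, one row per chain
rstan::get_elapsed_time(fit$fit)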

Aha, many thanks @andrjohns for your help.
I finally ran my computations and did my comparison, and I would like to share it with you if you can give me feedback.

I found that the posterior OR under the non-informative prior (model1, OR = 0.57) is smaller than the posterior OR under the skeptical prior (model2, OR = 0.92) when studying the treatment effect. Do you think this is reasonable? Is there an interpretation for that, please? Or did I not use the brm arguments correctly?

Non-informative prior: zero mean and high variance.
Skeptical prior: zero mean and low variance.

# model1: non-informative priors (normal with mean 0 and sd 10)
model1 <- brm(death ~ treat, data = mydata, family = bernoulli(link = "logit"),
              prior = c(set_prior("normal(0, 10)", class = "Intercept"),
                        set_prior("normal(0, 10)", class = "b")),
              inits = "random", chains = 1, iter = 500,
              cores = 1, control = list(adapt_delta = 0.9))

# model2: skeptical priors (normal with mean 0 and sd 0.15)
model2 <- brm(death ~ treat, data = mydata, family = bernoulli(link = "logit"),
              prior = c(set_prior("normal(0, 0.15)", class = "Intercept"),
                        set_prior("normal(0, 0.15)", class = "b")),
              inits = "random", chains = 1, iter = 500,
              cores = 1, control = list(adapt_delta = 0.9))
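For completeness, one way to read the posterior ORs off these fits (a sketch; the coefficient row name reported by fixef() depends on how treat is coded in your data) is to exponentiate the population-level estimate for treat:

# posterior mean log-odds ratio for treat, converted to an OR
exp(fixef(model1)["treat", "Estimate"])
exp(fixef(model2)["treat", "Estimate"])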

Hi Rani,

Yes, that looks correct. Because your informative priors are tightly centered around zero, you are specifying strong prior information of no effect. This is consistent with an OR of 0.92 (an OR of 1 implies no effect).
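A quick way to see how strong that prior is (a sketch, independent of your data) is to simulate the OR implied by a normal(0, 0.15) prior on the log-odds scale:

# 95% interval of the OR implied by a normal(0, 0.15) prior on the log-OR
quantile(exp(rnorm(1e5, mean = 0, sd = 0.15)), probs = c(0.025, 0.975))
# roughly 0.75 to 1.34, so the prior concentrates the OR near 1

That prior leaves little room for large treatment effects, which is why the posterior OR gets pulled toward 1 relative to the non-informative fit.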

Thanks @andrjohns for your help,
it's really nice of you :)


No worries, good luck with your modelling!
