What is the difference between these two models?

Hi
I have read that in the fit_sleep1 model, the `1` at the start of the formula explicitly requests an intercept, but that it can be omitted because an intercept is included by default. So the two models should be identical.
However, they give different results.
What is the difference between these models?

library(brms)
library(lme4)
data("sleepstudy")

fit_sleep1 <- brm(Reaction ~ 1 + Days + (1 + Days | Subject), data = sleepstudy)
fit_sleep2 <- brm(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

summary(fit_sleep1)
summary(fit_sleep2)

loo(fit_sleep1)
loo(fit_sleep2)

Thanks!

R version 4.2.2 (2022-10-31 ucrt)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19045)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.utf8  LC_CTYPE=English_United States.utf8   
[3] LC_MONETARY=English_United States.utf8 LC_NUMERIC=C                          
[5] LC_TIME=English_United States.utf8    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] lme4_1.1-31  Matrix_1.5-1 brms_2.18.0  Rcpp_1.0.9  

loaded via a namespace (and not attached):
 [1] nlme_3.1-160         matrixStats_0.63.0   xts_0.12.2           threejs_0.3.3       
 [5] rstan_2.26.13        tensorA_0.36.2       tools_4.2.2          backports_1.4.1     
 [9] utf8_1.2.2           R6_2.5.1             DT_0.26              DBI_1.1.3           
[13] colorspace_2.0-3     withr_2.5.0          tidyselect_1.2.0     gridExtra_2.3       
[17] prettyunits_1.1.1    processx_3.8.0       Brobdingnag_1.2-9    curl_4.3.3          
[21] compiler_4.2.2       cli_3.5.0            shinyjs_2.1.0        colourpicker_1.2.0  
[25] posterior_1.3.1      scales_1.2.1         dygraphs_1.1.1.6     checkmate_2.1.0     
[29] mvtnorm_1.1-3        callr_3.7.3          stringr_1.5.0        digest_0.6.31       
[33] StanHeaders_2.26.13  minqa_1.2.5          base64enc_0.1-3      pkgconfig_2.0.3     
[37] htmltools_0.5.4      fastmap_1.1.0        htmlwidgets_1.6.0    rlang_1.0.6         
[41] rstudioapi_0.14      shiny_1.7.4          farver_2.1.1         generics_0.1.3      
[45] zoo_1.8-11           jsonlite_1.8.4       crosstalk_1.2.0      gtools_3.9.4        
[49] dplyr_1.0.10         distributional_0.3.1 inline_0.3.19        magrittr_2.0.3      
[53] loo_2.5.1            bayesplot_1.10.0     munsell_0.5.0        fansi_1.0.3         
[57] abind_1.4-5          lifecycle_1.0.3      stringi_1.7.8        MASS_7.3-58.1       
[61] pkgbuild_1.4.0       plyr_1.8.8           grid_4.2.2           parallel_4.2.2      
[65] promises_1.2.0.1     crayon_1.5.2         miniUI_0.1.1.1       lattice_0.20-45     
[69] splines_4.2.2        ps_1.7.2             pillar_1.8.1         igraph_1.3.5        
[73] boot_1.3-28          markdown_1.4         shinystan_2.6.0      reshape2_1.4.4      
[77] codetools_0.2-18     stats4_4.2.2         rstantools_2.2.0     glue_1.6.2          
[81] V8_4.2.2             RcppParallel_5.1.5   nloptr_2.0.3         vctrs_0.5.1         
[85] httpuv_1.6.7         gtable_0.3.1         assertthat_0.2.1     ggplot2_3.4.0       
[89] mime_0.12            xtable_1.8-4         coda_0.19-4          later_1.3.0         
[93] tibble_3.1.8         shinythemes_1.2.0    ellipsis_0.3.2       bridgesampling_1.1-2

Hello,
If you run both models with the same random seed (Random seed - Wikipedia), they should yield exactly the same results:

fit_sleep1 <- brm(Reaction ~ 1 + Days + (1 + Days | Subject), data = sleepstudy, seed = 1)
fit_sleep2 <- brm(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy, seed = 1)

If a seed isn’t set, Stan picks one at random (see 15.4 General configuration options | Stan Reference Manual), which is why your two fits gave different results.
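You can also confirm that the two formulas specify the same model by comparing the Stan code brms generates for each, without fitting anything. A quick sketch using `make_stancode` from brms:

```r
library(brms)
library(lme4)  # provides the sleepstudy data
data("sleepstudy")

# Generate (but don't compile or fit) the Stan code for each formula
code1 <- make_stancode(Reaction ~ 1 + Days + (1 + Days | Subject), data = sleepstudy)
code2 <- make_stancode(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)

# Since the intercept is included by default, the generated code
# should be identical for both formulas
identical(as.character(code1), as.character(code2))
```

If this returns `TRUE`, the only remaining source of differing results is the sampler's random seed, not the model specification.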

Oh, wow, so simple! Thanks!!
