Request to help understand an apparent discrepancy between tidybayes::add_predicted_draws and brms::posterior_predict

I am using the Howell1 dataset from the rethinking package.

require(rethinking)  # provides the Howell1 dataset
require(cmdstanr)
require(brms)
require(tidybayes)
require(dplyr)       # for the %>% pipelines below

data("Howell1")
d <- Howell1
d2 <- d[d$age > 18, ]  # adults only

# standardize height and weight
d2$hs <- (d2$height - mean(d2$height)) / sd(d2$height)
d2$ws <- (d2$weight - mean(d2$weight)) / sd(d2$weight)

I build a simple brms model using one numeric and one categorical predictor:

priors <- c(prior(normal(0, 2), class = "Intercept"),
            prior(normal(0, 2), class = "b"),
            prior(cauchy(0, 2), class = "sigma"))

m4.4 <- brm(formula = hs ~ 1 + ws + male, data = d2, family = gaussian,
            backend = "cmdstanr", prior = priors,
            iter = 2000, warmup = 1000, chains = 4, cores = 4)

I am trying to understand how add_fitted_draws and add_predicted_draws work.

Considering add_fitted_draws:

i <- 4  # looking at the results for a particular row of the input dataset

y <- posterior_epred(m4.4)
x <- d2 %>% add_fitted_draws(model = m4.4, value = "epred")

x %>%
  as_tibble() %>%
  filter(.row == i) %>%
  dplyr::select(epred) %>%
  cbind(fitdr = y[, i]) %>%
  mutate(diff = fitdr - epred)

Based on the documentation, add_fitted_draws internally uses posterior_epred (or its brms equivalent), and the results match exactly.
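This makes sense to me, since posterior_epred only transforms the posterior draws already stored in the fit and involves no additional random number generation for this Gaussian model. A minimal sanity check (a sketch, assuming the m4.4 fit from above):

# posterior_epred is deterministic given the stored posterior draws,
# so two separate calls should return identical matrices
e1 <- posterior_epred(m4.4)
e2 <- posterior_epred(m4.4)
all.equal(e1, e2)  # expected: TRUE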

Now, when I do exactly the same comparison between add_predicted_draws and posterior_predict, the results do not match:

yp <- posterior_predict(m4.4)
xp <- d2 %>% add_predicted_draws(model = m4.4, prediction = "pred")

xp %>%
  as_tibble() %>%
  filter(.row == i) %>%
  dplyr::select(pred) %>%
  cbind(preddr = yp[, i]) %>%
  mutate(diff = preddr - pred)

I am pretty sure there is a gap in my understanding; please advise.

The sessionInfo() output is as follows:

R version 4.0.3 (2020-10-10)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 20.04.1 LTS

Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.9.0
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.9.0

other attached packages:
[1] stringr_1.4.0 readr_1.4.0 tibble_3.0.4 tidyverse_1.3.0 MASS_7.3-53 bayesplot_1.8.0 cmdstanr_0.1.3 rethinking_2.13
[9] loo_2.4.1 gganimate_1.0.7 RColorBrewer_1.1-2 ggrepel_0.9.0 brms_2.14.4 Rcpp_1.0.5 rstan_2.21.2 StanHeaders_2.21.0-7
[17] cowplot_1.1.1 ggplot2_3.3.3 tidybayes_2.3.1 ggdist_2.4.0 modelr_0.1.8 tidyr_1.1.2 forcats_0.5.0 purrr_0.3.4
[25] dplyr_1.0.2 magrittr_2.0.1


It turns out that this could be a bug in brms (2.14.4). I have filed a ticket: Seed setting not honored in posterior_predict in 2.14.4 · Issue #1073 · paul-buerkner/brms · GitHub.


The issue has been root-caused by @paul.buerkner:

If you have set options(mc.cores = <more than 1>), posterior_predict will evaluate in parallel by default, unless you change the cores argument. On Windows, parallel execution is done via parallel::parLapply, and I don’t know whether that function respects seeds, if at all. When executing the code in serial (with 1 core), the results are reproducible.
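For reference, a rough sketch of the workaround (the seed value below is illustrative, not the exact code I ran):

options(mc.cores = 1)  # force serial evaluation; alternatively pass cores = 1 to posterior_predict

set.seed(123)  # illustrative seed
yp <- posterior_predict(m4.4)

set.seed(123)  # same seed before the tidybayes call, which uses posterior_predict internally
xp <- d2 %>% add_predicted_draws(model = m4.4, prediction = "pred")

# with serial execution and the same seed, the comparison from my earlier post
# should now show diff == 0 for every draw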

Once I set the mc.cores option to 1, I no longer see the discrepancy between add_predicted_draws and posterior_predict.

I am closing the issue here.