- Operating System: macOS Sierra 10.12.6
- brms Version: 2.9.0

I am trying to determine whether adding a new solver to our codebase has significantly changed the `simulation_alive_time` of our regression tests. We have 202 unique regression tests that vary widely in simulation time. I have run our `baseline_case` three times and our `baseline_noexodus` test three times, which results in 1,212 observations, equally balanced between the two cases.

I decided the response family should be `Gamma(link = 'log')` given the positive, skewed nature of the data.

My question: when I use a multilevel model that accounts for repeated measures across the regression tests, the posterior shows a seemingly promising decrease in `simulation_alive_time` for the `baseline_noexodus` case. However, I have also noticed that `Eff.Sample` is much lower for the multilevel model, so its diagnostics don't look as promising, even though its `pp_check` fits my data more closely. What would cause this? I have included my model output below.

```
library(brms)

test <- brm(
  bf(simulation_alive_time ~ intervention),
  data = compare_dat,
  family = Gamma(link = 'log'),
  prior = prior(normal(0, 1), class = "b"),
  iter = 11000, warmup = 5500, chains = 5, thin = 5,
  cores = 24
)
summary(test)
```

```
 Family: gamma
  Links: mu = log; shape = identity
Formula: simulation_alive_time ~ intervention
   Data: compare_dat (Number of observations: 1126)
Samples: 5 chains, each with iter = 11000; warmup = 5500; thin = 5;
         total post-warmup samples = 5500

Population-Level Effects:
                              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept                         8.56      0.05     8.46     8.66       5528 1.00
interventionbaseline_noexodus    -0.02      0.07    -0.16     0.12       5407 1.00

Family Specific Parameters:
      Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
shape     0.69      0.02     0.65     0.75       5855 1.00

Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
is a crude measure of effective sample size, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```

```
test1 <- brm(
  bf(simulation_alive_time ~ intervention + (1 | basename)),
  data = compare_dat,
  family = Gamma(link = 'log'),
  prior = prior(normal(0, 1), class = "b"),
  iter = 11000, warmup = 5500, chains = 5, thin = 5,
  cores = 24
)
summary(test1)
```

```
 Family: gamma
  Links: mu = log; shape = identity
Formula: simulation_alive_time ~ intervention + (1 | basename)
   Data: compare_dat (Number of observations: 1126)
Samples: 5 chains, each with iter = 11000; warmup = 5500; thin = 5;
         total post-warmup samples = 5500

Group-Level Effects:
~basename (Number of levels: 190)
              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
sd(Intercept)     1.33      0.07     1.21     1.47        291 1.01

Population-Level Effects:
                              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
Intercept                         7.78      0.10     7.61     7.97        129 1.03
interventionbaseline_noexodus    -0.14      0.01    -0.15    -0.12       5204 1.00

Family Specific Parameters:
      Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
shape    54.21      2.46    49.71    59.19       4857 1.00

Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
is a crude measure of effective sample size, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```

```
test <- add_criterion(test, "loo")
test1 <- add_criterion(test1, "loo")
Warning message:
Found 5 observations with a pareto_k > 0.7 in model 'test1'. It is recommended to set 'reloo = TRUE' in order to calculate the ELPD without the assumption that these observations are negligible. This will refit the model 5 times to compute the ELPDs for the problematic observations directly.
loo_compare(test, test1)
```

```
      elpd_diff se_diff
test1     0.0       0.0
test  -2590.3      39.6
```
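For completeness: I have not rerun the comparison with `reloo = TRUE` yet. If I understand the warning correctly, the call would look something like this (sketch only; it refits `test1` once per flagged observation, so it is slow):

```r
# Refit the model for the 5 observations with pareto_k > 0.7 so their
# ELPD contributions are computed exactly rather than approximated.
test1 <- add_criterion(test1, "loo", reloo = TRUE)
loo_compare(test, test1)
```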

```
library(bayesplot)

time.hyp <- hypothesis(
  test,
  hypothesis = c("exp(Intercept + interventionbaseline_noexodus) - exp(Intercept) = 0")
)
mcmc_areas(time.hyp$samples, prob = .95)

time1.hyp <- hypothesis(
  test1,
  hypothesis = c("exp(Intercept + interventionbaseline_noexodus) - exp(Intercept) = 0")
)
mcmc_areas(time1.hyp$samples, prob = .95)
```
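Since the model uses a log link, I also tried summarizing the effect as a multiplicative change rather than a difference in seconds, because `exp(b)` is a ratio of means under this link. A sketch of what I ran (it assumes the coefficient is named `b_interventionbaseline_noexodus`, matching the summary above, and uses the fitted `test1` object):

```r
# Extract posterior draws of the intervention coefficient and
# transform to the ratio scale: exp(b) = mean(noexodus) / mean(baseline).
draws <- posterior_samples(test1, pars = "b_interventionbaseline_noexodus")
ratio <- exp(draws$b_interventionbaseline_noexodus)

# Posterior median and 95% interval for the multiplicative change.
quantile(ratio, probs = c(0.025, 0.5, 0.975))
```

Is this a reasonable way to report the effect, or is the difference-of-means hypothesis above preferable?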

I’m very new to Bayesian modeling and `brms`, so if I have made a lapse in judgement, or it’s obvious I’m not quite getting something, please help me understand.