Hi,

I am currently working on modelling low-density lipoprotein aggregation using a nonlinear mixed-effects model. My priors are weakly informative (or vague; I'm still a bit hesitant about how to describe them, despite reading this). I am currently stuck on the prior predictive check (using the `sample_prior = "only"` option in brms), and I encounter divergences when I run my model:

```
Warning messages:
1: There were 227 divergent transitions after warmup. Increasing adapt_delta above 0.999999999999 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
2: Examine the pairs() plot to diagnose sampling problems
```

which I get when I run only 2 chains, with `iter = 3000` and `warmup = 1000`.

Here's the code I'm using:

```
fit_model <- function(Data) {
  fit <- brm(
    bf(LDL ~ (1 + exp((gamma - Time) / delta)) / (15 * exp((gamma - Time) / delta) + alpha),
       alpha ~ 1 + (1 | Pat/Cat),
       gamma ~ 1 + (1 | Pat/Cat),
       delta ~ 1,
       nl = TRUE),
    prior = c(
      prior(normal(3000, 1000), class = "b", lb = 0, nlpar = "alpha"),
      prior(normal(2.5, 1), class = "b", lb = 0, nlpar = "gamma"),
      prior(normal(0.2, 0.25), class = "b", lb = 0, ub = 1, nlpar = "delta"),
      prior(student_t(3, 0, 59), class = "shape"),
      prior(normal(0, 0.5), class = "sd", group = "Pat:Cat", coef = "Intercept", nlpar = "gamma"),
      prior(normal(0, 500), class = "sd", group = "Pat:Cat", coef = "Intercept", nlpar = "alpha"),
      prior(normal(0, 0.5), class = "sd", group = "Pat", coef = "Intercept", nlpar = "gamma"),
      prior(normal(0, 500), class = "sd", group = "Pat", coef = "Intercept", nlpar = "alpha")
    ),
    data = Data, family = Gamma(link = "inverse"),
    chains = 2,
    iter = 3000,
    warmup = 1000,
    cores = getOption("mc.cores", 1L),
    sample_prior = "only",
    thin = 1,
    control = list(adapt_delta = 0.999999999999, max_treedepth = 15),
    verbose = TRUE
  )
  return(fit)
}
fit <- fit_model(under1500_bl101_sos100)
```

Now, I think the main thing I am struggling with in this situation is that I don't actually understand what divergences mean in the context of prior predictive checks.

I tried to go back and read up on divergences here, and found this:

> The primary cause of divergent transitions in Euclidean HMC (other than bugs in the code) is highly varying posterior curvature, for which small step sizes are too inefficient in some regions and diverge in other regions.

But, as far as I understand, during prior predictive checks we are only sampling from the prior and don't take the data into consideration. So what do divergences mean in this case, when the quote above defines them in terms of posterior curvature? Is there something basic that I don't understand?
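To make my mental model of `sample_prior = "only"` concrete, here is a rough base-R sketch of what I imagine it amounts to: drawing the nonlinear parameters from the priors declared above and evaluating the implied mean curve. The `rtnorm` helper and `Time <- 2` are my own stand-ins for illustration, not part of the actual model code:

```r
# Simulate directly from the priors declared in the brm() call above.
# rtnorm is a simple rejection sampler standing in for the truncated
# normals implied by lb/ub in the prior() statements.
set.seed(1)
rtnorm <- function(n, mean, sd, lb = -Inf, ub = Inf) {
  out <- numeric(0)
  while (length(out) < n) {
    x <- rnorm(n, mean, sd)
    out <- c(out, x[x > lb & x < ub])
  }
  out[1:n]
}

n     <- 1000
alpha <- rtnorm(n, 3000, 1000, lb = 0)
gamma <- rtnorm(n, 2.5, 1, lb = 0)
delta <- rtnorm(n, 0.2, 0.25, lb = 0, ub = 1)
Time  <- 2  # an assumed observation time, purely for illustration

# Inverse-link mean implied by the nonlinear formula
mu_inv <- (1 + exp((gamma - Time) / delta)) /
          (15 * exp((gamma - Time) / delta) + alpha)
summary(1 / mu_inv)  # implied mean LDL under the prior
```

One thing I noticed while playing with this: when `delta` gets very small and `gamma > Time`, `exp((gamma - Time) / delta)` overflows and the mean becomes non-finite, so I suspect the priors allow some fairly extreme regions. I'm not sure whether that is actually related to the divergences, though.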

I would really appreciate it if someone could point me in the right direction, even if it's just towards some papers and/or blog posts. I have spent a significant amount of time trying to understand this, but I only seem to be getting more confused. Thanks!