This post is based on @avehtari asking on Twitter for examples of high Rhat values. It turns out my simulation study is also producing those :)

**This is not a question for how to make the model work!**

This is just a fun showcase of really bad Rhat values for Aki (and everyone else who is interested).

Here is the brms fit object for you to play around with :)

fit_object.rdata (1.6 MB)

## Dataset

A dataset of 100 observations is generated from the following model:

y \sim lognormal(\mu_y, 2)

\mu_y = 0.25x + 0.5z_1 + 0.8z_2

x \sim N(\mu_x, 0.5)

\mu_x = 0.5z_1 + 0.8z_3

z_1, z_2, z_3 \sim N(0,0.5)

z_4 \sim N(\mu_{z_4}, 0.5)

\mu_{z_4} = y + 0.5x
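The generative process above can be sketched as follows. This is an illustrative Python/NumPy translation (the original study presumably used R); the seed and variable names are my own choices:

```python
import numpy as np

rng = np.random.default_rng(1234)  # arbitrary seed for reproducibility
n = 100

# Exogenous covariates: z1, z2, z3 ~ N(0, 0.5)
z1, z2, z3 = (rng.normal(0.0, 0.5, size=n) for _ in range(3))

# x depends on z1 and z3
mu_x = 0.5 * z1 + 0.8 * z3
x = rng.normal(mu_x, 0.5)

# Outcome: lognormal with a (deliberately large) sigma of 2
mu_y = 0.25 * x + 0.5 * z1 + 0.8 * z2
y = rng.lognormal(mean=mu_y, sigma=2.0)

# z4 is a child of both y and x (a descendant of a collider path)
mu_z4 = y + 0.5 * x
z4 = rng.normal(mu_z4, 0.5)
```

Note that z4 is downstream of the outcome y, so conditioning on it in the fitted model is already asking for trouble.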

The DAG looks like this:

## Model

The dataset is then fit with the following model:

y \sim Gompertz(\mu, \beta)

\log(\mu) = 1 + x + z_1 + z_2 + z_4

With flat priors on everything, 2 chains, iter = 2500, and warmup = 500.
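For reference, one common Gompertz parameterization has hazard h(t) = \mu e^{\beta t} and survival S(t) = \exp(-(\mu/\beta)(e^{\beta t} - 1)); the custom brms family linked below may define (\mu, \beta) differently, so treat this as a sketch under that assumption. A minimal inverse-CDF sampler:

```python
import numpy as np

def rgompertz(n, mu, beta, rng=None):
    """Draw n values from a Gompertz distribution with hazard
    h(t) = mu * exp(beta * t).

    This is one common parameterization; the custom brms family
    in the post may parameterize (mu, beta) differently.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)
    # Invert S(t) = exp(-(mu / beta) * (exp(beta * t) - 1)):
    # t = (1 / beta) * log(1 - (beta / mu) * log(1 - u))
    return np.log1p(-(beta / mu) * np.log1p(-u)) / beta
```

With the log link, the linear predictor above is \log(\mu), so \mu is guaranteed positive even under flat priors on the coefficients.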

## Software

I am using R 4.2, CmdStan 2.29.2 (via cmdstanr 0.5.2), and brms 2.17.4, together with my own custom Gompertz family, which you can find here.

## Fit

```
summary(fit)
 Family: gompertz
  Links: mu = log; beta = identity
Formula: y ~ x + z1 + z2 + z4
   Data: dataset (Number of observations: 100)
  Draws: 2 chains, each with iter = 2500; warmup = 500; thin = 1;
         total post-warmup draws = 4000

Population-Level Effects:
          Estimate Est.Error l-95% CI u-95% CI              Rhat Bulk_ESS Tail_ESS
Intercept    -1.47      0.56    -2.03    -0.91                NA        2       NA
x            -1.27      0.57    -1.84    -0.70 66142364047444.67        2       NA
z1           -0.11      0.33    -0.44     0.22                NA        2       NA
z2            0.36      1.54    -1.18     1.91                NA        2       NA
z4            0.02      0.00     0.02     0.03 66142364047444.67        2       NA

Family Specific Parameters:
     Estimate Est.Error l-95% CI u-95% CI              Rhat Bulk_ESS Tail_ESS
beta     0.54      0.12     0.42     0.66 66142364047444.67        2       NA

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

Warning messages:
1: Parts of the model have not converged (some Rhats are > 1.05). Be careful when analysing the results! We recommend running more iterations and/or setting stronger priors.
2: There were 1029 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
```