Am I doing something wrong, or is this a bug in the autocorrelation implementation?

The very same model that normally fits in five minutes takes hours when an autocorrelation term with a single lag is added. In fact, the original model is a distributed-lag one, so it does pretty much the same thing (a single lag of the dependent variable is among the predictors). I can't understand why there is such a difference in computation time.
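For reference, the fast-fitting distributed-lag version looks roughly like this (a sketch, not my exact call; `y_lag` is a hypothetical name for the lagged-response column, e.g. created with `dplyr::lag()`):

```
# Sketch of the distributed-lag specification: the single lag of the
# response enters as an ordinary predictor instead of an ar() term.
# "y_lag" is a hypothetical column name for the lagged response.
mod0 = brm(
  bf(y ~ y_lag + x1 + x2,
     phi ~ x2),
  data = data1,
  family = Beta(link = "logit", link_phi = "log"),
  prior = prior1,
  warmup = 2500,
  iter = 5000,
  chains = 2,
  seed = 1234
)
```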

Also, the first time I tried to fit it (with the autoregression term added instead of the explicitly specified lag variable), the chains did not converge, and the warning summary suggested increasing max_treedepth. I set it to 15, and now it just takes forever, even though I have reduced the number of chains to just two.

The specification is as follows:

```
mod1 = brm(
  bf(y ~ x1 + x2 + ar(p = 1, cov = TRUE),
     phi ~ x2),
  data = data1,
  family = Beta(link = "logit", link_phi = "log"),
  prior = prior1,
  warmup = 2500,
  iter = 5000,
  chains = 2,
  control = list(adapt_delta = 0.9, max_treedepth = 15),
  seed = 1234
)
```

Latest R, latest rstan, latest brms (stable release), macOS 11.1.