Hi. I have a linear mixed-effects model.

The `x*t` and `x*t^2` terms are highly correlated; `x` is a fixed covariate and `t` represents time.
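To illustrate the correlation (a quick NumPy sketch with a made-up time grid, not my actual data): when time starts at zero, `t` and `t^2` are nearly collinear, and centering time before squaring removes most of that correlation.

```python
import numpy as np

# Hypothetical time grid starting at zero (not my actual data)
t = np.linspace(0, 10, 100)

# On the raw scale, t and t^2 are almost collinear (correlation ~0.97)
r_raw = np.corrcoef(t, t**2)[0, 1]

# Centering t before squaring removes almost all of the correlation
tc = t - t.mean()
r_centered = np.corrcoef(tc, tc**2)[0, 1]

print(round(r_raw, 2), round(r_centered, 2))
```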

My Stan code for this part is:

```stan
theta = alpha[1] * X
      + alpha[2] * X .* time
      + alpha[3] * X .* square(time)
      + u[1][subject]
      + u[2][subject] .* time
      + u[3][subject] .* square(time);
```

The model fitting looks reasonable: all chains reach convergence, and the only warnings are about post-warmup transitions exceeding the maximum treedepth. My concern is computation time. One chain (2,000 iterations) takes about 150 seconds; if I remove `x*t^2`, it takes 45 seconds per chain, and if I remove both `x*t` and `x*t^2`, only 15 seconds.

Are there other ways to reduce computation time without removing covariates?
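For reference, here is the kind of reparameterization I am wondering about, sketched in NumPy rather than Stan: a thin QR decomposition of the correlated design columns, sampling on the orthonormal basis and mapping back afterwards. (Whether this carries over cleanly to the random-effects terms in my model is an assumption on my part.)

```python
import numpy as np

t = np.tile(np.linspace(0, 10, 20), 5)           # hypothetical time grid
X = np.column_stack([np.ones_like(t), t, t**2])  # correlated design columns

# Thin QR: X = Q @ R, with orthonormal columns in Q
Q, R = np.linalg.qr(X)
assert np.allclose(Q.T @ Q, np.eye(3))

# Sample coefficients theta_tilde on the Q basis (placeholder draw here),
# then recover the original-scale coefficients: alpha = R^{-1} theta_tilde
theta_tilde = np.array([1.0, 2.0, 3.0])
alpha = np.linalg.solve(R, theta_tilde)

# Same linear predictor, much better-conditioned geometry for the sampler
assert np.allclose(X @ alpha, Q @ theta_tilde)
```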

Any advice is welcome, thank you!

Some diagnostics: `get_num_leapfrog_per_iteration()` mostly returns 511, with some iterations at 1023.
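Those counts line up with saturated NUTS trees (my reading of the numbers): a full binary tree of depth d takes 2^d − 1 leapfrog steps, so 511 and 1023 correspond to depths 9 and 10, i.e. the default `max_treedepth` of 10 is being hit.

```python
# Leapfrog steps in a saturated NUTS binary tree of depth d
# (inferred from the observed 511/1023 counts): 2**d - 1
def leapfrog_steps(depth: int) -> int:
    return 2**depth - 1

print(leapfrog_steps(9), leapfrog_steps(10))  # 511 1023
```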

(pairs plot attached)

```
Inference for Stan model: 84a5589a580f1afb0030bfd212537eb6.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.
          mean se_mean   sd  2.5%   25%   50%   75% 97.5% n_eff Rhat
alpha[1]  0.47    0.01 0.23  0.03  0.32  0.47  0.62  0.94  1300    1
alpha[2]  1.01    0.02 0.68 -0.30  0.54  1.00  1.48  2.32  1088    1
alpha[3] -0.35    0.01 0.47 -1.27 -0.65 -0.35 -0.02  0.54  1125    1
Samples were drawn using NUTS(diag_e) at Sat Jun 27 16:28:41 2020.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
```

BTW, I also tried `metric = dense_e`; the computation time increased to 180 seconds.