Ornstein–Uhlenbeck correlation structure in brms

Hi, I am wondering how to implement Ornstein–Uhlenbeck correlation structure in brms.

This is how I am doing it, but I am not entirely sure if it is accurate:

(fit.ou.brms <- brm(y ~ X1 + X2 + X3 + gp(times, cov = "matern12"),
                    data = dat))


This looks right to me, as long as what you want is for a response that decomposes additively into an intercept, effects of X1, X2, and X3, a stationary OU process over times, and an iid residual (i.e. a “nugget”).
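Spelling that decomposition out (a sketch, under the default brms settings; the symbol names $\sigma_{gp}$ and $\ell$ are labels chosen here, corresponding to what brms reports as `sdgp` and `lscale`):

$$
y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \beta_3 X_{3i} + f(t_i) + \varepsilon_i,
\qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),
$$

where $f$ is a zero-mean Gaussian process with the Matérn-1/2 (exponential) kernel $k(s, t) = \sigma_{gp}^2 \exp(-|s - t| / \ell)$, which is exactly the covariance function of a stationary OU process, and $\sigma^2$ is the variance of the iid residual (the "nugget").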


Amazing, thank you. I was hoping to implement the same type of continuous correlation structure available in the nlme package.

Does this typically require more iterations to achieve convergence, or is increasing the sampler's "adapt_delta" usually sufficient to resolve convergence issues?

There are multiple possible reasons for poor convergence. Stan provides diagnostics (divergences, E-BFMI, R-hat, ESS) that can help pinpoint why convergence is poor, if convergence is indeed poor.
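To make that concrete, here is a minimal sketch of how those diagnostics can be inspected for a fitted brms model. This assumes a `brmsfit` object called `fit` produced with the default rstan backend; the functions below are re-exported by brms.

```r
# Overall summary: reports Rhat and bulk/tail ESS per parameter.
summary(fit)

# Per-parameter convergence diagnostics.
rhat(fit)          # values much above 1.01 suggest non-convergence
neff_ratio(fit)    # effective sample size relative to total draws

# Sampler-level diagnostics: divergences, treedepth, E-BFMI
# (assumes the rstan backend, where fit$fit is the underlying stanfit).
rstan::check_hmc_diagnostics(fit$fit)

# If divergences are the problem, refitting with a higher adapt_delta
# (and no other changes) is the usual first step:
fit2 <- update(fit, control = list(adapt_delta = 0.99))
```

Raising `adapt_delta` only helps with divergences; low ESS or high R-hat usually calls for more iterations or a reparameterization instead.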

OK, thanks. Regarding the nugget: I believe that in nlme the nugget is not included, and the same behavior can be achieved in glmmTMB via dispformula = ~ 0 (see below). So for brms, does this just mean fixing sigma to a small constant, like:

(fit.ou.brms <- brm(y ~ X1 + X2 + X3 + gp(times, cov = "matern12"),
                    sigma = 1e-6,
                    data = dat))

Continuous AR(1)

glmmTMB:

(fit.ou <- glmmTMB(y ~ X1 + X2 + X3 + (1|group) + ou(numFactor(times) + 0 | group),
                   dispformula = ~ 0, # ~ 0 indicates no nugget estimate
                   REML = TRUE,       # default: FALSE
                   data = dat))

nlme:

(fit.lme.car1 <- lme(fixed = y ~ X1 + X2 + X3,
                     random = ~ 1 | group,
                     correlation = corCAR1(form = ~ times | group),
                     data = dat))

I’m not completely up on the latest developments in brms, so take this with a grain of salt.

I don’t think it is possible to literally remove the nugget from a GP model in brms. brms provides Gaussian processes on the linear predictor and does not provide a way to suppress the error term to literally zero. However, there are two potential workarounds:

  • Constrain the residual variance to be tiny via the prior. This may or may not work well. It is likely to work best when the data are consistent with a tiny residual variance, and if this weren’t the case, it might be ill advised to suppress the residual anyway.
  • If your times are equally spaced, then the OU process is equivalent to an AR(1) process, and brms does provide a way to model AR(1) residual correlation via an ar() term in the model formula.
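A minimal sketch of both workarounds, assuming the data frame `dat` from the earlier posts and a reasonably recent brms version (the `constant()` prior requires one):

```r
library(brms)

# (1) Pin the residual SD to a tiny constant via a constant() prior,
#     approximating a model with no nugget. This can sample poorly if
#     the data are not consistent with a near-zero residual variance.
fit.no.nugget <- brm(
  y ~ X1 + X2 + X3 + gp(times, cov = "matern12"),
  prior = prior(constant(1e-6), class = "sigma"),
  data = dat
)

# (2) If `times` is equally spaced, model the residuals directly as
#     AR(1) within each group instead of using a GP term.
fit.ar1 <- brm(
  y ~ X1 + X2 + X3 + ar(time = times, gr = group, p = 1),
  data = dat
)
```

Note that in (2) the AR(1) structure applies to the residuals, so sigma is still estimated; it plays the role of the innovation scale rather than a separate nugget.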