- Operating System: MacOS
- brms Version: 2.15.0
At the time of this writing, Paul's list on GitHub issue #403 (see here) indicates brms can fit models with unstructured errors. This is one variant among an array of error structures (e.g., compound symmetry), and Paul has directed people to learn about these capabilities by executing `help("autocor-terms")`. Although that does lead to helpful documentation for structures such as compound symmetry (see `cosy()`) and AR(1) (see `ar()`), it's not clear to me how one would fit a multilevel growth model with an unstructured error matrix.
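For context, here is how those documented structures attach to a model formula. This is only a sketch using the `opposites` data introduced below, with the error-structure term standing in for the random effects:

```r
library(brms)

# compound symmetry: a constant residual correlation within person
fit.cosy <- brm(
  opp ~ 0 + Intercept + time + ccog + time:ccog + cosy(time = time, gr = id),
  data = opposites, family = gaussian
)

# AR(1): residual correlation decays with the lag between occasions
fit.ar1 <- brm(
  opp ~ 0 + Intercept + time + ccog + time:ccog + ar(time = time, gr = id, p = 1),
  data = opposites, family = gaussian
)
```

Neither of these, of course, is the fully unstructured matrix I'm after.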
Use case
In Chapter 7, Singer and Willett (2003) apply a variety of error structures to a growth model. The structural part of their baseline model is:

\text{opp}_{ij} = \gamma_{00} + \gamma_{10} \text{time}_{ij} + \gamma_{01} (\text{cog}_i - \overline{\text{cog}}) + \gamma_{11} \text{time}_{ij} (\text{cog}_i - \overline{\text{cog}}) + [\zeta_{0i} + \zeta_{1i} \text{time}_{ij} + \epsilon_{ij}],

where \text{opp}_{ij} is the test score for the i^\text{th} person on the j^\text{th} occasion and \text{cog}_i is the sole time-invariant covariate, which is each person's baseline level on some measure of cognitive skill. The notation (\text{cog}_i - \overline{\text{cog}}) indicates \text{cog}_i has been mean-centered. The data were collected over four waves, and \text{time}_{ij} is coded in integers 0 through 3.
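For completeness, the mean-centering is a one-liner. This assumes the raw cognitive-skill score lives in a column named `cog`; the UCLA copy of the data used below already ships with a pre-centered `ccog` column, so this step is not strictly necessary there:

```r
# mean-center the time-invariant covariate
# (assumes the raw score is in a column named `cog`)
opposites$ccog <- opposites$cog - mean(opposites$cog)
```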
Baseline model: The conventional multilevel growth model.
Singer and Willett’s first step (pp. 244–246) was to fit a conventional multilevel growth model. Using default priors, we can fit this in brms like so:
```r
# load
library(brms)

# download the data
opposites <- read.table(
  "https://stats.idre.ucla.edu/stat/r/examples/alda/data/opposites_pp.txt",
  header = TRUE, sep = ","
)

# fit the model with brms's default priors
fit.standard <-
  brm(data = opposites,
      family = gaussian,
      opp ~ 0 + Intercept + time + ccog + time:ccog + (1 + time | id))
```
If you’re curious, here’s the summary:
```r
print(fit.standard)
```

```
 Family: gaussian
  Links: mu = identity; sigma = identity
Formula: opp ~ 0 + Intercept + time + ccog + time:ccog + (1 + time | id)
   Data: opposites (Number of observations: 140)
Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup samples = 4000

Group-Level Effects:
~id (Number of levels: 35)
                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)          36.22      4.92    27.98    46.85 1.00     1255     1829
sd(time)               10.69      1.82     7.48    14.72 1.00     1159     1408
cor(Intercept,time)    -0.44      0.16    -0.71    -0.09 1.00     1520     1888

Population-Level Effects:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept   164.51      6.50   151.65   177.32 1.01      890     1254
time         26.95      2.14    22.78    31.16 1.00     1803     2512
ccog         -0.10      0.53    -1.17     0.97 1.00     1105     1533
time:ccog     0.43      0.17     0.09     0.76 1.00     1563     1954

Family Specific Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma    12.92      1.17    10.93    15.48 1.00     1414     2063

Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
```
The data, if you're concerned, are downloaded directly from the UCLA Institute for Digital Research and Education (see here and, perhaps more importantly, here).
The model under question: The conditional growth model with an unstructured error covariance matrix.
I'm looking to fit the alternative to this baseline model which contains an unstructured error covariance matrix. Does anyone know how to specify this in brms?
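For concreteness, here is a sketch of the kind of specification I have in mind. It is written with the `unstr()` autocorrelation term that later brms releases (2.16.0+, not the 2.15.0 noted above) provide, assuming its interface mirrors `cosy()` and `ar()`; occasion-specific residual standard deviations would additionally need a distributional model for `sigma`:

```r
library(brms)

# sketch only: unstr() requires brms >= 2.16.0
# unstructured residual correlations across the four occasions,
# with occasion-specific residual SDs via a model for log(sigma)
fit.unstructured <- brm(
  bf(opp ~ 0 + Intercept + time + ccog + time:ccog + unstr(time = time, gr = id),
     sigma ~ 0 + factor(time)),
  data = opposites, family = gaussian
)
```

Note the `(1 + time | id)` random effects are dropped here: in Singer and Willett's presentation, the unstructured model places all person-level dependence in the residual covariance matrix itself.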