Hi all,

I am trying to latent cluster-mean center my predictors in a multilevel model, instead of centering them on their observed means, in order to avoid bias in the resulting parameter estimates.

(If we’re able to solve this, I’d write it up properly and submit to AMPPS as a practical tutorial, and I’d be more than happy to have coauthors! 😃 I have a longer blog post draft exploring this, massively WIP but with code and data, here)

## Background

The issue concerns centering observed predictor variables in hierarchical models of multiple people's data, in order to separate within-person and between-person associations. If one uses the raw values of observed predictors in hierarchical models, the resulting coefficients are mixtures of within- and between-person associations. This is not optimal, because psychologists like me want to know what is going on *within a person*. As a solution, psychologists have long known to person-mean center their variables in order to study within-person (causal) processes in isolation from between-person associations (e.g., Enders & Tofighi, 2007).
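For concreteness, the standard *observed* person-mean centering looks like this (a minimal sketch with made-up data; `dep_b` and `dep_c` are my own names for the between- and within-person components):

```r
library(dplyr)

# Toy long-format data: two people, three occasions each
d_toy <- tibble(
  person = rep(1:2, each = 3),
  dep    = c(0.4, -0.7, -1.5, 0.7, 1.5, 0.0)
)

# Observed person-mean centering: split each predictor into a
# between-person part (the person's observed mean) and a
# within-person part (the occasion-level deviation from that mean)
d_centered <- d_toy %>%
  group_by(person) %>%
  mutate(
    dep_b = mean(dep),   # between-person component (observed mean)
    dep_c = dep - dep_b  # within-person component (deviation)
  ) %>%
  ungroup()
```

`dep_c` then carries the within-person variation (it averages to zero within each person), and `dep_b` the between-person variation.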

However, especially if you have few observations per person, centering on the *observed* mean will bias your parameter estimates, because the actual mean is not an observed quantity but a latent one that should be estimated accordingly. The solution, *latent* person-mean centering (Asparouhov & Muthén, 2010; McNeish & Hamaker, 2020), is available in the commercial Mplus software. (The relevant equations are in McNeish and Hamaker, which I've found online.)
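A toy simulation (my own, not from the paper) shows why the observed mean is a poor stand-in for the latent mean when there are few observations per person: the variance of the observed person means conflates the true between-person variance with within-person sampling noise.

```r
set.seed(1)

n_person <- 1000  # people
n_obs    <- 5     # observations per person (few!)

# True latent person means have SD 1; occasion-level noise has SD 2
mu_i <- rnorm(n_person, mean = 0, sd = 1)
y <- matrix(
  rnorm(n_person * n_obs, mean = rep(mu_i, each = n_obs), sd = 2),
  nrow = n_person, byrow = TRUE
)

# Observed person means are noisy estimates of the latent means:
# their variance is about 1 + 2^2 / 5 = 1.8, not the true 1
obs_means <- rowMeans(y)
var(obs_means)
```

Treating those noisy observed means as if they were the true person means is what produces the biases discussed below.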

The problem with this is, of course, that ~~it's not in Stan~~ ~~it's not in brms~~ most people don't have access to Mplus because it is not FOSS. My goal is to figure out how to latent-mean center predictor variables in hierarchical models of longitudinal data with brms, so that this procedure, which provides unbiased estimates, becomes available to as many researchers as possible.

## Analysis

Say I have many measurements over time from many individuals on their urge to smoke (`urge`) and depression (`dep`), and I am interested in how depression and the urge to smoke on the previous occasion predict the urge to smoke *within-person*. Here's some data from McNeish & Hamaker (2020) (my blog code loads and processes this automatically in R if you'd like to follow along):

| person | time | urge | dep |
|---|---|---|---|
| 1 | 1 | 0.34 | 0.43 |
| 1 | 2 | -0.48 | -0.68 |
| 1 | 3 | -4.44 | -1.49 |
| 1 | 4 | -4.19 | -0.74 |
| 1 | 5 | -0.91 | -0.52 |
| 2 | 1 | 1.65 | 0.68 |
| 2 | 2 | 0.31 | 1.49 |
| 2 | 3 | 0.46 | 0.03 |
| 2 | 4 | -1.09 | -1.02 |
| 2 | 5 | 1.67 | 1.07 |

(Just showing the first few observations for 2 people in that table.) A knee-jerk multilevel model of this is

$$
\begin{aligned}
\text{Urge}_{ti} &\sim \text{Normal}(\mu_{ti}, \sigma^2) \\
\mu_{ti} &= (\bar{\alpha} + \alpha_i) + (\bar{\phi} + \phi_i)\,\text{Urge}_{(t-1)i} + (\bar{\beta} + \beta_i)\,\text{Depression}_{ti},
\end{aligned}
$$

where $\text{Urge}_{ti}$ is the urge to smoke at time *t* for person *i*. We then have population-level effects (those with bars) and person-specific deviations (without bars). As noted above, if we use the observed $\text{Urge}_{(t-1)i}$ and $\text{Depression}_{ti}$, the resulting parameter estimates will be biased: specifically, they suffer from Nickell's bias (negative bias in the autoregressive effect) and Lüdtke's bias (bias in the other time-varying effects) (McNeish & Hamaker, 2020).
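The autoregressive part is easy to demonstrate: centering short AR(1) series on their *observed* person means drags the estimated autoregressive coefficient downward (Nickell's bias). A quick simulation of my own (plain pooled OLS on centered values, just to show the direction of the bias):

```r
set.seed(2)

phi_true <- 0.5   # true autoregressive effect
n_person <- 2000  # many people
n_t      <- 5     # few time points per person, as in intensive longitudinal data

# Simulate one stationary AR(1) series of length n_t
sim_one <- function() {
  y <- numeric(n_t)
  y[1] <- rnorm(1, 0, sqrt(1 / (1 - phi_true^2)))
  for (t in 2:n_t) y[t] <- phi_true * y[t - 1] + rnorm(1)
  y
}

Y  <- replicate(n_person, sim_one())  # n_t x n_person matrix, one column per person
Yc <- sweep(Y, 2, colMeans(Y))        # center each series on its observed mean

# Pooled regression of centered y_t on centered y_{t-1}
x <- as.vector(Yc[1:(n_t - 1), ])
y <- as.vector(Yc[2:n_t, ])
phi_hat <- unname(coef(lm(y ~ x))["x"])
phi_hat  # well below the true 0.5
```

With only 5 time points per person, the estimate lands far below the true value even with 2000 people, so more data per person (not more people) is what shrinks this bias.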

*Latent* mean centering pivots on recognizing that (McNeish & Hamaker, 2020, p. 618)

$$
\begin{aligned}
\text{Urge}_{(t-1)i} &= \text{Urge}^{b}_{i} + \text{Urge}^{c}_{(t-1)i} \\
\text{Depression}_{ti} &= \text{Depression}^{b}_{i} + \text{Depression}^{c}_{ti}.
\end{aligned}
$$

That is, the observed values of urge to smoke (on the previous occasion) and depression are sums of the latent person means (superscript *b*) and their occasion-level deviations (superscript *c*).

## Code

In case you didn't notice, I am following the Mplus tutorial of McNeish & Hamaker (2020); their Mplus code for this model is here. It is not at all transparent to me, but maybe someone understands it.

I have tried to replicate this in brms with the following code (see my blog for reproducible code):

```r
library(dplyr)
library(brms)

m4_data <- d %>%
  # Create the lagged predictor within each person
  group_by(person) %>%
  mutate(urge1 = lag(urge)) %>%
  ungroup()

m4_latent_formula <- bf(
  urge ~ alpha + phi * (urge1 - alpha) + beta * (dep - depb),
  alpha ~ 1 + (1 | person), # Latent intercept (= latent mean) of urge
  # urge1b ~ 1 + (1 | person), # This is alpha! Weird & awesome (M&H2020 p.618)
  depb ~ 1 + (1 | person),  # Latent mean of depression
  phi ~ 1 + (1 | person),   # Slope of latent person-mean centered lagged urge on urge
  beta ~ 1 + (1 | person),  # Slope of latent person-mean centered depression on urge
  nl = TRUE
) +
  gaussian()

# Hacky way to set some pretty random priors
p <- get_prior(m4_latent_formula, data = m4_data) %>%
  mutate(
    prior = case_when(
      class == "b" & coef == "Intercept" ~ "normal(0, 1.5)",
      class == "sd" & coef == "Intercept" ~ "student_t(7, 0, 1)",
      TRUE ~ prior
    )
  )

m4_latent <- brm(
  m4_latent_formula,
  data = m4_data,
  prior = p,
  control = list(adapt_delta = 0.99),
  file = "m4_latent"
)
```

(I specified the above code based on @martinmodrak's suggestion here. I haven't used the nonlinear syntax before, so this is probably quite wrong, for other reasons too, no doubt!) The model samples well, but the estimates don't quite match those reported in the Mplus tutorial of McNeish & Hamaker (2020):

| variable | median | q2.5 | q97.5 |
|---|---|---|---|
| b_alpha_Intercept | -0.11 | -0.32 | 0.09 |
| b_depb_Intercept | -0.10 | -0.22 | 0.04 |
| b_phi_Intercept | 0.21 | 0.18 | 0.25 |
| b_beta_Intercept | 0.78 | 0.61 | 0.94 |
| sd_alpha_Intercept | 0.56 | 0.41 | 0.76 |
| sd_depb_Intercept | 0.00 | 0.00 | 0.05 |
| sd_phi_Intercept | 0.02 | 0.01 | 0.03 |
| sd_beta_Intercept | 0.75 | 0.57 | 0.99 |
| sigma | 1.14 | 1.10 | 1.19 |

The estimates for phi and beta match, but depb and alpha are way off. The group-level SDs seem very similar, though? I don't know why; I think I've specified the right model. Maybe Mplus has weird default priors?

Help needed :)

## Footnote

This issue has been discussed here before, with no resolution: