Latent mean centering (latent covariate models) in brms

The switcheroo idea can work, but you need to wrap the second non-linear equation in `nlf()`. This is because in brms, when you set `nl = TRUE`, only the main (first) formula is treated as non-linear; any additional non-linear formulas need to be specified with `nlf()`.

But when I tested this, the model did not converge over a couple of tries:

```r
m4_latent_formula <- bf(
  # urge model
  urge ~ alphaU + phiU * (urge1 - alphaU) + betaU * (dep - alphaD),
  alphaU ~ 1 + (1 | person),
  phiU ~ 1 + (1 | person),
  betaU ~ 1 + (1 | person),

  # dep model
  nlf(dep ~ alphaD + phiD * (dep1 - alphaD) + betaD * (urge - alphaU)),
  alphaD ~ 1 + (1 | person),
  phiD ~ 1 + (1 | person),
  betaD ~ 1 + (1 | person),
  nl = TRUE
) +
  gaussian()
```

Thanks for this @Mauricio_Garnier-Villarre! I modified this code a bit and gave it a shot.

🎉⭐️ It works! ⭐️🎉

(I think 🤣)

Huge thanks to everyone who helped with this!

I don’t think there is a reason to write a whole separate model for dep, because I just want to get its mean (for the sole purpose of using it in the first non-linear formula). So I edited your suggestion a bit to estimate just the mean with an `nlf()` spec:

```r
m4_latent_formula <- bf(
  urge ~ alpha + phi * (urge1 - alpha) + beta * (dep - depB),
  alpha ~ 1 + (1 | person),
  phi ~ 1 + (1 | person),
  beta ~ 1 + (1 | person),
  nlf(depB ~ depBI),
  depBI ~ 1 + (1 | person),
  nl = TRUE
) +
  gaussian()
```

```r
library(brms)
library(dplyr)

p <- get_prior(m4_latent_formula, data = m4_data) %>%
  mutate(
    prior = case_when(
      class == "b" & coef == "Intercept" ~ "normal(0, 1)",
      class == "sd" & coef == "Intercept" ~ "student_t(7, 0, 1)",
      TRUE ~ prior
    )
  )
```

```r
m4_latent <- brm(
  m4_latent_formula,
  data = m4_data,
  # A prior is required for non-linear models
  prior = p,
  control = list(adapt_delta = 0.99),
  file = "m4_latent_v3"
)
```
| Variable | Result (brms) | Authors |
| --- | --- | --- |
| b_alpha_Intercept (α) | -0.11 [-0.32, 0.10] | -0.01 [-0.18, 0.16] |
| b_phi_Intercept (ϕ) | 0.21 [0.18, 0.25] | 0.21 [0.17, 0.24] |
| b_beta_Intercept (β) | 0.79 [0.61, 0.97] | 0.80 [0.61, 0.95] |
| b_depBI_Intercept (DepB) | -0.10 [-0.24, 0.04] | 0.01 [-0.02, 0.04] |
| var_alpha_Intercept (σ²α) | 0.56 [0.41, 0.78] | 0.60 [0.44, 0.83] |
| var_phi_Intercept (σ²ϕ) | 0.02 [0.01, 0.03] | 0.02 [0.01, 0.03] |
| var_beta_Intercept (σ²β) | 0.76 [0.58, 1.04] | 0.79 [0.61, 0.95] |
| var_depBI_Intercept (σ²DepB) | 0.01 [0.00, 0.05] | 0.01 [0.00, 0.01] |
| sigma (σ²) | 1.14 [1.10, 1.19] | 1.14 [1.09, 1.19] |
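For anyone reproducing this, the quantities in the table can be pulled from the fitted model with the standard brms extractors (a sketch, assuming `m4_latent` is the fitted object from above):

```r
# Population-level posterior means and 95% intervals (the b_*_Intercept rows)
fixef(m4_latent)

# Group-level standard deviations; square these to get the var_* rows
VarCorr(m4_latent)

# Full summary, including the residual sigma
summary(m4_latent)
```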

So the real “trick” here is to wrap the predicted parameter in nlf(), and this should be a general solution.
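To make the pattern explicit, here is a generic sketch with placeholder names (`y`, `x`, `xB`, `xBI`, `b0`, `b1` are all hypothetical, not from the model above): the latent mean `xB` is declared as its own non-linear parameter via `nlf()` and then given a linear predictor with an intercept plus person-level deviations.

```r
generic_latent_centering <- bf(
  # Main non-linear formula: x is centered on its latent person mean xB
  y ~ b0 + b1 * (x - xB),
  b0 ~ 1 + (1 | person),
  b1 ~ 1 + (1 | person),
  # Wrap the predicted parameter in nlf() ...
  nlf(xB ~ xBI),
  # ... and model it with an intercept and person-level random effects
  xBI ~ 1 + (1 | person),
  nl = TRUE
)
```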

The results are not identical to the Mplus ones reported in the target paper, but I am not too worried about that, because there will be some differences in the priors, and I could run longer chains, etc. I’m going to test this a bit more before marking your post as a solution.

Amazing work. Thank you!


OK.

I’ve done some checks and this works afaict. I was cheeky and marked my own post as the solution (it was closest to my original question and works with the original code provided), but want to recognize that I wouldn’t have been able to do so without @Mauricio_Garnier-Villarre’s example code!

I’ve started a GitHub repository working on a tutorial-style manuscript for a psychology audience (not statisticians) on this topic with Joran Jongerling (who might be @Joran here [? 😀]). It is currently private but as I said before, I would be glad to invite people who would like to contribute to the manuscript as coauthors. @Mauricio_Garnier-Villarre @simonbrauer @e.m.mccormick please let me know if you’d like to join—I currently list you only in the acknowledgements section of draft 0.0001😀.

In any case this is marked as solved, huge thanks to everyone!
