(More) Advice on a Multilevel Time Series Model in BRMS

Hey All!

This is a follow-up to a message posted on the old forum, with link and responses here.

Basically, I am trying to fit a multilevel time series model using brms, and I have one question about implementation and another concerning efficiency. The basic information about the paradigm itself can be found here (quoted from the above link):

I am trying to fit a multilevel time series model to some physiological data I have from a psychological experiment. Specifically, in this experiment 36 participants completed a cognitive task involving 30 trials for each of 3 conditions. For each trial, I have a time series of approximately 200 time points reflecting physiological arousal throughout the trial. This means I have 36 x 30 x 3 x 200 = 648,000 data points (minus some missing ones) that I would like to model hierarchically as a function of condition while treating subject as a random effect.

I was told last time I posted that I should try to find a faster computer, so as we speak I am fitting the model on Google Compute, on a standard 8-core machine with 30 GB of RAM running Windows Server. The code I am running at the moment is:

brm(formula = y ~ condition + s(time, by = condition) + (1 | id),
    data = dat,
    family = gaussian(),
    chains = 8,
    cores = 8,
    iter = 2000) -> bm1

Since beginning the fit, I spoke to someone with expertise in the mgcv package (from which, I believe, the s() component above is borrowed), who suggested that I should really be using s(time, id, bs="fs", m=1, by = condition), which would allow the curve to vary as a function of time, subject, and condition – the analog of a random slope in a more typical linear model (at present I include only a random intercept by subject). This is new to mgcv and does not appear to work in brms just yet. As an alternative, my colleague also proposed a combined predictor fit with the following code:

dat$IdCond <- interaction(dat$id, dat$condition, drop=TRUE)
s(time, IdCond, bs="fs", m=1)

But this was presented as a less ideal option. My question would be, are there plans to implement something like either option in the near future?
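For reference, here is roughly how that combined-factor version would slot into my existing call (a sketch only, reusing the IdCond factor defined above):

dat$IdCond <- interaction(dat$id, dat$condition, drop=TRUE)
# Factor smooth over time, varying by the combined id/condition factor
brm(formula = y ~ condition + s(time, IdCond, bs="fs", m=1),
    data = dat,
    family = gaussian(),
    chains = 8,
    cores = 8,
    iter = 2000) -> bm2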

My second question is a direct follow-up concerning efficiency. When I ran this model on my laptop, it took days to complete, so I tried moving to Google Compute. I can get more cores this way, which is great, but the fit is still taking ~4-5 days, and I worry it will end with divergent transitions (it is still running). Is there any way to fit this model more efficiently? Or is this just the cost of using such a cutting-edge approach? I ask in part because, if I do work out how to fit the random curves per subject, I anticipate the required time will increase even more!

Cheers!
Jon

  • Operating System: Windows Server (Google Compute Cluster)
  • brms Version: 2.3.0 (fresh CRAN install)

Hi Jon,

All kinds of smooths set up via s() should be working in brms. When I try out your example with some dummy data, I get the following error from within mgcv:

Error in as(X * as.numeric(object$fac == flev[j]), "dgCMatrix") : 
  no method or default for coercing “numeric” to “dgCMatrix”

Is this error also what you see? Since this feature is new to mgcv, I think it is likely that this is a bug in mgcv. I will email Simon Wood about that.

With regard to your second question: Which model takes so long to converge? The one with

dat$IdCond <- interaction(dat$id, dat$condition, drop=TRUE)
s(time, IdCond, bs="fs", m=1)?

In any case, I honestly don’t know how to improve sampling speed for these models, as I have little control over what mgcv gives me as output matrices.

Hey Paul!

Thanks for continuing to humour me on this!
I just checked again and the following code runs fine for me using mgcv 1.8-23:

bam(c_pupil~condition + s(time, id, bs="fs", m=1, by = condition), 
    data = dat) -> mgcv1

But if I run the equivalent code in brms 2.3.0:

brm(formula = y ~ condition + s(time, id, bs="fs", m=1, by = condition),
    data = dat,
    family = gaussian(),
    chains = 8,
    cores = 8,
    iter = 2000) -> bm1

I receive the same error as in your message. So I suspect your evaluation is correct. I look forward to hearing what Simon has to say.

As for the time it takes – I am currently only trying to fit:

brm(formula = y ~ condition + s(time, by = condition) + (1 | id),
    data = dat,
    family = gaussian(),
    chains = 8,
    cores = 8,
    iter = 2000) -> bm1

This has no random curves – only a random intercept – as I started it before my colleague recommended the approaches detailed in my recent message. But even that is taking forever. I think I just have too much data, and may need to downsample.
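In case it is useful context: by downsampling I mean something like keeping every 4th time point within each series. The factor of 4 is arbitrary, and the trial column name here is a placeholder for whatever my actual per-trial grouping variable ends up being:

library(dplyr)
# Keep every 4th time point within each id x condition x trial series
dat_small <- dat %>%
  group_by(id, condition, trial) %>%
  filter(row_number() %% 4 == 1) %>%
  ungroup()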

Cheers!
Jon

Has there been any progress on this? I’m also looking to use the “random curves” option (bs = "fs"), but I’m getting the same error message regarding a “dgCMatrix”.

Thanks!

Some time ago, Simon Wood made changes to mgcv that allow "fs" splines to work with the latest CRAN version of brms. However, I am not sure whether he has released that mgcv version yet. Have you tried it with the latest CRAN versions of both mgcv and brms?
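To check what you currently have installed, packageVersion() is handy:

packageVersion("mgcv")  # needs to be recent enough to include Simon's fix
packageVersion("brms")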

Thanks, Paul. I updated both packages and the model is up and running now, although I suspect it will take too much time:

1000 transitions using 10 leapfrog steps per transition would take 8400.52 seconds.

Much time may be an understatement…

Yes, I might be fossilized by the time the model has run its course. Hopefully my descendants will check the results (on brms version 666.666) and send them back by time machine.