Short summary of the problem
I’m trying to run a suite of spline regressions using brms on some very large datasets. It’s painfully slow: a few hours per model if I pre-average across items (about 2.5 days total once all models are run), and around 10 hours per model if I fit an actual mixed-effects model.
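(By “pre-average across items” I mean collapsing the trial-level data to one accuracy value per subject and condition before fitting, roughly as in the sketch below; the column names match the model call further down, and the exact grouping is just my guess at what I’d need.)

```r
# Rough sketch of the pre-averaging step: one mean accuracy per
# subject x condition, instead of trial-level data. Column names
# (id, age, congruent, accuracy) follow the model formula below.
library(dplyr)

temp_avg <- temp %>%
  group_by(id, age, congruent) %>%
  summarise(accuracy = mean(accuracy), .groups = "drop")
```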
I’ve got an Apple M1 (and also access to a cluster), so I’m trying to figure out whether I can speed things up, particularly whether I can make use of the GPU. I have seen this post and this post, but they mostly discuss whether GPU acceleration would be possible, not how to actually do it.
Following the brms documentation (here), I tried:
```r
fit.congruent <- brm(accuracy ~ s(age, by = congruent) + congruent + (1 | id),
                     family = "bernoulli",
                     prior = c(set_prior("normal(0, 1)", class = "b")),
                     control = list(max_treedepth = 15, adapt_delta = .95),
                     data = temp, iter = 1000, chains = 4, cores = 4,
                     opencl = opencl(c(0, 0)))
```
The output was … well, there actually isn’t any output. If I try to inspect the output, here’s what I see:
```r
> str(fit.congruent)
>
```
which is … odd?
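One guess: rereading the brm() docs, it looks like the opencl argument only takes effect with the cmdstanr backend, which I didn’t set. So maybe the call needs to be something like this (untested):

```r
# Untested guess: per the brm() docs, opencl() only works with the
# cmdstanr backend, so backend = "cmdstanr" may be the missing piece.
fit.congruent <- brm(accuracy ~ s(age, by = congruent) + congruent + (1 | id),
                     family = "bernoulli",
                     prior = c(set_prior("normal(0, 1)", class = "b")),
                     control = list(max_treedepth = 15, adapt_delta = .95),
                     data = temp, iter = 1000, chains = 4, cores = 4,
                     backend = "cmdstanr",      # required for opencl, I think
                     opencl = opencl(c(0, 0)))  # OpenCL platform 0, device 0
```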
From poking around, I suspect there’s more that I need to do to get this working, but I’m having trouble tracking down documentation. I mostly see somewhat oblique discussions on this forum (oblique to me, anyway; no doubt they make sense to power Stan users). And some of the discussion seems to suggest that using the GPU might not help much anyway … maybe?
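For what it’s worth, here is my best guess at the rest of the setup recipe, pieced together from the cmdstanr documentation; the function names are real as far as I can tell, but I haven’t gotten this to work end to end:

```r
# Untested sketch of the setup I think is required before the call above.
# (This assumes CmdStan is already installed; if not, I believe
# cmdstanr::install_cmdstan() would come first.)
install.packages("cmdstanr",
                 repos = c("https://mc-stan.org/r-packages/", getOption("repos")))

# Add STAN_OPENCL to CmdStan's make/local and rebuild, so that models
# get compiled with OpenCL support.
cmdstanr::cmdstan_make_local(cpp_options = list("STAN_OPENCL" = TRUE))
cmdstanr::rebuild_cmdstan()
```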
Is this something I want to pursue? If so, how do I pursue it? Is there a step-by-step tutorial somewhere? Could somebody please make one?
- Operating System: macOS Big Sur
- brms Version: 2.15.0