That’s about the scale of a dataset I’m currently working on, except I’m doing:
```r
x ~ (a+b+c+d+e+f+g)^3 + (1 + (a+b+c+d+e+f+g)^3 | q | id)
y ~ (a+b+c+d+e+f+g)^3 + (1 + (a+b+c+d+e+f+g)^3 | q | id)
z ~ (a+b+c+d+e+f+g)^3 + (1 + (a+b+c+d+e+f+g)^3 | q | id)
```
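Spelled out as a full brms call, it's roughly the sketch below (the data frame `dat` and the sampler settings are stand-ins, and I'm showing `bernoulli()` since everything here is dichotomous):

```r
library(brms)

# Shared predictor structure: 7 main effects plus all 2- and 3-way
# interactions = 63 population-level terms (plus the intercept).
# The middle |q| ties the group-level effects of the three responses
# into one joint correlation matrix.
f <- bf(x ~ (a + b + c + d + e + f + g)^3 +
          (1 + (a + b + c + d + e + f + g)^3 | q | id)) +
  bf(y ~ (a + b + c + d + e + f + g)^3 +
       (1 + (a + b + c + d + e + f + g)^3 | q | id)) +
  bf(z ~ (a + b + c + d + e + f + g)^3 +
       (1 + (a + b + c + d + e + f + g)^3 | q | id)) +
  set_rescor(FALSE)  # no residual correlations for non-Gaussian families

fit <- brm(
  formula = f,
  data    = dat,          # placeholder: data frame with a:g, x:z, q, id
  family  = bernoulli(),  # all responses are dichotomous
  chains  = 4, cores = 4  # placeholder sampler settings
)
```

Since the `| q |` term links all three responses, the group-level correlation matrix covers 3 × (1 intercept + 63 slopes) = 192 effects, estimated from just 60 levels of `id`.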
But aside from some over-regularization of the correlations, which I'm still working on, it at least fits in a few hours. Mind you, all my variables are dichotomous, so they benefit greatly from the reduced-redundant-computation trick I linked above.
Oh, and I guess it helps that while the raw number of observations is in the 30k+ range, there are only 60 levels of `id`.