qr_Q using tons of RAM?

I have a model where I'm using the QR reparameterization of a 30000x2 input matrix, and the initial transform operation seems to require a huge amount of memory. I'm on macOS 10.13 with 8 GB of RAM. When I run the model on multiple cores, the OS reports that application memory is full, and when I run on one core, Activity Monitor shows 60+ GB of RAM in use, which is obviously a bug. Any ideas what's going on? Should I expect this much RAM usage doing the transform on a matrix that size?

Yes, because it does the fat QR decomposition, resulting in a Q matrix that is 30000x30000 (roughly 30000^2 * 8 bytes = 7.2 GB per copy at double precision, before any intermediates). It is much better to do the QR decomposition in basically any other software, specifically one that implements the thin QR factorization, where Q would be 30000x2 and R would be 2x2.
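To see the difference in R (just a sketch; qr.Q returns the thin Q unless you ask for complete = TRUE):

x <- matrix(rnorm(30000 * 2), nrow = 30000)  # same shape as the data above
qr_x <- qr(x)
Q <- qr.Q(qr_x)   # 30000 x 2: the thin Q (complete = FALSE is the default)
R <- qr.R(qr_x)   # 2 x 2 upper triangular
all.equal(Q %*% R, x)   # TRUE: the thin factorization reconstructs x
# qr.Q(qr_x, complete = TRUE) would materialize the full 30000 x 30000 Q,
# which is the 7.2 GB object you want to avoid.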


Ah, I see. I think rstanarm has the proper implementation of the thin QR decomposition in R, yes?

You can just use the one that comes with base R.

Oh, yes, peeking at the rstanarm code, I see that qr.Q(qr(x)) is the built-in R way. Thanks!
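For anyone who finds this later, here is a sketch of the full recipe I ended up with. The sqrt(n - 1) scaling follows the QR reparameterization section of the Stan user's guide, so double-check it against your version of the manual:

n <- nrow(x)
qr_x <- qr(x)
Q_ast <- qr.Q(qr_x) * sqrt(n - 1)   # thin Q, scaled as the manual suggests
R_ast <- qr.R(qr_x) / sqrt(n - 1)
R_ast_inverse <- solve(R_ast)       # to recover beta = R_ast_inverse %*% theta
# pass Q_ast and R_ast_inverse to Stan as data in place of x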

Is this a “nobody has time to implement” kind of thing or is there a good reason we don’t have it?

We don't have it because Eigen doesn't have it. A thin QR isn't a huge priority for Eigen because people who use Eigen typically are not constructing the Q matrix explicitly but rather multiplying by Q through its Householder expression template.

Thanks for the reminder. I know I saw some discussion around this before but didn’t find it.

Ah, Eigen (now?) has an example of how to do a thin QR:

#include <Eigen/Dense>
using namespace Eigen;

int main() {
  MatrixXf A(MatrixXf::Random(5, 3)), thinQ(MatrixXf::Identity(5, 3)), Q;
  HouseholderQR<MatrixXf> qr(A);
  Q = qr.householderQ();             // materializes the full 5x5 Q
  thinQ = qr.householderQ() * thinQ; // applying the Householder sequence to a
                                     // 5x3 identity yields the thin 5x3 Q
}