qr_Q using tons of RAM?


#1

I have a model where I’m using the QR reparameterization of a 300000x2 input matrix, and it seems that the initial transform operation requires a huge amount of memory. I’m on macOS 10.13 with 8GB, and when I try the model on multiple cores, the OS reports full application memory. When I use one core and look at activity monitor, it shows 60+GB of RAM being used, which is obviously a bug. Any ideas what’s going on? Should I expect this much RAM usage doing the transform on a matrix that size?


#2

Yes, because it does the fat QR decomposition, which produces a Q matrix that is 300000x300000. It is much better to do the QR decomposition in basically any other software, specifically one that implements the thin QR factorization, where Q would be 300000x2 and R would be 2x2.
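To see why the memory blows up, here is a quick back-of-the-envelope calculation (a plain-Python sketch, assuming 8-byte doubles and the 300000x2 input from the original post):

```python
n, k = 300_000, 2
bytes_per_double = 8

# Fat QR: Q is n x n
fat_q_bytes = n * n * bytes_per_double
# Thin QR: Q is only n x k
thin_q_bytes = n * k * bytes_per_double

print(f"fat Q:  {fat_q_bytes / 1e9:.0f} GB")   # 720 GB
print(f"thin Q: {thin_q_bytes / 1e6:.1f} MB")  # 4.8 MB
```

So even before the solver does any work, just storing the fat Q for a matrix this tall is hundreds of gigabytes, while the thin Q is a few megabytes.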


#3

Ah, I see. I think this has the proper implementation of thin QR decomposition in R, yes?


#4

You can just use the one that comes with R.


#5

Oh, yes, peeking at the rstanarm code, I see that qr.Q(qr(x)) is the built-in R way. Thanks!


#6

Is this a “nobody has time to implement” kind of thing or is there a good reason we don’t have it?


#7

We don’t have it because Eigen doesn’t have it. A thin QR isn’t a huge priority for Eigen because people who use Eigen typically are not constructing the Q matrix but rather are using the expression template of the columns of Q.
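For intuition about what the thin factorization computes without ever forming an n x n matrix, here is an illustrative plain-Python sketch using modified Gram-Schmidt (a hypothetical teaching example only; R's qr() and Eigen actually use Householder reflections, which are more numerically stable):

```python
import math

def thin_qr(a):
    """Thin QR of a tall n x k matrix (list of rows) via modified Gram-Schmidt.
    Returns Q (n x k, orthonormal columns) and R (k x k, upper triangular)."""
    n, k = len(a), len(a[0])
    q = [[a[i][j] for j in range(k)] for i in range(n)]  # working copy, becomes Q
    r = [[0.0] * k for _ in range(k)]
    for j in range(k):
        for i in range(j):
            # project out the component along the already-orthonormalized column i
            r[i][j] = sum(q[m][i] * q[m][j] for m in range(n))
            for m in range(n):
                q[m][j] -= r[i][j] * q[m][i]
        # normalize column j
        r[j][j] = math.sqrt(sum(q[m][j] ** 2 for m in range(n)))
        for m in range(n):
            q[m][j] /= r[j][j]
    return q, r  # no n x n matrix is ever allocated
```

For a 300000x2 input this stores only a 300000x2 Q and a 2x2 R, which is exactly the memory profile the fat decomposition fails to deliver.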


#8

Thanks for the reminder. I know I saw some discussion around this before but didn’t find it.


#9

Ah, Eigen (now?) has an example of how to do thin QR:

#include <Eigen/Dense>
using namespace Eigen;

MatrixXf A(MatrixXf::Random(5,3)), thinQ(MatrixXf::Identity(5,3)), Q;
HouseholderQR<MatrixXf> qr(A);
Q = qr.householderQ();              // full 5x5 Q, materialized
thinQ = qr.householderQ() * thinQ;  // thin 5x3 Q: reflections applied to a 5x3 identity