I guess any time you have linear predictors, then you could throw in a QR reparameterization and see if that gets you anywhere (page 125 of the 2.17.0 manual).
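For reference, a minimal sketch of the QR trick on a plain linear regression, roughly following the manual's example (N, K, x, y, alpha, theta, sigma here are the manual's placeholder names, not anything from your model):

```stan
data {
  int<lower=0> N;
  int<lower=0> K;
  matrix[N, K] x;
  vector[N] y;
}
transformed data {
  // Thin-and-scale the QR decomposition as in the manual's regression example
  matrix[N, K] Q_ast = qr_Q(x)[, 1:K] * sqrt(N - 1);
  matrix[K, K] R_ast = qr_R(x)[1:K, ] / sqrt(N - 1);
  matrix[K, K] R_ast_inverse = inverse(R_ast);
}
parameters {
  real alpha;
  vector[K] theta;      // coefficients on Q_ast
  real<lower=0> sigma;
}
model {
  y ~ normal(Q_ast * theta + alpha, sigma);
}
generated quantities {
  vector[K] beta = R_ast_inverse * theta;  // recover coefficients on x
}
```

The sampler works with theta, which tends to be much better conditioned than beta when the columns of x are correlated, and beta gets recovered afterwards.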
I don’t think you can get away from the solve if that’s what the model says. It’s hard for me to check the sparse implementation off the top of my head, but let me write a couple of equations and you can check whether we’re on the same page.
We have:
y = (I - \rho W)^{-1} (\beta_0 + X \beta + W X \theta + \epsilon)
Assuming \epsilon is IID normal noise with standard deviation \sigma, this is the same as saying:
z \sim N(\beta_0 + X \beta + W X \theta, \sigma^2 I)
y = (I - \rho W)^{-1} z
where N is a multivariate normal and I is the identity matrix.
We want a distribution on y so we can write the likelihood, so just pushing the (I - \rho W)^{-1} transform through:
y \sim N((I - \rho W)^{-1}(\beta_0 + X \beta + W X \theta), \sigma^2 (I - \rho W)^{-1} (I - \rho W)^{-T})
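(That’s just the linear transform rule for multivariate normals: if z \sim N(m, \Sigma) and y = B z, then y \sim N(B m, B \Sigma B^T); here B = (I - \rho W)^{-1} and \Sigma = \sigma^2 I.)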
The inverse comes out in the covariance, so you can use multi_normal_prec in Stan:
y \sim N((I - \rho W)^{-1}(\beta_0 + X \beta + W X \theta), \sigma^2 ((I - \rho W)^T (I - \rho W))^{-1})
multi_normals formulated as precision matrices can be way easier to work with because you already have the inverse of the covariance (up to the \sigma^2 factor it’s just (I - \rho W)^T (I - \rho W)). You still gotta worry about the determinant, but maybe that’s easy. W is fixed, right? What is \rho?
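Just as a sketch of what I mean (untested, and assuming W, X, y, N, rho, sigma, beta0, beta, theta are declared in the usual blocks):

```stan
model {
  matrix[N, N] A = diag_matrix(rep_vector(1, N)) - rho * W;
  // The solve we can't avoid: mu = (I - rho W)^{-1} (beta0 + X beta + W X theta)
  vector[N] mu = mdivide_left(A, beta0 + X * beta + W * X * theta);
  // Precision matrix: A' A / sigma^2
  matrix[N, N] P = crossprod(A) / square(sigma);
  y ~ multi_normal_prec(mu, P);
}
```

multi_normal_prec takes care of the log determinant of P internally, but that’s still a dense N x N factorization every evaluation unless the sparsity of W gets exploited somehow.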
Now the trouble is the (I - \rho W)^{-1} term inside the mean.
Just substituting in placeholder variables A and b, this looks like we’re saying the mean of the normal is:
\mu = (I - A)^{-1} b
which corresponds to:
\mu = A \mu + b
which we could solve with fixed point iteration, depending on the properties of A (it converges when the spectral radius of A is less than 1). I suspect by doing the yhat = p * W * y + B0 + ... thing you’re basically doing one fixed point iteration. If the problem is easy enough, maybe that’s good enough.
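If you wanted to take more than one step, a hypothetical helper in the functions block could look like this (again just a sketch, and only sensible when the spectral radius of A = \rho W is below 1):

```stan
functions {
  // Approximate mu = (I - A)^{-1} b by iterating mu <- A * mu + b.
  // Converges when the spectral radius of A is below 1.
  vector fixed_point_mean(matrix A, vector b, int n_iter) {
    vector[rows(b)] mu;
    mu = b;
    for (i in 1:n_iter)
      mu = A * mu + b;
    return mu;
  }
}
```

Each extra iteration is just a matrix-vector multiply, so it’s cheap compared to the full solve, at the cost of being an approximation.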
Does any of that make sense? I’ve probably screwed something up or missed the point of something, so don’t take this stuff as totally true haha. I’ve just seen this thread a couple of times and keep being too lazy to finish my response. Thanks for keeping us posted on your progress!