W is a binary spatial contiguity (adjacency) matrix (e.g. if areas a and b are neighbours, W_{a,b} = W_{b,a} = 1)
\tau is a precision (inverse-variance) parameter
X are the covariates
This model accounts for spatial autocorrelation (reducing the bias in \beta) and for overdispersion (through the extra variability of the random effects \phi).
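Written out, the model (matching the Stan code below) is:

\begin{aligned}
y_i &\sim \mathrm{Poisson}\left(\exp(X_i \beta + \phi_i)\right) \\
\phi &\sim \mathcal{N}\left(0,\ \left[\tau (D - \alpha W)\right]^{-1}\right)
\end{aligned}

where D is the diagonal matrix of the row sums of W and \alpha \in (0, 1) controls the strength of the spatial dependence.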
Although the fit is very good for in-sample data (of course, thanks to the random effects!), I don’t really understand how to use Stan for new data. How can I modify the Stan model to generate new random effects, e.g. a random effect for a new area that was not used in the training phase?
data {
  int<lower = 1> n;
  int<lower = 1> p;
  matrix[n, p] X;
  int<lower = 0> y[n];
  matrix<lower = 0, upper = 1>[n, n] W;
}
transformed data {
  vector[n] zeros;
  matrix<lower = 0>[n, n] D;
  {
    vector[n] W_rowsums;
    for (i in 1:n) {
      W_rowsums[i] = sum(W[i, ]);
    }
    D = diag_matrix(W_rowsums);
  }
  zeros = rep_vector(0, n);
}
parameters {
  vector[p] beta;
  vector[n] phi;
  real<lower = 0> tau;
  real<lower = 0, upper = 1> alpha;
}
model {
  phi ~ multi_normal_prec(zeros, tau * (D - alpha * W));
  beta ~ normal(0, 1);
  tau ~ gamma(2, 2);
  y ~ poisson_log(X * beta + phi);
}
generated quantities {
  vector[n] log_lik;
  vector[n] y_pred;
  for (i in 1:n) {
    log_lik[i] = poisson_log_lpmf(y[i] | X[i] * beta + phi[i]);
    y_pred[i] = poisson_log_rng(X[i] * beta + phi[i]);
  }
}
In this Stan model (the simplest one), y_pred is only for the in-sample data.
Hi, I only have a shallow understanding of CAR, but I think the procedure for drawing a single sample for the new areas would be:

1. Let’s assume W_prime is W extended to contain the new locations, and similarly phi_prime.
2. Take a single posterior sample.
3. Using tau and alpha from that sample, compute the precision matrix tau * (D - alpha * W_prime) and draw phi_prime from a multivariate normal with mean 0 and that precision, conditional on phi_prime[1:n] = phi. I.e. this is a draw from a multivariate normal with a given precision matrix, conditional on known values of a subset of the variables. The conditional distribution is again multivariate normal, but getting the conditional covariance/precision given phi will require some linear algebra (or Googling :-)
4. Plug in beta and X to compute predictions.

Note that you can do that either in generated quantities in Stan or in R (or whatever your host language is).
Not much, as I am not strong in linear algebra :-) But I believe Wikipedia has exactly what you want:
Though this is for covariance matrices, not precision matrices. The simplest approach would be to invert the precision matrix first, then compute the conditional covariance matrix, and then invert back. There is probably some simplification possible, but it would take me hours to get to it :-)
This is an older thread, but I have a couple of follow-up questions about instruction #3.
i) It seems like D would also need to be extended to D_prime, is that correct?
ii) It seems like phi_prime would be of length (n+1) and therefore cannot be equal to phi, which is of length n. Do you agree?
iii) In my attempts to relate this problem to the conditional-distribution equations referenced on Wikipedia, would \Sigma_{1,1} correspond to phi and \Sigma_{2,2} to phi_prime?
Thank you!
Yes, this is why I write that phi_prime[1:n] = phi, i.e. the matching elements are set equal to what was fit in the model; the “new” elements are treated as unknown and drawn conditional on the known elements.
No. \Sigma is the whole covariance matrix (\Sigma = \left[\tau(D^\prime - \alpha W^\prime)\right]^{-1}). phi would correspond to x. Also note that the wiki has the extension in the other direction, i.e. it treats the last elements of x as known and the first as unknown. So in their notation, \Sigma_{2,2} is the covariance matrix of the original model, i.e. \Sigma_{2,2} = \left[\tau(D - \alpha W)\right]^{-1}
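For completeness, the conditional-distribution formulas from that Wikipedia article, written in this notation (x_1 = the new elements, x_2 = \phi the known ones, both blocks with zero prior mean), are:

\begin{aligned}
x_1 \mid (x_2 = \phi) &\sim \mathcal{N}\left(\bar{\mu},\ \overline{\Sigma}\right), \\
\bar{\mu} &= \Sigma_{1,2}\, \Sigma_{2,2}^{-1}\, \phi, \\
\overline{\Sigma} &= \Sigma_{1,1} - \Sigma_{1,2}\, \Sigma_{2,2}^{-1}\, \Sigma_{2,1}.
\end{aligned}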
Also note that the wiki says that:
This means that to calculate the conditional covariance matrix, one inverts the overall covariance matrix, drops the rows and columns corresponding to the variables being conditioned upon, and then inverts back to get the conditional covariance matrix.
Which should be easy, because you actually start with the precision matrix, which is the inverse of the overall covariance matrix…
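To illustrate this, here is a toy numerical sketch in Python/numpy (the adjacency matrix and the posterior draw of tau, alpha, phi are all made up for illustration, not real data). In precision form the conditioning falls out directly: the conditional precision of the new elements is just the corresponding sub-block of the extended precision matrix, so no "invert, drop, invert back" dance is needed. The sanity check confirms it agrees with the covariance-form formulas from Wikipedia.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 3 original areas plus 1 new area.
# W_prime extends W; assume the new area neighbours areas 1 and 2.
W_prime = np.array([
    [0., 1., 0., 1.],
    [1., 0., 1., 1.],
    [0., 1., 0., 0.],
    [1., 1., 0., 0.],
])
D_prime = np.diag(W_prime.sum(axis=1))

# One (made-up) posterior draw of tau, alpha and the fitted phi.
tau, alpha = 1.5, 0.7
phi = np.array([0.3, -0.1, 0.2])
n = phi.size

# Extended precision matrix Q = tau * (D_prime - alpha * W_prime).
Q = tau * (D_prime - alpha * W_prime)

# Gaussian conditioning in precision form: with zero prior mean,
#   phi_new | phi ~ N( -Q_nn^{-1} Q_no phi,  Q_nn^{-1} )
# where Q_nn is the new-new block and Q_no the new-old cross block.
Q_nn = Q[n:, n:]
Q_no = Q[n:, :n]
cond_cov = np.linalg.inv(Q_nn)
cond_mean = -cond_cov @ Q_no @ phi

# Sanity check against the covariance-form formulas:
# mean = Sigma_12 Sigma_22^{-1} phi,
# cov  = Sigma_11 - Sigma_12 Sigma_22^{-1} Sigma_21.
Sigma = np.linalg.inv(Q)
S11, S12, S22 = Sigma[n:, n:], Sigma[n:, :n], Sigma[:n, :n]
assert np.allclose(cond_mean, S12 @ np.linalg.inv(S22) @ phi)
assert np.allclose(cond_cov, S11 - S12 @ np.linalg.inv(S22) @ S12.T)

# Draw the random effect for the new area, conditional on the fit.
phi_new = rng.multivariate_normal(cond_mean, cond_cov)
print(cond_mean, cond_cov, phi_new)
```

From here, step 4 of the procedure above is just plugging phi_new (together with beta from the same posterior draw and the new area's covariates) into the Poisson-log likelihood to simulate predictions.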