Do I need a Jacobian Correction for matrix inversion?

Hi everyone,

I’m trying to estimate an NxN matrix A that satisfies A*x = b, where the vectors x and b (of dimension N) are known. A must also satisfy D*xm = bm, where D is the submatrix of A consisting of its first M rows and columns (D = A[1:M,1:M]), and the vectors xm and bm (of dimension M) are also known.
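In block notation, just restating the setup:

\mathbf{A} = \begin{pmatrix} \mathbf{D} & \ast \\ \ast & \ast \end{pmatrix}, \qquad \mathbf{A}\mathbf{x} = \mathbf{b}, \qquad \mathbf{D}\mathbf{x}_m = \mathbf{b}_m,

where \mathbf{D} is the leading MxM principal submatrix of \mathbf{A} and the starred blocks are otherwise unconstrained.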

To do this I set some priors on A, calculate inverse(A)*b, and compare this to the observed x. I also create the submatrix D = A[1:M,1:M] from A and calculate inverse(D)*bm, which I compare to the observed xm.

What I’m unsure of is whether or not I need a Jacobian correction here, and if so, how exactly to apply it. Not only is this a complex nonlinear transformation of A, but the entries of A appear in both inverse(A) and inverse(D).

Any thoughts would be much appreciated!

Thanks,
John

MODEL:

data {
  int<lower=1> N;
  int<lower=1, upper=N> M;
  vector[N] x;
  vector[N] b;
  vector[M] xm;
  vector[M] bm;
}
parameters {
  real<lower=0> sigma;
  matrix[N,N] A;   
}
transformed parameters{
  matrix[M,M] D;
  vector[N] x_est;
  vector[M] xm_est;

  // create the submatrix of A, using the first M rows and cols
  D = A[1:M, 1:M];

  // get the solution for the full matrix
  x_est = inverse(A)*b;

  // get the solution for the submatrix
  xm_est = inverse(D)*bm;
}
model {

  sigma ~ normal(0,1);

  // set independent normal(0,1) priors on every element of A
  to_vector(A) ~ normal(0, 1);

  // compare the fit for the full matrix
  target += normal_lpdf(x | x_est, sigma);

  // compare the fit for the submatrix
  target += normal_lpdf(xm | xm_est, sigma);
}

As written, no, you do not need a Jacobian correction, because the priors are placed directly on the elements of \mathbf{A} rather than on some nonlinear transformation of them, such as \mathbf{A}^{-1}.
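To spell out why (this is just the generic change-of-variables rule, not anything specific to your model): if \theta is what is declared in the parameters block and you instead write a prior in terms of \phi = g(\theta) for a smooth, invertible g, then the density you actually need is

p_\theta(\theta) = p_\phi(g(\theta)) \, \left| \det J_g(\theta) \right|,

so on the log scale the model block needs target += log p_\phi(g(\theta)) + log |det J_g(\theta)|. When the prior is placed on \theta itself, g is the identity, the Jacobian determinant is 1, and the correction term is log 1 = 0. Quantities like inverse(A)*b only enter through the likelihood, and likelihood terms never require a Jacobian correction.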

Thanks for the reply. This makes sense, although it’s unclear to me why, for example, constraining the entries of \mathbf{A} would then require a correction. That is, suppose I constrain the first row of \mathbf{A} to lie between 2 and 7 by defining an (unconstrained) \mathbf{Araw} and then transforming it into \mathbf{A}:

parameters {
  real<lower=0> sigma;
  matrix[N,N] Araw;
}
transformed parameters{
  matrix[N,N] A;

  for(i in 1:N){
    for(j in 1:N){
      if(i > 1){
        A[i,j] = Araw[i,j];
      }
      else{
        // first row: map into (2, 7)
        A[i,j] = 5 * inv_logit(Araw[i,j]) + 2;
      }
    }
  }
}

My understanding is that here I would have to do a Jacobian correction, and yet I am likewise only placing a prior on \mathbf{A}. What differentiates this scenario from the case where the transformation is much more complex?

In the latter scenario, \mathbf{A} (or at least part of it) is a transformed parameter rather than something declared in the parameters block. In the model block, you need to define the log-kernel of what is in the parameters block, given what is in the data and transformed data blocks. A Jacobian correction allows you to accomplish that indirectly when you put priors on things in the transformed parameters block, but it is rarely necessary for a user to do that.
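For concreteness, here is a sketch of what that indirect route would look like for your first-row example, assuming, purely for illustration, that you want some prior on the constrained elements A[1,j] themselves (the normal(4.5, 1) below is a placeholder, not a recommendation):

model {
  // priors on the constrained first row of A, with a manual
  // Jacobian correction for A[1,j] = 5 * inv_logit(Araw[1,j]) + 2
  for(j in 1:N){
    A[1,j] ~ normal(4.5, 1);  // placeholder prior on the (2,7) scale
    // log |dA[1,j]/dAraw[1,j]|
    //   = log(5 * inv_logit(Araw[1,j]) * (1 - inv_logit(Araw[1,j])))
    target += log(5) + log_inv_logit(Araw[1,j]) + log1m_inv_logit(Araw[1,j]);
  }

  // the remaining rows are copied unchanged from Araw, so priors on
  // them need no correction
  for(i in 2:N){
    for(j in 1:N){
      A[i,j] ~ normal(0, 1);
    }
  }
}

That said, the simpler route in practice is to declare the constrained elements directly, e.g. as a row_vector<lower=2, upper=7>[N] in the parameters block, and let Stan apply the Jacobian for its built-in transforms automatically.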