# Avoid repeating computation of the inverse

Hello,

Suppose I have a transformed parameter \beta that is a function of another transformed parameter, the matrix inverse A^{-1}. In the code below you can see that I use inverse(A) multiple times, as well as diagonal(A) multiple times.

1. Does Stan cache inverse(A) and diagonal(A) in the code below so that it doesn't have to compute the inverse and the diagonal twice (i.e., once for each time the diagonal and inverse functions appear)?

2. If this does not happen, is there a way I can store inverse(A) and diagonal(A) in temporary variables so that Stan does not have to recompute them each time these functions are used?

That is, I have something like:

```stan
parameters {
  real param1;
}

transformed parameters {
  matrix[N, N] A;
  vector[N] beta;

  for (i in 1:N) {
    for (j in 1:N) {
      A[i, j] = -0.5 * param1 * dot_product(X[i, :], X[j, :]);
    }
  }

  beta = 0.5 * inverse(A) * diagonal(A)
         + (2 - rep_row_vector(1.0, N) * inverse(A) * diagonal(A)) * rep_vector(2.0, N);
}
```
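To see that caching the repeated product changes nothing numerically, here is a small numpy sketch of the same computation (the values of N, X, and param1 are made up for illustration; this is not Stan code):

```python
# Hypothetical numpy illustration: computing inverse(A) * diagonal(A)
# once and reusing it gives the same beta as the one-liner that
# evaluates inverse(A) twice. N, X, and param1 are example values.
import numpy as np

rng = np.random.default_rng(0)
N = 3
X = rng.normal(size=(N, 3))
param1 = 1.3

# A[i, j] = -0.5 * param1 * dot(X[i, :], X[j, :])
A = -0.5 * param1 * (X @ X.T)

d = np.diag(A)            # diagonal(A)
ones_row = np.ones(N)     # rep_row_vector(1.0, N)
twos = 2.0 * np.ones(N)   # rep_vector(2.0, N)

# One-liner: inverse(A) is formed and applied twice.
inv_A = np.linalg.inv(A)
beta_repeated = 0.5 * inv_A @ d + (2 - ones_row @ inv_A @ d) * twos

# Cached version: solve A x = d once and reuse the result.
tmp = np.linalg.solve(A, d)   # equivalent to inverse(A) @ d
beta_cached = 0.5 * tmp + (2 - ones_row @ tmp) * twos

assert np.allclose(beta_repeated, beta_cached)
```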


Try this

```stan
parameters {
  real param1;
}

transformed parameters {
  matrix[N, N] A;
  vector[N] beta;

  for (i in 1:N) {
    for (j in 1:N) {
      A[i, j] = -0.5 * param1 * dot_product(X[i, :], X[j, :]);
    }
  }

  {
    vector[N] tmp = A \ diagonal(A);
    beta = 0.5 * tmp + (2 - rep_row_vector(1.0, N) * tmp) * rep_vector(2.0, N);
  }
}
```

• Putting the computation inside its own { } block means the local variable tmp won't be saved to the output.
• Using A \ b should always be more efficient than inverse(A) * b, because inverse(A) computes and stores the entire inverse, which is wasteful.
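The second point is a general linear-algebra fact, not anything Stan-specific. A quick numpy sketch (example values, for illustration only) shows that solving the system directly gives the same answer as multiplying by the explicit inverse:

```python
# Hypothetical numpy illustration of A \ b vs inverse(A) * b:
# np.linalg.solve factorizes A and solves for b directly, while
# np.linalg.inv forms all N*N entries of the inverse first.
import numpy as np

rng = np.random.default_rng(1)
N = 5
A = rng.normal(size=(N, N)) + N * np.eye(N)  # well-conditioned example
b = rng.normal(size=N)

x_solve = np.linalg.solve(A, b)   # analogue of Stan's A \ b
x_inv = np.linalg.inv(A) @ b      # analogue of inverse(A) * b

assert np.allclose(x_solve, x_inv)
```

Both routes cost O(N^3), but the solve route skips materializing the inverse and is typically more accurate as well.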

Also, if A = -0.5 * param1 * X * X', then you can write that more efficiently as A = -0.5 * param1 * tcrossprod(X)
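The identity being used here is that the double loop of row dot products is exactly X X'. A numpy check with made-up shapes and values:

```python
# Hypothetical numpy check: the nested loop
#   A[i, j] = -0.5 * param1 * dot(X[i, :], X[j, :])
# builds the same matrix as -0.5 * param1 * X @ X.T
# (which is what Stan's tcrossprod(X) computes).
import numpy as np

rng = np.random.default_rng(2)
N, K = 4, 3
X = rng.normal(size=(N, K))
param1 = 0.7

A_loop = np.empty((N, N))
for i in range(N):
    for j in range(N):
        A_loop[i, j] = -0.5 * param1 * np.dot(X[i, :], X[j, :])

A_vec = -0.5 * param1 * (X @ X.T)

assert np.allclose(A_loop, A_vec)
```

The vectorized form replaces N^2 separate dot products with a single matrix product, which is both faster and produces a smaller autodiff expression graph.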

(Edited because I forgot the Stan syntax for transposes)
