This is about the code in `multi_normal_cholesky_lpdf`. Is there a good reason to compute the inverse of L on its own rather than as part of a solve? Specifically, this line in stan-math:

```cpp
const matrix_partials_t inv_L_dbl
    = mdivide_left_tri<Eigen::Lower>(value_of(L_ref));
```

I ask because it seems it would be more performant to factor the quadratic form as

$$
\begin{aligned}
(Y - \mu)^T \Sigma^{-1} (Y - \mu) &= (Y - \mu)^T L^{-T}L^{-1} (Y - \mu) \\
&= \left(L^{-1}(Y - \mu)\right)^T \left(L^{-1}(Y - \mu)\right)
\end{aligned}
$$

then set $X := Y - \mu$ and use Eigen's triangular view to solve for $Z = L^{-1}X$. Something like

```cpp
Z = L.triangularView<Eigen::Lower>().solve(X);
```

and then do `rows_dot_self(Z)`.
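The identity itself is easy to sanity-check numerically. A minimal NumPy/SciPy sketch (not stan-math code; variable names are mine), comparing the direct quadratic form against one triangular solve followed by a squared norm:

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)   # SPD covariance
L = np.linalg.cholesky(Sigma)     # lower triangular, Sigma = L @ L.T
x = rng.normal(size=d)            # x plays the role of Y - mu

# direct quadratic form (Y - mu)^T Sigma^{-1} (Y - mu)
quad_direct = x @ np.linalg.solve(Sigma, x)

# via one triangular solve: z = L^{-1} x, then ||z||^2
z = solve_triangular(L, x, lower=True)
quad_tri = z @ z

print(np.isclose(quad_direct, quad_tri))  # True
```

The point is that the triangular solve never forms $L^{-1}$ explicitly, which is both cheaper and numerically better conditioned than multiplying by an explicit inverse.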

There’s some precedent for this approach in PyTorch; see the source of `MultivariateNormal.log_prob`:
```python
def log_prob(self, value):
    if self._validate_args:
        self._validate_sample(value)
    diff = value - self.loc
    M = _batch_mahalanobis(self._unbroadcasted_scale_tril, diff)
    half_log_det = self._unbroadcasted_scale_tril.diagonal(dim1=-2, dim2=-1).log().sum(-1)
    return -0.5 * (self._event_shape[0] * math.log(2 * math.pi) + M) - half_log_det
```
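For reference, the same log-density formula checks out against SciPy. This is a NumPy sketch mirroring the structure of the PyTorch code (it is not the actual `_batch_mahalanobis` implementation, which handles batching):

```python
import math
import numpy as np
from scipy.linalg import solve_triangular
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
d = 3
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)   # SPD covariance
L = np.linalg.cholesky(Sigma)     # plays the role of scale_tril
mu = rng.normal(size=d)
y = rng.normal(size=d)

# Mahalanobis term via a triangular solve, as in the PyTorch code
z = solve_triangular(L, y - mu, lower=True)
M = z @ z
half_log_det = np.log(np.diag(L)).sum()
logp = -0.5 * (d * math.log(2 * math.pi) + M) - half_log_det

print(np.isclose(logp, multivariate_normal(mu, Sigma).logpdf(y)))  # True
```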
```