I recently read @avehtari’s cross-validation FAQ and have been working through the Mesquite example from ROS, where the outcome variable is log-transformed. In that case study, \log(y) is subtracted from the log-likelihood of the log-transformed model before its log-likelihoods are compared with those of the untransformed model. I’m struggling to see where this correction comes from.
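For concreteness, here is the correction as I understand it from the case study, checked numerically (the \mu, \sigma, and data values below are made up for illustration): subtracting \log y from the Gaussian log-likelihood evaluated on the log scale gives exactly the log density of y on the original scale, which scipy parameterizes as a lognormal.

```python
import numpy as np
from scipy.stats import norm, lognorm

# Made-up parameters and data, just to check the identity numerically.
mu, sigma = 1.5, 0.4
y = np.array([2.0, 4.5, 7.3])

# Log-likelihood of the Gaussian model on the log scale,
# corrected by subtracting log(y) as in the case study.
corrected = norm.logpdf(np.log(y), loc=mu, scale=sigma) - np.log(y)

# Log density of y on the original scale: lognormal with the same mu, sigma.
original_scale = lognorm.logpdf(y, s=sigma, scale=np.exp(mu))

print(np.allclose(corrected, original_scale))  # the two agree exactly
```

So numerically the \log y version clearly works; it’s the derivation I can’t reproduce.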

In the cross-validation FAQ, there’s a comment that you need to correct by the Jacobian after a transformation, which makes sense to me. What confuses me is where the \log y in particular comes from. If we have a normal random variable X and set Y = \log X, then f_y(y) = f_x(x(y)) \cdot \left \vert \frac{\partial x(y)}{\partial y} \right \vert = f_x(\exp y) \cdot \exp(y), where f_x is the Gaussian density of X and x(y) = \exp(y). It seems we’d want to correct the density at each point by a factor of \exp(y), and hence the log density by \log \exp(y) = y rather than \log y, so I’m a bit lost.
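To make sure my change-of-variables step itself isn’t the problem, I checked it numerically too (parameters again made up, with \mu well above zero so that P(X \le 0) is negligible and Y = \log X is effectively well defined): the claimed density f_x(\exp y) \cdot \exp(y) matches a numerical derivative of the CDF of Y.

```python
import numpy as np
from scipy.stats import norm

# Made-up parameters; mu >> sigma so X is essentially always positive
# and Y = log(X) is well defined.
mu, sigma = 5.0, 0.5
y = np.linspace(1.3, 1.9, 7)

# Density of Y = log(X) from the change of variables:
# f_Y(y) = f_X(exp(y)) * exp(y)
claimed = norm.pdf(np.exp(y), loc=mu, scale=sigma) * np.exp(y)

# Check against a central-difference derivative of the CDF of Y,
# using P(Y <= y) = P(X <= exp(y)).
h = 1e-6
numeric = (norm.cdf(np.exp(y + h), mu, sigma)
           - norm.cdf(np.exp(y - h), mu, sigma)) / (2 * h)

print(np.allclose(claimed, numeric, rtol=1e-4))  # the formula checks out
```

So the change of variables itself seems right, which makes the \log y in the case study all the more puzzling to me.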

This SO question mentions a paper by Akaike that arrives at the same correction, but it does not show the derivation either. I imagine I’m missing something obvious about the setup of the problem and would love some pointers.