Question about jacobian adjustment for ordered vector

Hi -

This is a fairly straightforward question for the Stan team (I know @Bob_Carpenter knows the answer for sure). I am implementing a function in brms (long story) in which I need to create my own ordered vector. Following the Stan manual, I construct the vector using the inverse ordered transform. For two unconstrained reals cut1 and cut2, I combine them in the following way:

real cut1;                          // unconstrained
real cut2;                          // unconstrained log of the gap
vector[2] cut_points;

cut_points[1] = cut1;
cut_points[2] = cut1 + exp(cut2);   // guarantees cut_points[2] > cut_points[1]

So far, easy peasy. The manually constructed ordered vector works great in the function I wrote. The question is about putting a prior on the cutpoints. My understanding is that I only need to worry about the Jacobian adjustment if I put a prior on the transformed (ordered) vector. As the recommended prior is a Normal on the difference between the elements of the ordered vector, this would seem to be what I need:

target += normal_lpdf(cut_points[2] - cut_points[1] | 0, 3);

However, now that I am calculating the log density of the transformed vector, instead of just using it in downstream calculations, I need to factor in the Jacobian adjustment, which according to the Stan manual is equal to exp(cut2) since I only have two cutpoints. On the log scale, then, I end up with this as the final prior:

target += normal_lpdf(cut_points[2] - cut_points[1] | 0, 3) + cut2;
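
Putting the pieces together, the relevant part of the model would look roughly like this (stripped of the brms scaffolding; the likelihood that actually uses cut_points is omitted):

parameters {
  real cut1;   // first cutpoint, unconstrained
  real cut2;   // log of the gap between the cutpoints
}
transformed parameters {
  vector[2] cut_points;
  cut_points[1] = cut1;
  cut_points[2] = cut1 + exp(cut2);   // cut_points[2] > cut_points[1] by construction
}
model {
  // prior on the gap, plus the log-Jacobian of the exp transform
  target += normal_lpdf(cut_points[2] - cut_points[1] | 0, 3) + cut2;
  // ... likelihood using cut_points ...
}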

I’m posting this because I’m pretty sure that’s right, but I’m not 100% sure, and fitting the model with and without the Jacobian adjustment doesn’t tell me which one is right.

Thanks for any help on this!

Bob


The differencing of the cutpoints is a linear map with a constant Jacobian of unit determinant, so you don’t need to add anything for that part. But using exp to make the difference between the cutpoints positive is non-linear. That Jacobian is diagonal, but you still need its log-derivatives.
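
To spell that out for the two-cutpoint case above: the map from the unconstrained parameters to the cutpoints is $(\texttt{cut1}, \texttt{cut2}) \mapsto (c_1, c_2) = (\texttt{cut1},\ \texttt{cut1} + e^{\texttt{cut2}})$, with Jacobian

$$
J = \begin{pmatrix} 1 & 0 \\ 1 & e^{\texttt{cut2}} \end{pmatrix},
\qquad
\log \lvert \det J \rvert = \texttt{cut2}.
$$

The subsequent differencing $(c_1, c_2) \mapsto (c_1,\ c_2 - c_1)$ has a constant Jacobian with determinant 1, so its log-Jacobian is zero; the only term that survives is the cut2 from the exp, matching the target += statement above.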


So every time I use the cutpoints to calculate the log-likelihood, I’ll need to add the derivatives?

Add the log absolute derivatives of the unconstrained-to-constrained transform wherever you put a distribution on the transformed quantity. Anytime you find yourself adding zero (or some other constant), you didn’t really have to do that.
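
To illustrate the adding-zero point with the same names as above (the normal(0, 3) on cut_points[1] is purely a made-up placeholder for illustration, not something from this thread):

// cut_points[1] is a linear (identity) function of cut1, so a prior on it
// needs no adjustment: the log-Jacobian term would just be the constant 0.
target += normal_lpdf(cut_points[1] | 0, 3);   // hypothetical prior, for illustration only

// The gap exp(cut2) is a non-linear function of cut2, so a prior on it
// needs the log-derivative of exp(cut2), which is cut2.
target += normal_lpdf(cut_points[2] - cut_points[1] | 0, 3) + cut2;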


Ah gotcha! Sorry, I misunderstood what you wrote at first. The difference in the cutpoints is a linear transformation, so its Jacobian is a constant matrix with determinant 1 and contributes nothing on the log scale. Essentially, I was right in thinking that it is the exp that requires the derivative adjustment.

Thanks much!

Just as a follow-up: when I run the model with and without the Jacobian adjustment, I don’t see much, if any, bias in the cutpoints, but the bulk/tail ESS is noticeably different. The Rhats for the adjusted model are the same, but its ESS is considerably higher.

Anyhow, just a comment on diagnosing these issues.