Hello all,

Is it possible to directly expose the gradient of the log-likelihood in Stan for every transition and to define it as a variable in the model?

I saw that the last mention of this was more than 4 years ago:

Thanks!!

No

Thanks for the answer! So how would one access the gradient or calculate (and store) it on the fly? Is that even possible?

You would have to either derive it analytically or write your own C++ (very carefully).

+1.

If you do it analytically or numerically, you can do it in Stan.

If you’re willing to get to C++, anything is possible.
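Before committing to either route, the analytic gradient can be sanity-checked against a finite-difference approximation outside Stan. A minimal Python sketch for a normal log-likelihood with fixed `sigma` (the model and all names here are illustrative, not Stan API):

```python
import numpy as np

def normal_loglik(mu, y, sigma):
    """Log-likelihood of y ~ Normal(mu, sigma) with sigma fixed."""
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                        - 0.5 * ((y - mu) / sigma) ** 2))

def numerical_grad(f, x, eps=1e-6):
    """Central finite-difference derivative of f at scalar x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

y = np.array([1.0, 2.0, 3.0])
sigma = 1.0
mu = 0.5

# Analytic gradient w.r.t. mu for this model: sum(y - mu) / sigma^2
analytic = np.sum(y - mu) / sigma**2
numeric = numerical_grad(lambda m: normal_loglik(m, y, sigma), mu)
```

The same analytic expression could then be stored per iteration from a `generated quantities` block, since it only involves data and parameters.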

Ok, thanks! Is there an example of how to analytically or numerically calculate the gradient in Stan?

C++ is over my head atm (unless it is easier to implement there).

Thanks again!

and based on this discussion:

is it still possible? Because I didn’t see the gradient info for each iteration as one of the outputs in the diagnostic file from CmdStan.

It is in the columns that start with `grad`, although that is the gradient with respect to the unconstrained parameters, which may not be what you want.
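A sketch of pulling per-draw gradient columns out of a diagnostic CSV with Python's standard library (the column names and values below are made up for illustration; real CmdStan diagnostic files also contain `#` comment lines and sampler columns):

```python
import csv
import io

# Miniature stand-in for a CmdStan *_diagnostic.csv file.
# Hypothetical layout: parameter mu, its momentum p_mu, its gradient g_mu.
diagnostic_csv = """lp__,mu,p_mu,g_mu
-3.2,0.50,0.10,4.50
-3.1,0.55,-0.20,4.35
"""

# Skip any '#' comment lines before handing rows to the CSV reader.
reader = csv.DictReader(
    line for line in io.StringIO(diagnostic_csv) if not line.startswith("#")
)
# One gradient value per saved iteration, on the unconstrained scale.
grad_mu = [float(row["g_mu"]) for row in reader]
```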

I checked my `*diagnostic.csv` again and I have columns that start with `g_*` for different parameters. Is this the gradient in the unconstrained space? (I don’t see columns that start with `grad`.)

Also, what does `p_*` for different parameters mean?

Yes, the `g_*` columns are the gradient in the unconstrained space.

The `p_*` columns are the momentum.

Ok, thanks! That’s super helpful. Now is there a way to access the gradient values for each iteration as I sample my model? I don’t need the values for each leapfrog step, so the per-iteration gradient in the unconstrained space is exactly what I need.

I see :) thanks!!