I didn’t check the code super closely, but there are two things missing:
- You should be computing the gradient of the output with respect to one of the inputs, and that input needs to be declared as a stan::math::var. That's what this bit of the previous code I wrote was doing:
Matrix<stan::math::var, Dynamic, 1> E_w = this->weights;
- Once you call lp[i].grad(), you need to go collect the adjoints from each of those input variables. That's what this bit of the previous code was doing:
for (int i = 0; i < E_w.size(); i++) {
  jac(d, i) = E_w(i).adj();
}
But again, you could just do these calculations by hand. There's nothing magic about autodiff.
There was another thread recently where someone does some autodiff stuff you might like to mess around with: Gradient after transformation (math library)