Problem with computing Hessian in mixed mode (math library)

Hi,
I’m able to get the gradient functional to work in reverse mode, but I’m having a hard time computing Hessians in mixed mode. I get compilation errors whenever the matrix operations in my functor mix double and T types. The compiler error is:
no type named 'ReturnType' in 'Eigen::ScalarBinaryOpTraits<double, stan::math::fvar<stan::math::var>

What seems to be causing this is that Eigen cannot handle addition (or other binary operations) between a matrix of double and a matrix of fvar<var>. That is, it fails when I perform operations on both double and T matrices, where T is promoted to fvar<var> inside the hessian functional. This doesn’t happen with the gradient functional: there I can add a double matrix and a T matrix, with T promoted to var. Here’s a minimal example that reproduces the issue:

#include <iostream>
#include <stan/math/mix/mat.hpp>
#include <Eigen/Dense>

using namespace Eigen;
using namespace stan::math;

int main() {
  Matrix<double, Dynamic, 1> x(2);
  x << 1, 1;
  Matrix<var, Dynamic, 1> v(2);
  v << 2, 2;
  // fvar<var> is what's used inside the hessian functional
  Matrix<fvar<var>, Dynamic, 1> fv(2);
  fv << 3, 3;
  // this line compiles fine
  v = v + x;
  // compilation fails on this line
  fv = fv + x;
}
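
For what it’s worth, I can work around the failing line by promoting the double matrix to fvar<var> by hand before the addition. This is just a workaround I came up with myself, not something from the docs (x.cast<fvar<var>>() might do the same thing, but I haven’t verified that):

#include <stan/math/mix/mat.hpp>
#include <Eigen/Dense>

using namespace Eigen;
using namespace stan::math;

int main() {
  Matrix<double, Dynamic, 1> x(2);
  x << 1, 1;
  Matrix<fvar<var>, Dynamic, 1> fv(2);
  fv << 3, 3;
  // promote x element by element so the sum involves a single scalar type
  Matrix<fvar<var>, Dynamic, 1> x_fv(x.size());
  for (int i = 0; i < x.size(); ++i)
    x_fv(i) = x(i);
  fv = fv + x_fv;  // compiles: both operands are fvar<var>
}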

Maybe this is happening because I’m using the library in a way it’s not meant to be used, in which case pardon me. But I wasn’t able to find any documentation on how to properly call the Stan Math hessian functional from C++ code.
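
For reference, here’s how I’m calling the hessian functional. I pieced this together from the signature in stan/math/mix/mat/functor/hessian.hpp, so take it as my best guess; the functor below is a stand-in for my real one, which also does matrix operations mixing double and T:

#include <stan/math/mix/mat.hpp>
#include <iostream>

// stand-in functor: f(x) = x0^2 * x1 + 3 * x1^2
struct my_functor {
  template <typename T>
  T operator()(const Eigen::Matrix<T, Eigen::Dynamic, 1>& x) const {
    return x(0) * x(0) * x(1) + 3.0 * x(1) * x(1);
  }
};

int main() {
  my_functor f;
  Eigen::VectorXd x(2);
  x << 1, 2;
  double fx;
  Eigen::VectorXd grad;
  Eigen::MatrixXd hess;
  // fills in the value, gradient, and Hessian of f at x
  stan::math::hessian(f, x, fx, grad, hess);
  std::cout << "f(x) = " << fx << "\n"
            << "gradient = " << grad.transpose() << "\n"
            << "Hessian:\n" << hess << "\n";
}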

I’m using Stan Math Library 2.17 with LLVM 6 on macOS 10.13.

I found this thread in the discussions, which is the most similar to the problem I have:

I wonder if the recommended workaround is still the one suggested there.

The problem is that our fvar autodiff hasn’t been fully tested. We should never have included it in the code base as is. Our intention has been to thoroughly test and fix everything, but we have never found the time, so we haven’t had any of the Stan algorithms rely on forward-mode autodiff.

Most of the scalar and probability functions are thoroughly tested for higher-order autodiff; it’s the matrix functions that aren’t. We should have lots of warnings about this in the code.

I see. Thank you for the clarification, Bob. I’m really impressed by the elegance of the Stan Math library compared to the other autodiff libraries out there, and I hope this feature will be ready at some point too. If I can be of any help in testing and finding bugs, I’d be happy to pitch in.

How are your template metaprogramming skills and matrix derivatives? What we need to do is take the testing framework I wrote for scalar functions with single outputs and extend it, first to general Jacobians and then to matrix inputs and outputs. I know how to do it, but not well enough to lay out exact instructions.