Help testing release candidate for std::complex

Bob, I goofed around with some of the metaprogramming for this. I can make a PR with just the metaprogramming onto your branch, or you can just take the files below:

is_complex.hpp - require_complex_t, etc. - scalar_type/value_type specializations

The metaprogramming helps remove a good bit of boilerplate.
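Roughly the shape of what’s in there, in case it helps before opening the file (a sketch, not the exact contents of is_complex.hpp):

#include <complex>
#include <type_traits>

namespace stan {

// Detect std::complex<T> for any T.
template <typename T>
struct is_complex : std::false_type {};

template <typename T>
struct is_complex<std::complex<T>> : std::true_type {};

// SFINAE helper in the style of the other require_* aliases.
template <typename T>
using require_complex_t = std::enable_if_t<is_complex<std::decay_t<T>>::value>;

}  // namespace stan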

I’m looking around to determine what Eigen normally does if it knows it has, say, a complex<double> matrix times a regular double matrix. It looks like this is one of the calls, where we see numext::real_ref applied to the alpha factor.

We can also see that there is, indeed, a get_factor defined for the case where the To type is the matching real type of a corresponding complex type. In that case, it uses numext::real.***

MathFunctions.h has the implementations of real and real_ref.

*** If I’m reading it right, this implies that if the double matrix were, instead, a var matrix, then the general matrix routine would apply the real function to the complex<var> alpha factor to get a var-compatible alpha factor.

This can be done with a special function in my branch called val, which allows demoting fvar, var, etc., to an arbitrary level. For example, fvar<var> could stop at var rather than going all the way down to double:

namespace Eigen {
namespace internal {

// Demote a complex<T1> alpha factor to a compatible real factor of type T2.
template <class T1, class T2 /*, std::enable_if_t<!std::is_same<T1, T2>::value
    && (stan::is_fr_var<T1>::value || stan::is_fr_var<T2>::value)>* = nullptr */>
struct get_factor<std::complex<T1>, T2> {
  EIGEN_DEVICE_FUNC static EIGEN_STRONG_INLINE T2 run(const std::complex<T1>& x) {
    return stan::math::val<T1, T2>(numext::real(x));
  }
};

}  // namespace internal
}  // namespace Eigen

Unfortunately, it gets deeper into the algorithm and keeps trying to assign complex<var> to double elsewhere (similar to Bob’s first error novel, but not identical).

../eigen-git-mirror\Eigen/src/Core/products/GeneralBlockPanelKernel.h:527:23: fatal error: assigning to 'double' from incompatible type 'Eigen::internal::conj_helper<std::complex<stan::math::var>, double, false, false>::Scalar' (aka 'complex<stan::math::var>')
    tmp = b; tmp = cj.pmul(a,tmp); c = padd(c,tmp);
                   ~~~^~~~~~~~~~~
../eigen-git-mirror\Eigen/src/Core/products/GeneralBlockPanelKernel.h:1671:13: note: in instantiation of function template specialization 'Eigen::internal::gebp_traits<std::complex<stan::math::var>, double, false, false, 1, 0>::madd<std::complex<stan::math::var>, double, std::complex<stan::math::var>, Eigen::internal::FixedInt<0> >' requested here
            EIGEN_GEBP_ONESTEP(0);
            ^
../eigen-git-mirror\Eigen/src/Core/products/GeneralBlockPanelKernel.h:1652:22: note: expanded from macro 'EIGEN_GEBP_ONESTEP'
              traits.madd(A0, rhs_panel, C0, T0, fix<0>); 

This indicates that, at least for dynamic matrices, if one of them is complex and the other isn’t, and at least one of the underlying scalar types is an AD type, then both matrices should probably be cast to an AD version of their corresponding type (for example, complex double times var → cast complex double to complex var → then run multiplication).
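Done by hand, that promotion would look something like the sketch below (promote_then_multiply is just an illustrative name, and whether the cast and the product compile still depends on the complex traits in the branch):

#include <complex>
#include <Eigen/Dense>
#include <stan/math.hpp>

using stan::math::var;

// complex<double> matrix times var matrix: promote the complex<double> operand
// to complex<var> so both sides are built on the same AD scalar, then multiply.
Eigen::Matrix<std::complex<var>, Eigen::Dynamic, Eigen::Dynamic> promote_then_multiply(
    const Eigen::Matrix<std::complex<double>, Eigen::Dynamic, Eigen::Dynamic>& cd,
    const Eigen::Matrix<var, Eigen::Dynamic, Eigen::Dynamic>& v) {
  return cd.cast<std::complex<var>>() * v;
}

The goal is for Eigen to do the equivalent of that cast for us.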

I will see if it is possible to code it that way.

That looks great. If it’s a PR, then I can comment. I have a couple comments/questions already:

  1. std::complex<T> only takes a single template parameter T, but there’s a type with a parameter pack used for it.

  2. The return type of a vectorized complex function would be a complex type, not the value type (value type is what they call the type of the real and imaginary components of a complex number; see the one-liner below).
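To pin down the terminology, in plain standard C++:

#include <complex>
#include <type_traits>

// value_type of std::complex<T> is T, the type of the real and imaginary parts;
// a vectorized complex function should still return std::complex<T>, not T.
static_assert(std::is_same<std::complex<double>::value_type, double>::value,
              "value_type of complex<double> is double");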

I’m worried this PR is way too big already. Would you mind making a separate PR with all the other fixes that don’t depend on complex? There’s some stuff in my branch, such as isinf, iterator_traits, etc., that does not depend on complex. I was going to stage that as a single traits-update PR, but it could be combined with your updates, some of which supersede what I did, I think. I haven’t even broken it into files; it’s all sitting in std_complex.hpp so I could get my head around what I was changing w.r.t. complex.

If you create the PR, I’m also happy to fill in doc where necessary.

For the gemm/gemv template customization, it looks like the data is already treated as raw arrays at that point, so we don’t get to cast the entire matrix once up front. I’m afraid of what that might mean: would we end up with the promotions being done repeatedly?

So, I was looking for another way to accomplish the same thing. I noticed that in the MatrixBase implementations, a lot of the operators are member functions taking one (non-implicit) argument. I thought it might be possible to define the binary version of the operators (like *) for the case where the complex type of one matrix and the scalar type of the other matrix are mismatched, and have it do the cast automatically. However, based on the cppreference pages on operator overloading and overload resolution, I don’t see how I could make a free binary function a better fit for the same operator than the member function, because the free function sometimes needs to promote either or both of the left and right operands. Since the member function is an exact match for its implicit this pointer, there seems to be no way to guarantee a shorter conversion sequence than the existing Eigen member functions provide.

Obviously, we could just tell people to use m1.lazyProduct(m2) or the explicit casting mentioned in the StackOverflow post above if they run into this error with dynamic-sized matrices. (Fixed size doesn’t have the problem.) Also, in the same SO post, ggael said Eigen 3.4 was going to disable the advanced machinery we’re talking about here for custom types, though I’m on develop and it still happened to me, so it’s possible that hasn’t landed yet… So, eventually, this last problem should just go away. I’ll comment on SO and point here in case ggael wants to add anything.
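For reference, the two workarounds look roughly like this (a sketch assuming Stan’s var and the complex traits from the branch; via_lazy_product and via_explicit_cast are just illustrative names):

#include <complex>
#include <Eigen/Dense>
#include <stan/math.hpp>

using stan::math::var;
using CplxMat = Eigen::Matrix<std::complex<var>, Eigen::Dynamic, Eigen::Dynamic>;
using DblMat = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic>;

// Workaround 1: lazyProduct sidesteps the blocked GEMM kernel that produces
// the conj_helper/gebp error shown earlier.
CplxMat via_lazy_product(const CplxMat& m1, const DblMat& m2) {
  return m1.lazyProduct(m2);
}

// Workaround 2: cast the double matrix up to the matching AD scalar so the
// product is complex<var> times var, a scalar pairing Eigen already defines
// a result type for.
CplxMat via_explicit_cast(const CplxMat& m1, const DblMat& m2) {
  return m1 * m2.cast<var>();
}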

That file also contains a bunch of macro calls like:

EIGEN_BLAS_GEMV_SPECIALIZE(scomplex)

which defines a run() function for the specified type and also a specialization of struct general_matrix_vector_product.

Then, later, they use more macros to map between the Eigen type (first arg), the BLAS type (second arg), and the BLAS function to call:

EIGEN_BLAS_GEMV_SPECIALIZATION(double,   double, dgemv_)
EIGEN_BLAS_GEMV_SPECIALIZATION(float,    float,  sgemv_)
EIGEN_BLAS_GEMV_SPECIALIZATION(dcomplex, double, zgemv_)
EIGEN_BLAS_GEMV_SPECIALIZATION(scomplex, float,  cgemv_)

to define another run function and another specialization of struct general_matrix_vector_product.

That’s where the offending BLASFUNC is defined.

You mean code the algorithm you’re using? It’s legal to promote up to autodiff types or to complex types or both. Although it’s costly, I’d much rather get something working first and worry about optimization later.

Do you mean the stuff like the var operator/(const var& a, T b) cleanup in your PR? Yeah, I can rip those out and tidy them up a bit.

I was really just talking about the stuff up front in std_complex. I probably touched a bunch of other things to get everything compiling, and sure, pulling all those fixes out would be fantastic.

I made some comments in this post above. I think we were typing around the same time.

The last part of my comments in the post above touches on this. I think, for now, we need to use the workarounds ggael gave on SO for dynamic matrices. He mentioned he intends to turn off, in Eigen 3.4, the high-powered machinery that is causing this problem. Presently, it’s still occurring on develop, but if he turns it off, the problem will go away (similar to how the fixed-size case just works right now without lazyProduct or explicit casting). He may comment here or on SO. I’ll keep a lookout on SO and relay anything he posts there.

@philipm, are you aware of any complex matrix math implemented inside the dco/c++ package? We have fixed-size matrices like Matrix<complex<fvar<var>>,2,2> * Matrix<double,2,1> working. However, we don’t have it working with dynamic-size matrices unless we use m1.lazyProduct(m2) or m1 * m2.cast<fvar<var>>(). We’re looking for a way to do the required cast automatically, without having to tell users about the workarounds.

Yes, I have used complex matrix math with dco/c++ types. I have a working example that uses dynamic-size matrices (below). However, this will not (currently) work with the version of dco that you can download from the NAG website. I have been beta-testing an upcoming dco/c++ release, and support for complex arithmetic / linear algebra is one of the new features in that release.

Note, I am not part of the core development team for dco/c++. If you have questions about the internals of dco/c++, or would like a timescale for future releases / feature development, you can try asking @PPeltzer or email info@stce.rwth-aachen.de.

#include <Eigen/Core>
#include <Eigen/Eigenvalues>
#include <unsupported/Eigen/dco>

typedef dco::ga1s<double> DCO_MODE;
typedef DCO_MODE::type DCO_ST;
typedef Eigen::Matrix<DCO_ST, Eigen::Dynamic, 1> DCO_VT;
typedef Eigen::Matrix<DCO_ST, Eigen::Dynamic, Eigen::Dynamic> DCO_MT;
typedef Eigen::Matrix<std::complex<DCO_ST>, Eigen::Dynamic, 1> DCO_CVT;
typedef Eigen::Matrix<std::complex<DCO_ST>, Eigen::Dynamic, Eigen::Dynamic> DCO_CMT;

using namespace std;
using namespace Eigen;

int main(){
  try{
    DCO_MODE::global_tape = DCO_MODE::tape_t::create();

    DCO_MT A = DCO_MT::Random(6,6);
    cout << "Here is a random 6x6 matrix, A:" << endl << A << endl << endl;

    EigenSolver<DCO_MT> es(A);
    cout << "The eigenvalues of A are:" << endl << es.eigenvalues() << endl << endl;

    DCO_CMT V = es.eigenvectors(); // this line causes a compiler error
    cout << "The matrix of eigenvectors, V, is:" << endl << es.eigenvectors() << endl << endl;

    complex<DCO_ST> lambda = es.eigenvalues()[0];
    cout << "Consider the first eigenvalue, lambda = " << lambda << endl;

    DCO_CVT v = es.eigenvectors().col(0);
    cout << "If v is the corresponding eigenvector, then lambda * v = " << endl << lambda * v << endl;
    DCO_CVT x1 = A * v;
    DCO_CMT A_CMT = A;
    DCO_CVT x2 = A_CMT * v;
  } catch (const std::exception& e){
    std::cout << " a standard exception was caught, with message '"
              << e.what() << "'\n";
  }

  return 0;
}

Do you have any cross-type examples, where the matrices being multiplied aren’t all the same type? Specifically, we are looking for the case where the complex scalar type in one matrix does not match the non-complex scalar type in the other matrix it is being multiplied with. It involves intercepting and modifying the internal behavior of Eigen, which is why I asked, since you guys seem to know a lot about Eigen.

This line of code multiplies matrices of different types.
DCO_CVT x1 = A * v;

I’m probably not saying it clearly. We’re looking for the dynamic case where the complex scalar type matrix (Matrix<complex<sometype>,-1,-1>) is multiplied by a matrix that does not use the same underlying type (Matrix<NOT_sometype,-1,-1>).

I think I understand now. The dco folk often make a distinction between passive variables and active variables; e.g., a stan::math::var would be an active type and a double would be a passive type.

In the context of my little example, I tried adding:

Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic> A_double = Eigen::MatrixXd::Random(6,6);
DCO_CVT x1_cvt = A * v;

That compiles and runs on my machine.

I noticed that the example on the last line says A * v rather than A_double * v. Can you reconfirm? I’m testing here with Matrix<double,-1,-1> * Matrix<complex<var>,-1,-1> and it isn’t compiling. I would suspect the first problem your compiler would report is that ScalarBinaryOpTraits<double,DCO_ST,BinaryOp>::ReturnType would not be defined (unless they had enabled that, and if they had, I would expect to see one of the errors mentioned above, like the one for get_factor).
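For context, that trait is the hook Eigen uses to allow mixing two different scalar types at all; a specialization looks roughly like this (sketch only, with my_ad_type as a hypothetical stand-in for stan::math::var, DCO_ST, etc.; Stan and dco each ship their own versions):

#include <Eigen/Core>

// Hypothetical stand-in for an AD scalar such as stan::math::var or DCO_ST.
struct my_ad_type {
  double val;
  my_ad_type(double v = 0) : val(v) {}
};

namespace Eigen {

// Tell Eigen that double op my_ad_type yields my_ad_type (and the symmetric
// case). Without some such specialization, mixed-scalar expressions are
// rejected before any of the product machinery even runs.
template <typename BinaryOp>
struct ScalarBinaryOpTraits<double, my_ad_type, BinaryOp> {
  typedef my_ad_type ReturnType;
};

template <typename BinaryOp>
struct ScalarBinaryOpTraits<my_ad_type, double, BinaryOp> {
  typedef my_ad_type ReturnType;
};

}  // namespace Eigen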

Yes, it works with,

    Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic> A_double = Eigen::MatrixXd::Random(6,6);
    DCO_CVT x1_cvt = A_double * v;

Sorry, should have spotted that typo myself!

@philipm Thanks, is this using Eigen-AD or standard Eigen?

Eigen-AD