Switch from `std::vector` to `Eigen::Tensor` in compiled models?

Yup. That part is perfectly clear.

The part that isn’t clear at all is what the gain is. (I really don’t know what we gain from this. I’m not saying there isn’t any gain; I just don’t know what it would be.)

To summarize the potential problems:

  • maintenance: a shifting API means that a given use of Eigen::Tensor may not be compatible across different Eigen versions, so each Stan Math version would then have to be tied to a specific Eigen version. (Right now, we don’t have that sort of requirement, but maybe we should?)
  • RStan installation: not sure if RcppEigen already ships Eigen::Tensor. If it ships a version that’s incompatible with Stan Math, then the installation process for RStan becomes much more complicated.
  • dependency management: we’d then be in a wag-the-dog situation where the Stan Math library couldn’t move to a new Eigen version because RStan depends on whatever Eigen version RcppEigen ships.
  • compiler issues: given how much effort we put in to work around multiple versions of Eigen and different OSes and compilers, I’m going to assume getting Eigen::Tensor to work is going to be difficult. I’m guessing we haven’t figured out how to compile against g++7 yet? Given how much trouble PyStan already has compiling models with Eigen, I’m going to guess this is going to make it worse.

Anyway, I’m not trying to disparage. I just want to know that the pros (or even the potential pros) outweigh the list of things I know we’re going to face.


Pro: Mostly memory locality and reduced malloc overhead (and hence reduced fragmentation in theory) and a bit of reduced memory usage. This will make any vectorized operations on the collection faster.

Con: You won’t get values by reference any more, so it may require copying or a whole lot of work writing expression template views. It’ll take arithmetic to look things up (though that’ll almost certainly be faster than the pointer chasing we use now).
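To make the “arithmetic to look things up” versus “pointer chasing” contrast concrete, here’s a minimal standalone sketch (my own toy functions, not Stan code) of the two access patterns for a 3-dimensional array with dimensions d1 x d2 x d3:

#include <vector>

// Nested vectors: each operator[] follows a pointer into a separately
// allocated buffer, so one lookup is three dereferences into three
// different allocations.
double nested_get(const std::vector<std::vector<std::vector<double> > >& a,
                  int i, int j, int k) {
  return a[i][j][k];
}

// Flat, contiguous storage (what a tensor type would use internally):
// one allocation, and the lookup is index arithmetic followed by a
// single dereference.
double flat_get(const std::vector<double>& a, int d2, int d3,
                int i, int j, int k) {
  return a[(i * d2 + j) * d3 + k];
}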

I don’t think they do here.

As you said earlier, I think if we need efficient 3D tensor operations, we should write 3D tensors. Then arrays will remain std::vector with all the pros and cons that entails.

Since the elements of a std::vector are stored contiguously there’s no penalty to converting things into Eigen at a later point and keeping the public function signatures simple (i.e., std::vector). (https://stackoverflow.com/questions/247738/is-it-safe-to-assume-that-stl-vector-storage-is-always-contiguous)

Not requiring interfaces to know anything about Eigen seems to me like an overwhelming advantage for maintainability.


While vector<double> is memory-contiguous, vector<vector<double>> is not. Each of the inner vectors does its own malloc, and the outer vector’s buffer stores the inner vector objects themselves, each of which holds only a pointer (plus size bookkeeping) to its separately allocated values.
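Here’s a small standalone illustration (my own example, not Stan code) of that layout:

#include <iostream>
#include <vector>

int main() {
  // Three inner vectors of four doubles each: one allocation for the
  // outer buffer of vector objects, plus one allocation per inner buffer.
  std::vector<std::vector<double> > m(3, std::vector<double>(4, 0.0));

  // The inner vector *objects* sit next to each other in the outer buffer...
  std::cout << (&m[1] - &m[0]) << "\n";                   // prints 1

  // ...but their element buffers are separate heap allocations, so the
  // doubles of row 0 are not followed in memory by the doubles of row 1.
  std::cout << (m[0].data() + 4 == m[1].data()) << "\n";  // almost certainly 0
}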

Sorry that Eigen’s proving to be such a thorn in PyStan’s side. I wish I could be more help there.


No vector<vector>s are used in any signatures in the generated Stan model code as far as I can tell. Do you see them somewhere?

For example, log_prob’s signature is:

template <bool propto__, bool jacobian__, typename T__>
T__ log_prob(vector<T__>& params_r__,
              vector<int>& params_i__,
              std::ostream* pstream__ = 0) const;

Seems like it’s just vector we need to worry about.

[edit: escaped code; added const]

The only reason we still have vector<T> is that Allen says it’s hard to use Eigen types in Python. We’re also generating this signature, which I believe we use everywhere else other than PyStan:

    template <bool propto, bool jacobian, typename T_>
    T_ log_prob(Eigen::Matrix<T_,Eigen::Dynamic,1>& params_r,
                std::ostream* pstream = 0) const;

It also removes the integer parameters.

P.S. Allen, you can use three back ticks to open and close code blocks.

It’s not just Python specifically that will prefer std over Eigen. It’s any programming language that is not R. So Go, Rust, Julia, etc. To the extent that they support interfacing with C++ code at all, they’re far more likely to support calling functions with std::vector arguments than functions with Eigen arguments.

Aren’t you going to have to bind the function you want to call anyway with things like the standard streams? All that’s required for Eigen is this:

std::vector<double> x;
...
Eigen::VectorXd x2 = Eigen::Map<Eigen::VectorXd>(x.data(), x.size());

Internally, everything is done with VectorXd types, so we need to do this copy anyway if things are called with std::vector<double>.

If we’re not on the same page we’re at least not far apart.

What you just wrote does not involve a copy. data() returns a pointer and Map doesn’t do any copying. I would propose that code such as the code block you included be kept inside Stan C++ code. Interfaces should only have to know about flat, memory-contiguous std::vectors.

It involves an allocation and a copy in the assignment to VectorXd; that’s just the way expression templates work. The copy is carried out in operator= (which may just delegate to the copy constructor).

How about a compromise where there’s a standalone function that deals with it:

namespace stan {
namespace model {

template <bool propto, bool jacobian, typename T, typename M, typename RNG>
T log_prob(const M& m, const std::vector<T>& y, RNG& rng, std::ostream& o) {
  return m.log_prob<propto, jacobian, T>(
     Eigen::Map<Eigen::VectorXd>(y.data(), y.size()),
     rng, o);
}
}  // namespace model
}  // namespace stan

Then we can keep the model code simple.

Perfect. That’s exactly what I had in mind.

I’m still surprised about the copy. Surely there’s a way to avoid that. From what I’ve read it seems like people are at least behaving as if you aren’t making a copy. See


and

Not sure why you’re surprised as it’s standard operating procedure for expression templates. The VectorXd instance manages its own memory using RAII. Construction copies as does assignment—it has to, because the result winds up managing its own memory. You can also look at the code.

The Map wraps other memory—that doesn’t need a copy.

At least that first link is also copying to std::vector for the same reason—it manages its own memory (unless you override the allocator for std::vector to not have the default behavior).

This is also made clear in the doc for operator= in Matrix (for which VectorXd is a typedef for Matrix<double, -1, 1>), which explicitly states that a copy is involved.
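Here’s a small standalone sketch (my own example) showing both behaviors side by side:

#include <vector>
#include <Eigen/Dense>

int main() {
  std::vector<double> x(10, 1.0);

  // Map wraps x's existing buffer: no allocation, no copy.
  // Writing through the map writes into x itself.
  Eigen::Map<Eigen::VectorXd> view(x.data(), x.size());
  view[0] = 2.0;               // x[0] is now 2.0

  // VectorXd manages its own memory, so constructing (or assigning)
  // from the map allocates a fresh buffer and copies the values.
  Eigen::VectorXd copy = view;
  copy[1] = 3.0;               // x[1] is unchanged
}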

Oops, forgot the link to the doc for Eigen::Matrix:

https://eigen.tuxfamily.org/dox/classEigen_1_1Matrix.html

which states in the doc for operator=, i.e.,

Matrix& Eigen::Matrix< Scalar, _Rows, _Cols, _Options, _MaxRows, _MaxCols >
::operator=(const EigenBase<OtherDerived>& other);

that it

Copies the generic expression other into *this.

What about the code block you provided, which doesn’t contain an assignment:

namespace stan {
namespace model {

template <bool propto, bool jacobian, typename T, typename M, typename RNG>
T log_prob(const M& m, const std::vector<T>& y, RNG& rng, std::ostream& o) {
  return m.log_prob<propto, jacobian, T>(
     Eigen::Map<Eigen::VectorXd>(y.data(), y.size()),
     rng, o);
}
}  // namespace model
}  // namespace stan

Is there still a copy here? (It’s this block I was responding to)

It contains an implicit constructor call, because the function it’s calling takes a VectorXd& argument. Whenever you call a function, you’re essentially assigning the arguments to the function’s parameter variables.
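Here’s a toy illustration of that implicit construction at the call boundary (sum_elements is my own stand-in, taking a const reference for simplicity):

#include <vector>
#include <Eigen/Dense>

// Stand-in for a function whose parameter is a concrete VectorXd.
double sum_elements(const Eigen::VectorXd& v) { return v.sum(); }

int main() {
  std::vector<double> x(10, 1.0);
  Eigen::Map<Eigen::VectorXd> view(x.data(), x.size());

  // view is a Map, not a VectorXd, so the compiler constructs a temporary
  // VectorXd to pass: an allocation plus a copy of the mapped data, just
  // as in the explicit assignment case.
  double s = sum_elements(view);
  (void)s;
}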

The assignment operator operator=(const T&) and the copy constructor T(const T&) almost always do the same thing. And it’s almost always a copy, hence the name.

The way to reason about it is to keep track which container classes manage memory for their elements. Map does not manage memory, but std::vector and Eigen::Matrix do. A pointer in C or C++ does not manage memory.

Whenever you construct a container or assign to a container that manages its own memory, there’s a copy involved.

C++11 has move semantics, which avoid some of this copying by transferring control of the memory from the right-hand-side object to the left-hand side. This leaves the right-hand-side object in a valid but unspecified (typically empty) state.
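For example (assuming a C++11 compiler):

#include <utility>
#include <vector>

int main() {
  std::vector<double> a(1000, 1.0);
  const double* before = a.data();

  // Move construction: b takes over a's buffer instead of copying it;
  // a is left in a valid but unspecified (typically empty) state.
  std::vector<double> b = std::move(a);

  bool same_buffer = (b.data() == before);   // true: no element copy happened
  (void)same_buffer;
}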

I should also add that the compiler is clever about some function return statements (return value optimization), allowing the returned object’s memory to be reused instead of copied. That won’t work here because the types aren’t the same and the lifespan of the object being copied isn’t controlled by the function.
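For example, here’s a minimal sketch (my own toy function, not Stan code) of the kind of return statement where the compiler can elide the copy:

#include <Eigen/Dense>

// Named return value optimization: the compiler may construct result
// directly in the caller's storage, so returning by value need not copy.
Eigen::VectorXd make_zeros(int n) {
  Eigen::VectorXd result = Eigen::VectorXd::Zero(n);
  return result;   // eligible for copy elision
}

int main() {
  Eigen::VectorXd z = make_zeros(5);
  (void)z;
}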