Vectors/arrays, interpolation, other things

Maybe we can make a variadic version of this?

The lack of a variadic version of vector concatenation has come up very recently, although the interest there was for use in the generated quantities.

This one is actually really simple. For the simpler case with vectors of the same type, I threw together a prototype that reduces the copies, plus a simple test. In terms of the math library, this can be done pretty easily.

The tougher problem is how to specify a variadic function in the function signatures file. The add() function for signatures is not variadic either. We could add signatures for up to N parameters, or something. If someone has an idea on how to solve this on the Stan side, I would be happy to help on the Stan Math side. Variadic functions are a fun challenge to me :)
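On the Math side, one option is a C++17 parameter pack. Here is a minimal standalone sketch (the name append_all and the restriction to plain double vectors are my own assumptions for illustration, not the Stan Math API):

```cpp
#include <vector>

// Hypothetical sketch: concatenate any number of std::vector<double>
// arguments into one vector with a single allocation, using a
// C++17 parameter pack and fold expressions.
template <typename... Vecs>
std::vector<double> append_all(const Vecs&... vs) {
  std::vector<double> out;
  out.reserve((vs.size() + ... + 0));                   // total size up front
  (out.insert(out.end(), vs.begin(), vs.end()), ...);   // append each input in order
  return out;
}
```

The Math side is the easy part; exposing an arbitrary-arity signature to the language is still the open question.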

It would be great to have concatenation. The big thing, though, would be if we could treat vectors and 1-D real arrays as equivalent, and treat matrices and 2-D real arrays as equivalent. I spend lots and lots of time when coding going back and forth between real theta[N]; and vector[N] theta;. Sometimes I need to code something as a vector (so as to do some matrix algebra); other times I need to code something as an array (so as to feed something into a function, or because I'm doing logistic regression, so my data are an integer array rather than a (nonexistent) integer vector). It would be great for these to be identical so I wouldn't keep running into errors and needing awkward hacks to go back and forth.

So basically we should make overloads of all (or a majority of) the functions that work on vectors so that they also support real arrays, and vice versa?

I suspect the efforts of @Stevo15025 to make functions more general should help us a lot here. Can you list a few that don't work on both?

Within some limits you can code up something with Stan already now which does nice concatenation:

// Takes as input an array of the form {a,b,c,d} where a,b,c,d are
// real 1d arrays. These are returned as a single concatenated 1d array.
real[] concatenate_real_array(real[,] elems);
real[] concatenate_real_array(real[,] elems) {
  int num_elems = size(elems);

  if (num_elems == 1)
    return elems[1];

  if (num_elems == 2)
    return to_array_1d(append_row(to_vector(elems[1]), to_vector(elems[2])));

  // recurse on the tail and append it to the first element
  return to_array_1d(append_row(to_vector(elems[1]),
                                to_vector(concatenate_real_array(elems[2:num_elems]))));
}

This allows you to write

concatenate_real_array({a,b,c,d});

and it will glue things together. It’s a neat application of recursion.
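For what it's worth, the same recursive glue translates directly to C++, where the inner arrays can even be ragged (the name concatenate and the plain double types are hypothetical, not Stan Math):

```cpp
#include <vector>

// Hypothetical C++ analogue of concatenate_real_array above:
// recursively flatten a list of (possibly ragged) 1d arrays.
std::vector<double> concatenate(const std::vector<std::vector<double>>& elems) {
  if (elems.empty()) return {};
  if (elems.size() == 1) return elems.front();
  std::vector<double> out = elems.front();
  // recurse on the tail {elems[1], ..., elems[n-1]} and append it
  std::vector<double> rest = concatenate({elems.begin() + 1, elems.end()});
  out.insert(out.end(), rest.begin(), rest.end());
  return out;
}
```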

Wasn't the plan for Stan 3 to remove the distinction between reals and vectors? I thought I read something like that, perhaps on the wiki.

Mcol:

That would be great, to remove the distinction between reals and vectors. Would we also remove the distinction between 2-d arrays and matrices?

Hi, here’s a very simple example of why this improvement will make a big difference in the real world.

First version of model:

data {
  int<lower=0> J;
  real y[J];
  real<lower=0> sigma[J];
}
parameters {
  real mu;
  real<lower=0> tau;
  real eta[J];
}
model {
  eta ~ normal(0, 1);
  y ~ normal(mu + tau * eta, sigma);
}

Doesn’t compile because I can’t multiply real tau by array eta.

I changed eta to a vector[J] and it worked.

At this point you might say: Fine, I should’ve just used a vector in the first place, so why am I complaining?

My response to this (hypothetical) question is the following: Often I’ve tried to make everything into vectors, but then in other places (for example, function calls), I need reals. So I’m always going back and forth, guessing which I need to use.

Also I spend a lot of time teaching this stuff, and when I’m not teaching I’m often writing code for articles and books, and I want this code to follow good practice. But I’m not sure what “good practice” is! Define everything as an array and then convert to vectors and matrices? Define everything as a vector or matrix and then convert to arrays? Etc. This is turning into a big stumbling block. So I look forward to the day when users (including me) don’t have to worry about it.

P.S. I feel the same way about separation of declarations from distributions. When working with people, I’m always having to go back and forth between declarations and prior distributions. I’m looking forward to blockless so we can do:
real theta[J] ~ normal(0, 1);
etc.

Can a, b, c, d be of different lengths? Because real[,] is a rectangular structure.

Torsten has this as part of a grant we had. Should be trivial to port it over.

Can you share the code? I was gonna give this to someone to work on. If you have something, then there's no reason for them to start from zero.

I can create a PR if you want it.
Scalar version

  template <typename T0, typename T1, typename T2>
  typename torsten::return_t<T0, T1, T2>::type
  inline linear_interpolation(const T0& xout,
                              const std::vector<T1>& x,
                              const std::vector<T2>& y) {
    stan::math::check_finite("linear_interpolation", "xout", xout);
    stan::math::check_finite("linear_interpolation", "x", x);
    stan::math::check_finite("linear_interpolation", "y", y);
    stan::math::check_nonzero_size("linear_interpolation", "x", x);
    stan::math::check_nonzero_size("linear_interpolation", "y", y);
    stan::math::check_ordered("linear_interpolation", "x", x);
    stan::math::check_matching_sizes("linear_interpolation", "x", x, "y", y);

    typename torsten::return_t<T0, T1, T2>::type yout;
    if (xout <= x.front()) {
      yout = y.front();
    } else if (xout >= x.back()) {
      yout = y.back();
    } else {
      // index of the interval [x[i], x[i+1]] containing xout; upper_bound
      // keeps the index in range when xout equals an interior grid point
      int i = std::upper_bound(x.begin(), x.end(), xout) - x.begin() - 1;
      yout = y.at(i) + (y.at(i+1) - y.at(i)) / (x.at(i+1) - x.at(i))
             * (xout - x.at(i));
    }
    return yout;
  }

Vector version

  template <typename T0, typename T1, typename T2>
  std::vector<typename torsten::return_t<T0, T1, T2>::type>
  inline linear_interpolation(const std::vector<T0>& xout,
                              const std::vector<T1>& x,
                              const std::vector<T2>& y) {
    stan::math::check_nonzero_size("linear_interpolation", "xout", xout);
    stan::math::check_finite("linear_interpolation", "xout", xout);

    std::vector<typename torsten::return_t<T0, T1, T2>::type> yout(xout.size());
    std::transform(xout.begin(), xout.end(), yout.begin(),
                   [&x, &y](const T0& xi) { return linear_interpolation(xi, x, y); });
    return yout;
  }
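For readers without the Torsten/Stan Math headers, the core of the scalar version reduces to this standalone sketch over plain doubles (lerp1d is my own name; I use upper_bound so that hitting a grid point exactly stays in range):

```cpp
#include <algorithm>
#include <vector>

// Standalone sketch of the clamped linear interpolation above,
// stripped of the Stan Math type machinery (plain doubles only).
double lerp1d(double xout, const std::vector<double>& x,
              const std::vector<double>& y) {
  if (xout <= x.front()) return y.front();  // clamp below the grid
  if (xout >= x.back()) return y.back();    // clamp above the grid
  // index i of the interval [x[i], x[i+1]] that contains xout
  int i = std::upper_bound(x.begin(), x.end(), xout) - x.begin() - 1;
  double t = (xout - x[i]) / (x[i + 1] - x[i]);
  return y[i] + t * (y[i + 1] - y[i]);
}
```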

This’ll work for now! Thanks

Note the possible cancellation and loss of accuracy in the subtraction and division. It's not a problem for my models, but it could be a surprise when it happens.

I think it would be better to have an interpolation with a continuous gradient, like a cubic spline, which shouldn't be much extra work if linear interpolation is already there.

To me, as soon as we leave the simple linear world, the design of an interpolating function becomes a much more coordinated effort. For example, if we decide to do cubic, we are faced with choices like spline vs. Hermite for the polynomial type, or clamped vs. natural vs. periodic for the end-point constraint, each having its use cases.
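For concreteness, all of those cubic variants share the same per-interval evaluation; the choices differ only in how the endpoint slopes are derived. A minimal cubic Hermite basis sketch (hypothetical names, not a proposed Stan signature):

```cpp
// Evaluate a cubic Hermite segment on an interval of width h, given
// endpoint values y0, y1 and endpoint slopes m0, m1, at normalized
// position t in [0, 1]. Spline vs. Hermite and clamped vs. natural
// vs. periodic differ only in how the slopes m0, m1 are chosen.
double hermite_segment(double t, double y0, double y1,
                       double m0, double m1, double h) {
  double t2 = t * t, t3 = t2 * t;
  return (2 * t3 - 3 * t2 + 1) * y0 + (t3 - 2 * t2 + t) * h * m0
       + (-2 * t3 + 3 * t2) * y1 + (t3 - t2) * h * m1;
}
```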

Linear interpolation may be simple to design, but it can be complicated to diagnose inference with, as a discontinuous gradient is inherently incompatible with HMC. Even if you don't get divergences, do you know that it works in the cases where you have used it?

This decision is worth making. I suggested cubic splines as the simple choice, but we can start another thread discussing other choices. I hope we are discussing just interpolation and not extrapolation.

If you mean when the interpolation point steps across an interval boundary, then yes, that's a problem. In my models (interpolation in time) the interpolation points and intervals are data; only the dependent values are parameters, just like NONMEM's LIN method. In this case the gradient discontinuity is not an issue.

This has been a long-running issue: should we include functions that can safely be used with data but not with parameters (typically because of gradient issues, such as those arising at the cusps in linear interpolation)? Maybe this would even be better posed within the scope of the new compiler.