Interpolation needs integer conversion

I’m representing my non-decreasing function F as a vector defined on a mesh of T+1 nodes. Now I’m trying to implement linear interpolation to approximate F(x).

However, a type mismatch is getting in the way: there seems to be no direct way to convert a real to an int in Stan. Any suggestions?

  real interpolate(real[] F, real x, int T) {
    real flo = floor(x*T); // possible values: {0, ..., T-1}
    real lambda = x*T - flo;
    int i = flo + 1; // <---- ERROR: TYPE MISMATCH
    real interpolate = lambda * F[i] + (1 - lambda) * F[i+1];
    return interpolate;
  }
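For reference, here is a Python sketch of the linear interpolation the function above is aiming for. The clamping at the right endpoint and the weighting convention are my assumptions, not part of the original post:

```python
import math

def interpolate(F, x, T):
    """Linearly interpolate F (given at T+1 uniform nodes over [0, 1]) at x."""
    flo = math.floor(x * T)      # integer part: 0, ..., T-1
    flo = min(flo, T - 1)        # guard the right endpoint x == 1
    lam = x * T - flo            # fractional part in [0, 1)
    i = flo                      # 0-based index of the left node
    # the left node is weighted by (1 - lam), the right node by lam
    return (1 - lam) * F[i] + lam * F[i + 1]
```

Note the weighting: since lambda is the fractional distance from the left node, the left node should get weight (1 - lambda); the snippet in the question appears to have the two weights swapped.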

Instead of assigning directly to an int variable, you could use if statements to determine which index your x*T point falls into.

For example, I use B-splines in my code, and there is no such issue (see Implementation of B-splines and derivatives in Stan · GitHub or the related discussion). In that code the index is found with a condition like “(knots[ind] <= t[i]) && (t[i] < knots[ind+1])”.
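The knot-lookup condition above amounts to a linear scan over the intervals. A minimal Python sketch of that idea (the function name is mine, not from the B-spline code):

```python
def find_index(knots, t):
    """Return ind such that knots[ind] <= t < knots[ind + 1].

    Linear-scan version of the knot lookup used in the B-spline code;
    assumes knots is sorted and t lies in [knots[0], knots[-1]).
    """
    for ind in range(len(knots) - 1):
        if knots[ind] <= t < knots[ind + 1]:
            return ind
    raise ValueError("t is outside the knot range")
```

This is O(N) per lookup, which is fine for small grids; the binary search below does the same job in O(log N).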

You’d have to write your own function for that. I put together a very basic example in this post: Factorial, tgamma & matrix building - #2 by andrjohns


The rounding approach is rather brutal in the sense that it requires the function to be defined on a grid with unit steps. I instead use a binary search to find the right index in a sorted vector of times. Once you have that index, you can use it to define your interpolation function:

int find_interval_elem(real x, vector sorted, int start_ind) {
  int res;
  int N = num_elements(sorted);
  int max_iter = 100 * N;
  int left_ind = start_ind;
  int right_ind = N;
  real left = sorted[left_ind ] - x;
  real right = sorted[right_ind] - x;
  int iter = 1;

  if(N == 0) return(0);
  if(0 <= left)  return(left_ind-1);
  if(0 == right) return(N-1);
  if(0 >  right) return(N);

  while((right_ind - left_ind) > 1  && iter != max_iter) {
    int mid_ind;
    real mid;
    // is there a controlled way without being yelled at with a
    // warning?
    mid_ind = (left_ind + right_ind) / 2;
    mid = sorted[mid_ind] - x;
    if (mid == 0) return(mid_ind-1);
    if (left  * mid < 0) { right = mid; right_ind = mid_ind; }
    if (right * mid < 0) { left  = mid; left_ind  = mid_ind; }
    iter = iter + 1;
  }
  if(iter == max_iter)
    print("Maximum number of iterations reached.");
  return(left_ind);
}
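For readers who want to prototype or test the search outside of Stan, here is a Python transcription of the same binary search. I keep the 1-based indexing and return conventions of the Stan version but drop the max_iter guard, since the loop is guaranteed to terminate:

```python
def find_interval_elem(x, sorted_vals, start_ind=1):
    """Return i (1-based) such that sorted_vals[i] <= x < sorted_vals[i+1],
    with 0 for x below the range and len(sorted_vals) for x above it."""
    N = len(sorted_vals)
    if N == 0:
        return 0
    left_ind, right_ind = start_ind, N
    left = sorted_vals[left_ind - 1] - x    # -1: Python lists are 0-based
    right = sorted_vals[right_ind - 1] - x
    if left >= 0:
        return left_ind - 1
    if right == 0:
        return N - 1
    if right < 0:
        return N
    while right_ind - left_ind > 1:
        mid_ind = (left_ind + right_ind) // 2
        mid = sorted_vals[mid_ind - 1] - x
        if mid == 0:
            return mid_ind - 1
        if left * mid < 0:      # root lies in [left_ind, mid_ind]
            right, right_ind = mid, mid_ind
        elif right * mid < 0:   # root lies in [mid_ind, right_ind]
            left, left_ind = mid, mid_ind
    return left_ind
```

Like the Stan original, the interval is left-open at the first knot: an x exactly equal to sorted_vals[0] returns 0, not 1.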

Then you do

int time_idx = find_interval_elem(current_time, sorted_time_vector, 1);

If you now have a vector of function values that goes along with sorted_time_vector, you can do whatever interpolation you want. This scheme is more general, and thanks to the binary search it scales logarithmically in the number of data points.
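Putting the pieces together, here is a self-contained Python sketch of linear interpolation on a non-uniform time grid. It uses the standard-library bisect module in place of the hand-rolled search, and the names are mine:

```python
import bisect

def interp_at(t, times, values):
    """Linearly interpolate (times, values) at t; times must be sorted
    and t must lie within [times[0], times[-1]]."""
    # index of the left neighbour, analogous to what the binary search returns
    i = bisect.bisect_right(times, t) - 1
    i = min(max(i, 0), len(times) - 2)   # clamp to a valid interval
    lam = (t - times[i]) / (times[i + 1] - times[i])
    return (1 - lam) * values[i] + lam * values[i + 1]
```

Once the index is found, the interpolation itself is the same two-point weighted average as in the question; only the index lookup changes with the grid.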

EDIT: The above function is the same as R’s findInterval.


By “if statement”, do you mean a for-loop (or a binary search) to find the relevant index?

Yes, you are correct. @wds15’s response was quite close to the idea.

Specifically logged in to thank you for the find_interval_elem function. It is genius in its simplicity yet super powerful. I vote for it to be included in base Stan.

Thanks! Glad you find the function useful (I was not sure the intent of the function was understood).

I’ve thought a few times about bringing this into Stan core, but I don’t think there would be a big speed gain anyway. Having it all tested and documented would be great, of course. It’s just a lot of work. Maybe we start with an issue for it?