This is a really simple question. I have a set of x and y coordinates as data. I perform some manipulations on them, defined in the transformed parameters block, and then in the model block I feed the result to `cov_exp_quad`. After trying several different data structures, the following code snippet works:

```
transformed parameters {
  vector[N] xc = x - x_c;
  vector[N] yc = y - y_c;
  // rotate the centered coordinates by phi; xhat is also scaled by ci
  vector[N] yhat = -xc * sin(phi) + yc * cos(phi);
  vector[N] xhat = -(xc * cos(phi) + yc * sin(phi)) / ci;
  // pack the two coordinates into an array of 2-D row vectors for cov_exp_quad
  row_vector[2] xyhat[N];
  xyhat[:, 1] = to_array_1d(xhat);
  xyhat[:, 2] = to_array_1d(yhat);
}
```
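If the two `to_array_1d` conversions feel awkward, one equivalent way (a sketch, assuming a Stan version with row-vector expressions, 2.17+) is to fill the array with a loop; for typical N the packing cost should be negligible next to the O(N²) covariance computation:

```
transformed parameters {
  vector[N] xc = x - x_c;
  vector[N] yc = y - y_c;
  vector[N] yhat = -xc * sin(phi) + yc * cos(phi);
  vector[N] xhat = -(xc * cos(phi) + yc * sin(phi)) / ci;
  row_vector[2] xyhat[N];
  // build each 2-D point directly instead of assigning column slices
  for (n in 1:N)
    xyhat[n] = [xhat[n], yhat[n]];
}
```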

Except for some model-specific details, the rest follows the examples in Michael Betancourt's and Rob Trangucci's tutorials.

The question is whether this data structure is as efficient as possible, and if not, what is? It works in the sense that `cov_exp_quad` correctly computes the Euclidean distance between the points (x, y) and (x', y').
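For reference, with `xyhat` as an array of 2-D row vectors, `cov_exp_quad(xyhat, alpha, rho)` computes (as I understand the squared-exponential kernel Stan implements):

```
K[i, j] = alpha^2 * exp(-squared_distance(xyhat[i], xyhat[j]) / (2 * rho^2))
```

so the Euclidean distance in the transformed (xhat, yhat) plane is exactly what enters the kernel, which is consistent with what I observe.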