We gots this:

```
data {
  int y[2];
  real n[2];
}
parameters {
  real<lower=0> theta[2];
}
model {
  for (i in 1:2) {
    y[i] ~ poisson(n[i] * theta[i]);
  }
}
```

Ugly, huh?

But, I can’t do this:

```
y ~ poisson(n * theta);
```

cos it returns an error, I can’t multiply two arrays.

And this don’t work neither:

```
y ~ poisson(n .* theta);
```

cos, dontja know, they’re arrays, not vectors.

Yeah, yeah, I could make them vectors. But I’m thinking more generally, why can’t we just all get along and be able to do componentwise operations on arrayz?

Thanks!

Like you say, the correct thing to do in current Stan would be to write

```
data {
int y[2];
vector[2] n;
}
parameters {
vector<lower=0>[2] theta;
}
model {
y ~ poisson(n .* theta);
}
```
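If you'd rather keep the array declarations from the original model, Stan's built-in `to_vector` conversion also gives a loop-free sampling statement (a sketch, untested — `n` and `theta` stay declared as `real` arrays):

```
model {
  y ~ poisson(to_vector(n) .* to_vector(theta));
}
```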

I agree, however, that the `vector` vs `real[]` distinction is confusing for people who are new to Stan. @mitzimorris and I were just discussing whether we could just have `real[]` be treated as `vector`. (And `real[,]` as `matrix`.)

The only obstruction to doing that currently is the implementations of the algebraic and ODE solvers, as well as map_rect (MPI), which rely on `real[]` being implemented with `std::vector` under the hood, rather than `Eigen::Vector`. Perhaps @charlesm93 and @wds15 could comment on how essential that is to their functioning. Could we modify them to work with `Eigen::Vector` and `Eigen::Matrix` under the hood instead? If so, then we could possibly address Andrew’s frustration.

if we make array declarations look like declarations for all the other container types (vector, row_vector, matrix), where the dimensions are declared on the type, not on the variable name, would this make it easier to accept the use of vectors in this situation? in which case, the above model would be written as:

```
data {
int[2] y;
vector[2] n;
}
parameters {
vector<lower=0>[2] theta;
}
model {
y ~ poisson(n .* theta);
}
```

this change to the syntax is needed for other reasons. and of course, arrays of real would also be declared `real[2] n`.

for container types, the dimensions are specified with the container dimensions first, followed by the array dimensions. the compiler should be able to recognize both a list of container dimensions followed by a list of array dimensions, and a single list of dimensions where the initial dimensions are the container dimensions and all subsequent dimensions are array dimensions.

e.g.:

```
int[3,5] a; // 2-d array of ints
vector[2][3,5] x; // 2-d array of vectors
vector[2,3,5] y; // also a 2-d array of vectors
```

As a reminder to anyone who may read this post, we distinguish between vectors and 1D-arrays because:

(i) vectors work better for matrix operations (as implemented in `Eigen`);

(ii) arrays work better as containers, i.e. when we try to access individual elements.

In the case of the algebraic solver, we perform matrix operations on the parameter vector `theta` to compute a Jacobian matrix. We also access its individual elements to write out the algebraic equation. On the other hand, we do not perform matrix operations on `x_r` and `x_i`, which only serve as containers for coefficients in the algebraic equation.

I agree the distinction is confusing. We could have one type, and then do some manipulations under the hood to convert something into an array or a vector. Except we probably won’t always be able to optimize.

All that said, arrays are, in a sense, a more general type of object than vectors or matrices. If we allow users to write down 3-D arrays, it makes sense that they should also be able to write 1-D and 2-D arrays. On the other hand, I’m not sure we should get rid of the vector and matrix types.

Finally, regarding @mitzimorris’s proposition, I find it less confusing to separate the size of the element and the size of the array, as is currently done, but maybe that’s just out of habit.

What we can do, regarding @andrewgelman’s initial query is overload the distribution functions to handle both arrays and vectors (and essentially convert arrays to vectors under the hood).

@charlesm93, interesting. So are you saying that element access in Eigen vectors is inefficient? Was that the reason to use arrays for the signature of the solvers?

@charlesm93, do you mean that this is OK:

```
vector[2][3,5] x; // 2-d array of vectors
```

and this is confusing:

```
vector[2,3,5] y; // also a 2-d array of vectors
```

I quite agree - much prefer the former to the latter.

Yes, the first formulation is clearer than the second one. But overall my preference is for `vector[2] x [3, 5]`: there is no ambiguity about which dimension comes first, since you specify the type of the element, then the name of the variable, and then “expand” it into an array.

the semantics of indexes on multi-dimensional arrays of container types (vectors, row_vectors, and matrices) makes my head hurt, nonetheless, we persist -

if we want to add tuple types to the language, we need to change the way that array types are declared - per the (preliminary) spec https://github.com/stan-dev/stan/wiki/Functional-Spec:-List-Tuple-types:

**Issue:** To make this fly, we need to have a way of declaring sized types contiguously. Right now, declarations like `int x[3]` split the `int` and `[3]` around the variable name.

I was very confused that Eigen element access would be slower than `std::vector`’s, so I benchmarked and found that to be true! But the function we should be calling in most cases, `coeff`, is actually a fairer comparison to element access in a `std::vector` because it doesn’t do the bounds checking that normal element access in Eigen does. Here’s the benchmark: https://github.com/seantalts/perf-math/blob/master/element_access.cpp

Here are the results on my computer; I encourage you guys to try it yourself too by cloning that repo and running `make element_access; ./element_access`

```
halair ~/scm/perf-math (master) $ ./element_access
2018-10-05 09:39:59
Running ./element_access
Run on (4 X 2200 MHz CPU s)
CPU Caches:
L1 Data 32K (x2)
L1 Instruction 32K (x2)
L2 Unified 262K (x2)
L3 Unified 4194K (x1)
--------------------------------------------------------------
Benchmark Time CPU Iterations
--------------------------------------------------------------
BM_EigenElementAccess 87 ns 87 ns 6709094
BM_EigenCoeff 19 ns 19 ns 38823751
BM_VectorElementAccess 10 ns 10 ns 73679557
```


@seantalts, @charlesm93, I wonder if that means it may be worth supporting `vector` interfaces for the ODE and algebraic solvers. Really, I am wondering if there is any reason why we should be using `std::vector<double>` in Stan.

Also, I wonder whether we should be using `coeff` more in our code base. All the error checking has already happened in Stan because of its stringent type system.

Hi all. I’d also like to add that one thing that makes this confusing is that there are no integer vectors or matrices. The difficulty here is that, for reasons such as those described in the above thread, we’re encouraged to declare continuous data as vectors and matrices. But then when moving to logistic regression we have to switch to arrays, and then we get some of these incompatibilities. For example, if we declare `y` as a 2-dimensional integer array, I don’t think Stan will let us do `y ~ poisson(x)`, where `x` is a 2-dimensional real array. It’s not the biggest deal in the world, but it comes up a lot that I’m jiggling my arrays, trying to decide whether to use `to_vector`, etc.
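For that last example, the current workaround is a loop over rows with `to_vector` (an untested sketch, with a hypothetical row count `I`, where `y` is a 2-D integer array and `x` a 2-D real array of the same shape):

```
model {
  for (i in 1:I)
    y[i] ~ poisson(to_vector(x[i]));
}
```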