Hi Stan team -

I use the `append_row` function a lot for concatenating vectors/reals. Often I end up with nested `append_row` commands in order to pack different vectors together, such as for a `map_rect` function.
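To make the pattern concrete, here's a sketch of what I mean (all the names are made up for illustration; assume `alpha` and `beta` are parameter vectors with data sizes `N_alpha` and `N_beta`, and `sigma` is a scalar parameter):

```stan
transformed parameters {
  // pack everything into one long vector to hand to map_rect;
  // each append_row call copies its arguments into a fresh container
  vector[N_alpha + N_beta + 1] theta
      = append_row(append_row(alpha, beta), rep_vector(sigma, 1));
}
```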

I’ve recently learned though that I could do something similar by simply creating a vector with brackets:

```stan
[val1, val2, val3... valN]
```

What I’m wondering is whether there is any speed/complexity trade-off between these two ways to compose vectors. For example, if I had 10 distinct values, would 10 nested `append_row` statements make more sense, or one set of brackets? Also, given that I’m often doing this in the transformed parameters block, does this matter at all for gradient evaluation/the AD stack?

Thanks for any insight -

Bob


10 nested `append_row` calls would be very inefficient, as `append_row(a, b)` creates a new container with `size(a) + size(b)` elements and copies both `a` and `b` into the new container.

So if `a`, `b`, and `c` all have 10 elements, the nested calls `append_row(append_row(a, b), c)` will make 10 + 10 copies for the inner call and 20 + 10 for the outer call, 50 copies altogether. `[a, b, c]` should only make 30 copies.

`append_row(append_row(append_row(a, b), c), d)` makes 90 copies instead of 40, and so on.

In your case of 10 scalar elements this amounts to 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 54 copies instead of 10.
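A minimal sketch of the two styles with four scalars (the names `a` through `d` are hypothetical), annotated with the copy counts from the argument above:

```stan
transformed parameters {
  // nested append_row: 2 + 3 + 4 = 9 copies in total,
  // since each call re-copies everything accumulated so far
  vector[4] slow
      = append_row(append_row(append_row(rep_vector(a, 1), b), c), d);
  // bracket construction: one row vector, transposed; each element copied once
  vector[4] fast = [a, b, c, d]';
}
```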

So just a quick follow-up, thanks so much for your comments! Do you get the same overhead if the vectors given to `append_row` are zero-length? I know it works fine, but I didn’t know if it had less overhead.

Glad to help.

In that case the overhead is that you create N zero-length containers vs. 1 zero-length container, where N is the number of `append_row` calls. Obviously, there is no copy overhead. I don’t think this overhead would be noticeable unless N is really in the hundreds of thousands.

Also, matrices are stored by column, so manipulating them by row is less efficient than by column. That shouldn’t matter too much for appending, though, as it’s going to copy the whole thing anyway.
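For example, appending by column goes with the storage order (a sketch; `x` and `y` are hypothetical length-`N` vectors):

```stan
// append_col concatenates along Stan's column-major storage,
// so each input column is copied as one contiguous block
matrix[N, 2] X = append_col(x, y);
```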