I’m looking at `stan/math/prim/prob/neg_binomial_lccdf`, and I found this bit of code:

```cpp
VectorBuilder<!is_constant_all<T_shape>::value, T_partials_return, T_shape>
    digammaN_vec(size(alpha));
VectorBuilder<!is_constant_all<T_shape>::value, T_partials_return, T_shape>
    digammaAlpha_vec(size(alpha));
VectorBuilder<!is_constant_all<T_shape>::value, T_partials_return, T_shape>
    digammaSum_vec(size(alpha));
if (!is_constant_all<T_shape>::value) {
  for (size_t i = 0; i < size(alpha); i++) {
    const T_partials_return n_dbl = value_of(n_vec[i]);
    const T_partials_return alpha_dbl = value_of(alpha_vec[i]);
    digammaN_vec[i] = digamma(n_dbl + 1);
    digammaAlpha_vec[i] = digamma(alpha_dbl);
    digammaSum_vec[i] = digamma(n_dbl + alpha_dbl + 1);
  }
}
```

I find it strange that we loop over the size of `alpha` but inside the loop we read `n_vec`: if `alpha_vec` is a scalar and `n_vec` is a vector, wouldn’t we be computing the wrong quantity (and ignoring a number of values from `n_vec`)?

The other *cdf functions for `neg_binomial` use the same pattern as well.