# Proper Vectorization of Model

I am working on a model like the one below, and I am trying to come up with the right way to vectorize the `V[k]` sampling statement in the model.

``````
V[k] ~ binomial_logit(I[,kk[k]], logit_p);
``````

where `logit_p` is a vector of length N and `I[,kk[k]]` is an integer array of length N.

Stan Code:

``````
data {
  int<lower=0> N;
  int<lower=0> K;
  int<lower=0> J;
  int<lower=0> V[K,N];
  int<lower=1> I[2,N];
  int<lower=1, upper=J> jj[K];
  int<lower=1, upper=2> kk[K];
}

parameters {
  vector[N] theta;
  vector<lower=0>[K] alpha;
  vector[N] gamma[J];
  vector[K] beta;
}

model {
  beta ~ normal(0, 1);
  theta ~ normal(0, 1);
  alpha ~ lognormal(0, 1);
  for (i in 1:J) {
    gamma[i] ~ normal(theta, 1);
  }

  {
    vector[N] logit_p;
    for (k in 1:K) {
      logit_p = alpha[k]*gamma[jj[k]] + beta[k]

      V[k] ~ binomial_logit(I[,kk[k]], logit_p);
    }
  }
}
``````

Add a semicolon at the end of the `logit_p` line so it looks like:

``````
logit_p = alpha[k]*gamma[jj[k]] + beta[k];
``````

and swap the indices on `I` (so that the `kk` term goes first).
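With both fixes applied, the inner loop would read something like this (a sketch, assuming `I[kk[k]]` is the intended length-N row, i.e. shorthand for `I[kk[k], ]`):

``````
for (k in 1:K) {
  logit_p = alpha[k]*gamma[jj[k]] + beta[k];
  V[k] ~ binomial_logit(I[kk[k]], logit_p);
}
``````

Note that `binomial_logit` already vectorizes over the N elements of `V[k]`, `I[kk[k]]`, and `logit_p`, so the statement itself needs no further vectorization.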

Oops thanks.

Ok. That makes perfect sense.
Would there be any measurable improvement for eliminating the `for` loop?

The gamma for loop? Probably not. These loops translate directly to C++, so it should be fast.

But usually with these things just try it and see how it goes. If it helps, great; if it doesn't do anything, whatevs.
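If you did want to try it anyway, one possible sketch (assuming you're willing to redeclare `gamma` as a `matrix[N, J]` instead of an array of vectors) uses `to_vector`:

``````
// hypothetical alternative: declare `matrix[N, J] gamma;` in parameters
// rep_matrix(theta, J) repeats theta as each of the J columns,
// so the means line up with gamma column by column (both
// to_vector calls flatten in column-major order)
to_vector(gamma) ~ normal(to_vector(rep_matrix(theta, J)), 1);
``````

Whether this is measurably faster than the loop is something you'd have to benchmark on your model.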