 Vectorization and non-centered parameterization

Hi,

I am still learning and would like to know how to make the following code more efficient.

The original code is:

```stan
data {
  int<lower = 0> K;
  int<lower = 0> N;
  int<lower = 1, upper = K> kk[N];
  vector[N] x;
  int<lower = 0, upper = 1> y[N];
}
parameters {
  matrix[K, 2] beta;
  vector[2] mu;
  vector<lower = 0>[2] sigma;
}
model {
  mu ~ normal(0, 2);
  sigma ~ normal(0, 2);
  for (i in 1:2)
    beta[ , i] ~ normal(mu[i], sigma[i]);
  y ~ bernoulli_logit(beta[kk, 1] + beta[kk, 2] .* x);
}
```

This is actually from the manual. Would the following revision be correct and faster?

```stan
data {
  int<lower = 0> K;
  int<lower = 0> N;
  int<lower = 1, upper = K> kk[N];
  matrix[N, 2] x; // add a column of 1's for the intercept
  int<lower = 0, upper = 1> y[N];
}

parameters {
  matrix[K, 2] beta_e;
  row_vector[2] mu;
  row_vector<lower = 0>[2] sigma;
}

transformed parameters {
  matrix[K, 2] beta;
  vector[N] alpha;

  beta = rep_matrix(mu, K) + rep_matrix(signa, K) .* beta_e;

  alpha = rows_dot_product(beta[kk], x);
}

model {
  mu ~ normal(0, 2);
  sigma ~ normal(0, 2);
  to_vector(beta_e) ~ std_normal();

  y ~ bernoulli_logit(alpha);
}
```

Basically, I am removing all for-loops and using a non-centered parameterization. Please advise; any suggestions are much appreciated.
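For anyone following along, the non-centered trick rests on the fact that `mu + sigma * z` with `z ~ std_normal()` has exactly the same distribution as a direct `normal(mu, sigma)` draw. A minimal NumPy sketch (the values of `mu` and `sigma` here are arbitrary illustrations, not from the model):

```python
import numpy as np

rng = np.random.default_rng(0)
K, mu, sigma = 100_000, 1.5, 0.7  # illustrative values, not from the post

# Centered: draw beta directly from its hierarchical prior.
beta_centered = rng.normal(mu, sigma, size=K)

# Non-centered: draw standardized effects, then shift and scale,
# mirroring beta = mu + sigma * beta_e with beta_e ~ std_normal().
beta_e = rng.standard_normal(K)
beta_noncentered = mu + sigma * beta_e

# Both parameterizations imply the same distribution for beta,
# so the sample moments should agree closely.
print(beta_centered.mean(), beta_noncentered.mean())
print(beta_centered.std(), beta_noncentered.std())
```

The payoff in Stan is not the distribution itself (it is unchanged) but the geometry: the sampler explores independent standard normals instead of a funnel-shaped hierarchy.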

Looks vaguely fine to me. The best way to tell whether things are faster is to just run them. If you want to check that they're the same model, fit them both to the same data, or generate data from one and fit it to the other, etc.
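As a sketch of that check, here is one way to simulate data from the first (centered) model in NumPy; the sizes and seed are arbitrary assumptions, not from the post. Both Stan programs could then be fit to the resulting `y` and their posteriors compared:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 10, 500                       # assumed sizes for illustration
mu = rng.normal(0.0, 2.0, size=2)    # hyperparameters drawn from their priors
sigma = np.abs(rng.normal(0.0, 2.0, size=2))

# Per-group intercepts and slopes: beta[k, i] ~ normal(mu[i], sigma[i]).
beta = mu + sigma * rng.standard_normal((K, 2))

kk = rng.integers(1, K + 1, size=N)  # group membership, 1-based as in Stan
x = rng.normal(size=N)

# Linear predictor and Bernoulli outcome on the logit scale.
eta = beta[kk - 1, 0] + beta[kk - 1, 1] * x
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
```

Fitting both programs to this `(kk, x, y)` and recovering similar posteriors for `mu`, `sigma`, and `beta` is decent evidence the two implementations encode the same model.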

You’d have to build incremental models to figure out whether performance gains/losses came from the non-centered parameterization or vectorization (or bugs in either implementation).

`sigma` is misspelled as `signa` in the transformed parameters block.