 # Testing Reduce Sum

I’m trying to implement the new `reduce_sum` parallelisation, following the doc up here, but I don’t think the slicing is doing what I want.

The term I’m trying to parallelise is:

```stan
for (m in 1:M)
  y[m] ~ ordered_logistic(ystar[ii[m], jj[m]], thresholds[gg[m], jj[m]]);
```

So I’ve defined the reduce function as:

```stan
functions {
  real reduce_func(int start, int end,
                   int[] y,
                   int[] ii,
                   int[] jj,
                   int[] gg,
                   matrix ystar,
                   vector[,] thresholds) {
    real lp = 0;
    for (m in start:end)
      lp += ordered_logistic_lpmf(y[m] | ystar[ii[m], jj[m]], thresholds[gg[m], jj[m]]);
    return lp;
  }
}
```

With the model call as:

```stan
target += reduce_sum(reduce_func, y, grainsize,
                     ii, jj, gg, ystar, thresholds);
```

But I’m getting the error:

```
Chain 1 Exception: Exception: ordered_logistic: Random variable is 81, but must be in the interval [1, 5]
```

Which makes me think that I have the arguments in the wrong order, or something similar. @wds15 or @bbbales2 can I bother you for a quick look for anything obviously wrong?

Thanks!

Full model:

```stan
functions {
  real reduce_func(int start, int end,
                   int[] y,
                   int[] ii,
                   int[] jj,
                   int[] gg,
                   matrix ystar,
                   vector[,] thresholds) {
    real lp = 0;
    for (m in start:end)
      lp += ordered_logistic_lpmf(y[m] | ystar[ii[m], jj[m]], thresholds[gg[m], jj[m]]);
    return lp;
  }
}

data {
  int M;
  int J;
  int T;
  int E;
  int G;
  int N[G];
  int ii[M];
  int jj[M];
  int gg[M];
  int g_all[sum(N)];
  int y[M];
  matrix[J,J] obs_corr[G];
}

transformed data {
  matrix[E,E] eta_identity = diag_matrix(rep_vector(1.0, E));
  matrix[J,J] y_identity = diag_matrix(rep_vector(1.0, J));
  vector[G] ldet_obs;
  int N_all = sum(N);
  int grainsize = 1;

  for (g in 1:G)
    ldet_obs[g] = log_determinant(obs_corr[g]);
}

parameters {
  ordered[T] thresholds_raw[G,J];
  matrix<multiplier=5>[E,J] lam[G];
  matrix[N_all,E] eta;
  matrix[N_all,J] ystar_raw;
}

transformed parameters {
  ordered[T] thresholds[G,J];

  for (g in 1:G)
    for (j in 1:J)
      thresholds[g,j] = thresholds_raw[g,j] * 5;
}

model {
  matrix[N_all,J] ystar;
  int pos = 1;

  to_vector(eta) ~ std_normal();
  to_vector(ystar_raw) ~ std_normal();

  for (g in 1:G) {
    int g_ids[N[g]] = segment(g_all, pos, N[g]);

    for (j in 1:J) {
      thresholds_raw[g,j] ~ std_normal();
      lam[g,,j] ~ normal(0, 5);
    }

    ystar[g_ids,] = eta[g_ids,] * lam[g] + ystar_raw[g_ids,];
    pos += N[g];
  }

  target += reduce_sum(reduce_func, y, grainsize,
                       ii, jj, gg, ystar, thresholds);
}
```

Try:

```stan
for (m in start:end)
  lp += ordered_logistic_lpmf(y[m - start + 1] | ystar[ii[m], jj[m]], thresholds[gg[m], jj[m]]);
```

The `y` inside `reduce_func` is a slice of the `y` on the outside (it has size `end - start + 1`, if that makes sense), so it needs local indexing, while the shared arguments (`ii`, `jj`, `gg`, etc.) keep their global indexing.


Nailed it, thanks! I didn’t twig that the whole `y` array wasn’t being passed, even though the doc says it will be sliced. Another case of RTFM.

It helps to name the third argument of the partial reducer function `y_slice`, to remind yourself that it is a subset. I tend to define two indices in these functions: `m`, running from `start` to `end`, and `m_slice`, which is `m - start + 1`. This way you get it right.
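For example, a sketch of the reducer above rewritten with that naming convention (same signature as in the model, only the sliced argument renamed and the local index made explicit):

```stan
functions {
  real reduce_func(int start, int end,
                   int[] y_slice,  // sliced argument: index locally
                   int[] ii,       // shared arguments: index globally
                   int[] jj,
                   int[] gg,
                   matrix ystar,
                   vector[,] thresholds) {
    real lp = 0;
    for (m in start:end) {
      int m_slice = m - start + 1;  // position within this slice
      lp += ordered_logistic_lpmf(y_slice[m_slice] | ystar[ii[m], jj[m]],
                                  thresholds[gg[m], jj[m]]);
    }
    return lp;
  }
}
```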
