Rather than cluttering up other threads, I thought it might be helpful to start a topic for this discussion specifically.

In trying to understand what expectations of predictions represent, I’ve been trying to compute them in generated quantities, with little success.

I am currently working on a beta-binomial model. I used `brms` first to generate the Stan code and then modified it for my use case.

Looking through the `brms` source on GitHub, I found `posterior_epred_beta_binomial`:

```
posterior_epred_beta_binomial <- function(prep) {
  # beta part included in mu
  trials <- data2draws(prep$data$trials, dim_mu(prep))
  prep$dpars$mu * trials
}
```

versus `posterior_predict_beta_binomial`:

```
posterior_predict_beta_binomial <- function(i, prep, ntrys = 5, ...) {
  rdiscrete(
    n = prep$ndraws, dist = "beta_binomial",
    size = prep$data$trials[i],
    mu = get_dpar(prep, "mu", i = i),
    phi = get_dpar(prep, "phi", i = i),
    lb = prep$data$lb[i], ub = prep$data$ub[i],
    ntrys = ntrys
  )
}
```
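As a sanity check on what `posterior_epred_beta_binomial` computes, here is a small plain-Python sketch (not `brms` code; the values of `mu`, `phi`, and `trials` are made up for illustration) confirming that the mean of a beta-binomial with shapes `alpha = mu * phi` and `beta = (1 - mu) * phi` is `trials * mu`:

```python
# Monte Carlo check that E[BetaBinomial(trials, mu*phi, (1-mu)*phi)] = trials * mu,
# which is the quantity posterior_epred_beta_binomial returns.
# mu, phi, and trials are arbitrary illustrative values.
import random

random.seed(1)
mu, phi, trials = 0.3, 10.0, 50
alpha, beta = mu * phi, (1 - mu) * phi

draws = []
for _ in range(20000):
    p = random.betavariate(alpha, beta)  # p ~ Beta(alpha, beta)
    draws.append(sum(random.random() < p for _ in range(trials)))  # y ~ Binomial(trials, p)

print(sum(draws) / len(draws))  # close to trials * mu = 15.0
```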

So I attempted to add that to the generated quantities block:

```
functions {
  /* compute correlated group-level effects
   * Args:
   *   z: matrix of unscaled group-level effects
   *   SD: vector of standard deviation parameters
   *   L: cholesky factor correlation matrix
   * Returns:
   *   matrix of scaled group-level effects
   */
  matrix scale_r_cor(matrix z, vector SD, matrix L) {
    // r is stored in another dimension order than z
    return transpose(diag_pre_multiply(SD, L) * z);
  }
}
data {
  int<lower=1> N; // number of observations
  array[N] int Y; // response
  array[N] int nTrials; // number of trials
  int<lower=1> K; // number of coefficients
  matrix[N, K] X; // design matrix
  // data for group-level effects of subjects
  int<lower=1> nSubj; // number of subjects
  int<lower=1> mSubj; // number of coefficients per level
  array[N] int<lower=1> Subj; // subject index
  // subject-level noise-level values
  vector[N] NL_0;
  vector[N] NL_1;
  vector[N] NL_2;
  vector[N] NL_3;
  int<lower=1> nCor; // number of subject-level correlations
}
parameters {
  vector[K] b; // regression coefficients
  real<lower=0> phi; // precision parameter
  vector<lower=0>[mSubj] subj_sd; // subject-level standard deviations
  matrix[mSubj, nSubj] subj_z; // standardized subject-level effects
  cholesky_factor_corr[mSubj] L; // cholesky factor of correlation matrix
}
transformed parameters {
  matrix[nSubj, mSubj] r_subj; // actual subject-level noise effects
  vector[nSubj] r_0;
  vector[nSubj] r_1;
  vector[nSubj] r_2;
  vector[nSubj] r_3;
  // compute actual subject-level effects
  r_subj = scale_r_cor(subj_z, subj_sd, L);
  r_0 = r_subj[ : , 1];
  r_1 = r_subj[ : , 2];
  r_2 = r_subj[ : , 3];
  r_3 = r_subj[ : , 4];
  // initialize linear predictor term
  vector[N] mu = rep_vector(0.0, N);
  mu += X * b;
  // add more terms to the linear predictor
  for (n in 1:N) {
    mu[n] += r_0[Subj[n]] * NL_0[n] + r_1[Subj[n]] * NL_1[n]
             + r_2[Subj[n]] * NL_2[n] + r_3[Subj[n]] * NL_3[n];
  }
  mu = inv_logit(mu);
}
model {
  real lprior = 0; // prior contributions to the log posterior
  // priors
  lprior += normal_lupdf(b | 0, 1);
  lprior += gamma_lupdf(phi | 0.01, 0.01);
  lprior += normal_lupdf(subj_sd | 0, 1);
  lprior += lkj_corr_cholesky_lupdf(L | 1);
  target += lprior;
  target += std_normal_lupdf(to_vector(subj_z));
  // likelihood
  for (n in 1:N) {
    target += beta_binomial_lupmf(Y[n] | nTrials[n], mu[n] * phi,
                                  (1 - mu[n]) * phi);
  }
}
generated quantities {
  vector[N] ypred;
  vector[N] epred;
  vector[N] log_lik;
  // compute subject-level correlations
  corr_matrix[mSubj] Cor = multiply_lower_tri_self_transpose(L);
  vector<lower=-1, upper=1>[nCor] cor;
  // extract upper diagonal of correlation matrix
  for (k in 1:mSubj) {
    for (j in 1:(k - 1)) {
      cor[choose(k - 1, 2) + j] = Cor[j, k];
    }
  }
  for (i in 1:N) {
    epred[i] = mu[i] * nTrials[i];
  }
  for (i in 1:N) {
    ypred[i] = beta_binomial_rng(nTrials[i], mu[i] * phi, (1 - mu[i]) * phi);
  }
  for (i in 1:N) {
    log_lik[i] = beta_binomial_lpmf(Y[i] | nTrials[i], mu[i] * phi,
                                    (1 - mu[i]) * phi);
  }
}
```
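(As an aside, the index arithmetic in the correlation-extraction loop can be checked outside Stan. A small Python sketch, with `math.comb` standing in for Stan's `choose`, shows that for `mSubj = 4` the formula `choose(k - 1, 2) + j` fills each of the `nCor = 6` slots exactly once:

```python
# Verify that cor[choose(k - 1, 2) + j] = Cor[j, k] in the generated
# quantities block hits every one of the mSubj*(mSubj-1)/2 upper-triangle
# slots exactly once, for mSubj = 4.
from math import comb

mSubj = 4
slots = [comb(k - 1, 2) + j for k in range(1, mSubj + 1) for j in range(1, k)]
print(sorted(slots))  # [1, 2, 3, 4, 5, 6]
```
)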

The samples of `epred` are nothing like what I get from `posterior_epred` in `brms`: `brms` is giving me epreds in the 2000-3000 range, while my generated quantities are in the 0 to 2 range.

I’m clearly missing something, but I’m not sure what.

Can anyone assist with how I can calculate comparable epreds?