Posterior predictive checking logistic binomial with random effects

I would like to know if the generated quantities block is correct for my model.

[Please include Stan program and accompanying data if possible]


n <-c(1,1,1,1,1,2,2,2,2,2,2,2,2,3,3,3,3,4,4,4,4,6,6,6,7,7,7,8,8,8,9,9,9,9,9,9,9,10,10,10,10,11,11,12,12,13,13,14,14,15,15,15,15,16,16,16,17,17,17,17,17,17,17,18,18,18,18,19,20,20,20,21,21,22,24,24,25,25,25,26,26,27,27,28,28,32,33,33,33,33,35,35,35,37,38,38,38,40,40,40,41,42,43,43,44,45,45,47,47,48,51,51,56,57,60,61,61,61,68,69,69,73,74,75,75,79,91,99,104,133,152)



# BUGS surgical example (Vol 1, Example 4)
library(rstan)
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores(logical = TRUE))
chains = 4
iter = 1000

sourceToList = function(file){
  source(file, local = TRUE)
  d = mget(ls())
  d$file = NULL
  d
}

# Data are the same for all models
data = sourceToList("")
init = rep(list(sourceToList("surgical.init.R")), chains)

programa2 <- "
data {
  int<lower=0> N;
  int r[N];
  int n[N];
}
parameters {
   real mu;
   real b[N];
   real<lower=0> sigmasq;
}
transformed parameters {
  real<lower=0> sigma;
  real<lower=0,upper=1> p[N];
  sigma = sqrt(sigmasq); 
  for (i in 1:N)
    p[i] = inv_logit(b[i]);
}

model {
  mu ~ normal(0.0, 1000.0); 
  sigmasq ~ inv_gamma(0.001, 0.001);
  b ~ normal(mu,sigma);
  r ~ binomial_logit(n, b);
}

generated quantities {
  vector[N] r_post;
  vector[N] p_post;
  for (i in 1:N){
    p_post[i] = inv_logit(b[i]); 
    r_post[i] = 1.0 * binomial_rng(n[i], p_post[i]);
  }
}
"

surgical_stanified2 = stan(model_code = programa2, data = data, chains = chains,
                iter = iter)

surgical_stanified_summary2 =
  summary(surgical_stanified2, probs = c(.025, 0.975),
          pars = "r_post")$summary[, c(1, 3, 4, 5, 6)]



The generated quantities block seems OK at first glance. Note that you can format source code in posts using backticks (`).
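As a quick sanity check on those quantities, a graphical posterior predictive check can be run in R. This is only a sketch: it assumes `r` holds the observed counts (not shown in the post) and that the `bayesplot` package is installed.

```r
# Sketch of a posterior predictive check, assuming `r` is the vector of
# observed successes and `surgical_stanified2` is the fit from above.
r_post <- rstan::extract(surgical_stanified2, pars = "r_post")$r_post

# Compare observed counts to posterior predictive intervals per observation
bayesplot::ppc_intervals(y = r, yrep = r_post)
```

If the observed `r` values fall mostly inside the predictive intervals, the model is at least not badly miscalibrated for these data.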

However, your priors are quite different from what is usually recommended in Stan, and the parametrization through sigma^2 is probably not needed. Not all good practices for BUGS are good practices in Stan, and directly recoding BUGS models may not be the best approach.

I am not sure about your use case, but weakly informative priors such as

mu ~ normal(0, 10);
sigma ~ normal(0, 10);

or

mu ~ normal(0, 10);
sigma ~ cauchy(0, 5);

might work better (it really depends on what your data are).
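Putting those suggestions together, the parameters and model blocks could be rewritten to sample sigma directly instead of sigmasq. This is an untested sketch of one possible version, not a drop-in replacement:

```stan
// Sketch: same model, but sigma parameterized directly with a
// weakly informative prior (half-normal, due to the lower bound).
parameters {
  real mu;
  vector[N] b;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);
  sigma ~ normal(0, 10);
  b ~ normal(mu, sigma);
  r ~ binomial_logit(n, b);
}
```

With sigma sampled directly, the transformed parameters block no longer needs the sqrt(sigmasq) step.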


Thank you very much for the answer and suggestions.