Network meta-analysis random-effects model for binary data

Hi, I am trying to replicate the WinBUGS code (posted earlier at https://groups.google.com/forum/#!topic/stan-users/yKOOfY7hGVg) in Stan. It runs relatively fast, but I am getting a "Bayesian Fraction of Missing Information was low" warning for all the chains. I would appreciate your help re-parametrizing the model. I tried to non-center it (see below), but I am still getting warnings. I am attaching the dataset that I used.

The BUGS code is

model{                              
for(i in 1:ns){                      # LOOP THROUGH STUDIES
    w[i,1] <- 0    # adjustment for multi-arm trials is zero for control arm
    delta[i,1] <- 0             # treatment effect is zero for control arm
    mu[i] ~ dnorm(0,.0001)           # vague priors for all trial baselines
    for (k in 1:na[i]) {             # LOOP THROUGH ARMS
        r[i,k] ~ dbin(p[i,k],n[i,k]) # binomial likelihood
        logit(p[i,k]) <- mu[i] + delta[i,k]  # model for linear predictor
        rhat[i,k] <- p[i,k] * n[i,k] # expected value of the numerators 
      }
     
    for (k in 2:na[i]) {             # LOOP THROUGH ARMS
# trial-specific LOR distributions
        delta[i,k] ~ dnorm(md[i,k],taud[i,k])
# mean of LOR distributions (with multi-arm trial correction)
        md[i,k] <-  d[t[i,k]] - d[t[i,1]] + sw[i,k]
# precision of LOR distributions (with multi-arm trial correction)
        taud[i,k] <- tau *2*(k-1)/k
# adjustment for multi-arm RCTs
        w[i,k] <- (delta[i,k] - d[t[i,k]] + d[t[i,1]])
# cumulative adjustment for multi-arm trials
        sw[i,k] <- sum(w[i,1:k-1])/(k-1)
      }
  }   
d[1]<-0       # treatment effect is zero for reference treatment
# vague priors for treatment effects
for (k in 2:nt){  d[k] ~ dnorm(0,.0001) }
sd ~ dunif(0,5)     # vague prior for between-trial SD
tau <- pow(sd,-2)   # between-trial precision = (1/between-trial variance)
}  
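(For context: the `taud`/`sw` lines implement the conditional form of the multi-arm random-effects distribution, matching the standard NICE TSD2 / Dias et al. parameterisation this code appears to come from. For arm $k>1$ of trial $i$,

$$\delta_{i,k} \mid \delta_{i,2},\dots,\delta_{i,k-1} \sim \text{N}\!\left(d_{t_{i,k}} - d_{t_{i,1}} + \tfrac{1}{k-1}\textstyle\sum_{j=1}^{k-1} w_{i,j},\; \tfrac{k}{2(k-1)}\,\sigma^2\right),$$

with $w_{i,j} = \delta_{i,j} - (d_{t_{i,j}} - d_{t_{i,1}})$ and $w_{i,1}=0$; this follows from conditioning a multivariate normal with common variance $\sigma^2$ and between-arm covariance $\sigma^2/2$.)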

The Stan code is

data {
  int<lower=1> ND; // Number of data points
  int<lower=1> NT; // Number of treatments
  int<lower=1> NS; // Number of studies
  int<lower=0> sampleSize[ND]; // Number of patients in study and treatment arm 
  int<lower=0> responders[ND]; // Number of responders in study and treatment arm
  int<lower=1, upper=NT> treatment[ND]; // Treatment used in study and treatment arm
  int<lower=1, upper=NS> study[ND]; // Study 
  int<lower=1, upper=NT> treatment_base[ND]; // The first or "base-case" treatment per study
} 
parameters {
  vector[NT] d; // Treatment effect for each treatment 
  vector[NS] mu; // One mu for each study
  vector[ND] delta; // trial-specific log odds ratios (centered parameterization)
  real<lower=0, upper=10> sigmasq_delta; // bounds must match the uniform(0, 10) prior below
}
transformed parameters {
  vector[NT] d_new=d; // Treatment effect for each treatment with values of 0 for treatment one
  real<lower=0> sigma_delta=sqrt(sigmasq_delta);
  real w[NS,NT]=rep_array(0.0, NS, NT);
  vector[ND] sw_vector;
  vector[ND] vard_vector;
  vector[ND] sqrt_vard_vector;
  vector[ND] delta_new=delta;
  vector[ND] md;
  
  d_new[1] = 0.0;
  
  for (i in 1:ND) {
    if (treatment[i] == 1) {
      delta_new[i] = 0.0;
      w[study[i],treatment[i]] = 0.0; 
      vard_vector[i] = 1E-2; // small fixed variance pins the otherwise-unused delta[i] near zero for control arms
      sw_vector[i] = 0.0;
    } else {
      // adjustment for multi-arm RCTs (delta_new[i] = delta[i] here)
      w[study[i],treatment[i]] = delta_new[i] - d_new[treatment[i]] + d_new[treatment_base[i]]; 
      vard_vector[i] = (pow(sigma_delta,2) * treatment[i])/(2 * (treatment[i]-1));
      // cumulative adjustment; assumes rows are ordered by study and arm
      sw_vector[i] = sum(w[study[i],1:(treatment[i]-1)])/(treatment[i]-1); 
    }
    sqrt_vard_vector[i] = sqrt(vard_vector[i]);
  }
  md[1:ND] = d_new[treatment[1:ND]] - d_new[treatment_base[1:ND]] + sw_vector[1:ND];
}

model {
  responders ~ binomial_logit(sampleSize, mu[study] + delta_new);
  delta ~ normal(md, sqrt_vard_vector);
  mu ~ normal(0, 10); // state each prior once; mu[study[1:ND]] would apply it once per data row
  d ~ normal(0, 10);  // d[1] only enters the model through this prior, since d_new[1] is fixed at 0
  sigmasq_delta ~ uniform(0, 10);
}

I tried non-centering of

delta[1:ND] ~ normal(md[1:ND], sqrt_vard_vector[1:ND]);

with introduction of zeta ~N(0,1) parameter
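That is, the standard non-centered identity: with $\zeta_i \sim \text{N}(0,1)$,

$$\delta_i = \text{md}_i + \sqrt{\text{vard}_i}\,\zeta_i \;\sim\; \text{N}(\text{md}_i, \text{vard}_i),$$

so the sampler works on the unit-scale $\zeta$ rather than on $\delta$ directly.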

data {
  int<lower=1> ND; // Number of data points
  int<lower=1> NT; // Number of treatments
  int<lower=1> NS; // Number of studies
  int<lower=0> sampleSize[ND]; // Number of patients in study and treatment arm 
  int<lower=0> responders[ND]; // Number of responders in study and treatment arm
  int<lower=1, upper=NT> treatment[ND]; // Treatment used in study and treatment arm
  int<lower=1, upper=NS> study[ND]; // Study 
  int<lower=1, upper=NT> treatment_base[ND]; // The first or "base-case" treatment per study
} 
parameters {
  vector[NT] d; // Treatment effect for each treatment 
  vector[NS] mu; // One mu for each study
  real<lower=0, upper=10> sigmasq_delta; // bounds must match the uniform(0, 10) prior below
  vector[ND] zeta; // standard-normal innovations replacing the centered delta
}

transformed parameters {
  vector[NT] d_new = d; // Treatment effect for each treatment, with value 0 for treatment one
  real<lower=0> sigma_delta = sqrt(sigmasq_delta);
  real w[NS,NT] = rep_array(0.0, NS, NT);
  vector[ND] sw_vector;
  vector[ND] vard_vector;
  vector[ND] delta_new; // built directly from zeta below (non-centered)

  d_new[1] = 0.0;

  for (i in 1:ND) {
    if (treatment[i] == 1) {
      delta_new[i] = 0.0; // treatment effect is zero for the control arm
      w[study[i],treatment[i]] = 0.0; 
      vard_vector[i] = 1E-2; // unused for control arms; kept so the vector is fully defined
      sw_vector[i] = 0.0;
    } else {
      vard_vector[i] = (pow(sigma_delta,2) * treatment[i])/(2 * (treatment[i]-1));
      // cumulative adjustment from earlier arms of the same study
      // (assumes rows are ordered by study and arm)
      sw_vector[i] = sum(w[study[i],1:(treatment[i]-1)])/(treatment[i]-1); 
      // non-centered construction: mean + sd * innovation, computed inside the
      // loop so that w (and hence sw) for later arms uses the updated delta_new
      delta_new[i] = d_new[treatment[i]] - d_new[treatment_base[i]]
                     + sw_vector[i] + sqrt(vard_vector[i]) * zeta[i];
      // adjustment for multi-arm RCTs
      w[study[i],treatment[i]] = delta_new[i] - d_new[treatment[i]] + d_new[treatment_base[i]];
    }
  }
}

model {
  responders ~ binomial_logit(sampleSize, mu[study] + delta_new);
  zeta ~ normal(0, 1);
  mu ~ normal(0, 10);
  d ~ normal(0, 10);
  sigmasq_delta ~ uniform(0, 10);
}

The R code that fits the model is
library(rstan)
library(rstantools)

options(mc.cores = parallel::detectCores())

rstan_options(auto_write = TRUE)

data <- read.csv("Anticoagulant.csv", header = TRUE)

ND <- nrow(data)
NT <- max(data$treatment)
NS <- max(data$study)

stan.dat <- list(ND=ND,NT=NT,NS=NS, sampleSize=data$sampleSize,
                 responders=data$responders,treatment=data$treatment,
                 study=data$study, treatment_base=data$treatment_base)


logit.nma <- stan_model(file = 'NMA_binary.stan') # auto_write already set via rstan_options() above


set.seed(1234)
MakeInits <- function(){
  list( mu    = rnorm(NS, 0,1),
        d = rnorm(NT,0,1),
        delta    = rnorm(ND, 0,1),
        sigmasq_delta = runif(1,0,2))
}

t1=proc.time()
fit_nma <- sampling(logit.nma, data = stan.dat, warmup = 1000, iter = 3000, seed = 1234,
                    chains = 3, init = MakeInits, pars = c('delta_new', 'sigma_delta'))
t_stan=proc.time()-t1
Anticoagulant.csv (1.8 KB)
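For completeness, the warnings can be re-checked after fitting with rstan's built-in diagnostic helpers (a minimal sketch):

# Summarize divergences, tree-depth saturation, and E-BFMI per chain
rstan::check_hmc_diagnostics(fit_nma)
# E-BFMI specifically; low values suggest reparametrizing, e.g. non-centering
rstan::check_energy(fit_nma)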

Sorry, I can’t help you directly, as I don’t really understand your model, but here are possible strategies to localize and fix the problem:

  1. Create a simulated dataset with known true values of all parameters. It is useful for many things, including checking for coding errors. If the warnings disappear on simulated data, your model may be a bad fit for the actual observed data (see the first R sketch after this list).

  2. Check your priors. If the model is sampling heavily in the very tails of your priors or on the boundaries of parameter constraints, this is a bad sign.

  3. Make sure your model is identifiable: non-identifiability and/or multimodality (multiple local maxima of the posterior distribution) is a problem. Case study: mixture models; see also my post on non-identifiable models and how to spot them.

  4. Move parameters to the data block and set them to their true values (from simulated data). Then move them back, one by one, to the parameters block. Which parameter introduces the problems?

  5. Introduce tight priors centered at the true parameter values. How tight do the priors need to be to let the model fit? Useful for identifying multimodality.

  6. Play a bit more with adapt_delta, stepsize and max_treedepth (see the second R sketch after this list).
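A minimal sketch of strategy 1, restricted to two-arm trials so the multi-arm adjustment stays inert; the names d_true, sigma_true and sim.dat are illustrative, and the sketch reuses NS/NT, so run it in a fresh session:

# Simulate binary NMA data with known treatment effects and between-trial SD
set.seed(42)
NS <- 20; NT <- 3
d_true <- c(0, -0.5, -1.0)     # true treatment effects relative to treatment 1
sigma_true <- 0.3              # true between-trial SD
study <- rep(1:NS, each = 2)
treatment <- as.vector(rbind(1, sample(2:NT, NS, replace = TRUE)))
treatment_base <- rep(1, 2 * NS)
mu_true <- rnorm(NS, -1, 0.5)  # true study baselines on the logit scale
delta_true <- ifelse(treatment == 1, 0,
                     rnorm(2 * NS, d_true[treatment], sigma_true))
sampleSize <- rep(200, 2 * NS)
responders <- rbinom(2 * NS, sampleSize, plogis(mu_true[study] + delta_true))
sim.dat <- list(ND = 2 * NS, NT = NT, NS = NS, sampleSize = sampleSize,
                responders = responders, treatment = treatment,
                study = study, treatment_base = treatment_base)
# Fit with sampling(logit.nma, data = sim.dat, ...) and compare the posterior
# of d and sigma_delta against d_true and sigma_true.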
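And a sketch of strategy 6, reusing logit.nma and stan.dat from the script above (fit_check is an illustrative name):

# Tighten the sampler on the already-compiled model
fit_check <- sampling(logit.nma, data = stan.dat, warmup = 1000, iter = 3000,
                      seed = 1234, chains = 3,
                      control = list(adapt_delta = 0.99, max_treedepth = 12))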


Thank you, Martin, for your suggestions.

Dear yeva,

did your NMA run well in Stan?

Thanks a lot,
Gentrany

Hi GentRany,
Yes, it ran OK.

Hi, could you share your code and data with me? Thank you so much.


In an attempt to link network meta-analysis posts together, I include here a link to a new post I’ve made about porting a WinBUGS MTC/network meta-analysis model of Poisson random variables. Hopefully, it’ll make life easier for future users who are interested in this. Link: Inefficient sampling and divergent transitions in network meta-analysis (ported from WinBUGS)

Thanks a lot!
