Fitting Survival Model with Random Effect

I am trying to perform a meta-analysis of time-to-event outcomes using individual patient data from 8 studies. The data contain 67,801 censored observations, 5,481 events, and one binary covariate.

I found that a Gompertz model fits the pooled data well, and I would like to conduct a random-effects meta-analysis in Stan.

My code runs very slowly and does not mix well.

My priors are:

sdalpha = 10
sdbeta0 = log(10)
sdbeta = log(10)
sdtau = log(2)

I was hoping to get some suggestions on how to improve this model.

My data is at

functions {

  // Survival probability S(t) = exp(-(b/a) * (exp(a*t) - 1))
  real gompertz_surv_prob(real t, real a, real b) {
    return exp(-b / a * (exp(a * t) - 1));
  }

  // Log density of the event times (vectorized, no loop needed)
  real gompertz_lpdf(vector t, real a, vector b) {
    return sum(log(b) + a * t - b .* (exp(a * t) - 1) / a);
  }

  // Log survival (complementary CDF) of the censored times
  real gompertz_lccdf(vector t, real a, vector b) {
    return sum(-b .* (exp(a * t) - 1) / a);
  }

}
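As a quick sanity check on these functions, the density implied by the log likelihood (hazard b*exp(a*t) times survival S(t)) can be integrated numerically and compared against 1 - S(t). A sketch in Python with arbitrary, hypothetical parameter values:

```python
import numpy as np

def gompertz_pdf(t, a, b):
    # density = hazard * survival, with hazard b*exp(a*t)
    return b * np.exp(a * t) * np.exp(-b / a * (np.exp(a * t) - 1))

def gompertz_surv(t, a, b):
    # S(t) = exp(-(b/a) * (exp(a*t) - 1))
    return np.exp(-b / a * (np.exp(a * t) - 1))

a, b = 0.5, 0.2          # hypothetical shape and rate
t_max = 2.0

# Trapezoidal integral of the density on [0, t_max]
ts = np.linspace(0.0, t_max, 200001)
vals = gompertz_pdf(ts, a, b)
integral = np.sum((vals[1:] + vals[:-1]) * np.diff(ts)) / 2

# Should match the CDF 1 - S(t_max) to numerical precision
assert abs(integral - (1 - gompertz_surv(t_max, a, b))) < 1e-6
```

If the integral and 1 - S(t) disagree, the lpdf and lccdf are inconsistent with each other, which is worth ruling out before blaming the sampler.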

data {
  int<lower=0> Nobs;                          // number of events
  int<lower=0> Ncen;                          // number of censored observations
  int<lower=1> Nstudy;                        // number of studies
  int<lower=0> H;                             // number of covariates
  vector[Nobs] yobs;                          // event times
  vector[Ncen] ycen;                          // censoring times
  int<lower=1, upper=Nstudy> stud_obs[Nobs];  // study index for each event
  int<lower=1, upper=Nstudy> stud_cen[Ncen];  // study index for each censored obs
  matrix[Nobs, H] Xobs;                       // covariate matrix for events
  matrix[Ncen, H] Xcen;                       // covariate matrix for censored obs
  real<lower=0> sdbeta0;                      // prior sds, passed in as data
  real<lower=0> sdbeta;
  real<lower=0> sdalpha;
  real<lower=0> sdtau;
}
parameters {
  real alpha;               // Gompertz shape
  real beta0;               // intercept (log baseline rate)
  vector[H] beta;           // covariate effects
  vector[Nstudy] re;        // study-level random effects
  real<lower=0> tau;        // between-study sd
}
model {
  alpha ~ normal(0, sdalpha);
  beta0 ~ normal(0, sdbeta0);
  beta ~ normal(0, sdbeta);
  tau ~ normal(0, sdtau);   // half-normal, given the lower bound
  re ~ normal(0, tau);

  target += gompertz_lpdf(yobs | alpha, exp(beta0 + Xobs * beta + re[stud_obs]));
  target += gompertz_lccdf(ycen | alpha, exp(beta0 + Xcen * beta + re[stud_cen]));
}

There is a post over here, Hierarchical Gompertz model, which dives into some of this.

In general, if the model is running slowly:

- Simplify the model as much as possible and add complexity in steps.
- Reduce the amount of data by quite a bit; I usually pull out a few hundred or thousand data points.
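For the data-reduction step, one simple approach is to keep every event and subsample the much larger censored set. A sketch in Python; the array names mirror the Stan data block, but the values here are simulated placeholders standing in for the real data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholders for the real censored data: times and study indices (8 studies)
ycen = rng.exponential(2.0, size=67801)
stud_cen = rng.integers(1, 9, size=67801)

# Keep all 5,481 events as-is, but draw only a few thousand censored
# observations, without replacement, so studies remain roughly
# proportionally represented.
n_keep = 5000
idx = rng.choice(ycen.size, size=n_keep, replace=False)
ycen_sub = ycen[idx]
stud_cen_sub = stud_cen[idx]
```

Since there are roughly 12 censored observations per event, cutting the censored set this way shrinks the likelihood evaluation substantially while keeping all of the event information for debugging.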

It can be hard to troubleshoot complicated models and large data sets.
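If the mixing problem persists after simplifying, one standard change worth trying is the non-centered parameterization of the study effects, which often helps hierarchical models with a small number of groups. A sketch of how the parameters and model blocks in the posted code might change (everything else unchanged; this is a common fix, not a guaranteed one):

```stan
parameters {
  real alpha;
  real beta0;
  vector[H] beta;
  vector[Nstudy] re_raw;    // standard-normal innovations
  real<lower=0> tau;
}
transformed parameters {
  vector[Nstudy] re = tau * re_raw;   // implies re ~ normal(0, tau)
}
model {
  re_raw ~ std_normal();
  tau ~ normal(0, sdtau);
  beta0 ~ normal(0, sdbeta0);
  beta ~ normal(0, sdbeta);
  alpha ~ normal(0, sdalpha);

  target += gompertz_lpdf(yobs | alpha, exp(beta0 + Xobs * beta + re[stud_obs]));
  target += gompertz_lccdf(ycen | alpha, exp(beta0 + Xcen * beta + re[stud_cen]));
}
```

This removes the direct dependence between tau and re in the prior, which is a frequent cause of funnel-shaped posteriors and poor mixing with only 8 groups.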