Meta-analysis with inference on measurement error variability

This is a bit of a follow-up to the thread here, where I’m now looking for feedback on the solution I’ve come up with. To recap, this is the 8schools scenario, but in addition to modelling variability in the mean from school to school, I also want to model variability in the within-school variability. That is, the standard 8schools model takes the observed standard deviation within a given school as an error-free measurement, while I want to express that there is likely some error in that measurement and partially pool information on this quantity across schools. To do this you need to know the mean, SD and N (number of students observed) for each school (cf. typical 8schools models, where the SD & N are collapsed to an SE before input to the model).
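
For concreteness, the collapse that the typical 8schools setup performs before modelling, and that I want to avoid here, is simply this (using the obs_sds and K from the simulation code below; obs_ses is just an illustrative name):

#collapse each school's SD and N to a standard error of the mean,
#which the standard 8schools model then treats as a known, error-free quantity
obs_ses = obs_sds / sqrt(K)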

Below is R code to simulate some data and sample from the Stan model, which is also included below. Note that I attempt to achieve weak/data-informed priors by standardizing the means and log-SDs inside the model’s transformed data block.

I seem to be able to get posteriors that correspond to whatever input parameters I use to simulate data, but I think there might still be something missing. Specifically, I feel like the line in the model expressing the observed SD as a sample from a population should include information about how much measurement effort went into that observation (i.e. how many students were observed), because it makes sense to “believe” SDs from large samples more than those from small samples, no?

Edit: After some further playing, I’m sure something more needs to be done to account for measurement effort, as making K vary from school to school (replacing K = rep(100,times=N) with K = runif(N,2,100) in the R code) leads to a posterior on variability_sd_log that doesn’t match the input value of sd_of_sds_log (the posterior seems to overestimate the simulation parameter). Any ideas?

library(rstan)

#population parameters
mean_of_means = 100 #arbitrary value
sd_of_means = 20 #arbitrary value

mean_of_sds_log = 4 #arbitrary value
sd_of_sds_log = .1 #probably want this one to be small-ish

#measurement parameters
N = 1000 # ex. number of schools
K = rep(100,times=N) # ex. number of students observed per school

#generate true means & sds per school
true_means = rnorm(N,mean_of_means,sd_of_means)
true_sds = exp(rnorm(N,mean_of_sds_log,sd_of_sds_log))

#loop to generate raw data and consequent observed means and sds
obs_means = rep(NA,times=N)
obs_sds = rep(NA,times=N)
for(i in 1:N){
	individual_obs = rnorm(K[i],true_means[i],true_sds[i])
	obs_means[i] = mean(individual_obs)
	obs_sds[i] = sd(individual_obs)
}

#package data for Stan
data_for_stan = list(
	N = N # number of experiments
	, K = K # number of observations per experiment
	, obs_means = obs_means  # observed mean from each experiment (standardized internally in the model's transformed data block)
	, obs_sds = obs_sds  # observed standard deviations in each experiment
)

#compile model
meta_mod = rstan::stan_model('meta.stan')

#sample model
post = rstan::sampling(
	object = meta_mod
	, data = data_for_stan
	, chains = 4
	, cores = 4
)
check_hmc_diagnostics(post)
summary(
	post
	, pars = c(
		  'centrality_intercept'
		, 'centrality_sd'
		, 'variability_intercept_log'
		, 'variability_sd_log'
	)
)$summary


Contents of meta.stan:

data {
  int<lower=2> N;  // number of experiments
  vector<lower=2>[N] K ; // number of observations per experiment
  vector[N] obs_means;  // observed mean from each experiment (standardized internally in transformed data)
  vector<lower=0>[N] obs_sds;  // observed standard deviations in each experiment
}
transformed data{
  //standardize means for easy weakly-informed priors
  real mean_of_means = mean(obs_means) ;
  real sd_of_means = sd(obs_means) ;
  vector[N] obs_means_z = (obs_means-mean_of_means)/sd_of_means ;
  // log then standardize sds for easy weakly-informed priors
  vector[N] obs_sds_log = log(obs_sds) ;
  real mean_of_sds = mean(obs_sds_log) ;
  real sd_of_sds = sd(obs_sds_log) ;
  vector[N] obs_sds_z = (obs_sds_log-mean_of_sds)/sd_of_sds ;
}
parameters {
  //centrality parameters
  real centrality_z_intercept;
  real<lower=0> centrality_z_sd;
  vector[N] centrality_noncentering_helper;
  //variability parameters
  real variability_z_intercept;
  real<lower=0> variability_z_sd;
  vector[N] variability_noncentering_helper;
}
transformed parameters{
  vector[N] true_experiment_mean = (centrality_z_intercept + centrality_z_sd * centrality_noncentering_helper) * sd_of_means + mean_of_means ;
  vector[N] true_experiment_log_sd = (variability_z_intercept + variability_z_sd * variability_noncentering_helper) * sd_of_sds + mean_of_sds ;
}
model {
  // priors for centrality
  target += std_normal_lpdf(centrality_z_intercept); //peaked at zero. Implements data-driven weakly-informed prior when data is standardized to mean=0 & sd=1
  target += weibull_lpdf(centrality_z_sd | 2, 1); //peaked around 1, positive and zero-avoiding. Implements data-driven weakly-informed prior when data is standardized to mean=0 & sd=1
  target += std_normal_lpdf(centrality_noncentering_helper);
  // priors for variability (these need tweaking!)
  target += std_normal_lpdf(variability_z_intercept); //peaked at zero. Implements data-driven weakly-informed prior when data is standardized to mean=0 & sd=1
  target += weibull_lpdf(variability_z_sd | 2, 1); //peaked around 1, positive and zero-avoiding. Implements data-driven weakly-informed prior when data is standardized to mean=0 & sd=1
  target += std_normal_lpdf(variability_noncentering_helper);
  // likelihood
  target += normal_lpdf(obs_sds_z | variability_z_intercept , variability_z_sd ); // note: no dependence on K here; this is the line I'm unsure about
  target += normal_lpdf(obs_means | true_experiment_mean, exp(true_experiment_log_sd) ./ sqrt(K) );
}
generated quantities{
  //transform back to original data scales
  real centrality_intercept = (centrality_z_intercept * sd_of_means) + mean_of_means ;
  real<lower=0> centrality_sd = centrality_z_sd * sd_of_means ;
  real variability_intercept_log = (variability_z_intercept * sd_of_sds) + mean_of_sds ;
  real<lower=0> variability_sd_log = variability_z_sd * sd_of_sds ;
}

Sorry, short on time, but maybe @bbbales2 can spare some time to look into this?

From: “What is the standard error of the sample standard deviation?” (Cross Validated)

Looks like the standard error of a standard deviation estimate is pretty similar to the standard error of a mean, so I guess that’s how we’d vaguely expect these things to behave.
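
If I’m reading that answer right, for normal data the standard error of a sample SD is approximately sigma / sqrt(2 * (N - 1)), versus sigma / sqrt(N) for a sample mean, so both shrink like 1 / sqrt(N) as the sample grows.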

The problem is how the data is being passed in, I think. If you passed in all the actual measurements (instead of summaries) and wrote the model that way, I think you’d get the answer you expect (schools with lots of samples would have better-defined SDs).
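
Something like this untested sketch is what I mean (all names made up, priors just placeholders; you’d pass every individual measurement along with an index saying which school it came from):

data {
  int<lower=1> N_total;                  // total individual measurements across all schools
  int<lower=2> N;                        // number of schools
  int<lower=1,upper=N> school[N_total];  // which school each measurement came from
  vector[N_total] y;                     // the raw individual measurements
}
parameters {
  real pop_mean;
  real<lower=0> pop_mean_sd;
  vector[N] true_mean;                   // latent mean per school
  real pop_log_sd_mean;
  real<lower=0> pop_log_sd_sd;
  vector[N] true_log_sd;                 // latent log-SD per school
}
model {
  // placeholder priors; you'd want your standardization trick here too
  pop_mean ~ normal(0, 1);
  pop_mean_sd ~ weibull(2, 1);
  true_mean ~ normal(pop_mean, pop_mean_sd);
  pop_log_sd_mean ~ normal(0, 1);
  pop_log_sd_sd ~ weibull(2, 1);
  true_log_sd ~ normal(pop_log_sd_mean, pop_log_sd_sd);
  // every raw measurement informs its school's mean and SD directly,
  // so schools with more measurements get tighter SDs automatically
  y ~ normal(true_mean[school], exp(true_log_sd[school]));
}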

But in eight schools the data is being summarized before it is passed in:

int<lower=2> N;  // number of experiments
vector<lower=2>[N] K ; // number of observations per experiment
vector[N] obs_means;  // observed mean from each experiment (standardized internally in transformed data)
vector<lower=0>[N] obs_sds;  // observed standard deviations in each experiment

If obs_sds is 8-schools, I think those are standard errors of estimates of mean effects. That data tells us roughly our uncertainty in the obs_means. You’d want something like obs_sds_sds, I guess, if you wanted to do this for the SDs as well (see the sketch below).
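
Plugging in the approximation from that Cross Validated answer, the crudest version might be something like this (untested; obs_sds_sds is just a made-up name):

#approximate standard error of each observed SD, assuming normal data
obs_sds_sds = obs_sds / sqrt(2 * (K - 1))

Then obs_sds could enter the model as a noisy measurement of each true SD with that (approximately) known error.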

In the conventional analysis of 8schools, yes, you pass in the standard errors of the estimates of mean effects (i.e. SD/sqrt(number of samples contributing to the SD)), but in my extension I’m passing in simply the observed standard deviations, attempting to model the true/latent standard deviations (on the log scale, hence the parameter true_experiment_log_sd), and finally deriving a standard error of the estimate of the mean by dividing exp(true_experiment_log_sd) by sqrt(number of samples contributing to the SD).

Hi Mike,
I think the best approach is to directly account for the sampling distribution of the variance (chi-squared / gamma). I did something like that in a paper on Stan hierarchical models for meta-analysis in toxicology, where you may have N = 2 or 3 in some studies. There is Stan code in the supplement.
But from my simulations (I may be wrong on this), there’s little/no point in accounting for the sampling distribution of the variance once you get to higher N’s (say, > 20). There was a simulation paper on this very issue in PLOS ONE last year, so you can even quantify this.
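
To spell out the chi-squared/gamma relationship: for normally distributed data, (n - 1) * s^2 / sigma^2 ~ chi-squared(n - 1), which is equivalent to s^2 ~ Gamma((n - 1)/2, (n - 1)/(2 * sigma^2)) in the shape–rate parameterization, so the sample size n appears explicitly in the likelihood for the observed variance.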

PS: The paper is not open access, I think. If you want to check it more closely just let me know and I’ll share code.

Thanks, and yes, I’d love to peek at the code!

The model is below. It is a hierarchical model with a few hierarchies and meta-regression, so you can safely ignore almost all of it (if you’re interested in the toxicology application, just scihub the paper); the part you’re possibly interested in is at the end, with lv[i] ~. Once we have the sampling distribution, the true σ can then be modelled.

I am working on putting this “generic” approach (lv ~) into a meta-analysis-with-Stan package we’re developing (here), but I was also thinking about these models for a couple of application papers (more tox/risk assessment, but also economics). Would love to talk applications/ideas around treating variance as unknown if you’re also thinking about this!

data {
  int  n;
  int  n_drugs;
  int  n_studies;
  int  n_groups;  
  int  drug[n];
  int  study[n];
  int  group[n];
  
  //observed data on means & variation:
  real ss[n];
  real lgm[n];
  real<lower=0> lv[n];
}

parameters {
  real mu_drug[n_drugs];     
  real mu_study[n_studies];  
  real<lower=0> sigma_study;
  real gamma_group[n_groups];
  real gamma_drug[n_drugs];
  
  // random effect of group:
  real logratio[n];
  real mu_group[n_groups - 1];
  real<lower=0> sigma_group;
  
  // fixed effect of group:
  // real logratio[n_groups - 1];
}

model {
  // Both true mu and sigma are simply introduced to clean 
  // up notation, rather than as extra parameters to output.
  // Hence they are defined here, in the model block.
  real sigma[n]; 
  real mu[n];
  for(i in 1:n){
    if(group[i] != 1)
      // random effect
      mu[i] = mu_drug[drug[i]] + mu_study[study[i]] + logratio[i];
      // fixed effect
      // mu[i] = mu_drug[drug[i]] + mu_study[study[i]] + logratio[group[i] - 1];
    else
      mu[i] = mu_drug[drug[i]] + mu_study[study[i]];
    sigma[i] = exp(gamma_drug[drug[i]] + gamma_group[group[i]]);
  }
  
  //priors dealing with means (mu):
  mu_drug ~ normal(0, 100);
  for(i in 1:n_studies)
    mu_study[i] ~ normal(0, sigma_study);
  sigma_study ~ normal(0, 5);
  gamma_drug ~ normal(0, 10);
  gamma_group ~ normal(0, 10);
  // random effect of group:
  mu_group ~ normal(0, 100);
  sigma_group ~ normal(0, 5);
  // fixed effect of group:
  // logratio ~ normal(0, 5);
  
  for(i in 1:n){
    // random effect of group:
    if(group[i] == 1)
      logratio[i] ~ normal(0, .0001); // effectively pins the reference group's logratio at zero
    else
      logratio[i] ~ normal(mu_group[group[i] - 1], sigma_group);
    //observed quantities:
    lgm[i] ~ normal(mu[i], sigma[i]/sqrt(ss[i]));
    //sample variance: (n_jk - 1) * s^2 / sigma^2 ~ chi-sq(n_jk - 1), i.e. s^2 ~ Gamma((n_jk - 1)/2, (n_jk - 1)/(2 * sigma^2))
    lv[i]  ~ gamma((ss[i] - 1) / 2, (ss[i]-1)/(2*(sigma[i]^2))); 
  }
}
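
For what it’s worth, an untested sketch of how that last gamma line might drop into your meta.stan (reusing your K, obs_sds and true_experiment_log_sd, and replacing the normal likelihood on obs_sds_z) would be:

  // gamma likelihood on the observed variances, so that K enters the model
  target += gamma_lpdf(square(obs_sds) | (K - 1) / 2,
                       (K - 1) ./ (2 * square(exp(true_experiment_log_sd))));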