Hm, I might have it:

```stan
data {
  int<lower=2> N;             // number of experiments
  array[N] int<lower=2> K;    // number of observations per experiment
  vector[N] obs_mean;         // observed mean from each experiment (priors assume
                              //  these have been standardized to a mean of 0 and sd of 1)
  vector<lower=0>[N] obs_sd;  // observed standard deviation in each experiment
}
parameters {
  // centrality parameters
  real centrality_intercept;
  real<lower=0> centrality_sd;
  vector[N] centrality_z;
  // variability parameters
  real variability_intercept;
  real<lower=0> variability_sd;
  vector[N] variability_z;
}
model {
  // non-centered parameterization of the per-experiment latents
  vector[N] true_experiment_mean = centrality_intercept + centrality_sd * centrality_z;
  vector[N] true_log_experiment_sd = variability_intercept + variability_sd * variability_z;
  // priors for centrality
  target += normal_lpdf(centrality_intercept | 0, 1); // expects standardized data
  target += weibull_lpdf(centrality_sd | 2, 1);       // expects standardized data
  target += normal_lpdf(centrality_z | 0, 1);
  // priors for variability (these need tweaking!)
  target += normal_lpdf(variability_intercept | 0, 1); // expects standardized data
  target += weibull_lpdf(variability_sd | 2, 1);       // expects standardized data
  target += normal_lpdf(variability_z | 0, 1);
  // likelihood
  target += normal_lpdf(log(obs_sd) | variability_intercept, variability_sd);
  target += normal_lpdf(obs_mean | true_experiment_mean,
                        exp(true_log_experiment_sd) ./ sqrt(to_vector(K)));
}
```

So, in addition to modelling each experiment's mean as sampled from a latent normal distribution, I also model each experiment's variability as sampled from a latent log-normal distribution. The last line accounts for the experiments' differing sample sizes: the standard error of a mean is the sd divided by the square root of the number of observations. I still need to work out the priors on the variability parameters, but does this look right in general? (tagging @matti in case they have input)
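In case it helps anyone sanity-check the structure, here's a minimal fake-data sketch (in Python) of the generative process the model assumes: per-experiment means drawn from a latent normal, per-experiment sds drawn from a latent log-normal, then raw observations summarized into `obs_mean` and `obs_sd`. All the numeric values here are arbitrary choices of mine, not anything the model prescribes.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical settings for the simulation (my choices, purely illustrative)
N = 20                             # number of experiments
K = rng.integers(10, 100, size=N)  # observations per experiment
centrality_intercept, centrality_sd = 0.0, 0.5
variability_intercept, variability_sd = 0.0, 0.3

# latent per-experiment mean and log-sd (non-centered form, as in the Stan model)
true_mean = centrality_intercept + centrality_sd * rng.standard_normal(N)
true_log_sd = variability_intercept + variability_sd * rng.standard_normal(N)

# simulate raw observations for each experiment, then summarize
obs_mean = np.empty(N)
obs_sd = np.empty(N)
for i in range(N):
    y = rng.normal(true_mean[i], np.exp(true_log_sd[i]), size=K[i])
    obs_mean[i] = y.mean()
    obs_sd[i] = y.std(ddof=1)
```

Feeding `N`, `K`, `obs_mean`, and `obs_sd` from a simulation like this into the model and checking that the latent parameters are recovered would be a reasonable first test.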