How to combine posterior draws/samples from separate datasets/models?

Is it valid to combine posterior draws/samples obtained from models that have been fitted to separate datasets? If so, what’s the best way to combine the samples?

I’ve tried two approaches to combining the posterior samples from the two datasets, and I wonder whether either one is a valid way to combine the posterior draws across the two datasets, or whether neither approach is valid. And if both are valid, which one is preferred? One approach uses brm_multiple and the other uses brm with (1 | dataset).

To illustrate the question/problem, I’ve simulated two sets of posterior samples obtained from two datasets below. Each dataset has 100 posterior samples (i.e., each row is one posterior draw/sample). In dataset 1, the posterior mean is 100 (sd = 10); in dataset 2, the posterior mean is 200 (sd = 20).

Also, the posteriors obtained from the two approaches are quite different. For the first approach (brm_multiple), the posterior for b_Intercept is bimodal, with peaks at 100 and 200. However, the posterior is unimodal for the second approach (brm), with a single peak at 150.

Appreciate any help/advice anyone can provide. Thanks!

# simulate data from two datasets
datlist <- lapply(1:2, function(x) data.frame(dataset = x, parameter_posterior_sample = rnorm(100, 100 * x, 10 * x)))

# datlist - list of two dataframes
[[1]]
    dataset parameter_posterior_sample
1         1                  110.09560
2         1                   89.82479
3         1                  100.16066
4         1                  118.11959
5         1                  106.54350

...
[[2]]    
    dataset parameter_posterior_sample
1         2                   207.7451
2         2                   192.6553
3         2                   226.0222
4         2                   153.7518
5         2                   213.9057
...

library(brms)

# approach 1: brm_multiple
fit <- brm_multiple(parameter_posterior_sample ~ 1, data = datlist, chains = 2)

# approach 2: brm
dat <- do.call("rbind", datlist)  # bind two datasets into a single dataframe
fit2 <- brm(parameter_posterior_sample ~ 1 + (1 | dataset), data = dat, chains = 2)

plot(fit)  # b_Intercept is bimodal with peaks at 100 and 200
plot(fit2)  # b_Intercept is unimodal with peak at 150

Generally, no. Wanting to “combine posterior draws” from models fit to separate data sets implies that there are quantities, informed by both data sets, that you are interested in. In that case it is best to structure a joint model with those quantities as explicit parameters and include both data sets in the likelihood, so that they can properly and jointly inform those quantities.
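To make that concrete, here is a minimal brms sketch of the joint-model idea (a sketch only, assuming you have the raw observations from both datasets, not just posterior draws, in a hypothetical dataframe dat_raw with columns y and dataset): each dataset gets its own mean and residual SD within a single fit, so any contrast between them comes from one joint posterior.

library(brms)

# hypothetical raw data: columns y (outcome) and dataset (1 or 2)
dat_raw$dataset <- factor(dat_raw$dataset)

fit_joint <- brm(
  bf(y ~ 0 + dataset, sigma ~ 0 + dataset),  # dataset-specific means and SDs
  data = dat_raw, chains = 2
)

# contrast between the two means, computed from the joint posterior
hypothesis(fit_joint, "dataset2 - dataset1 = 0")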


My “generally” qualifier comes from acknowledging specific cases of a joint model like the following:

data {
  int nA;
  vector[nA] A;
  int nB;
  vector[nB] B;
}
parameters {
  real meanA;
  real<lower=0> sdA;
  real meanB;
  real<lower=0> sdB;
}
model {
  // sub-model for A
  meanA ~ std_normal();
  sdA ~ std_normal();
  A ~ normal(meanA, sdA);

  // sub-model for B
  meanB ~ std_normal();
  sdB ~ std_normal();
  B ~ normal(meanB, sdB);
}
generated quantities {
  real meansDiff = meanA - meanB;
  real sdsDiff = sdA - sdB;
}

which reflects truly structure-independent sub-models for A & B, thereby permitting them to be split out as separate models, with the generated quantities computed by hand afterwards. But I find that even in that case it’s more straightforward to do the joint version, and thinking jointly also affords the option of parameterizing the model so you can express priors on the generated quantities themselves, which is usually a good idea if these are the quantities of greatest ultimate inferential interest.
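For the record, the by-hand route mentioned above would look something like this (a rough sketch, assuming two hypothetical intercept-only brms fits fitA and fitB to A and B with the same number of draws): because the two posteriors are independent, the generated quantities can be computed by pairing draws across the fits.

library(brms)
library(posterior)

drawsA <- as_draws_df(fitA)  # hypothetical fit to A: A ~ 1
drawsB <- as_draws_df(fitB)  # hypothetical fit to B: B ~ 1

# pair draws by index; valid here because the posteriors are independent
meansDiff <- drawsA$b_Intercept - drawsB$b_Intercept  # analogue of meansDiff above
sdsDiff   <- drawsA$sigma       - drawsB$sigma        # analogue of sdsDiff above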


@hauselin To support your intuition here, suppose the two datasets give different answers about the value of some particular parameter. One says that it’s somewhere near zero, and the other says that it’s somewhere near 10. Would you want to confidently conclude that the parameter is definitely somewhere near zero OR somewhere near 10, but NOT somewhere in the middle? If the two datasets are giving seriously different answers, then chances are that one model or the other is misspecified, or else that there is some conceptual issue and the quantities being estimated in the two models are not logically equivalent.

Suppose on the other hand that the two models give similar but not identical results. Then we would want our ability to bring both models to bear on the problem to increase the precision of our inference and narrow our posteriors. But combining posterior draws from different posteriors will always (I think?) result in a posterior that is broader than the narrower of the two posteriors that are being combined.
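A quick numerical illustration of that last point (made-up numbers, not from the models above):

postA <- rnorm(4000, 0, 1)      # draws from a "narrow" posterior near 0
postB <- rnorm(4000, 10, 2)     # draws from a posterior near 10
sd(postA)                       # ~1
sd(c(postA, postB))             # far larger than sd(postA): pooling widens, not narrows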

@mike-lawrence, the other scenario where I see posteriors getting combined is when uncertainty from a previous posterior is carried forward by iterating a model over multiple draws from the previous posterior as data, and then combining the resulting posteriors for inference. This is especially useful when the margins of the previous posterior are not independent, which can make it prohibitively difficult to obtain a reliable parametric approximation to the joint posterior. It also achieves what the old cut function in BUGS was supposed to achieve (though I think it failed to deliver that reliably), which I guess had some use cases.
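In brms terms that pattern looks roughly like this (a sketch only; fit_stage1, dat2, y, and x_latent are hypothetical, and dat2 needs one row per observation of the first-stage fit):

# each second-stage dataset plugs in one posterior draw of a first-stage
# latent quantity as if it were observed data; brm_multiple then pools the
# draws across all of those fits
epred <- posterior_epred(fit_stage1)                  # draws x observations matrix
datlist2 <- lapply(1:50, function(i) transform(dat2, x_latent = epred[i, ]))
fit_stage2 <- brm_multiple(y ~ x_latent, data = datlist2, chains = 2)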


Thanks for your insights, @jsocolar and @mike-lawrence! Seems like whether it’s appropriate depends on the situation. For example, it seems perfectly fine to combine posterior samples from different imputed datasets with brm_multiple (as shown in @paul.buerkner’s vignette).
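For anyone curious, that multiple-imputation use case looks roughly like this (along the lines of the vignette; assumes the mice package and its built-in nhanes data):

library(brms)
library(mice)

imp <- mice(nhanes, m = 5)                                # 5 imputed datasets
fit_imp <- brm_multiple(bmi ~ age + chl, data = imp, chains = 2)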
