(This is more of an "am I doing this right?" question.)
I have body-mass data for different invertebrate taxa across different experimental units (think plots or locations). These are body masses of up to 10 individuals per taxon per unit (when a sample in a unit contained more than 10 individuals, 10 were measured; when it contained fewer, all were measured). I need an estimate of the body-mass distribution of each taxon, but I want to allow for differences in mean body mass between units. On the other hand, given that some units have far fewer than 10 individuals, I want to minimize small-sample biases by partially pooling across units. If I am thinking about this right, that should shrink the unit-level estimates towards the overall mean.
At first glance this seems to work OK with simulated data. In my real data, however, I will have some taxa for which only a few individuals are found altogether (say 5 across all units). I am therefore considering additionally partially pooling information on mu and sigma within higher taxonomic groups (e.g. species within the same genus). I am aware of examples using phylogenetic distance, but that may be overkill for my purposes, so for now I would just like to partially pool information within each group. If I understand the documentation correctly, the way to do this in brms is with gr() and by = group.
# lognormal RNG parameterized by the mean and sd on the natural (not log) scale
myrlnorm = function(n, m, s) {
  lnrm <- rlnorm(n,
                 meanlog = log(m^2 / sqrt(s^2 + m^2)),
                 sdlog = sqrt(log(1 + (s^2 / m^2))))
  return(lnrm)
}
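# quick sanity check of the parameterization (chk is just a throwaway object):
# with a large n, the sample mean and sd of the draws should come out close to
# the m and s supplied
chk = myrlnorm(1e5, 2, 1)
c(mean = mean(chk), sd = sd(chk))  # expect roughly 2 and 1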
d = data.frame(# 10 units
               u = rep(1:10, each = 10*5),
               # 5 taxa
               t = rep(rep(letters[1:5], each = 10), 10))
set.seed(123)
# dummy body-masses: mean and sd increase with unit and differ among taxa
d$b[d$t == 'a'] = myrlnorm(100,   d$u[d$t == 'a']/10,   d$u[d$t == 'a']/20)
d$b[d$t == 'b'] = myrlnorm(100, 2*d$u[d$t == 'b']/10, 2*d$u[d$t == 'b']/20)
d$b[d$t == 'c'] = myrlnorm(100, 3*d$u[d$t == 'c']/10, 3*d$u[d$t == 'c']/20)
d$b[d$t == 'd'] = myrlnorm(100, 4*d$u[d$t == 'd']/10, 4*d$u[d$t == 'd']/20)
d$b[d$t == 'e'] = myrlnorm(100, 5*d$u[d$t == 'e']/10, 5*d$u[d$t == 'e']/20)
# less than 10 ind. per unit
d[sample(1:500, .6*500),3] = NA
d = d[complete.cases(d),]
table(d$u, d$t)
d$u = as.factor(d$u)
#### partial pooling ####
mpp = brm(bf(b ~ 0 + t + (0 + t | u),
             sigma ~ 0 + t),
          family = lognormal(),
          chains = 3,
          cores = 3,
          iter = 3000,
          backend = "cmdstanr",
          seed = 123,
          data = d)
summary(mpp)
ranef(mpp) # unit differences from "pop-level" estimates of mu
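# coef() should give the unit-specific estimates directly
# (population-level estimate plus the unit-level deviation combined)
coef(mpp)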
plot(conditional_effects(mpp,
                         effects = "u:t",
                         re_formula = NULL))
ce = conditional_effects(mpp,
                         effects = "u:t",
                         re_formula = NULL)
ce[[1]] # estimate__ gives unit specific estimates for mean body-mass
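# cross-check (if I understand correctly): with family = lognormal(),
# posterior_epred() / conditional_effects() return the response-scale mean,
# i.e. exp(mu + sigma^2 / 2); nd and ep below are just throwaway objects
nd = expand.grid(u = levels(d$u), t = letters[1:5])
ep = posterior_epred(mpp, newdata = nd)  # matrix: draws x rows of nd
nd$epred_mean = colMeans(ep)             # should match estimate__ above
nd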
# define groups
d$g[d$t %in% c('a','b')] = 'aa'
d$g[d$t %in% c('c','d','e')] = 'bb'
# make taxon c have very few individuals
n = sample(which(d$t == 'c'),
           .8 * length(which(d$t == 'c')))
d[n,3] = NA
d = d[complete.cases(d),]
table(d$u,d$t)
#### partial pooling within groups, group means varying by unit ####
mppgp = brm(bf(b ~ 0 + (1 | gr(t, by = g)) + (0 + t | u),
               sigma ~ 0 + (1 | gr(t, by = g))),
            family = lognormal(),
            chains = 3,
            cores = 3,
            iter = 9000,
            control = list(adapt_delta = 0.999),
            backend = "cmdstanr",
            seed = 321,
            data = d)
summary(mppgp)
# "pop-level" mu and sigma for each taxon,
# followed by unit differences from pop-level mu
ranef(mppgp)
plot(conditional_effects(mppgp,
                         effects = "u:t",
                         re_formula = NULL))
ce = conditional_effects(mppgp,
                         effects = "u:t",
                         re_formula = NULL)
ce[[1]] # estimate__ gives unit specific estimates for mean body-mass
This last model has some divergent transitions even after raising adapt_delta to 0.999, but the Rhats and chains look OK. I am also not sure about cor = TRUE inside gr(); the documentation says "group-level terms will be modelled as correlated". Is that something I want?
My main question is: does the model specification in the second model do what I want, or should it be different somehow?
To complicate things a bit more, let’s say we are dealing not with species within genera but with families within orders. It is less reasonable to assume that different families within the same order will have similar body-mass distributions. What if what I want the model to “learn” from other families in the same order is not mu and sigma themselves, but the relationship between mu and sigma? (I realize this may warrant opening a new topic.)
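In case it helps to make that last idea concrete, here is an untested sketch of what I have in mind (mpp_cor is just a placeholder name): if I read the docs right, giving the taxon-level terms in both formula parts the same id inside gr() (equivalently, the |ID| syntax) models their group-level intercepts as correlated, so the model would at least estimate a correlation between the taxon-level deviations in mu and sigma.

# sketch only, not run: shared id so the taxon-level intercepts of mu and
# sigma are modelled as correlated
mpp_cor = brm(bf(b ~ 0 + (1 | gr(t, by = g, id = "tax")) + (0 + t | u),
                 sigma ~ 0 + (1 | gr(t, by = g, id = "tax"))),
              family = lognormal(),
              data = d)

I am not sure an estimated correlation is all of the “relationship between mu and sigma” I am after, though.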
I would appreciate any feedback.