Hi all,
I'm trying a quick proof of concept of loo as a replacement for DIC in the network meta-analysis models I fit at work. I have yet to convince everyone to make a full switch to Stan (mostly because I am the only one with any familiarity, and even that is mostly via brms/rstanarm), so I hope it's OK that this is a JAGS model for now. The question relates to the use of the loo package.
model
{
for (i in 1:NS) {
  # If NAs exist in covariates, impute from the mean at each step
  # Priors on baselines; normal for most models, but uniform for the identity-link binomial
  mu[i] ~ dnorm(0, .0001) # vague priors for all trial baselines
  # Code for the linear predictor and likelihood
  for (k in 1:na[i]) {
    r[i, k] ~ dbin(p[i, k], n[i, k]) # observed number of events in arm k is binomial
    # Code for the linear predictor
    logit(p[i, k]) <- mu[i] + delta[i, k] # model for linear predictor
    rhat[i, k] <- p[i, k] * n[i, k] # expected number of events
    # Calculate deviance contribution for arm k
    dev[i, k] <- 2 * (r[i, k] * (log(r[i, k]) - log(rhat[i, k])) +
                      (n[i, k] - r[i, k]) * (log(n[i, k] - r[i, k]) -
                                             log(n[i, k] - rhat[i, k])))
    # Pointwise log-likelihood for loo, one entry per arm
    log_lik[i, k] <- logdensity.bin(r[i, k], p[i, k], n[i, k])
  }
  # Calculate residual deviance for trial
  resdev[i] <- sum(dev[i, 1:na[i]])
  # Code for random effects
}
totresdev <- sum(resdev[])
d[1] <- 0
for (k in 2:NT) {
  d[k] ~ dnorm(0, 0.0001)
}
# If a random effects model is used, code for different priors
# If a meta-regression model is used, code for Betas
# Transformed effects
# Treatment 1 baseline, based on average of NP trials including it.
for (i in 1:NS) {
  mu1[i] <- mu[i] * equals(t[i, 1], 1)
}
for (k in 1:NT) {
  logit(T[k]) <- (sum(mu1[]) / NP) + d[k]
}
# Treatment effect estimates
for (c in 1:(NT - 1)) {
  for (k in (c + 1):NT) {
    # odds ratio
    OR[c, k] <- exp(d[k] - d[c])
    # log odds ratio
    lOR[c, k] <- d[k] - d[c]
    # relative risk
    RR[c, k] <- T[k] / T[c]
    # risk difference
    RD[c, k] <- T[k] - T[c]
    better[c, k] <- 1 - step(d[k] - d[c]) # Outcome is bad
  }
}
# Rank code depending on whether outcomes are good or bad
rk <- rank(d[]) # Outcome is bad
# All of the common rank based outputs
for (k in 1:NT) {
  best[k] <- equals(rk[k], 1) # best treatment has rank 1
  for (h in 1:NT) {
    prob[k, h] <- equals(rk[k], h) # probability that a given treatment has rank h
  }
}
for (k in 1:NT) {
  for (h in 1:NT) {
    cumeffectiveness[k, h] <- sum(prob[k, 1:h]) # cumulative rank probabilities
  }
}
for (i in 1:NT) {
  SUCRA[i] <- sum(cumeffectiveness[i, 1:(NT - 1)])/(NT - 1) # surface under the cumulative ranking curve
}
}
I have simulated a meta-analysis dataset and the model above recovers the simulated ORs nicely. I calculate the log density for each arm of each trial (log_lik above) and pass the resulting matrix to loo, roughly as sketched below. The deviance calculation is standard code from the National Institute for Health and Care Excellence, so I have left it in for now.
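For context, the extraction and the loo call look roughly like this (a minimal sketch using rjags; object names such as jags_data and nma_fe.txt are placeholders, not my actual files):

library(rjags)
library(loo)

n_chains <- 4
n_iter   <- 30000   # 4 chains x 30000 draws = the 120000 x 100 matrix below

jm <- jags.model("nma_fe.txt", data = jags_data, n.chains = n_chains)
update(jm, 5000)    # burn-in
samp <- coda.samples(jm, variable.names = "log_lik", n.iter = n_iter)

log_lik <- as.matrix(samp)                          # draws x arms matrix
log_lik <- log_lik[, colSums(is.na(log_lik)) == 0]  # drop any columns for undefined arms, if present

# relative efficiencies are needed because these are correlated MCMC draws
chain_id <- rep(seq_len(n_chains), each = n_iter)
r_eff    <- relative_eff(exp(log_lik), chain_id = chain_id)
loo_fit  <- loo(log_lik, r_eff = r_eff)
print(loo_fit)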
The data were simulated from a fixed-effect model, so there shouldn't be any misfit, but my results from loo are less than comforting:
Computed from 120000 by 100 log-likelihood matrix
Estimate SE
elpd_loo -359.0 9.2
p_loo 52.6 6.0
looic 718.0 18.3
------
Monte Carlo SE of elpd_loo is NA.
Pareto k diagnostic values:
Count Pct. Min. n_eff
(-Inf, 0.5] (good) 24 24.0% 15548
(0.5, 0.7] (ok) 61 61.0% 660
(0.7, 1] (bad) 15 15.0% 36
(1, Inf) (very bad) 0 0.0% <NA>
See help('pareto-k-diagnostic') for details.
Is there any obvious reason for this? The PSIS paper suggests that high Pareto k values point to a problem with the model, but I know that isn't true in this case, since the data were generated from the model being fit. Could this be a sign of something pathological in the JAGS sampling that would otherwise be caught by Stan's diagnostics? Is this purely an issue with the loo approximation, or does it suggest I should be concerned about my DIC results as well?
Lastly (and hopefully helping my own private mission), would more principled priors help here at all? Typically we can't touch these because they have been anointed by the gods, but I'm keen to change that if possible.
Edit: Lastly (really this time), if all I really want from loo is model stacking weights, is this as much of a problem?
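To make that last question concrete, what I have in mind is something like the following (hypothetical fixed-effect vs. random-effects comparison; log_lik_fe and log_lik_re would be the pointwise log-likelihood matrices extracted from the two fits as above):

# hypothetical comparison of fixed-effect and random-effects NMA fits
loo_fe <- loo(log_lik_fe, r_eff = relative_eff(exp(log_lik_fe), chain_id = chain_id))
loo_re <- loo(log_lik_re, r_eff = relative_eff(exp(log_lik_re), chain_id = chain_id))
wts <- loo_model_weights(list(FE = loo_fe, RE = loo_re), method = "stacking")
print(wts)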