Joint prior for sensitivity and specificity

This topic is to discuss the recently publicised draft by our very own @andrewgelman and @Bob_Carpenter.

In the paper, the authors choose to model sensitivity and specificity via half-normal priors

\gamma \sim \operatorname{Normal}^+(\mu_\gamma, \sigma_\gamma),
\delta \sim \operatorname{Normal}^+(\mu_\delta, \sigma_\delta),

and then choose

\sigma_\gamma \sim \operatorname{Normal}^+(0, \tau_\gamma),
\sigma_\delta \sim \operatorname{Normal}^+(0, \tau_\delta).
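
If I've read that right, the prior block alone looks something like the sketch below in Stan (the variable names and the choice of what is data versus parameter are mine for illustration, not necessarily the paper's):

```stan
data {
  real mu_gamma;
  real mu_delta;
  real<lower=0> tau_gamma;
  real<lower=0> tau_delta;
}
parameters {
  real<lower=0> sigma_gamma;
  real<lower=0> sigma_delta;
  real<lower=0> gamma;
  real<lower=0> delta;
}
model {
  // half-normal hyperpriors on the scales: <lower=0> plus a zero-mean normal
  sigma_gamma ~ normal(0, tau_gamma);
  sigma_delta ~ normal(0, tau_delta);
  // positive-truncated normals; the T[0, ] renormalisation matters here
  // because the scales are themselves parameters
  gamma ~ normal(mu_gamma, sigma_gamma) T[0, ];
  delta ~ normal(mu_delta, sigma_delta) T[0, ];
}
```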

You guys argue that a weakly informative or non-informative prior on the standard deviations, encoded by something like \tau_\gamma = \tau_\delta = 1, doesn’t work, because it assigns non-trivial mass to sensitivity and specificity below 50%, which is not realistic. Very much agreed. What I did to solve this in my own analysis of very similar data was to restrict mass to the upper triangle of the space, i.e., the region where sensitivity + specificity ≥ 1. Concretely, I did something like the following (with the difference that I was using Beta priors rather than normals; theta below is sensitivity and specificity):

```stan
// Joint prior for theta[1] = sensitivity and theta[2] = specificity,
// restricted to the upper triangle where theta[2] >= 1 - theta[1].
// The lccdf term renormalises the density of theta[2] given theta[1]
// after the truncation.
real joint_beta_lpdf(array[] real theta, real a1, real b1, real a2, real b2) {
  if (theta[2] < 1 - theta[1]) {
    return negative_infinity();
  }
  return beta_lpdf(theta[1] | a1, b1) + beta_lpdf(theta[2] | a2, b2)
         - beta_lccdf(1 - theta[1] | a2, b2);
}
```
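
Since the function carries the `_lpdf` suffix, it can be used with Stan’s sampling-statement syntax; a minimal sketch of how it gets called, with placeholder Beta hyperparameters (the function itself sits in the `functions` block):

```stan
parameters {
  array[2] real<lower=0, upper=1> theta;  // theta[1] = sensitivity, theta[2] = specificity
}
model {
  // shorthand for target += joint_beta_lpdf(theta | 9, 1, 9, 1);
  theta ~ joint_beta(9, 1, 9, 1);
}
```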

My questions are: (i) is there a reason you guys went for a “soft” constraint rather than a “hard” one? And (ii) is the choice of normals justified by making it easier to elicit prior information, or by something related to the implementation of the hierarchical model?


Good questions! Here are my responses:

  1. We recommend soft rather than hard constraints when we have soft rather than hard knowledge. In this case, we don’t absolutely know that spec and sens are greater than 50%. There could be tests that are worse than that. Conversely, to the extent that we believe spec and sens to be greater than 50%, we don’t think they’re 51% either. (A small Stan illustration of the two options follows this list.)

  2. I typically use normal rather than beta because normal is easier to work with, and it plays well with hierarchical models.
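
To make the distinction concrete in Stan terms (a generic illustration, not code from the paper): a hard constraint bakes the restriction into the declared support, whereas a soft constraint keeps the full support and lets the prior concentrate the mass.

```stan
parameters {
  // hard constraint: values below 0.5 are impossible by construction
  real<lower=0.5, upper=1> spec_hard;
  // soft constraint: full (0, 1) support; the prior pushes mass well
  // above 0.5 without ruling out a worse-than-chance test
  real<lower=0, upper=1> spec_soft;
}
model {
  spec_soft ~ normal(1, 0.2);  // truncated to (0, 1) by the declaration
}
```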
