This topic is to discuss the recently publicised draft by our very own @andrewgelman and @Bob_Carpenter.
In the paper, the authors model sensitivity and specificity hierarchically on the logit scale, place half-normal priors on the between-study standard deviations, and then choose the scales \tau_\gamma and \tau_\delta of those half-normals to encode prior information about how much sensitivity and specificity can plausibly vary across studies.
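For readers who haven't seen the draft, the relevant part of the model, as I read it, looks roughly like the sketch below. This is my own sketch, not the paper's code: the variable names are mine, the half-normal hyperpriors are implied by the lower=0 declarations, and I've left out the prevalence model and the priors on the means.

    data {
      int<lower=1> J_spec;               // number of specificity validation studies
      int<lower=1> J_sens;               // number of sensitivity validation studies
      int<lower=0> n_spec[J_spec];       // known negatives tested per study
      int<lower=0> y_spec[J_spec];       // of those, number testing negative
      int<lower=0> n_sens[J_sens];       // known positives tested per study
      int<lower=0> y_sens[J_sens];       // of those, number testing positive
    }
    parameters {
      real mu_gamma;                     // population mean, logit specificity
      real mu_delta;                     // population mean, logit sensitivity
      real<lower=0> sigma_gamma;         // between-study sd, logit specificity
      real<lower=0> sigma_delta;         // between-study sd, logit sensitivity
      vector[J_spec] logit_spec;         // study-level logit specificities
      vector[J_sens] logit_sens;         // study-level logit sensitivities
    }
    model {
      // hierarchical priors on the logit scale
      logit_spec ~ normal(mu_gamma, sigma_gamma);
      logit_sens ~ normal(mu_delta, sigma_delta);

      // half-normal hyperpriors on the sds; tau_gamma = tau_delta = 1
      // is the weakly informative choice discussed below
      sigma_gamma ~ normal(0, 1);
      sigma_delta ~ normal(0, 1);

      // validation-data likelihoods (priors on mu_gamma, mu_delta omitted here)
      y_spec ~ binomial_logit(n_spec, logit_spec);
      y_sens ~ binomial_logit(n_sens, logit_sens);
    }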
You guys argue that a weakly-informative or non-informative prior on the standard deviations, encoded by something like \tau_\gamma = \tau_\delta = 1, doesn't work because it assigns non-trivial mass to sensitivity and specificity below 50%, which is not realistic. Very much agreed. What I did to solve this in my own analysis of very similar data was to restrict the prior mass to the upper triangle of the unit square, i.e., the region where sensitivity + specificity > 1, by doing something like
real joint_beta_lpdf(real[] theta, real a1, real b1, real a2, real b2) {
  real ans;
  if (theta[2] < 1 - theta[1]) {
    // outside the admissible region theta[1] + theta[2] > 1: zero density
    ans = negative_infinity();
  } else {
    // independent Beta densities, with theta[2] truncated to (1 - theta[1], 1);
    // the lccdf term renormalises the truncated density of theta[2]
    ans = beta_lpdf(theta[1] | a1, b1)
          + beta_lpdf(theta[2] | a2, b2)
          - beta_lccdf(1 - theta[1] | a2, b2);
  }
  return ans;
}
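For completeness, this gets wired into the model roughly as follows. Again a sketch only: a1, b1, a2, b2 become data, sens and spec are my parameter names (taking theta[1] as sensitivity and theta[2] as specificity), and joint_beta_lpdf from above sits in the functions block.

    functions {
      // joint_beta_lpdf(...) as defined above
    }
    data {
      real<lower=0> a1;   // Beta shape parameters for sensitivity
      real<lower=0> b1;
      real<lower=0> a2;   // Beta shape parameters for specificity
      real<lower=0> b2;
    }
    parameters {
      real<lower=0, upper=1> sens;   // sensitivity
      real<lower=0, upper=1> spec;   // specificity
    }
    model {
      // joint truncated-Beta prior: support restricted to sens + spec > 1
      target += joint_beta_lpdf({sens, spec} | a1, b1, a2, b2);
    }

One practical wrinkle with the hard constraint is that the chains need to be initialised inside the region sens + spec > 1, otherwise the first evaluation of the target is negative infinity.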
with the difference that I was using Beta priors; theta here is the pair (sensitivity, specificity). My questions are: (i) is there a reason you guys went for a "soft" constraint rather than a "hard" one? And (ii) is the choice of normals justified by making it easier to elicit prior information, or is it something related to the implementation of the hierarchical model?