Gibbs Sampling in Stan

Background

I’m currently working on a project which will utilize methods found in the appendix of this paper. The paper outlines how Gibbs sampling can be used to estimate the prevalence (\pi) of a disease, the sensitivity (S), and the specificity (C) from the results of a diagnostic test for which there is no gold standard to compare against. Details can be found in the “One Diagnostic Test” section of the paper.

The method proceeds as follows:

  1. Choose priors for the prevalence, sensitivity, and specificity. The paper uses beta priors for all three. Sample an initial point from each prior.

  2. Using the samples from step 1, draw “pseudo-data” from the conditional distributions below.

Let a and b be the observed number of positive and negative test results. Let Y1 be the number of true positives and Y2 be the number of false negatives. If we knew the true prevalence, sensitivity, and specificity, then (from the appendix)…

Y1 \vert a, \pi, S, C \sim \operatorname{Binom} \left( a, \dfrac{\pi S}{\pi S + (1-\pi)(1-C)} \right)

Y2 \vert b, \pi, S, C \sim \operatorname{Binom} \left( b, \dfrac{\pi (1-S)}{\pi (1-S) + (1-\pi)C} \right)

We don’t have the true prevalence, sensitivity, and specificity, so we use the samples drawn from the prior in step 1 to obtain initial estimates of Y1 and Y2.

If we knew the true Y1 and Y2, then the conditional distributions of the prevalence, sensitivity, and specificity would be

\pi \vert a,b,Y1,Y2, \alpha_{\pi}, \beta_{\pi} \sim \operatorname{Beta}(Y1+Y2+\alpha_{\pi}, a+b-Y1-Y2+\beta_{\pi})

S \vert Y1,Y2, \alpha_{S}, \beta_{S} \sim \operatorname{Beta}(Y1+\alpha_S, Y2+\beta_S)

C \vert a,b,Y1,Y2, \alpha_{C}, \beta_{C} \sim \operatorname{Beta}(b-Y2+\alpha_C, a-Y1+\beta_C)

Here, \alpha_x and \beta_x are the parameters of the beta prior for quantity x (i.e. prevalence, sensitivity, or specificity). We don’t know the true Y1 and Y2, so we use the samples drawn above.

  3. Repeat step 2, drawing from each conditional distribution and then using those draws to generate further draws.

It is quite easy to code this up in Python or R, but I would like to use Stan so that I can take advantage of the diagnostics and parallelization in the rstan package.
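For concreteness, here is roughly what that looks like in R (a quick sketch of the three steps above; the data and prior parameters are placeholders I made up):

# Gibbs sampler for the one-test model (sketch; placeholder data and priors)
set.seed(1)
a <- 40; b <- 160                      # observed positive / negative test results
alpha_pi <- 1;  beta_pi <- 1           # Beta prior for prevalence
alpha_S  <- 9;  beta_S  <- 1           # Beta prior for sensitivity
alpha_C  <- 9;  beta_C  <- 1           # Beta prior for specificity

n_iter <- 5000
draws  <- matrix(NA_real_, n_iter, 3,
                 dimnames = list(NULL, c("prev", "sens", "spec")))

# Step 1: sample an initial point from each prior
prev <- rbeta(1, alpha_pi, beta_pi)
sens <- rbeta(1, alpha_S, beta_S)
spec <- rbeta(1, alpha_C, beta_C)

for (t in 1:n_iter) {
  # Step 2: draw the latent counts given the current parameter values
  Y1 <- rbinom(1, a, prev * sens / (prev * sens + (1 - prev) * (1 - spec)))
  Y2 <- rbinom(1, b, prev * (1 - sens) / (prev * (1 - sens) + (1 - prev) * spec))

  # Step 3: draw the parameters from their conditionals given Y1 and Y2
  prev <- rbeta(1, Y1 + Y2 + alpha_pi, a + b - Y1 - Y2 + beta_pi)
  sens <- rbeta(1, Y1 + alpha_S, Y2 + beta_S)
  spec <- rbeta(1, b - Y2 + alpha_C, a - Y1 + beta_C)

  draws[t, ] <- c(prev, sens, spec)
}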

Question

This problem doesn’t require the use of HMC. Can I still use Stan for this problem? My initial thoughts are “No”, due to this thread, but I thought I would double check. I know I can use the algorithm = "Fixed_param" argument in rstan::stan to draw from a generated quantities block. However, I’m not sure how to make Stan perform the sort of sampling I’ve described.

Your time and insight are appreciated.

Hi,
I don’t think Stan is the right tool for this. Perhaps something like Nimble or JAGS would be more appropriate.

You can still use a lot of the rstan diagnostics (like rstan::monitor); you just need to wrap the output from the other sampler into the correct format (a three-dimensional array, as specified in the help ?rstan::monitor).
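Something like this untested sketch, where chain_list stands for whatever list of iterations-by-parameters matrices your external sampler produced (all names here are hypothetical):

# chain_list: a list of per-chain matrices, each n_iter x 3 (prev, sens, spec)
n_chains <- length(chain_list)
n_iter   <- nrow(chain_list[[1]])
sims <- array(NA_real_, dim = c(n_iter, n_chains, 3),
              dimnames = list(NULL, NULL, c("prev", "sens", "spec")))
for (k in 1:n_chains) sims[, k, ] <- chain_list[[k]]   # fill each chain's draws

rstan::monitor(sims, warmup = 0)   # treat all retained draws as post-warmup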


OK, thanks. Any information as to why Stan is the wrong tool for this?

To implement an MCMC scheme you need access to the previous state. The Stan language does not provide this (because that doesn’t make sense if your goal is to implement a log-density).

Sorry to uncheck @anon75146577’s answer, but there’s an example of how to code exactly this model in the latent discrete parameters chapter of the user’s guide. You can find this example and others in my latest paper, all coded in Stan.

Gibbs is actually a very bad way to fit these models—it’s super slow to converge. These models used to take 24 hours to fit in WinBUGS with very poor mixing and they now fit in like 30 minutes in Stan. Just be careful to use reasonable inits because there’s a non-identifiability. Duco Veen’s visiting us at Columbia from Utrecht and working on a case study that should be out soon.

P.S. Coincidentally, it was exactly this form of model that led me back from industry to academia. Andrew Gelman and Jennifer Hill helped me formulate the model (turns out Dawid and Skene beat me to the punch by a few decades), but I didn’t understand enough about Bayesian modeling and MCMC inference to diagnose what was going wrong with fitting it. I could fit in EM no problem (and have code on the web in both R and Python for doing that—it’s how Dawid and Skene originally fit these models).


Hi Bob,

just a quick follow-up. Am I correct that the way to fit the Joseph 1995 model in Stan is to take the likelihood function from the paper, calculate it for all potential values of Y1 and Y2 (the two discrete states in the model), and sum it up?

Best, felix

Hi @Bob_Carpenter,

could you briefly comment on this? That would be great! I have trouble seeing the similarities between the Joseph and the Dawid/Skene model :/

Best, Felix

I agree with Bob. This is a perfect model for Stan. Sure, you can’t do Gibbs sampling in Stan, but the goal here is not to do Gibbs sampling, the goal is to fit the model. You can fit the model in Stan so easily, as Bob explains.

@Bob_Carpenter Thanks for commenting and reviving this problem. I’ve taken a look at the DS.stan file in your linked paper. I’m not sure how to map my problem onto your code. Could you please explain how I could code up my model in Stan?

This is my attempt, which unfortunately doesn’t give reasonable results.

data{
  int cases;
  int n;
}

parameters{
  real<lower=0.5, upper=1> sens;
  real<lower=0.5, upper=1> spec;
  real<lower=0, upper=1> prev;
}

transformed parameters{
  real loglik;
  loglik = 0;
  
  for (i in 0:cases){
    for (j in 0:(n-cases)){
      loglik += 
        (i+j)*log(prev) + (n-i-j)*log(1-prev) + i*log(sens) + j*log(1-sens) + (n-cases-j)*log(spec) + (cases-i)*log(1-spec) + //loglikelihood from joseph et al 1995
        binomial_lpmf(i|cases, prev*sens/(prev*sens+(1-prev)*(1-spec))) +  //probability of i true positives out of observed cases
        binomial_lpmf(j|(n-cases), (prev*(1-sens))/(prev*(1-sens)+(1-prev)*(spec))); //probability of j false negatives out of observed noncases
    }
  }
}

model{
 sens ~ beta(90, 10);
 spec ~ beta(90, 10);
 prev ~ beta(10, 90);
 
 target  += loglik;
}

Take a look at this paper: http://www.stat.columbia.edu/~gelman/research/unpublished/specificity.pdf


Meaning that Gibbs could handle sampling for it? It helps to separate the model you want to fit and the algorithm you use to fit it.

Yes. That’s because if you have p(\theta, u, v) for discrete variables u and v, then

p(\theta) = \sum_{m} \sum_n p(\theta, u = m, v = n).
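Applied to this model, that just means summing the joint posterior over the latent counts:

p(\pi, S, C \vert a, b) = \sum_{Y1 = 0}^{a} \sum_{Y2 = 0}^{b} p(\pi, S, C, Y1, Y2 \vert a, b)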

The Dawid-Skene model is for diagnostic tests with unknown gold standards. It’s not designed to estimate prevalence, but it has a prevalence parameter. It assumes the tests are categorical.

Ah, that’s because on the log scale you need log-sum-exp to sum over the latent states. If loglik collects the individual (i, j) terms in a vector rather than accumulating them into a scalar with +=, the increment should be

target += log_sum_exp(loglik);

I’d also suggest rewriting the likelihood from the paper using regular density functions.
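(For what it’s worth, with a single test the double sum collapses analytically: any one subject tests positive with probability \pi S + (1-\pi)(1-C), so the marginal likelihood is just

a \vert \pi, S, C \sim \operatorname{Binom} \left( a + b, \ \pi S + (1-\pi)(1-C) \right)

and no explicit loop over Y1 and Y2 is needed.)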

You might want to look at how Andrew and I coded this problem. We did it on the log odds scale, but it could just as easily have been done with beta-binomial.


Thanks @Bob_Carpenter and @andrewgelman for looking into this! I read your sens/spec paper and it is intuitive that modeling the frequencies is easier/more straightforward than this approach. Also, summation seems to become infeasible with relatively small n (like in the thousands) already.

I am looking into it because I’d like to run a regression on the latent states. Could this be done differently, e.g. by modeling state probabilities?

Do you mean something like this?

If so, I can share that presentation and also some text/advice, and possibly Stan and INLA code.


It would be great if you could share the presentation! My email is felix.fischer@charite.de

Thanks! Felix

I took a look at the paper @andrewgelman linked. It seems like the model presented in A.1 of that paper is well suited for the problem under the One Diagnostic Test header in my linked paper. Andrew and Bob’s paper continues to expand that model to cases where information on specificity and sensitivity is given by previous trials and by previous studies (models A.2 and A.3, respectively).

If I’m not mistaken, those models are not quite appropriate for my problem. In my problem, I have used two tests on a single sample of people. I can thus partition my sample into each combination of results from the two tests. Though I have information on the sens/spec of the form amenable to model A.1, I think the likelihood would be different. Am I correct in believing so? Am I to believe that @Bob_Carpenter’s model (the Dawid and Skene model) uses the appropriate likelihood? If I’m wrong, can I amend any of the A.1 through A.3 models to fit my problem?

Yes, if you have 2 yes-no tests then you can work out the 4 probabilities and use a multinomial likelihood. Or if you have 3 yes-no tests you can do a multinomial with 8 categories, etc.

In particular, see Section 3.2 of this paper.
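For one population and two tests, assuming the tests are conditionally independent given true disease status (an assumption worth checking), the four cell probabilities work out to

p_{++} = \pi S_1 S_2 + (1-\pi)(1-C_1)(1-C_2)

p_{+-} = \pi S_1 (1-S_2) + (1-\pi)(1-C_1) C_2

p_{-+} = \pi (1-S_1) S_2 + (1-\pi) C_1 (1-C_2)

p_{--} = \pi (1-S_1)(1-S_2) + (1-\pi) C_1 C_2

and the four observed counts then get a multinomial likelihood with those probabilities.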


I think that is two tests, two populations. I have two tests, one population.

But now that I’m reading it, maybe I can make it work.