Hi,

I have a binary test outcome in a large sample. I know the test is not perfect and want to estimate the true prevalence, so I use a logistic-regression-style approach in which I marginalize out the true disease status.
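Concretely, marginalizing the unknown true status means the probability of a positive test is the mixture P(test+) = prev·Se + (1 − prev)·(1 − Sp). A tiny sketch of that forward map (function name and numbers are mine, purely illustrative):

```python
def apparent_prev(prev, se, sp):
    """P(positive test) after marginalizing out the true disease status:
    true positives (prev * se) plus false positives ((1 - prev) * (1 - sp))."""
    return prev * se + (1 - prev) * (1 - sp)

# e.g. 20% true prevalence with a 90%/90% test yields a 26% positive rate
print(round(apparent_prev(0.2, 0.9, 0.9), 3))
```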

I have pretty good knowledge of the test's sensitivity/specificity from earlier research and encode it as beta priors. I am puzzled, however, that my posterior for sensitivity/specificity differs from the prior. As far as I can see, the data hold no information about sensitivity/specificity, and yet both get updated; in particular, specificity moves all the way up to the boundary of the prior. So I worry that my resulting prevalence estimate rests on an overly optimistic specificity.
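To illustrate why I expected no updating: the likelihood depends on the parameters only through the apparent positive rate, and different (prevalence, sensitivity, specificity) settings can produce exactly the same rate, so I assumed the data could not favor one setting over another (numbers purely illustrative):

```python
# two parameter settings with identical apparent positive rate,
# hence identical binomial likelihood for any observed counts:
a = 0.10 * 0.90 + (1 - 0.10) * (1 - 0.98)   # prev=0.10, se=0.90, sp=0.98
b = 0.12 * 0.90 + (1 - 0.12) * (1 - 1.00)   # prev=0.12, se=0.90, sp=1.00
print(a, b)  # both are ~0.108
```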

Can someone explain what's happening here?

```
data {
  int status[2];   // test result codes: 2 = positive, 1 = negative
  int Freq[2];     // number of subjects with each result
}
parameters {
  vector<lower=0.5, upper=1>[2] sensspec;  // [1] = sensitivity, [2] = specificity
  real<lower=0, upper=1> pie;              // true prevalence
}
transformed parameters {
  real loglik[2];
  for (i in 1:2) {
    if (status[i] == 2) {
      // positives: true positives + false positives
      loglik[i] = Freq[i] * log(pie * sensspec[1] + (1 - pie) * (1 - sensspec[2]));
    } else if (status[i] == 1) {
      // negatives: false negatives + true negatives
      loglik[i] = Freq[i] * log(pie * (1 - sensspec[1]) + (1 - pie) * sensspec[2]);
    }
  }
}
model {
  pie ~ beta(2, 8);
  for (i in 1:2) {
    sensspec[i] ~ beta(100, 10);
  }
  target += sum(loglik);
}
```
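For what it's worth, I can reproduce the same behaviour outside Stan with a brute-force grid posterior over the same model (the counts below are made up for illustration, not my real data): the marginal posterior for specificity shifts upward relative to its prior, even though specificity only enters through the mixture.

```python
import math

def beta_logpdf(x, a, b):
    """Log density of Beta(a, b) at x."""
    return ((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

# hypothetical counts, NOT my real data: 100 positive tests out of 1000
pos, n = 100, 1000

pies = [i / 100 for i in range(1, 100)]       # prevalence grid on (0, 1)
sss  = [0.5 + i / 100 for i in range(1, 50)]  # sens/spec grid on (0.5, 1)

# precompute the log priors (same priors as in the Stan model)
lp_pie = {p: beta_logpdf(p, 2, 8) for p in pies}
lp_ss  = {s: beta_logpdf(s, 100, 10) for s in sss}

entries = []  # (specificity, log joint density) over the whole grid
for pie in pies:
    for se in sss:
        for sp in sss:
            p = pie * se + (1 - pie) * (1 - sp)  # marginal P(test+)
            lp = (lp_pie[pie] + lp_ss[se] + lp_ss[sp]
                  + pos * math.log(p) + (n - pos) * math.log(1 - p))
            entries.append((sp, lp))

# posterior mean of specificity (log-sum-exp for stability)
m = max(lp for _, lp in entries)
wsum = sum(math.exp(lp - m) for _, lp in entries)
post_sp = sum(sp * math.exp(lp - m) for sp, lp in entries) / wsum

# prior mean of specificity on the same (truncated) grid, for comparison
prior_w = [math.exp(lp_ss[s]) for s in sss]
prior_sp = sum(s * w for s, w in zip(sss, prior_w)) / sum(prior_w)

print(f"prior mean spec ~ {prior_sp:.3f}, posterior mean spec ~ {post_sp:.3f}")
```

With these made-up counts the observed positive rate (10%) is lower than what the priors predict, and the posterior explains that by pulling prevalence down and specificity up, which looks like exactly the kind of updating I'm seeing.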