I guess my response to that would be, if one doctor is saying negative isn't necessarily negative, what's the point of testing? Or is that just bullshit and there are more conclusive tests that can be done?
See:
http://en.wikipedia.org/wiki/Bayes'_theorem
As discussed in Calculated Risks by Gerd Gigerenzer, there is a real problem with how statistical information, such as the false positive and false negative rates of medical tests, is understood by doctors and communicated to patients. If I remember correctly, the root of the problem is that a medical test is usually described in terms of four conditional probabilities:
- The probability that the test returns a positive given that the patient has the condition (the true positive rate, or sensitivity)
- The probability that the test returns a positive given that the patient doesn't have the condition (the false positive rate)
- The probability that the test returns a negative given that the patient has the condition (the false negative rate)
- The probability that the test returns a negative given that the patient doesn't have the condition (the true negative rate, or specificity)
But this isn't quite the information the patient is interested in; he wants to know the probability of having the condition given a particular test result (i.e., P(condition | result)), not the probability of a particular test result given the presence or absence of the condition (i.e., P(result | condition)). Bayes' theorem is what lets you convert one into the other, and the conversion depends on how common the condition is in the first place.
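To make the inversion concrete, here is a minimal sketch of Bayes' theorem applied to a positive result. The numbers (99% sensitivity, 5% false positive rate, 1% prevalence) are purely illustrative, not from any real test:

```python
# Invert P(result | condition) into P(condition | result) via Bayes' theorem.
def p_condition_given_positive(sensitivity, false_positive_rate, prevalence):
    """P(condition | positive) = P(pos | cond) * P(cond) / P(pos)."""
    # Total probability of testing positive: true positives plus false positives.
    p_pos = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_pos

# Hypothetical test: 99% sensitivity, 5% false positive rate, 1% prevalence.
print(p_condition_given_positive(0.99, 0.05, 0.01))  # ~0.167
```

Note the counterintuitive result: even with a 99% sensitive test, a positive result here only means about a 1-in-6 chance of actually having the condition, because at 1% prevalence the false positives outnumber the true positives.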
The takeaway is that a positive/negative test result has different implications depending on the incidence of the condition itself (e.g., high risk groups versus low risk groups). Even without Bayes' Theorem, this makes intuitive sense: for instance, a negative test result is more likely to be accurate if you're in a low risk group, because being in the low risk group means that your chance of even having the condition in the first place is lower.
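The same calculation illustrates the point about risk groups for a negative result. Again the test numbers are made up; the only thing varied between the two calls is the prevalence:

```python
# How trustworthy is a negative result? P(no condition | negative),
# computed for the same hypothetical test at two different prevalences.
def p_no_condition_given_negative(sensitivity, false_positive_rate, prevalence):
    """P(no cond | negative) = P(neg | no cond) * P(no cond) / P(neg)."""
    p_neg = ((1 - sensitivity) * prevalence
             + (1 - false_positive_rate) * (1 - prevalence))
    return (1 - false_positive_rate) * (1 - prevalence) / p_neg

low_risk  = p_no_condition_given_negative(0.99, 0.05, 0.001)  # 0.1% prevalence
high_risk = p_no_condition_given_negative(0.99, 0.05, 0.30)   # 30% prevalence
print(low_risk, high_risk)
```

As the text argues, the negative result is more reliable for the low-risk group (low_risk comes out higher than high_risk): when almost nobody in your group has the condition, a negative is almost certainly a true negative.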