There was shocking news from Himachal Pradesh in late August: a woman died of severe mental distress after a private clinic “wrongly” diagnosed her as HIV positive. Such a wrong diagnosis is, however, not uncommon, as we know from our personal and social experiences. In fact, in today’s world, the overuse of diagnostic testing has been attributed partly to the fear of missing something important and an intolerance of diagnostic uncertainty.
In his 1989 article in the New England Journal of Medicine, J.P. Kassirer wrote: “Absolute certainty in diagnosis is unattainable, no matter how much information we gather, how many observations we make, or how many tests we perform.” The present discussion is aimed at understanding the nature and quantum of clinical diagnostic errors.
Let us first examine the severity of the situation. A 2009 survey by Britain’s National Health Service reported that 15 per cent of its patients were misdiagnosed. And according to a study published in the journal BMJ Quality & Safety in 2014, approximately 12 million adults seeking outpatient medical care in the US are misdiagnosed each year. This figure amounts to about 5 per cent of all adult patients and, according to the researchers, the misdiagnosis has the potential to result in severe harm in about half of those cases.
A diagnostic error may be defined as “any mistake or failure in the diagnostic process leading to a misdiagnosis, a missed diagnosis, or a delayed diagnosis.” While delayed diagnosis is certainly an important concern, ‘misdiagnosis’ and ‘missed diagnosis’ (which is simply failing to detect a disease that is present) are also worrying. And it is apparent from numerous articles in different medical journals that both of these errors are prevalent. In fact, medicine in practice today is largely statistical. In statistical language, these two types of errors are called “type I error” and “type II error”. Their complements (that is, one minus the error rate) are important in medical statistics. The likelihood of a positive finding when the disease is present is referred to as “sensitivity”; it is the complement of the type II error rate. On the other hand, the likelihood of a negative finding when the disease is absent is referred to as “specificity”; it is the complement of the type I error rate. It is well known that nearly all signs, symptoms, or test results are neither 100 per cent sensitive nor 100 per cent specific.
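As a small sketch of these definitions, consider purely hypothetical counts from an imagined screening study (the numbers are illustrative, not from any real test):

```python
# Hypothetical counts from an imagined screening study (illustrative only).
true_positives = 90    # disease present, test positive
false_negatives = 10   # disease present, test negative (type II error)
true_negatives = 950   # disease absent, test negative
false_positives = 50   # disease absent, test positive (type I error)

# Sensitivity: likelihood of a positive finding when the disease is present.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: likelihood of a negative finding when the disease is absent.
specificity = true_negatives / (true_negatives + false_positives)

print(sensitivity)  # 0.9
print(specificity)  # 0.95
```

Here the test misses 10 per cent of genuine cases (sensitivity 0.9) and falsely flags 5 per cent of healthy people (specificity 0.95), which is exactly the sense in which no test is 100 per cent sensitive and specific at once.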
In fact, the two types of errors are natural in any statistical testing procedure. In any such procedure, the validity of some hypothesis or prior belief is judged on the basis of data. This prior belief is called the “null hypothesis”, and it is considered true unless and until there is a strong data-based reason to think otherwise. A “type I error” is rejecting the null hypothesis when it is actually true, and a “type II error” is failing to reject the null hypothesis when it is actually false. One is seeing an effect that is not there (for example, diagnosing a serious disease when it is absent); the other is missing an effect (for example, failing to diagnose a disease when it is present). Both are serious, though it is difficult to decide which is more so; in many cases, type II errors are considered the more serious.

The objective of a clinical experiment, or of any statistical testing in general, is to minimise these errors. Unfortunately, it is impossible to minimise both simultaneously. If one intends to reduce the type II error, the testing procedure must be made sensitive to tiny indicators of the onset of disease, and that would invariably inflate the type I error. Conversely, to reduce the type I error, one must ignore minor indications and act only on strong indication of disease, which automatically means missing some genuine cases of disease onset, thereby increasing the type II error. Most testing procedures therefore work with a pre-assigned type I error rate (say, 5 per cent) and a prefixed type II error rate (say, 5, 10, or 20 per cent), depending on the situation.
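The trade-off can be seen in a toy sketch. Suppose (hypothetically) a test produces a numerical value, and values above a chosen cut-off are declared “disease”. Lowering the cut-off makes the test sensitive to tiny indicators; raising it demands strong indication:

```python
# Made-up test values, for illustration only.
healthy = [3, 4, 4, 5, 5, 5, 6, 6, 7, 8]      # values in disease-free people
diseased = [6, 7, 7, 8, 8, 9, 9, 10, 11, 12]  # values in diseased people

def error_rates(cutoff):
    # Type I error: healthy people whose value exceeds the cut-off (false alarms).
    type_i = sum(v > cutoff for v in healthy) / len(healthy)
    # Type II error: diseased people whose value does not exceed it (misses).
    type_ii = sum(v <= cutoff for v in diseased) / len(diseased)
    return type_i, type_ii

# A low cut-off catches every case but raises many false alarms.
print(error_rates(5))  # (0.4, 0.0)
# A high cut-off raises no false alarms but misses half the cases.
print(error_rates(8))  # (0.0, 0.5)
```

Moving the cut-off in either direction shrinks one error only by enlarging the other, which is precisely why both cannot be minimised simultaneously.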
But remember that a 5 per cent type I error implies that 1 in 20 individuals without the disease would be diagnosed as having it, and a 10 per cent type II error means that the procedure would miss the disease in 1 in 10 patients who actually have it. That is a huge margin.
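To make the margin concrete, assume (hypothetically) that 1,000 disease-free and 1,000 diseased people are tested at those error rates:

```python
# Hypothetical cohort sizes; error rates as quoted in the text.
disease_free, diseased = 1000, 1000
type_i_rate, type_ii_rate = 0.05, 0.10

# Expected number of healthy people wrongly flagged (type I errors).
expected_false_positives = disease_free * type_i_rate   # 50.0
# Expected number of genuine cases missed (type II errors).
expected_false_negatives = diseased * type_ii_rate      # 100.0

print(expected_false_positives, expected_false_negatives)
```

Fifty healthy people alarmed and a hundred sick people sent home, out of just two thousand tested: that is the scale of the margin.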
What is the takeaway, then? Will such errors in medical diagnosis continue? Certainly, both type I and type II errors can be reduced to some extent with the use of high-quality equipment and reagents in the diagnostic process. The art of medicine, too, needs refinement. And we might see further improvements in reducing both types of errors with the advancement of medical and technological research. At least, the quest of science is in this direction.
The writer is a professor of statistics at the Indian Statistical Institute, Kolkata