Although I agree with the basic premise of this week’s Washington Post article, “The Robot Doctor Will See You Now” (namely, that computers can augment medical care), the article misses the art-science balance so central to a physician’s practice. Its author states:
If you’ve ever gone to a doctor with an odd set of symptoms and realized that your doctor has no clue what they signify, you too might have wished he had access to some heavy computing power.
The large majority of the time, at least in my experience as a non-robotic emergency room doctor, we do have a “clue” about what the symptoms signify, and we know how to narrow the uncertainty down sufficiently to make a plan for the patient, even if residual uncertainties persist.
The example the Post article provides is about cancer genomics, a technology relevant only to those (1) with conditions where genomics helps personalize treatment and (2) who have access to the few doctors with the time and other resources to use these tools.
However, a “computing power” with broader and easier applicability than the one in the Post article is the focus of an article I co-authored this week. My co-authors and I focus on clinical prediction rules, using as a test case a rule for children who come to the emergency department with acute asthma exacerbations (about 700,000 each year). Clinical prediction rules are tools to support physician decision-making, combining the best evidence from research with data from an individual patient’s history, physical examination, and tests. These tools augment rather than supplant physicians’ decision-making, helping to predict a patient’s probability of a specific diagnosis or outcome. Clinical prediction rules thus do not resemble the Post writer’s “Robot Doctor”.
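To make the idea concrete, here is a minimal sketch of the general shape of such a rule, written in Python with entirely invented factors and weights (it is not the rule from our article): a handful of findings from the history and exam, each weighted according to prior research, are combined into a single estimated probability.

```python
import math

def admission_probability(patient_findings, weights, intercept):
    """Illustrative logistic-style prediction rule.

    The factor names and weights used here are invented for
    illustration only; they are not the rule from our article.
    """
    # Weighted sum of the patient's findings (the "synthesis of data points")
    score = intercept + sum(weights[name] * value
                            for name, value in patient_findings.items())
    # Convert the score into a probability between 0 and 1
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical weights, as a derivation study might report them
weights = {
    "elevated_respiratory_rate": 0.8,
    "low_oxygen_saturation": 1.2,
    "prior_asthma_admission": 0.6,
}

# Hypothetical patient: elevated respiratory rate and a prior admission
patient = {
    "elevated_respiratory_rate": 1,
    "low_oxygen_saturation": 0,
    "prior_asthma_admission": 1,
}

print(f"Estimated probability of admission: "
      f"{admission_probability(patient, weights, intercept=-2.0):.0%}")
```

The point is not the arithmetic but the structure: the rule makes the combination of factors explicit and reproducible, while the decision about what to do with the resulting probability stays with the physician and the patient.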
The clinical prediction rule we use as our article’s example predicts the probability of hospital admission for children with asthma. It does not tell doctors what to do, but supports their decision-making by synthesizing multiple data points. Other prediction rules relevant to my clinical field predict
- the risk of appendicitis among children with abdominal pain
- the need for a CT scan (risk of head injury that needs emergent treatment) among children with head trauma (online version is here)
- the risk of bacterial meningitis among children with white blood cells in their spinal fluid
- the risk of septic arthritis among children with hip pain
- the need to treat acetaminophen (Tylenol®) overdose (online version is here)
- the need to treat neonatal jaundice (online version is here)
And there are others, which is both great and part of the challenge: how do we make sure we are using the best, most up-to-date version? Overall, however, clinical prediction rules have inherent advantages over human-only clinical decision-making:
- They can take into account many more factors than can a human brain.
- They are consistent: they will always give the same result regardless of the experience of the physician or the socio-demographics of the patient.
- When used to support physician judgment, they provide more accurate assessment of outcomes than physician judgment alone.
Even further beyond the reach of medical robots, a likely cause of the Post writer’s “no clue” observation is that some patients’ complaints are non-medical, albeit quite real and impactful for the patient. In medicine, and especially in the emergency department, physicians see a lot of social morbidity that we have few tools to address. A colleague has noted that sometimes the best treatment he has for some patients is giving them money from his own wallet so they can access basic resources for their child (food, shelter, transportation). In 2010, almost one-third of children living in households with incomes below the poverty level had visited an emergency department in the past year (30.2 percent), compared to 17.3 percent of children living in households at 200 percent or more of the poverty level.