By Steven J. Barker, PhD, MD, Chief Science Officer, Masimo Corp., Professor Emeritus of Anesthesiology, University of Arizona College of Medicine, Board Member, Patient Safety Movement Foundation
“When technology is master, disaster comes faster.” This quote is attributed to Piet Hein (1905-1996), though the actual source is uncertain. It’s a perfect introduction for my thoughts on technology in healthcare. Before describing that, who am I and why should you read further? My first career was in aerospace engineering: PhD and tenured professor. I made a mid-life switch to medicine: MD, residency in anesthesiology, 23 years as an academic department chair. I’ve made a career of combining these two disciplines, and there is no one more enthusiastic about bringing new technology into clinical medicine. My favorite lecture is called “Flying the Anesthesia Machine,” where I explore the lessons from aviation that can be applied to our operating room “cockpit.”
Given that introduction, the next few statements may seem odd. When new technology is applied to clinical practice, there are often unexpected consequences, just as there are unexpected side effects and interactions with new drugs. Sometimes the adverse side effects can outweigh the direct benefits of the new technology. These effects can include simply distracting the anesthesiologist (the “pilot”) from an urgent warning given by trends in traditional vital signs. There are case reports of anesthesiologists being distracted by the demands of the Electronic Medical Record (EMR) and thereby delaying a diagnosis. “Information overload” is a real phenomenon, and the question becomes “how much is too much?” The answer depends on both the value of the new data and how we display it – how we get data from the machine into the caregiver’s (pilot’s) brain. So, when I look at any new technology, I ask: (1) How will it make my life easier? (2) How will it lead me to the correct diagnosis and treatment faster? We may not know the answers to these questions until we’ve tested the new technology in clinical practice, but they must be answered before widespread adoption.
Aviation has worked on this issue for many years, and we can learn from that effort. For example, analog displays are interpreted more quickly by the human brain than digital ones. Thus, modern aircraft still display airspeed and altitude using analog “needle gauges” rather than the digital readouts we see on most of our operating room monitors. An altitude alarm on a modern transport aircraft will actually say “too low, too low, pull up,” while a hypotension alarm in the OR will say “beep-beep-beep.” An important study by Loeb et al. showed that when experienced anesthesia clinicians heard operating room alarm sounds, they were able to correctly identify the source of the alarm just one-third of the time! (1) It disappoints me that in 2020, our operating room alarms are still saying “beep-beep-beep” when Siri or Alexa can tell us in plain English that our front door is open.
The problem today is not simply “too much technology.” It’s much more complex than that. A major component is the proliferation of new data with too little attention to how those data are displayed and integrated with other data. Every time a new monitoring device is adopted, we simply add another set of numbers and trend curves to our already hopelessly cluttered patient status dashboard. One possible solution to this information overload is to combine multiple measured quantities into a “smart index” – a single number whose trends indicate whether a patient is getting better or worse, providing an early warning when the patient is headed for trouble. For example, Masimo has developed a “Halo Index,” which is undergoing clinical studies. While I hope we are not headed toward the “anesthesia robot” (a separate discussion we must have), there is clearly a role for Artificial Intelligence (AI) in processing our multitudinous measurements into simple graphics or indices of patient status. Much more to come on this topic.
I’ve focused here on data-gathering technology, such as the monitors we use in the OR, because that is what I do. Technology can also be “interventional”, including robotic surgery, implantable pacemakers, artificial limbs, and anything else that does something in or to the patient. I’ve heard from my surgical colleagues that these tools are sometimes used where they offer no real advantage, the old “solution looking for a problem” scenario. I won’t offer an opinion on this, but maybe someone with some expertise in the area will.
So, why did I write this article? What is my message? I have two thoughts. To the question “Are we adding too much technology, causing information overload and distraction?” I say: yes, sometimes. If we add new monitoring data just because we can, it may not improve patient safety and care, and it may even be detrimental. On the other hand, we often can’t predict how a new technology might improve patient care until we actually try it. If the verdict on a new technology or drug is uncertain, we must proceed to clinical trial before deciding. As I said, there are often unexpected consequences and effects. Often these effects are negative and will diminish the technology’s value, but sometimes they are positive and can increase its value in totally unanticipated ways.
I’ll leave you with an example from the pharmaceutical world. In the mid-1990s, Pfizer developed an anti-anginal agent that reached the stage of controlled clinical trials. Halfway through the study, it was clear that the placebo and drug groups showed no difference in the incidence of chest pain, but the drug group had a totally unexpected side effect. And you know the rest: sildenafil (Viagra) became the most successful new drug in recent history. The bottom line is this: if a new technology or drug shows any hope, then we should try it – with patients. They will provide the answer, and that answer may surprise us.
1. Loeb RG, et al. Anesth Analg 1992;75(4):499–505.