AI is changing everything, but it might not be ready for health care.
Generative artificial intelligence (AI) represents the most significant practical change in how physicians interact with patients since the advent of the electronic health record (EHR). As health care grapples with widespread provider burnout, and the resulting exodus from the field, it is exciting to consider how generative AI could help restore the patient connections that energize so many physicians.
It is ironic to think that technology could revitalize the human relationship between physicians and patients. But if physicians no longer needed to stare into a tablet or desktop monitor during patient visits, it is easy to imagine how much stronger our clinical interactions could be.
Some physicians are already testing the impact generative AI can have on their work, especially when used in conjunction with their EHRs. As the physician sits in an exam room with a patient, the technology can passively listen to the conversation in the background. This allows the physician to engage fully with the patient while still having a tidy note at the end of the encounter that they can review and edit before adding it to the EHR. In other words, the technology can simultaneously strengthen the patient relationship and lift some administrative burden and stress off physicians' shoulders.
Generative AI can be used to document a visit in whatever format is desired. It can distinguish various documentation components, such as the history and review of systems. It can also suggest a diagnosis and treatment plan. But this is where things start to get a little sticky.
Exercise cautious optimism
While at first blush generative AI seems pretty amazing, it is not foolproof. In fact, there are some serious concerns about the technology’s so-called “hallucinations,” a phenomenon where generative AI will sometimes make things up convincingly—such as fabricating an entire medical reference complete with authors, publication date, and findings.
Even without hallucinations, the technology can sometimes misunderstand critical context. For instance, I recently heard a story about a physician who integrated generative AI into her EHR. Not long afterward, she broke her leg. When a patient asked, “What happened to your leg?” during a visit, the technology duly recorded the ensuing conversation in the chart. You can probably guess the rest of the story: the technology incorrectly assigned a broken leg to the patient.
Stories like this one are certainly outliers, but they offer valuable insight: generative AI is not perfect 100% of the time. Yet, as a colleague recently pointed out, neither are humans. The U.S. Institute of Medicine published its landmark report, "To Err Is Human: Building a Safer Health System," back in 1999, and we still have not achieved error-free medicine. If humans still err after decades of effort, is it realistic to expect 100% error-free AI?
For the time being, physicians should approach generative AI with equal measures of optimism and caution. We must balance all the benefits of generative AI with its current realities and limitations. Any time it is used, it requires watchful human oversight.
In many ways, generative AI is much like a second-year medical student. It is intelligent, inquisitive, and constantly learning. There are high hopes that it will eventually become a great resident and then an excellent attending physician. But for now, it remains a student who sometimes makes mistakes. Consequently, it needs careful, ongoing supervision.
Still, who would turn down assistance from a second-year medical student—especially one who works untiringly 24/7/365?
Keep a human focus
As more physicians start testing different use cases for generative AI, it will be interesting to see how the industry balances its benefits against the risk of over-reliance. Direct physician oversight will always be essential to ensure that generative AI appropriately augments physicians' work without inaccuracies or hallucinations. For now, physicians can best learn how to use it by limiting its adoption to specific use cases.
For example, face-to-face patient visits provide a tremendous opportunity to evaluate the strengths and weaknesses of new tools because physicians will have each interaction, and their own conclusions, fresh in mind. At the end of each visit, physicians could scan the technology-generated documentation. Once satisfied that the note aligns with their intent and their experience with the patient, they could move on to the next patient, knowing their documentation is complete. This immediate review also makes it easy to spot and fix any errors or hallucinations.
At the end of the day, generative AI is no different from any other tool that could help improve the care experience. It is powerful, but we cannot afford to lose human oversight in the moment. Moreover, we must not let the technology distract us from our fundamental commitment to make health care accessible and equitable for all patients. Physicians must remain thoughtful about finding ways to work together to support better behavioral health, social drivers of health (SDOH), and other very human needs.
Although there is little doubt that generative AI will transform health care eventually, that day is not yet here. In the meantime, let us keep focusing on the humans we serve.
Joe Nicholson is the Chief Medical Officer at CareAllies. There, Dr. Joe provides strategic direction, operational oversight, and thought leadership for CareAllies’ clinical programs and operations, including all value-based arrangements within IPA/CIN or ACO constructs.