Physicians will only be able to trust artificial intelligence when it's transparent.
As ChatGPT dominates discussions about the potential of artificial intelligence (AI) to disrupt entire industries, an Australian doctor nervously recounted how the model supposedly “diagnosed his patient in seconds,” the Daily Mail reported. Of course, this scenario is the exception rather than the rule: ChatGPT, created by developer OpenAI, isn’t designed for industrial uses that require that level of precision – let alone for medically diagnosing patients.
The chatbot’s impressive success does, nevertheless, raise questions about AI’s involvement in health care.
In a sense, yes. ChatGPT’s two most pronounced breakthroughs in relation to health care will be in administrative tasks and medical writing.
Social media is exploding with tips from physicians on how to use ChatGPT. Some examples include sending prescription instructions to patients, writing instructions for tapering down a medication, constructing a letter to insurance companies requesting approval for a medication or procedure, and writing the initial outline and abstract of a scientific paper. The list goes on, and we are only at the beginning of this historic pivot.
The real question is whether ChatGPT can think clinically about a patient. Can OpenAI’s model perform clinical reasoning in an evidence-based manner that can assist physicians in decision making?
That’s where it gets tricky.
One of the biggest hurdles ChatGPT faces – in health care and other sectors – is that it is built with a “black-box” machine-learning approach, meaning it offers no transparency into how the model produces its output. This severely limits the potential, for example, for writers and researchers to use ChatGPT beyond ideation, outlines, and short paragraphs, because the model cannot trace its output back to its sources.
It’s not that ChatGPT, or black-box AI more broadly, was created with the purpose of deliberately shrouding its decisions in mystery. Rather, the opacity is a consequence of the methods through which the software is developed. Many black-box methods of building the health care-geared AI models that power chatbots and clinical-intake tools produce their output by comparing each specific case with the countless patient records in their databases. In doing so, they in effect base their algorithmic decisions on big data, making it impossible to reason through their decisions or reference them to a specific medical source.
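To make the contrast concrete, here is a minimal, purely hypothetical sketch – not the actual method of any product mentioned in this article, and using invented toy data rather than real patient records. The black-box function mimics case-comparison against prior records and returns only a label; the explainable function applies a stated rule and attaches the reasoning and source a physician could audit.

```python
# Toy illustration of black-box vs. explainable prediction.
# All data, thresholds, and "guidelines" below are invented for illustration.

def black_box_predict(patient, records):
    """Case-comparison style: finds the most similar prior record and
    returns only its label -- no reasoning, no source attached."""
    closest = min(
        records,
        key=lambda r: abs(r["age"] - patient["age"]) + abs(r["bp"] - patient["bp"]),
    )
    return closest["diagnosis"]  # output alone; nothing to audit

def explainable_predict(patient):
    """Rule-based style: the conclusion carries the rule it applied and a
    citation placeholder, so the reasoning can be checked at the source."""
    if patient["bp"] >= 140:
        return {
            "diagnosis": "hypertension",
            "because": "systolic BP >= 140 mmHg",
            "source": "toy guideline (illustration only)",
        }
    return {
        "diagnosis": "normal",
        "because": "systolic BP < 140 mmHg",
        "source": "toy guideline (illustration only)",
    }

records = [
    {"age": 50, "bp": 150, "diagnosis": "hypertension"},
    {"age": 30, "bp": 118, "diagnosis": "normal"},
]
patient = {"age": 48, "bp": 152}

print(black_box_predict(patient, records))  # a bare label
print(explainable_predict(patient))         # label + rule + source
```

The difference is not accuracy – both may return the same label here – but that only the second output gives a clinician something to verify.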
We have all become accustomed to a hit-or-miss AI that produces output almost magically. It gets some things right and others wrong, but never explains its reasoning or references back to its sources.
What will it take for physicians to trust and adopt AI-based technology in their practices? Building explainable AI (XAI) starts with the data on which we train our models. Companies need to have transparency and explainability in mind early in their journeys so they start with the appropriate data – data that the intended software’s users understand and already rely on.
In the case of health care software that aims to work side by side with providers, that means data from peer-reviewed, high-quality medical literature. Standardized care based on reliable evidence is the key to high-quality care. AI systems built on that principle rely not only on the quantity of data they use, but also on the ability of the models to understand the content of these sources and apply it intelligently where needed in real time.
There are several ways in which XAI could benefit physicians when it comes to clinical-reasoning tools.
When built in a transparent and explainable way, AI offers tremendous potential for improving the clinical-reasoning process, ensuring high-quality care while also making physicians’ lives easier. The black-box approach hinders the ability to develop models that win physicians’ trust, and for good reason. It’s time to steer the AI ship in the explainable direction. Until then, ChatGPT-like tools will be used mostly for the more administrative part of health care.
Michal Tzuchman-Katz, MD, is a cofounder and chief medical officer at Kahun Medical, a company that built an evidence-based clinical reasoning tool for physicians. Before cofounding Kahun in 2018, she worked as a pediatrician at Clalit Health Services, Israel’s largest HMO. She practices emergency medicine in the pediatric ED at Ichilov Sourasky Medical Center, where she also completed her residency. Additionally, she has a background in software engineering and led a tech development team at LivePerson.