HIMSS23: The future of AI in health care

Dreams of big AI are nothing new, but the risks require careful consideration

Does AI offer unlimited potential for health care, or will the risks ultimately outweigh the benefits? The keynote panel discussion at HIMSS23 in Chicago, moderated by Cris Ross, CIO of Mayo Clinic, focused on artificial intelligence in health care and why its risks cannot be ignored in the pursuit of its benefits.

Panelists were Andrew Moore, CEO, Lovelace AI; Kay Firth-Butterfield, CEO, Centre for Trustworthy Technology; Peter Lee, corporate vice president, Microsoft Corp.; and Reid Blackman, CEO, Virtue.

Ross spoke about how humans have always dreamed of intelligent machines, dating back to Homer’s Odyssey, which describes autonomous ships that could navigate through mist and fog and avoid hazards. Invoking the story of Icarus, who flew too close to the sun and crashed to the earth, Ross posed the question: just because we can do something with AI, should we?

Moore said that organizations should embrace AI, even on a small scale, and push forward with it. There should be a group that looks at the general technology platform and makes sure physicians and others can use it to meet the needs of patients.

“Don’t wait to see what happens with the next iteration; start now,” Moore said. “Don’t wait for a small number of experts in Silicon Valley to do it for you.”

Lee said that while generative AI’s capabilities are advancing quickly, it is important to understand what the implications for health care might be. If AI is 93% correct when asked a medical question, what about the other 7%?

While AI can help with physician notetaking, draft justification text for prior authorizations, or even role-play a patient for medical students, Lee said there are also serious risks, including some we may not know about yet.

“It is the health care community that needs to own and decide whether or how to use the technology,” Lee said.

Blackman was concerned about what’s actually behind technology like ChatGPT, and that it may be tricking people into thinking they are dealing with an intelligent, reasoning device when it is really a word predictor whose results may not be explainable.

“It’s magic that works, but for a cancer diagnosis, you need to know how you came to that diagnosis,” Blackman said. “ChatGPT doesn’t give reasons. It looks like it, but it doesn’t. It just predicts the next set of words it thinks make the most sense. It is a word predictor, not a deliberator.”

Firth-Butterfield pointed out that ChatGPT poses serious equity concerns. Not everyone has the internet access needed to use it, which is one problem. Add in the bias it may have built in and concerns about informed consent, and the issue grows larger. And what about accountability when something goes wrong?

“Who do you sue?” Firth-Butterfield asked. “Maybe you can pass the liability on to someone else, but what do you have to do to prove that? Do you have to prove you did your own due diligence?”

She also said that if organizations are going to use generative AI systems, they need to think about what data will be shared with those systems. One company shared confidential material with an AI system, and that information surfaced as an answer to someone outside the company’s system. “That’s the sort of thing you have to think about very carefully,” she said.

Lee said that AI needs to be examined sector by sector, such as in health care or education, rather than globally, because usage will vary.

Moore said that big companies have been irresponsible in their use of AI and that this will lead to big problems. A good use of AI, he said, is guiding users to sites or sources of information that can help them solve the problem at hand. He added that while generative AI has been quickly adopted and has made great strides in a short time, perfecting it will take much longer. He used the example of autonomous cars, which could drive themselves 93% of the time, but each incremental 0.25% gain after that took a year, saying AI will likely follow a similar development path.

Firth-Butterfield, a signatory of a letter urging a six-month pause on further AI development until it is better understood, said ethics are being overlooked. She doesn’t believe AI will destroy the world, but designers need to make sure it is built in a way that benefits humanity, and that the maximum number of people have access to the tools, so as not to worsen health inequity. “Make sure everyone in the company understands AI,” she said. “Know what you want and that the risks are there. You don’t want to negatively affect your brand or lose patients. Don’t get blinded by ChatGPT.”
