Artificial intelligence and medical professional liability: 3 questions for decision makers

While the developers of AI-based applications charge ahead, clinicians and healthcare institutions must temper their optimism with caution.

Rapid advancements in artificial intelligence—sometimes known as augmented intelligence, to reflect that AI-powered tools supplement human intelligence, but do not displace it—have inspired many to feel cautiously optimistic about AI’s potential to assist healthcare professionals.

Administrative lifts first, diagnostic help to follow

Some healthcare institutions have been quick to implement tools that lift administrative burdens, such as applications that streamline scheduling and reduce wait times for cancer patients; other AI-powered applications predict staffing needs. In terms of text generation, applications for clinical documentation have come a long way: a number of medical groups have adopted Nuance's DAX Copilot system, which drafts the clinical note so that it is ready for the provider to edit as soon as the visit ends. However, attempts to use generative AI to compose messages to patients have so far produced disappointing results.

In the more daunting arena of patient-facing care, healthcare systems are building on the success of clinical decision support in areas such as medication safety. Image analysis is one area where AI tools are already deployed, sometimes for diagnosis and sometimes for surgical teams’ preoperative planning, and further patient-facing applications are expected.

Determining responsibility—and liability

While the developers of AI-based applications charge ahead, clinicians and healthcare institutions must temper their optimism with caution—and with questions regarding who will be responsible when a patient is harmed. How can medical professionals and organizations take steps to protect patient safety? How can they shield themselves from responsibility for aspects of healthcare technology that belong with developers, not doctors?

It can take years for answers to wend their way through the courts. Yet clinicians and healthcare organizations have to make decisions now, during “the awkward adolescence of software-related liability.”

Asking three key questions can help decision makers guard their doors against risk and liability.

Question 1: Will the technology application be granted any power to take autonomous action, or will the application report a pattern or concern to a human who takes an action? If the application can take action, what are the stakes?

Self-driving cars in San Francisco have driven into construction sites, obstructed first responders, and otherwise caused headaches, if not hazards.

This experience demonstrates how AI lacks what is commonly called common sense. Therefore, when we say “artificial intelligence,” for now, we mostly mean augmented intelligence, through which machine learning and deep learning give humans the means to make better decisions. AI-powered clinical decision support tools may flag certain patterns for clinician attention, but we still need and want clinicians to investigate and validate AI recommendations.

Yet just because the human is in the loop does not mean that the human remembers the loop is there. For a variety of reasons, including the many ways technologies thread themselves through clinical practice and the prediction that increased throughput expectations could consume whatever time clinicians have to investigate AI recommendations, clinician leaders have cautioned: “[I]t is perilous to assume that clinician vigilance is an acceptable safeguard against AI faults.”

Clinicians must have sufficient bandwidth to maintain vigilance, and workflow design must support and encourage that vigilance.

Question 2: At some point, it could be considered negligence not to use certain new technologies. For the application under consideration, how does the current standard of care accommodate human intelligence vs. human intelligence augmented by technology?

Once upon a time, arthroscopic surgery was new—yet revolutionary medical techniques and their benefits can quickly become familiar, even to medical laypeople.

Already, the idea that a computer might be the first to review a medical image or scan is familiar to many. Radiology groups, hand surgeons, and others are using AI to read films and compile draft reports, which the clinician then scrutinizes and edits. Many medical professionals and patients welcome such assistance from technology, which enhances, but does not supplant, human expertise.

Over time, decision makers may face expectations that AI’s benefits be incorporated into the standard of care. At some point, the risks of not adopting a new tool could exceed the risks of the tool itself, so that ignoring AI could constitute malpractice.

Question 3: At first, it may be plaintiffs who have a harder time making their case in the courts, but that will change. How can clinicians and organizations make choices now to promote patient safety—while documenting those choices for the future?

Legal precedent has limited usefulness in predicting how AI-related medical malpractice litigation will fare in the courts, because courts have shown hesitation in applying doctrines of product liability to software.

AI models have statistical patterns at their heart, and to prevail in court, plaintiffs must show that the relevant patterns were “defective” in ways that made their injury foreseeable. Because these patterns can be represented by billions of variables, the technical challenges for plaintiffs are daunting. Over time, however, tort doctrine will grow to reflect the realities of AI use in healthcare.

Meanwhile, assessing the liability risk of any specific AI implementation involves accounting for factors that include the AI’s accuracy, the opportunities to catch its errors, the severity of potential harms, and the likelihood that injured patients could be indemnified.

Healthcare organizations can begin their liability risk assessments with these considerations in mind.

Richard F. Cahill, JD, is Vice President and Associate General Counsel of The Doctors Company, part of TDC Group.
