Nutrition labels: Ensuring AI transparency, accountability in healthcare

Nutrition labels in healthcare are intended to provide transparency to doctors, clinicians and other health professionals so they can make informed decisions about how they use AI.

Nutrition labels go way back. It all began with the Pure Food and Drug Act of 1906, a truth-in-labeling law intended to protect consumers from adulterated and unsanitary food and drugs. It was also the precursor to the establishment of the FDA. Fast-forward to 1990, and the FDA published “proposed rules for the mandatory nutrition labeling of almost all packaged foods.” The evolution of nutrition labels empowered consumers with transparency, data and the ability to make informed decisions about what they put into their bodies – and it held manufacturers accountable.

As we move full steam ahead into the artificial intelligence (AI) era, the healthcare industry is taking a cue from the food and drug markets and putting nutrition labels on AI-driven solutions to disclose how they were trained and on what types of data. Nutrition labels in healthcare are intended to provide transparency to doctors, clinicians and other health professionals so they can make informed decisions about how they use AI.

Federal regulators are working to put rules for nutrition labels in place. In October 2023, the Biden administration issued an executive order outlining dozens of actions to ensure the “safe, secure and trustworthy development and use of AI.”

Spearheaded by the Office of the National Coordinator for Health Information Technology (ONC), the same group responsible for certifying electronic health record (EHR) software, the proposed nutrition label discloses how an application was trained, how it performs and how it should (or should not) be used. The ONC leaves it to AI developers to decide how to apply the label, but it mandates that the label be visible to medical professionals in order for the product to be certified by the ONC. If developers choose not to disclose anything, clinicians would be able to see that, too.
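
The ONC has not published a fixed schema here, but conceptually the label resembles the “model cards” already common in the broader AI community. Below is a minimal sketch in Python of what a machine-readable version might look like; the class and field names are illustrative assumptions based on the disclosures described above, not the actual certification format.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an AI "nutrition label" as a data structure.
# Field names are hypothetical; they mirror the disclosures discussed
# in this article (training data, performance, intended use), not the
# ONC's actual certification schema.

@dataclass
class AINutritionLabel:
    model_name: str
    developer: str
    intended_use: str                       # what the tool should be used for
    out_of_scope_use: str                   # what it should NOT be used for
    training_data_sources: list[str]        # e.g., datasets and date ranges
    training_population: str                # demographics represented in training
    performance_metrics: dict[str, float]   # e.g., sensitivity, specificity
    known_limitations: list[str] = field(default_factory=list)

# Toy example, with entirely made-up values:
label = AINutritionLabel(
    model_name="chest-xray-triage",
    developer="Example Health AI",
    intended_use="Prioritize chest X-rays for radiologist review",
    out_of_scope_use="Autonomous diagnosis without clinician sign-off",
    training_data_sources=["Public CXR dataset, 2015-2020"],
    training_population="Adults, single U.S. health system",
    performance_metrics={"sensitivity": 0.91, "specificity": 0.87},
    known_limitations=["Not validated on pediatric images"],
)
```

Surfacing a structure like this inside the EHR would let a clinician see at a glance what a tool was trained on, how it performed and where it should not be used.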

The labeling rule, which could be finalized before year’s end, represents one of Washington’s first tangible attempts to impose new safety requirements on artificial intelligence. Healthcare and technology companies are pushing back, saying the rule could compromise proprietary information and hurt competition – a sign of how difficult it is for the government to police rapidly evolving AI systems. According to the Wall Street Journal, the ONC estimated that the rule will cost developers between $81 million and $335 million over 10 years, but it just might become a standard rule of play as AI seeps further into healthcare.

AI-assisted healthcare

AI is being increasingly utilized by clinicians and medical professionals in various aspects of healthcare, offering opportunities for improved diagnosis, treatment and overall healthcare management. Below are some of the key applications:

Diagnostic imaging - AI is helping to analyze medical images such as X-rays, MRIs and CT scans to aid in the early detection of diseases and abnormalities. Doctors still control the final findings, but AI is assisting radiologists in interpreting images, reducing the chance of human error and improving diagnostic accuracy (a sketch of this human-in-the-loop pattern follows this list).

Administrative tasks - AI streamlines administrative tasks, such as appointment scheduling, billing and claims processing, reducing administrative burden and improving efficiency.

Clinician notes - According to the National Institutes of Health (NIH), doctors spend around 35 percent of their time documenting patient data. AI assistants are playing a greater role in generating progress notes from the patient-provider conversation in the exam room, and in immediately creating draft responses to patients’ questions.

Personalized medicine - AI is helping doctors analyze patient data, including genetic information, to tailor treatment plans based on individual characteristics, leading to more effective and targeted interventions.

Virtual health assistants and chatbots - AI-powered virtual assistants and chatbots are enabling better patient engagement, providing support for patient education, medication adherence and general health inquiries.
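
To make the “AI assists, clinicians decide” pattern from the diagnostic imaging item concrete, here is a minimal Python sketch. Everything in it is hypothetical: `run_model` stands in for any image classifier that returns an abnormality probability, and the threshold is purely illustrative.

```python
# Minimal sketch of a human-in-the-loop triage workflow. The model only
# scores and prioritizes studies; a radiologist always makes the final
# call. `run_model` is a hypothetical stand-in for any image classifier.

REVIEW_THRESHOLD = 0.30  # illustrative; tuned to favor sensitivity

def triage_study(image, run_model):
    probability = run_model(image)
    return {
        "abnormality_probability": probability,
        # High-probability studies jump the review queue; nothing is
        # auto-finalized, so every study still gets a human read.
        "priority": "urgent" if probability >= REVIEW_THRESHOLD else "routine",
        "final_decision": "pending radiologist review",
    }

# Toy usage with a dummy model:
print(triage_study(image=None, run_model=lambda img: 0.82))
```

The key design choice is that the model’s output changes the order of the radiologist’s worklist, never the diagnosis itself.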

The inherent risk in AI

Despite the growing benefits of AI in healthcare markets, there is reason to ensure its safe and transparent use. Consider the following AI risk factors:

Data privacy and security - By its data-driven nature, AI handles sensitive patient data, which can raise concerns about privacy and security. Unauthorized access to health data could lead to breaches and misuse. It’s important to have strict security measures in place before using an AI system – especially when it is sharing that data across other systems within the provider network.

Bias and fairness - If the datasets used to train AI algorithms are biased, the AI systems may produce biased results, leading to disparities in healthcare outcomes across different demographics. Cases of bias in AI are real. In one documented case, scientists at MIT discovered that AI solutions used to diagnose problems in chest X-rays were less accurate for Black patients. And UK researchers discovered that the datasets used to train algorithms to diagnose eye disease “contained a disproportionate number of patient records from Europe, North America and China, meaning they would likely perform worse for patients from low-income countries.” One basic safeguard, reporting performance separately for each demographic group, is sketched after this list.

Explainability - For every decision made by an AI tool, there must be a way to explain how the algorithm arrived at it. Given the complexity of AI models and the massive datasets required to make them effective, that decision-making process can be difficult to explain today. One simple explainability technique is sketched below.
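
On the bias point above: a single aggregate accuracy number can hide exactly the kind of disparity the MIT and UK researchers found. A minimal sketch of per-group performance reporting, assuming you already have predictions and a demographic attribute for each patient:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Report accuracy per demographic group to surface disparities
    that a single aggregate metric would hide."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Toy data: aggregate accuracy is 75%, which looks respectable,
# but the breakdown shows group B lagging badly.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
groups = ["A", "A", "A", "B", "A", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.5}
```

The same breakdown applies to sensitivity, specificity or any other metric a nutrition label might report.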
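
On explainability, one common model-agnostic starting point is permutation importance: shuffle one input feature at a time and measure how much the model’s score drops. It yields a global picture rather than a per-decision explanation (per-decision techniques such as SHAP build on related ideas), but it is simple enough to sketch. The `model.score` call below assumes a scikit-learn-style estimator and tabular data; this is a sketch under those assumptions, not a production implementation.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Estimate each feature's importance by measuring how much the
    model's score drops when that feature's values are shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)  # assumes a scikit-learn-style API
    importances = []
    for col in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, col])  # break the feature/label link
            drops.append(baseline - model.score(X_shuffled, y))
        importances.append(float(np.mean(drops)))
    return importances  # larger drop => the feature mattered more
```

scikit-learn ships a production version of this idea as sklearn.inspection.permutation_importance.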

AI has the potential to significantly improve the quality of healthcare, freeing up overtaxed clinicians to focus more on patients and less on documentation, empowering them with data-driven diagnostics to support their findings and giving patients peace of mind with 24/7 answers to their questions via chatbots. Yet, along with the benefits come the risks, and clinicians should have the opportunity to understand what they’re getting into. Nutrition labels, along with other forms of governance, can help provide some level of transparency. They also underscore the importance of keeping humans front and center, overseeing the outputs of AI and ultimately being the ones to make decisions that can affect people’s lives.

Carlos Meléndez is vice president of operations for Wovenware, a Maxar Intelligence company and a Puerto Rico-based provider of custom software and AI services. Prior to cofounding Wovenware, Carlos was a senior software engineer with several start-up software firms and held strategic positions with the global consulting firm Accenture.
