California law to protect ethical AI use in health care

These core principles should be adopted into any covered entity’s or business associate’s policies and procedures.

It is well known that “cybersecurity is patient safety.” The same premise applies to generative artificial intelligence (GenAI) in the healthcare sector: how GenAI is deployed directly affects patients. Before delving into California’s AB 3030, Health care services: artificial intelligence (AB 3030), and California’s SB 1120, the “Physicians Make Decisions Act” (SB 1120), a great resource for laying the foundation of ethical AI is the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights. The compliance date for both California laws is January 1, 2025. When evaluating and implementing GenAI, five factors should be considered to help ensure that the GenAI is ethical and legal:

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback

The legislative analysis of AB 3030 highlights the AI Bill of Rights, as well as how these principles build on the California Consumer Privacy Act (CCPA). Specifically:

Recent Federal and State Activity. As GenAI has exploded into popular knowledge and use, there has been a flurry of activity among states and federal administrative and regulatory agencies. Governor Newsom issued an Executive Order (EO) in September 2023 that addressed the evaluation and deployment of AI within state government. The administration is implementing the EO, which includes procurement proposals by state agencies, two of which relate to health care. One proposal seeks to improve efficiency in inspections of health facilities by the Department of Public Health (DPH), and another, within the California Health and Human Services Agency, seeks to improve translations in the health care setting.
The California Privacy Protection Agency (CPPA) issued draft regulations in December 2023 that would impose requirements on businesses using "automated decision-making technology" (ADMT). However, the draft CPPA regulations on ADMT do not appear to explicitly address the disclosure issue raised by this bill.
The federal government has recently taken a number of AI-related actions. The federal Blueprint for an AI Bill of Rights, released in October 2022, included five principles and associated practices to protect the rights of the American public in the age of AI. One principle is that a person should know that an automated system is being used and understand how and why it contributes to outcomes that affect them. Another principle holds that a person should be able to opt out, where appropriate, and have access to someone who can quickly consider and remedy problems. This bill appears generally aligned with these principles; however, the principles also state that notice should be calibrated to the level of risk based on the context, and that automated systems should provide explanations that are technically valid, meaningful, and useful.

This brings us to SB 1120 and AB 3030, both of which were signed into law on September 28, 2024. AB 3030 is short, so its full text appears below:

SECTION 1. Chapter 2.13 (commencing with Section 1339.75) is added to Division 2 of the Health and Safety Code, to read:

CHAPTER 2.13. Artificial Intelligence in Health Care Services

1339.75.

(a) A health facility, clinic, physician’s office, or office of a group practice that uses generative artificial intelligence to generate written or verbal patient communications pertaining to patient clinical information shall ensure that those communications include both of the following:

(1) A disclaimer that indicates to the patient that the communication was generated by generative artificial intelligence.

(A) For written communications involving physical and digital media, including letters, emails, and other occasional messages, the disclaimer shall appear prominently at the beginning of each communication.

(B) For written communications involving continuous online interactions, including chat-based telehealth, the disclaimer shall be prominently displayed throughout the interaction.

(C) For audio communications, the disclaimer shall be provided verbally at the start and the end of the interaction.

(D) For video communications, the disclaimer shall be prominently displayed throughout the interaction.

(2) Clear instructions describing how a patient may contact a human health care provider, employee of the health facility, clinic, physician’s office, or office of a group practice, or other appropriate person.

(b) If a communication is generated by generative artificial intelligence and read and reviewed by a human licensed or certified health care provider, the requirements of subdivision (a) do not apply.

(c) For purposes of this section, the following definitions apply:

(1) Artificial intelligence means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

(2) Clinic has the same meaning as defined in Section 1200.

(3) Generative artificial intelligence means artificial intelligence that can generate derived synthetic content, including images, videos, audio, text, and other digital content.

(4) Health care provider means a person licensed or certified pursuant to Division 2 (commencing with Section 500) of the Business and Professions Code.

(5) Health facility has the same meaning as defined in Section 1250.

(6) Office of a group practice means an office or offices in which two or more physicians are legally organized as a partnership, professional corporation, or not-for-profit corporation licensed according to subdivision (a) of Section 1204.

(7) Patient clinical information means information relating to the health status of a patient. This information does not include administrative matters, including, but not limited to, appointment scheduling, billing, or other clerical or business matters.

(8) Physician’s office means an office of a physician in solo practice.

(d) (1) A violation of this section by a licensed health facility is subject to the enforcement mechanisms described in Article 3 (commencing with Section 1275) of Chapter 2.

(2) A violation of this section by a licensed clinic is subject to the enforcement mechanisms described in Article 3 (commencing with Section 1225) of Chapter 1.

(3) A violation of this section by a physician is subject to the jurisdiction of the Medical Board of California or the Osteopathic Medical Board of California, as appropriate.
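
For practices that route GenAI-drafted messages through their own portal or messaging software, a short sketch can make subdivision (a) easier to picture. The Python below is a hypothetical illustration, not statutory language or any vendor’s product: the names (Channel, DraftMessage, prepare_patient_communication) and the sample disclaimer and contact wording are assumptions chosen for this example, and the statute itself governs what compliance actually requires.

```python
# Illustrative sketch only: a hypothetical helper showing how a practice's
# messaging system might attach the AB 3030 disclaimer and human-contact
# instructions to GenAI-drafted patient communications.
from dataclasses import dataclass
from enum import Enum, auto


class Channel(Enum):
    """Communication channels distinguished by subdivision (a)(1)."""
    WRITTEN = auto()  # letters, emails, and other occasional messages
    CHAT = auto()     # continuous online interactions, e.g., chat-based telehealth
    AUDIO = auto()    # spoken interactions
    VIDEO = auto()    # video interactions


# Sample wording only; a practice would adopt its own compliant language.
DISCLAIMER = "This message was generated by generative artificial intelligence."
CONTACT = (
    "To speak with a human member of our care team, call the office number "
    "listed in your patient portal or reply 'HUMAN' to this message."
)


@dataclass
class DraftMessage:
    body: str
    channel: Channel
    reviewed_by_licensed_provider: bool = False  # triggers the subdivision (b) exemption


def prepare_patient_communication(draft: DraftMessage) -> str:
    """Attach the disclaimer and contact instructions per subdivision (a)."""
    # Subdivision (b): if a licensed or certified provider read and reviewed
    # the communication, the disclaimer requirement does not apply.
    if draft.reviewed_by_licensed_provider:
        return draft.body

    if draft.channel is Channel.WRITTEN:
        # (a)(1)(A): disclaimer prominently at the beginning of the communication.
        return f"{DISCLAIMER}\n\n{draft.body}\n\n{CONTACT}"
    if draft.channel in (Channel.CHAT, Channel.VIDEO):
        # (a)(1)(B) and (D): the disclaimer must remain displayed throughout the
        # interaction; here every outbound message is tagged so the UI can keep
        # a persistent banner visible.
        return f"[{DISCLAIMER}]\n{draft.body}\n\n{CONTACT}"
    # (a)(1)(C): audio requires the disclaimer verbally at the start and the end.
    return f"{DISCLAIMER} {draft.body} {DISCLAIMER} {CONTACT}"


if __name__ == "__main__":
    draft = DraftMessage(
        body="Your recent lab results are within normal limits.",
        channel=Channel.WRITTEN,
    )
    print(prepare_patient_communication(draft))
```

Note the reviewed_by_licensed_provider flag: under subdivision (b), a communication that a licensed or certified provider has read and reviewed falls outside the disclaimer requirement, so the sketch passes such messages through unchanged.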

Since SB 1120 is considerably longer and applies to health plans and medical professionals, here are the highlights:

  1. Amends Section 1367.01 of the Health and Safety Code and Section 10123.135 of the Insurance Code, relating to health care coverage.
  2. “No individual, other than a licensed physician or a licensed health care professional who is competent to evaluate the specific clinical issues involved in the health care services requested by the provider, may deny or modify requests for authorization of health care services for an enrollee for reasons of medical necessity. The decision of the physician or other health care professional shall be communicated to the provider and the enrollee pursuant to subdivision (h).”
  3. Requires a qualified human individual to review utilization review (UR) and utilization management (UM) coverage determinations that are based on medical necessity.

In sum, a great place to start is with the AI Bill of Rights. These core principles should be adopted into any covered entity’s or business associate’s policies and procedures. Training should also reflect these GenAI principles in order to protect patient safety and maintain HIPAA compliance.

Rachel V. Rose, JD, MBA, advises clients on compliance, transactions, government administrative actions, and litigation involving healthcare, cybersecurity, corporate and securities law, as well as False Claims Act and Dodd-Frank whistleblower cases. She also teaches bioethics at Baylor College of Medicine in Houston. Rachel can be reached through her website, www.rvrose.com.
