Artificial Intelligence & Regulatory Compliance
Written by Joanne Byron, BS, LPN, CCA, CHA, CHCO, CHBS, CHCM, CIFHA, CMDP, COCAS, CORCM, OHCC, ICDCT-CM/PCS
This article follows Part 1 - Basics of Artificial Intelligence (AI) and Healthcare Compliance, published by AIHC on June 6, 2023. AI is advancing rapidly, so we encourage you to reference the new Artificial Intelligence article category for the latest articles. As stated in Part 1, the Office of the National Coordinator for Health Information Technology (ONC) and the Agency for Healthcare Research and Quality (AHRQ), with support from the Robert Wood Johnson Foundation, turned to an independent group of scientists and academics to consider how AI might shape the future of public health, community health, and healthcare delivery. The question remains: how will the use of AI in health care be regulated?
Artificial Intelligence/Machine Learning (AI/ML) has gained heightened attention globally. Physician organizations have embraced the concept of Augmented Intelligence to underscore that emerging AI systems are designed to aid humans in clinical decision-making, implementation, and administration to scale healthcare, according to Act Online Key Terminology for AI in Health.
The United States is making progress in developing domestic AI regulation – including the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the Blueprint for an AI Bill of Rights, and the application of existing laws and regulations to AI systems – but the effort is still a work in progress. The goal is to protect people from unsafe or ineffective systems.
So, Who Regulates Healthcare AI?
What seems like a simple question is actually a complex situation, and this article only scratches the surface of the various regulatory agencies involved in regulating AI. OMB Memorandum M-21-06, “Guidance for Regulation of Artificial Intelligence Applications,” was issued in November 2020 and directed to the heads of all Executive Branch departments and agencies, including independent regulatory agencies; the Department of Health & Human Services (HHS) drafted its response to that memorandum. Much has happened since then.
On April 25, 2023, the Federal Trade Commission (FTC), the Civil Rights Division of the U.S. Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) released a joint statement highlighting their commitment to "vigorously use [their] collective authorities to protect individuals" with respect to artificial intelligence (AI) and automated systems, which have the potential to negatively impact civil rights, fair competition, consumer protection, and equal opportunity.
The joint statement from the DOJ, FTC, CFPB, and EEOC signifies a growing awareness and concern among federal agencies about the potential risks and challenges posed by AI and automated systems. As AI continues to become more integrated into all aspects of daily life, the importance of addressing potential biases, transparency issues, and flawed design becomes increasingly critical.
Federal Trade Commission (FTC) Raises Concerns
The FTC’s mission is to protect consumers and competition by preventing anticompetitive, deceptive, and unfair business practices. This is achieved through law enforcement, advocacy, and education, without unduly burdening legitimate business activity. The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if deception is not its intended or sole purpose. The FTC’s action should help protect healthcare organizations by limiting deceptive or exaggerated promises about what a medical device or AI software can actually do. It is not uncommon for advertisers to claim that some newfangled technology makes their product better – perhaps to justify a higher price or to influence labor decisions.
On May 18, 2023, the FTC issued a warning that the increasing use of consumers’ biometric information and related technologies, including those powered by machine learning, raises significant consumer privacy and data security concerns and the potential for bias and discrimination. Biometric information refers to data that depict or describe physical, biological, or behavioral traits, characteristics, or measurements of or relating to an identified or identifiable person’s body.
The Food & Drug Administration (FDA) & AI
The Food & Drug Administration (FDA) released a discussion paper in 2019 and then an action plan on January 21, 2021 regarding Artificial Intelligence and Machine Learning (AI/ML). The action plan describes a multi-pronged approach to advance the Agency’s oversight of AI/ML-based medical software. Then, in April 2023, the FDA published a draft guidance, "Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions."
- This draft guidance proposes a science-based approach to ensuring that AI/ML-enabled devices can be safely, effectively, and rapidly modified, updated, and improved in response to new data.
- The approach the FDA is proposing in this draft guidance would put safe and effective advancements in the hands of health care providers and users faster, increasing the pace of medical device innovation in the United States and enabling more personalized medicine.
- This means, for example, that diagnostic devices could be built to adapt to the data and needs of individual health care facilities and that therapeutic devices could be built to learn and adapt to deliver treatments according to individual users' particular characteristics and needs.
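To make the draft guidance's structure more concrete, the minimal sketch below models the three components the FDA recommends a Predetermined Change Control Plan (PCCP) contain: a Description of Modifications, a Modification Protocol, and an Impact Assessment. This is an illustration under those assumptions, not an official FDA schema; the class, field names, and example device are hypothetical.

```python
# Hypothetical sketch of a Predetermined Change Control Plan (PCCP) as
# structured data. Field names and example values are illustrative
# assumptions, not an official FDA format.
from dataclasses import dataclass, field


@dataclass
class PredeterminedChangeControlPlan:
    device_name: str
    # Description of Modifications: what the manufacturer plans to change.
    planned_modifications: list[str] = field(default_factory=list)
    # Modification Protocol: how each change will be developed, validated,
    # and implemented.
    modification_protocol: list[str] = field(default_factory=list)
    # Impact Assessment: expected benefits and risks of the planned changes.
    impact_assessment: str = ""


# Example entry for a hypothetical device:
pccp = PredeterminedChangeControlPlan(
    device_name="ExampleDx Imaging Classifier",
    planned_modifications=["Retrain model quarterly on newly labeled site data"],
    modification_protocol=["Validate on held-out test set; sensitivity >= 0.92"],
    impact_assessment="Improved site-level accuracy; drift monitored post-update.",
)
print(pccp.device_name, "-", len(pccp.planned_modifications), "planned change(s)")
```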
National Institute of Standards and Technology (NIST) AI Risk Management Framework
Released on January 26, 2023, NIST’s AI Risk Management Framework (“AI RMF”) is intended for voluntary use, to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The Framework was developed through a consensus-driven, open, transparent, and collaborative process, with the intention of building on, aligning with, and supporting AI risk management efforts by others.
Recently NIST launched the Trustworthy and Responsible AI Resource Center (AIRC), which will facilitate implementation of, and international alignment with, the AI RMF. We recommend watching the introduction video: https://www.nist.gov/video/introduction-nist-ai-risk-management-framework-ai-rmf-10-explainer-video
If your organization is a HIPAA covered entity, NIST is likely already a familiar organization. NIST has published prior documents related to AI: the initial draft of the AI RMF was published on March 17, 2022, followed by a second draft on August 18, 2022.
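As a brief illustration of how the framework can be operationalized, the sketch below tracks risk-management tasks against the AI RMF's four core functions (GOVERN, MAP, MEASURE, MANAGE). The task entries and completion flags are invented examples, not NIST content.

```python
# Minimal sketch: tracking AI-system risk activities against the four core
# functions of NIST AI RMF 1.0. The register contents below are hypothetical.
AI_RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

# A hypothetical risk register for one AI system under review.
risk_register = {
    "GOVERN": [("Assign an accountable owner for the AI system", True)],
    "MAP": [("Document intended use and clinical context", True)],
    "MEASURE": [("Evaluate model performance across patient subgroups", False)],
    "MANAGE": [("Define a rollback plan for degraded model behavior", False)],
}

for function in AI_RMF_FUNCTIONS:
    for task, done in risk_register[function]:
        status = "done" if done else "open"
        print(f"[{function}] {task}: {status}")
```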
The Health Insurance Portability and Accountability Act (HIPAA)
The Office for Civil Rights (OCR) is responsible for enforcing the HIPAA Privacy and Security Rules (45 C.F.R. Parts 160 and 164, Subparts A, C, and E). One of the ways that OCR carries out this responsibility is to investigate complaints. As health care organizations evolve with the use of AI, there is increased potential for cyber criminals to exploit vulnerabilities.
At present, two exclusions in the HIPAA Privacy Rule allow Covered Entities to share Protected Health Information (PHI) with device vendors and other organizations without the authorization of the individual(s) to whom the PHI relates. The two exclusions can be found at 45 CFR §164.512(b)(1) and 45 CFR §164.512(i)(1). Respectively, they relate to the following (a brief illustrative sketch follows the list):
- Disclosures to vendors regulated by the Food and Drug Administration (FDA) are permitted by the Privacy Rule for the “purpose of activities related to the quality, safety or effectiveness of such FDA-regulated product or activity”. The FDA regulates the sale of all medical device products, including personal health devices that transmit data to AI-driven healthcare solutions as described above.
- PHI can also be disclosed for research purposes without authorization and without being de-identified if the disclosure is approved by an Institutional Review Board (IRB) or Privacy Board. In such circumstances, the disclosed PHI must remain in the possession of the Covered Entity, and the disclosure(s) can only be for the purpose of preparatory research (e.g., training a supervised learning algorithm).
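For illustration only (and not legal advice), the sketch below encodes the disclosure bases discussed above as a simple pre-disclosure checklist. The function and field names are hypothetical, and real determinations require review by privacy counsel.

```python
# Illustrative sketch only: a pre-disclosure checklist reflecting the two
# Privacy Rule exclusions discussed above, plus individual authorization.
# All names here are hypothetical; this is not legal advice.
from dataclasses import dataclass


@dataclass
class DisclosureRequest:
    # 45 CFR 164.512(b)(1): activities related to the quality, safety, or
    # effectiveness of an FDA-regulated product or activity.
    fda_regulated_product_activity: bool = False
    # 45 CFR 164.512(i)(1): research disclosure approved by an Institutional
    # Review Board (IRB) or Privacy Board.
    irb_or_privacy_board_approved: bool = False
    # Otherwise, authorization from the individual is required.
    individual_authorization: bool = False


def disclosure_permitted(req: DisclosureRequest) -> bool:
    """Return True if one of the discussed bases for disclosure applies."""
    return (
        req.fda_regulated_product_activity
        or req.irb_or_privacy_board_approved
        or req.individual_authorization
    )


# Example: a research disclosure approved by a Privacy Board.
print(disclosure_permitted(DisclosureRequest(irb_or_privacy_board_approved=True)))
```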
Conclusion
Simply stated, a shift to AI calls for new skills. It warrants increased knowledge of HIPAA privacy and security, as well as the ability to anticipate other legal issues surrounding AI's use in healthcare.
Needless to say, it is important to maintain a robust HIPAA program and utilize information from the National Institute of Standards and Technology (NIST) AI Risk Management Framework as mentioned above.
In the context of HIPAA, healthcare data, and AI technologies, AI developers and vendors should consider that HIPAA only provides a federal floor of privacy and security standards. Other state and federal laws can apply that pre-empt HIPAA – particularly with regard to healthcare-adjacent data – or that apply to more organizations than Covered Entities and Business Associates. Managed Service Provider (MSP) companies serving healthcare organizations should also be aware of AI applications and their security vulnerabilities.
If your organization is using or plans to use AI for medical diagnostics, reference the joint publication by the U.S. Government Accountability Office (GAO) and the National Academy of Medicine, “Technology Assessment – Artificial Intelligence in Health Care – Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics,” with a new edition posted each September: https://www.gao.gov/products/gao-22-104629
AIHC will continue to post articles related to artificial intelligence with regard to healthcare compliance. Click Here for additional articles on various HIPAA topics. Click Here for articles relating to Artificial Intelligence. Visit the AIHC Certifications page for online compliance learning opportunities.