Artificial Intelligence and the Health Insurance Portability & Accountability Act
Written by: Joanne Byron, BS, LPN, CCA, CHA, CHCO, CHBS, CHCM, CIFHA, CMDP, COCAS, CORCM, OHCC, ICDCT-CM/PCS
This is the fourth article in our Artificial Intelligence and compliance series. Click Here to read other AI articles published by the American Institute of Healthcare Compliance.
The Health Insurance Portability and Accountability Act (HIPAA), Public Law 104-191, was enacted in 1996. Technology has evolved dramatically since then, and although HIPAA has been updated over the years, artificial intelligence (AI) technology is evolving faster.
ChatGPT and its developer, OpenAI, have drawn heavy criticism from governments and privacy experts over concerns about the company's data retention policies. So, how is ChatGPT being used in health care?
ChatGPT is an AI-enabled chatbot that mimics natural human conversation. Medical record keeping is one area where ChatGPT is likely to improve healthcare systems: it could summarize patient medical histories, streamlining the record-keeping process. Theoretically, healthcare professionals could dictate notes to ChatGPT, which would automatically summarize the key details, as sketched below.
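As a rough illustration of how such dictation summarization might work, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt, and sample note are assumptions made for this example; as the rest of this article discusses, real PHI should never be sent to a public API without a Business Associate Agreement and appropriate de-identification.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical dictated note for illustration; never use real patient data here.
dictated_note = (
    "Patient seen today for follow-up of type 2 diabetes. "
    "Reports improved home glucose readings. "
    "Plan: continue metformin 500 mg twice daily, recheck A1c in 3 months."
)

# Ask the model to condense the dictation into key details.
response = client.chat.completions.create(
    model="gpt-4",  # assumed model choice for this sketch
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the clinician's dictation into key details: "
                "reason for visit, status, and plan."
            ),
        },
        {"role": "user", "content": dictated_note},
    ],
)

print(response.choices[0].message.content)
```

In practice, the dictated text would come from a speech-to-text pipeline rather than a hard-coded string, and the summary would be reviewed by the clinician before entering the medical record.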
Additional uses of ChatGPT include clinical data management, recruitment for clinical trials, and assistance in clinical decision-making. It also has potential applications in mental health support, remote patient monitoring, medication management, disease surveillance, medical writing, patient triage, and more.
According to a 2023 article from the National Institutes of Health, ChatGPT can be used to generate automated summaries of patient interactions and medical histories, making the medical recordkeeping process more streamlined for doctors and nurses: “ChatGPT is a tool that can assist in various domains of healthcare and medicine such as in structuring scientific literature, analyzing vast literature, and functioning as a conversationalist agent.”
As these use cases show, ChatGPT can have access to protected health information (PHI). An additional concern is what ChatGPT collects from your device. The service automatically gathers personal information from your device and browser, including your IP address, location, browser type, the date and time you start using ChatGPT, and the length of your session. ChatGPT also retrieves your device's name and operating system.
OpenAI uses cookies to track your browsing activity, both in the chat window and on its site, and says it uses this information for analytics and to understand exactly how you interact with ChatGPT. ChatGPT also records and stores transcripts of your conversations, meaning any information you enter into the chat, including personal information, is logged. This is a real privacy risk, made worse by the fact that OpenAI makes this information available to its AI trainers.
If your organization implements a ChatGPT chatbot, it is imperative to involve a security expert to ensure the privacy and security of all data. According to an April 2023 article, Is ChatGPT Safe? 6 Cybersecurity Risks of OpenAI's Chatbot: “Although many digital natives praise ChatGPT, some fear it does more harm than good. News reports about crooks hijacking AI have been making rounds on the internet, increasing unease among skeptics. They even consider ChatGPT a dangerous tool.” The same article states, “Rumors say that ChatGPT sells personally identifiable information (PII).”
In March 2023, a security breach occurred: some ChatGPT users saw conversation headings in the sidebar that didn't belong to them. Accidentally exposing users' chat histories is a serious concern for any tech company, and it is especially troubling considering how many people use the popular chatbot.
As reported by OpenAI when addressing the security breach on March 24, 2023:
“We took ChatGPT offline earlier this week due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history. It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.
The bug is now patched. We were able to restore both the ChatGPT service and, later, its chat history feature, with the exception of a few hours of history. As promised, we’re publishing more technical details of this problem below.
Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window. In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user’s first and last name, email address, payment address, credit card type and the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time.
We believe the number of users whose data was actually revealed to someone else is extremely low. To access this information, a ChatGPT Plus subscriber would have needed to do one of the following:
- Open a subscription confirmation email sent on Monday, March 20, between 1 a.m. and 10 a.m. Pacific time. Due to the bug, some subscription confirmation emails generated during that window were sent to the wrong users. These emails contained the credit card type and last four digits of another user’s credit card number, but full credit card numbers did not appear. It’s possible that a small number of subscription confirmation emails might have been incorrectly addressed prior to March 20, although we have not confirmed any instances of this.
- In ChatGPT, click on “My account,” then “Manage my subscription” between 1 a.m. and 10 a.m. Pacific time on Monday, March 20. During this window, another active ChatGPT Plus user’s first and last name, email address, payment address, the credit card type and last four digits (only) of a credit card number, and credit card expiration date might have been visible. It’s possible that this also could have occurred prior to March 20, although we have not confirmed any instances of this.
We have reached out to notify affected users that their payment information may have been exposed. We are confident that there is no ongoing risk to users’ data.
Everyone at OpenAI is committed to protecting our users’ privacy and keeping their data safe. It’s a responsibility we take incredibly seriously. Unfortunately, this week we fell short of that commitment, and of our users’ expectations. We apologize again to our users and to the entire ChatGPT community and will work diligently to rebuild trust.”
As reported by Reuters, ChatGPT reached an estimated 100 million monthly active users in January 2023. While the bug that caused the breach was quickly patched, the Italian data regulator demanded that OpenAI stop all operations that processed Italian users' data.
Even the improvements OpenAI made to its privacy policies after the incident with Italian regulators may not be enough to satisfy the General Data Protection Regulation (GDPR), Europe's data protection law.
In the United States, where healthcare providers are subject to HIPAA's Privacy Rule, using ChatGPT with PHI could lead to stiff penalties. In fact, ChatGPT's creators explicitly warn against feeding confidential information to their AI model.
Conclusion
Anonymizing or de-identifying health data before it is processed by ChatGPT can mitigate the risk of PHI breaches. By stripping away identifiable information, such as the identifiers enumerated in HIPAA's Safe Harbor de-identification method, the data can no longer be traced back to a specific individual and can be handled without violating HIPAA regulations. A simplified sketch of this approach follows.
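The Python sketch below illustrates one naïve, pattern-based approach to redacting a few identifier types before text leaves an organization. The patterns, labels, and sample note are assumptions for illustration only; genuine Safe Harbor de-identification requires removing all 18 identifier categories, and simple pattern matching cannot reliably catch free-text names.

```python
import re

# Illustrative patterns for a handful of HIPAA Safe Harbor identifier types.
# Real de-identification requires far more than regexes (e.g., named-entity
# recognition for names, plus expert review), so treat this as a starting
# point, not a compliant solution.
REDACTION_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def redact_phi(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Hypothetical note for illustration only.
note = ("Jane Doe, DOB 4/12/1957, MRN: 884213, called 555-867-5309 "
        "about refills; email jdoe@example.com.")
print(redact_phi(note))
# Note that the name "Jane Doe" survives redaction, which is exactly why
# regexes alone are not sufficient for Safe Harbor de-identification.
```

As the example output shows, the patient's name is not caught, which is why production de-identification pipelines combine pattern matching with named-entity recognition and expert review before any data is sent to an external AI service.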
With no sign of AI development slowing down, understanding the problems with ChatGPT is all the more important. From security breaches to privacy concerns to the undisclosed data it was trained on, there are plenty of reasons to be cautious about the AI-powered chatbot, yet the technology is already being incorporated into apps and used by millions.
While compliance issues currently limit the full utilization of ChatGPT and other generative AI tools, their potential benefits are too significant to ignore. Ideally, AI developers and regulators will soon collaborate more closely to address compliance concerns and ethical dilemmas in health care. This could involve developing specialized AI models tailored to the needs and regulations of the healthcare industry. Until then, implement this tool with caution.
AIHC will continue to post articles related to artificial intelligence with regards to healthcare compliance. Click Here for additional articles on various HIPAA topics. Click Here for articles relating to Artificial Intelligence. Visit the AIHC Certifications page with online compliance learning opportunities.
Copyright © 2023 American Institute of Healthcare Compliance All Rights Reserved