February 8, 2024

Artificial Intelligence – Resources to Technology Governance

This article provides an introduction to artificial intelligence, large multimodal models, and their impact on health care. This information is not intended as legal or consulting advice. Please use the resources provided within this article for more information.

During the design and development of machine-learning models, including general-purpose foundation models, responsibility rests with the developers.

A general-purpose foundation model can be used by a third party (a “provider”) through an application programming interface (API) for a specific purpose or use. Governments bear the responsibility to set laws and standards that require or forbid certain practices. Another aspect to consider is compliance with national and state security standards. In the United States, America’s cyber defense agency is CISA (the Cybersecurity & Infrastructure Security Agency). Secure by Design means building cybersecurity into the technology during its design and manufacture rather than adding it afterward.


International Governance

International governance is necessary to ensure that all governments are accountable for their investments and participation in the development and deployment of AI-based systems and that governments introduce appropriate regulations that uphold ethical principles, human rights and international law. International governance can also ensure that companies develop and deploy LMMs that meet adequate international standards of safety and efficacy and are upholding ethical principles and human rights obligations. Governments should also avoid introducing regulations that provide a competitive advantage or disadvantage for either companies or themselves.

Published in September 2023 by Oxford Academic, the journal article The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research discusses how the international impact of AI technology poses a global governance challenge. The first section of the article explains why AI is now a global governance concern:

“Why does AI pose a global governance challenge? In this section, we answer this question in three steps. We begin by briefly describing the spread of AI technology in society, then illustrate the attempts to regulate AI at various levels of governance, and finally explain why global regulatory initiatives are becoming increasingly common. We argue that the growth of global governance initiatives in this area stems from AI applications creating cross-border externalities that demand international cooperation and from AI development taking place through transnational processes requiring transboundary regulation.”


NIST – the National Institute of Standards and Technology

Published in January 2023, NIST’s Artificial Intelligence Risk Management Framework (AI RMF) guidance targets mitigating risk while cultivating trust in AI technologies. 

“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” said Deputy Commerce Secretary Don Graves. “It should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all.”

On March 30, 2023, NIST launched the Trustworthy and Responsible AI Resource Center, which will facilitate implementation of, and international alignment with, the AI RMF.

As the United States and other governments regulate foundation models, new legal definitions have emerged. The World Health Organization (WHO) is weighing in with free resources related to health care.


World Health Organization and the Governance of Generative Artificial Intelligence (AI) Technology

The six core AI principles identified by WHO are:

  1. Protect autonomy;
  2. Promote human well-being, human safety, and the public interest;
  3. Ensure transparency, explainability, and intelligibility;
  4. Foster responsibility and accountability;
  5. Ensure inclusiveness and equity;
  6. Promote AI that is responsive and sustainable.

In January 2024, the World Health Organization (WHO) released new guidance on the ethics and governance of large multimodal models in health care.

This updates the previous June 2021 publication, Ethics and governance of artificial intelligence for health, which WHO also uses as the basis for a free 3.5-hour online introductory course designed for policymakers, AI developers, designers, and health care providers involved in the design, development, use, and regulation of AI technology for health. More information and enrollment are available through WHO.

Multimodal language models are considered by some researchers to be a step toward artificial general intelligence. A large multimodal model (LMM) is an advanced type of artificial intelligence model that can process and understand multiple data modalities, including text, images, audio, video, and potentially others.

As reported by WHO, LMMs have been adopted faster than any consumer application in history, with several platforms – such as ChatGPT, Bard, and BERT – entering the public consciousness in 2023. GPT-4, the latest iteration in the GPT series of models maintained by OpenAI, is capable of responding to multimodal queries that combine text and images.
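To make the idea of a multimodal query concrete, the sketch below shows how a text question and an image are typically paired in a single structured request to a multimodal model. This is an illustrative assumption, not part of the WHO guidance: the message layout follows the widely published chat-completions format, and the model name, question, and image URL are hypothetical placeholders.

```python
import json

def build_multimodal_query(question: str, image_url: str) -> dict:
    """Assemble a request that pairs a text question with an image.

    Illustrative sketch only: "gpt-4o" and the URL are placeholder
    assumptions, not endpoints referenced by this article.
    """
    return {
        "model": "gpt-4o",  # assumed name of a multimodal-capable model
        "messages": [
            {
                "role": "user",
                # One message carrying two modalities: text and an image
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_multimodal_query(
    "Summarize what is visible in this image.",
    "https://example.org/sample-image.png",
)
print(json.dumps(payload, indent=2))
```

The key point is that both modalities travel in one request, so the model can reason over the text and the image together rather than handling them separately.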

The new WHO guidance outlines five broad applications of LMMs for health:

  • Diagnosis and clinical care, such as responding to patients’ written queries;
  • Patient-guided use, such as for investigating symptoms and treatment;
  • Clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records;
  • Medical and nursing education, including providing trainees with simulated patient encounters; and
  • Scientific research and drug development, including to identify new compounds.

Read Additional Free Articles on Artificial Intelligence and Health Care posted to the American Institute of Healthcare Compliance Blog.

This article was written by members of the AIHC Volunteer Education Committee. AIHC is a non-profit organization. We value our members and credentialed professionals and greatly appreciate the talents offered by our member volunteers!


Copyright © February 2024 American Institute of Healthcare Compliance All Rights Reserved
