How Healthcare Organizations Can Prioritize AI Governance

As artificial intelligence gains momentum in healthcare, it’s critical for health systems and related stakeholders to develop robust AI governance programs. AI’s potential to address challenges in administration, operations, and clinical care is drawing interest across the sector. As this technology evolves, the range of applications in healthcare will only broaden.

To harness AI’s benefits, healthcare organizations must address complex challenges related to AI development, validation, and deployment. AI governance—establishing standards and oversight for AI use—will be essential for ensuring that health systems implement and monitor AI in ways that align with laws and best practices.

During a recent webinar, leaders from Sheppard Mullin Richter & Hampton LLP discussed strategies for health systems, hospitals, and other providers to build an effective AI governance program.

The Need for AI Governance in Healthcare

The panel highlighted that effective governance starts with understanding how AI can enhance healthcare services, medical devices, and operational solutions. AI encompasses a variety of technologies—including machine learning (ML), deep learning, natural language processing (NLP), generative AI (GenAI), large language models (LLMs), and clinical decision support software. Each offers unique applications in healthcare: NLP can improve clinical efficiency, ML is valuable for predictive analytics, and GenAI and LLMs can enhance documentation and nursing workflows.

Transparency and interpretability are key as healthcare AI develops further. According to Esperance Becton, an associate at Sheppard Mullin, “We want to make sure that AI systems are open to scrutiny and understandable to users.” This focus is vital for maintaining patient trust, a core element of positive healthcare experiences.

To prioritize patient safety and privacy, healthcare leaders should establish ethical principles and risk management frameworks for AI usage. Together, these considerations build the foundation for effective AI governance.

Defining AI Governance

Carolyn Metnick, a partner at Sheppard Mullin, describes AI governance as a system of laws, policies, frameworks, practices, and processes that support responsible AI implementation, management, and oversight. Effective governance ensures consistent, standardized, and ethical AI use, minimizing risks like AI misuse, unreliable outputs, and model drift.

Several regulatory frameworks offer guidelines to help mitigate these risks. These include:

  • The White House’s Blueprint for an AI Bill of Rights and the Biden administration’s executive order on trustworthy AI.
  • The Federation of State Medical Boards’ best practices, the National Academy of Medicine’s AI Code of Conduct, and the World Health Organization’s guidelines for GenAI.

Some states, such as Utah and Colorado, have also enacted AI-specific regulations. Utah’s AI Policy Act requires disclosures in healthcare interactions with GenAI, while Colorado’s Artificial Intelligence Act mandates transparency for high-risk AI systems, such as those used in healthcare.

Beyond regulations, industry collaborations like the Coalition for Health AI are advancing responsible AI standards focused on transparency, accountability, security, privacy, and inclusivity.

Key Elements for a Healthcare AI Governance Program

According to Metnick, AI governance should reflect an organization’s values and risk tolerance. Existing frameworks like the OECD’s AI Principles, Fair Information Practice Principles, and NIST’s AI Risk Management Framework can help healthcare organizations customize their governance structures.

A robust healthcare AI governance program should include:

  1. An AI Governance Committee – This committee should include healthcare providers, AI experts, ethicists, and legal professionals. Its role is to oversee AI deployment and assess risks, aligning AI use with organizational goals and ethical standards.
  2. Policies and Procedures – These create a standardized approach to AI use, establishing accountability, data management practices, and validation protocols. Policies should also cover incident management to address issues when they arise.
  3. Training Programs – Regular training ensures that AI tools are used appropriately, based on roles, organizational policies, and relevant laws.
  4. Auditing and Monitoring – These processes track AI use, user engagement, and intended purposes. Regular audits verify that tools are properly vetted, formally approved, and performing as expected. This includes an incident response plan outlining steps for referring and documenting issues, suspending algorithms if necessary, and notifying relevant authorities (a brief monitoring sketch follows this list).
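
To make the monitoring element more concrete, the short Python sketch below shows one way a governance team might check whether a deployed model is still "performing as expected" by comparing its recent output distribution against a validation baseline, using a population stability index as a simple drift signal. This example is not from the webinar: the function, the sample scores, and the 0.25 escalation threshold are hypothetical illustrations of the kind of automated check an AI governance policy could require.

```python
import math
from typing import Sequence

def population_stability_index(expected: Sequence[float],
                               actual: Sequence[float],
                               bins: int = 10) -> float:
    """Compare two score distributions; a higher PSI suggests distribution drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # avoid zero-width bins when all scores match

    def bucket_shares(scores: Sequence[float]) -> list:
        counts = [0] * bins
        for s in scores:
            idx = min(int((s - lo) / width), bins - 1)
            counts[idx] += 1
        total = len(scores)
        # Floor each share slightly above zero so the log term stays defined.
        return [max(c / total, 1e-6) for c in counts]

    e_shares = bucket_shares(expected)
    a_shares = bucket_shares(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_shares, a_shares))


if __name__ == "__main__":
    # Hypothetical scores: a validation baseline vs. last month's production output.
    baseline_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8]
    recent_scores = [0.4, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.9, 0.9]

    psi = population_stability_index(baseline_scores, recent_scores, bins=5)
    # The 0.25 cutoff is an illustrative policy threshold, not an industry rule.
    status = "escalate to governance committee" if psi > 0.25 else "within tolerance"
    print(f"PSI = {psi:.3f} -> {status}")
```

In practice, a check like this would feed the incident management process described above: results outside the tolerance defined in policy would be documented and referred to the AI governance committee for review.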

Privacy, security, transparency, and interpretability are also key considerations when creating an AI governance framework. Becton emphasized that “positive healthcare experiences often are the result of patient trust in the health system.” With effective governance, AI in healthcare can continue to evolve responsibly, reinforcing patient trust and safety as it transforms the industry.
