How hospitals and health systems can maintain trust in a world of rapid AI adoption


This article was submitted by Gregg Killoren, the General Counsel at Xsolis. Killoren specializes in regulatory strategy, corporate compliance, and legal risk management across healthcare and technology. He brings deep expertise in navigating complex legal frameworks and supporting high-growth, mission-driven organizations.

Artificial intelligence (AI) is being rapidly adopted across healthcare, with the potential to save providers and payers time and money while improving patient outcomes. But with that promise comes responsibility: implementing effective AI governance and maintaining transparency, especially with patients. Trust will be the foundation on which successful AI adoption rests.

Communicating AI Policy

Every AI implementation involves a balance of risk and reward. Low-risk, high-reward applications—such as those supporting claims processing, billing, or back-office functions—are more easily adopted. These tools typically operate far from clinical decision-making, making their risks relatively modest.

More complex questions arise when AI tools directly affect patient care. For example, should a patient be told that their recorded conversation with a physician is being processed by AI? Most healthcare organizations rely on a “human in the loop” approach—ensuring a clinician remains responsible for final decisions. Communicating that standard helps reassure both patients and clinicians.

The challenge lies in determining how much detail patients need about how AI supports their care. Is transparency best achieved through disclosure in a consent form, or should hospitals consider alternative approaches?

To Opt In or Opt Out?

Patient consent is a cornerstone of healthcare, covering everything from dental cleanings to surgeries. Extending this to AI may seem logical, but it raises two challenges:

  1. Burdening the patient – Patients cannot reasonably be expected to assess the safety and efficacy of AI tools already vetted through a provider’s governance process. They should not bear the responsibility of deciding whether an AI tool is appropriate.
  2. Operational impracticality – For organizations deeply integrated with AI, “opting out” may not be feasible without disrupting care. And in many cases, opting out could negatively impact treatment and outcomes.

Rather than asking patients to evaluate each AI tool, hospitals may instead need to communicate that all AI in use has undergone governance review, requires ongoing monitoring, and includes human oversight. This approach emphasizes accountability without shifting decision-making burdens onto patients.

Public and Private Collaboration

Another key question is whether AI governance should be led by hospitals, state legislatures, federal regulators—or all three. A patchwork of state laws risks creating inconsistent standards.

Industry collaborations such as the Coalition for Healthcare AI (CHAI) offer a model for establishing shared standards. CHAI’s Responsible AI Guide and Applied Model Card framework provide actionable tools for ethics, transparency, and quality assurance in healthcare AI. Working with regulators while fostering private-sector collaboration could ensure more consistent standards across jurisdictions.

Global considerations further complicate the picture. The European Union’s AI Act establishes an AI Office to enforce compliance across member states, while non-EU countries may introduce their own requirements. For multinational healthcare organizations, aligning with both U.S. and international standards will be essential.

The Path Forward

Earning and maintaining patient trust in AI requires two key steps:

  1. Establishing a clear definition of what “AI transparency” means in healthcare.
  2. Communicating that policy to patients, clinicians, and stakeholders in ways that are accessible and consistent.

Developing standard practices for responsible AI—ideally through a combination of public and private efforts—would help avoid fragmented regulation and allow providers to focus on patient care.

AI adoption in healthcare is advancing rapidly, and trust will determine its long-term success. Providers cannot afford to offload responsibility onto patients; instead, they must invest in governance, transparency, and collaboration. Done right, this investment will safeguard trust while enabling AI to deliver on its promise: improving care for patients and efficiency for providers.