Published on March 12, 2025

In a recent Federation of American Scientists article, AI researcher Daniel Wu argues for NIST “to lead an interagency coalition to produce standards that enable third-party research and development on healthcare data.” Such standards, which would determine how medical data is anonymized, shared, and used, would accelerate AI development in healthcare by enabling comprehensive analysis, machine learning model training, and consistent, high-quality data sharing across research and clinical environments.

To bolster his case, Wu points to countries the United States could look to as examples: the UK with its Trusted Research Environments; Australia with its Secure Unified Research Environment (SURE); and Finland with its Finnish Social and Health Data Permit Authority (Findata). These federally sponsored initiatives demonstrate approaches to centralized health data management that prioritize public-private collaboration and instill confidence in both patients and providers.

Unfortunately, Americans are probably not going to see a data governance framework that’s overseen and enforced by their government anytime soon. Recent bills, like the American Data Privacy and Protection Act and the American Privacy Rights Act, died in Congress. And any bill that emerges from the latest working group will likely be dismissed by the current administration. Moreover, the administration is gutting the US AI Safety Institute after revoking Biden’s 2023 AI executive order, which required HHS to establish an AI safety program and certain developers to share test results. So don’t hold your breath for any federal data protections to emerge from AI legislation, either.

Given this reality, what can healthcare companies—in the U.S. or any country without federally mandated data governance frameworks—do to build patient trust in technologies that show real potential for medical progress when used responsibly?

Current state of healthcare data governance in the U.S.

At the moment, the United States is operating with fragmented systems when it comes to healthcare data. There’s the ENACT Network, which allows researchers at Clinical and Translational Science Award (CTSA) hubs to conduct electronic health record (EHR)-based studies on any disease or condition within a network of over 142 million patients. There’s also the N3C Data Enclave, which the NIH describes as “a centralized, secure, national clinical data resource with powerful analytics capabilities that the research community can use to study COVID-19.” But without a unified federal framework that explains how to safely and securely contribute patient data, some healthcare providers are hesitant to participate in these programs.

Additionally, there are “no specific regulatory pathways for AI-based technologies” in the U.S., according to the NIH. Instead, the FDA evaluates such technologies under the existing regulatory frameworks for medical devices. However, the FDA admits these frameworks were “not designed for adaptive artificial intelligence and machine learning technologies,” and that “many changes to artificial intelligence and machine learning-driven devices may need a pre-market review.”

Given the lack of progress in the U.S. on both data governance and AI regulation, it’s no surprise that American sentiment toward AI in healthcare is lukewarm at best. Less than a third of consumers (30%) are comfortable getting medical advice from AI, according to a 2025 Qualtrics study. Additionally, a Pew Research Center study found that 37% of Americans believe AI will worsen patient data security, while 57% expect it to harm provider-patient relationships.

Whether concerns over AI in healthcare are due to accuracy, data privacy, or both, healthcare companies that are considering AI implementation will want to be extra intentional about building and maintaining trust with patients.  

Building patient trust in AI in the absence of federal frameworks   

Here’s what healthcare leaders should do to build trust in their use of AI technology, whether or not they’re based in a nation with strong data protection frameworks.

Communicate AI’s role in patient care

Clear communication regarding AI usage is vital for building trust with patients. Healthcare providers must go beyond basic privacy notices to help patients understand how AI enhances their care while respecting their autonomy. Organizations that proactively communicate about their AI usage and data handling show respect for patient concerns while fostering greater acceptance of these technologies. Here are key actions organizations should take:

Explain AI-assisted decisions in straightforward language. Healthcare providers should describe their data processes and AI applications in plain language that patients can understand. Chief Medical Officers (CMOs), Chief Privacy Officers (CPOs), Chief Legal Officers (CLOs), Chief Information Security Officers (CISOs), and Chief Technology Officers (CTOs) should meet regularly to coordinate decisions about how AI will influence patient care and to maintain updated privacy policies that reflect current technological practices.

Create accessible resources showing how patient data informs AI tools. This might look something like the Coalition for Health AI (CHAI)’s applied model cards, which act as nutrition labels for users considering the adoption of AI tools. Resources should clearly explain what data the AI uses, how it makes decisions, and what safeguards are in place to protect patient privacy. CMOs and Chief Compliance Officers (CCOs) should make these explanations available through multiple channels, ensuring patients understand how AI tools support their care.

Enable patients to track how their data is used in AI systems. Whether through a website or a patient portal, make sure patients can easily opt in to, or out of, having their data used to train AI models. Chief Digital Officers (CDOs) and Chief Experience Officers (CXOs) should provide clear visibility into when and how AI is being used in patients’ care decisions, similar to how patients can view test results or provider notes. This transparency helps patients feel more in control of their healthcare data while building trust in AI-assisted care.
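To illustrate what this could look like under the hood, here is a minimal Python sketch of a patient consent record that an AI training pipeline might check before using any data. The class, field names, and filtering function are hypothetical assumptions for illustration, not tied to any particular EHR or portal vendor.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical consent record; field names are illustrative only.
@dataclass
class AIDataConsent:
    patient_id: str
    allow_model_training: bool      # patient opted in to training-data use
    allow_ai_assisted_care: bool    # patient accepts AI-assisted decisions
    last_updated: datetime

def filter_training_records(records, consents):
    """Keep only records from patients who opted in to model training."""
    opted_in = {c.patient_id for c in consents if c.allow_model_training}
    return [r for r in records if r["patient_id"] in opted_in]

# Toy example: only the opted-in patient's record survives the filter.
consents = [
    AIDataConsent("p-001", True, True, datetime.now(timezone.utc)),
    AIDataConsent("p-002", False, True, datetime.now(timezone.utc)),
]
records = [{"patient_id": "p-001", "bp": 120}, {"patient_id": "p-002", "bp": 135}]
print(filter_training_records(records, consents))
```

In practice, the same consent flags would also drive the patient-facing portal view described above, so that what patients see matches what the pipeline actually enforces.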

Ensure fair and responsible AI implementation

Responsible AI implementation requires healthcare companies to establish robust processes that ensure fairness, transparency, and accountability. Organizations like the Mayo Clinic are leading the way with initiatives like their Digital Hippocratic Oath, which commits to developing AI systems that prioritize patient wellbeing. Here are key steps healthcare companies should take to implement AI responsibly:

Test AI systems across diverse patient populations. AI algorithms can be biased if their training data doesn’t accurately reflect the diversity of the population, which can lead to inaccurate predictions and reinforce health disparities. For example, a tool used to predict the success of vaginal birth after cesarean section was found to be biased against African American and Hispanic patients; it was later revised to remove race as a factor, demonstrating the importance of testing to ensure AI tools aren’t perpetuating health inequities. Chief Artificial Intelligence Officers (CAIOs) and Chief Data Officers (CDOs) should oversee this testing (see the sketch after this list).

Establish clear protocols for overriding AI recommendations. A 2024 NIH article states that, “in cases where AI-driven recommendations may lead to adverse outcomes or harm to patients, healthcare providers must be prepared to intervene decisively . . . even if it means overriding or disregarding algorithmic suggestions.” CMOs, CCOs, CAIOs, and department heads should collaborate on override protocols to safeguard patient wellbeing.

Document AI system limitations and potential biases. As of July 2024, healthcare organizations must comply with HHS’s anti-discrimination rule by implementing systematic bias detection and mitigation strategies for their AI tools. Whether or not this rule stands in the future, third-party assurance labs are an effective way to validate AI systems by evaluating tools against safety, efficacy, and fairness standards before deployment. CAIOs, CMOs, and CCOs should oversee this process.

Monitor and report AI system performance metrics. CAIOs, Chief Analytics Officers (CAOs), and Chief Quality Officers (CQOs) should establish KPIs that track both the technical accuracy and the clinical outcomes of AI systems. Regular performance reviews should examine error rates, disparities across patient groups, and instances where provider override was necessary (illustrated in the sketch after this list). This data helps identify potential issues before they impact patient care and demonstrates a commitment to quality assurance.

Create feedback channels for patients and providers. CMOs, CXOs, and Chief Information Officers (CIOs) should implement structured processes for both patients and clinicians to report concerns about AI-assisted decisions. This feedback loop is crucial for identifying unexpected issues, understanding user experiences, and continuously improving AI systems. Regular surveys and focus groups can supplement these channels by proactively gathering insights about AI tool effectiveness and user trust.
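To make the subgroup testing and performance monitoring steps above concrete, here is a minimal Python sketch that computes error rates per demographic group and the clinician override rate from logged decisions. The data shapes and field names are illustrative assumptions, not a specific vendor’s schema or a prescribed methodology.

```python
from collections import defaultdict

def error_rates_by_group(outcomes):
    """Compute error rate per demographic group.

    `outcomes` is a list of dicts with keys: group, prediction, actual.
    The structure is illustrative only.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for o in outcomes:
        totals[o["group"]] += 1
        if o["prediction"] != o["actual"]:
            errors[o["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

def override_rate(decisions):
    """Share of AI recommendations that clinicians overrode."""
    overridden = sum(1 for d in decisions if d["clinician_overrode"])
    return overridden / len(decisions) if decisions else 0.0

# Toy data to show the shape of the reports
outcomes = [
    {"group": "A", "prediction": 1, "actual": 1},
    {"group": "A", "prediction": 0, "actual": 1},
    {"group": "B", "prediction": 1, "actual": 1},
]
decisions = [{"clinician_overrode": True}, {"clinician_overrode": False}]

print(error_rates_by_group(outcomes))  # e.g. {'A': 0.5, 'B': 0.0}
print(override_rate(decisions))        # 0.5
```

Large gaps between groups, or a rising override rate, are the kind of signals a quarterly performance review would escalate for investigation.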

Maintain compliance through strong data governance

Any technology that involves the use of protected health information (PHI) must comply with HIPAA in the U.S. and the GDPR in Europe. Here are some essential compliance steps to help healthcare companies in the U.S. prioritize patient privacy while leveraging AI capabilities:

Establish AI oversight. Form governance teams including CMOs, CISOs, CLOs, CPOs, and Chief Data Scientists (CDSs) to monitor PHI usage in AI systems. This team should meet monthly to review AI system performance, assess compliance risks, and update governance policies as technology evolves.

Strengthen contracts. Task CLOs and CCOs with updating agreements to address AI-specific PHI handling requirements. This team should review and update all vendor agreements annually, with special attention to data processing, security requirements, and breach notification procedures.

Educate staff. Direct CLOs and CPOs to implement HIPAA compliance programs specific to AI usage of PHI. Training should cover both technical aspects of AI systems and compliance requirements, with quarterly updates.

Create guidelines. CLOs and CPOs should establish clear protocols for data handling, including specific requirements for encryption, access controls, and audit trails, and distribute these protocols to all business partners (a minimal audit-trail sketch appears after this list).

Assess risk regularly. CISOs and CCOs should conduct quarterly HIPAA risk evaluations of AI systems using PHI, focusing on potential vulnerabilities and compliance gaps.

Maintain transparency. CPOs and CCOs should clearly document AI usage of PHI in privacy notices and business materials. They should also create different levels of documentation for various stakeholders: detailed technical and compliance reports for internal teams and regulators; clear explanations of data usage for providers; and accessible summaries for patients and families.

Secure expert guidance. CEOs and CIOs should regularly engage external auditors and consultants to review compliance programs and suggest improvements.
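As one concrete example of the audit-trail guideline above, here is a minimal Python sketch that writes a structured log entry whenever an AI system touches PHI. The function, field names, and log destination are illustrative assumptions; a production system would write to an append-only, access-controlled store rather than standard output.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger for PHI access by an AI system.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("phi_audit")

def record_phi_access(system_id: str, user_id: str, patient_id: str, purpose: str) -> None:
    """Write one audit-trail entry for a PHI access event."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,      # which AI tool touched the data
        "user_id": user_id,          # clinician or service account
        "patient_id": patient_id,    # reference only; no clinical detail logged
        "purpose": purpose,          # e.g. "model_inference", "model_training"
    }
    audit_log.info(json.dumps(entry))

# Example: an AI triage tool reads a record to produce a recommendation
record_phi_access("triage-model-v2", "dr_smith", "p-001", "model_inference")
```

Entries like these give CISOs and CCOs something concrete to sample during the quarterly risk evaluations described above.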

A path forward for healthcare companies without federal data frameworks

Given that the U.S. is unlikely to see a federally sponsored data governance framework, healthcare leaders should take the following steps to build trust in their organization’s use of AI tools:

‐ Communicate AI’s role in patient care.
‐ Ensure fair and responsible AI implementation.
‐ Maintain compliance through strong data governance.

Healthcare leaders should also consider making their organization a CTSA hub if it isn’t already. Participating institutions can submit ENACT queries to obtain EHR data for use in research studies. Ongoing research initiatives further strengthen patient trust in AI by demonstrating a commitment to improvement, transparency, and patient engagement.

Lauren Spiller

Enterprise Analyst, ManageEngine

Lauren Spiller is an enterprise analyst at ManageEngine. She helps business leaders navigate their digital transformation journeys by covering AI, data privacy, and other technology-related issues. Lauren previously wrote content for Gartner. Before that, she taught college writing and served as the writing center assistant director at Texas State University. She has presented at the European Writing Centers Association, Canadian Writing Centres Association, and the International Writing Centers Association conferences. Lauren holds a B.A. from Ashland University and an M.A. from Texas State University.
