
Trust in Healthcare AI Starts with Governance
ModelOp CEO Pete Foley is calling on healthcare leaders to rethink how they approach AI, starting with governance. In his recent article for MedCity News, Pete argues that transparent, well-governed AI systems are essential for building trust with patients, practitioners, and regulators alike.
Pete doesn’t mince words: “Trust in healthcare AI is not a feature—it’s an outcome.” And achieving that outcome requires structure. In the article, he identifies three pillars for making AI trustworthy by design:
- Visibility: Organizations need a clear inventory of the AI models in use, including each model's purpose, data sources, and risk level (a minimal sketch of such an inventory record follows this list).
- Controls: Policies, guardrails, and oversight must be embedded into the AI lifecycle—not added after the fact.
- Reporting: Leaders need the ability to demonstrate compliance and explain outcomes to both internal and external stakeholders.
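
To make the visibility pillar concrete, here is a minimal sketch of what one model-inventory record might look like, with a toy control and report layered on top. This is an illustration only: the field names (`purpose`, `data_sources`, `risk_level`, `owner`), the risk tiers, and the `flag_for_review` helper are assumptions for the example, not ModelOp's actual schema or API.

```python
# Illustrative sketch only: field names and risk tiers are assumptions,
# not ModelOp's actual schema.
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class ModelRecord:
    """One entry in an AI model inventory (the 'visibility' pillar)."""
    name: str
    purpose: str             # why the model exists
    data_sources: list[str]  # where its training/input data comes from
    risk_level: RiskLevel
    owner: str               # accountable person or team


def flag_for_review(inventory: list[ModelRecord]) -> list[ModelRecord]:
    """A toy control: high-risk models are routed for human oversight."""
    return [m for m in inventory if m.risk_level is RiskLevel.HIGH]


if __name__ == "__main__":
    inventory = [
        ModelRecord(
            name="readmission-predictor",
            purpose="Predict 30-day readmission risk",
            data_sources=["EHR discharge summaries", "claims data"],
            risk_level=RiskLevel.HIGH,
            owner="clinical-analytics",
        ),
        ModelRecord(
            name="appointment-no-show",
            purpose="Forecast appointment no-shows for scheduling",
            data_sources=["scheduling history"],
            risk_level=RiskLevel.LOW,
            owner="operations",
        ),
    ]
    # A simple report: which models need extra oversight (the 'reporting' pillar)
    for m in flag_for_review(inventory):
        print(f"REVIEW: {m.name} ({m.purpose}) owned by {m.owner}")
```

Even a sketch this small shows how the three pillars reinforce each other: the inventory provides visibility, the review rule embeds a control into the lifecycle, and the output gives leaders something auditable to report.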
These principles aren’t just theoretical. Pete connects them directly to emerging regulations like the EU AI Act and the U.S. Executive Order on AI, noting that proactive governance isn’t just about reducing risk—it’s about enabling innovation at scale.
As Pete puts it: “We can’t afford to treat AI trust as an afterthought — we need to operationalize it.”