In this episode of The Medical AI Podcast, host Dr. Felix Beacher is joined by Dave Trier, VP of Product at ModelOp, to explore what it really takes to bring AI into healthcare at scale.
They discuss the promise of AI for patients, providers, and pharmaceutical innovators, and why the biggest barriers to adoption often aren’t technical but organizational. From trust and acceptance to data access, litigation risks, and clinical workflow disruption, Dave shares lessons from helping large enterprises commercialize AI responsibly.
Hear how healthcare leaders can strike the right balance between innovation and governance, avoid common pitfalls, and build repeatable processes that accelerate safe, impactful AI adoption.
What You’ll Learn in this Episode:
- Why organizational processes, not just technology, are the biggest barriers to AI adoption in healthcare.
- How trust and acceptance—among both clinicians and patients—shape the success of AI systems.
- The role of governance, testing, and guardrails in preventing risks like hallucinations and misinformation.
- How healthcare leaders can streamline commercialization of AI solutions while managing compliance and litigation risks.
- Real-world lessons from radiology and genomics that show both the pitfalls and promise of medical AI.
Transcript
1. Introduction to Medical AI Podcast
Welcome to The Medical AI Podcast with Dr. Felix Beacher. Strap in for a thirty-minute ride of big ideas.
Let’s imagine a utopian future of medical AI: health implants and wearables constantly monitor your health. AI systems process those data in real time and recommend treatments, combining them with information from your genome. Recommended drugs are more efficacious and cheaper than those of 2025. Life expectancy is 100 or more, and old age is longer, healthier, and groovier than ever.
2. Challenges in Achieving Medical AI Goals
Dr. Felix Beacher: But getting there is another thing. And the problem may not be so much with the technology. Perhaps what is needed is a fundamental shake-up of healthcare systems themselves. Joining me to discuss this very important area is Dave Trier. Dave, a very, very warm welcome to the podcast.
Dave Trier: Thank you so much, Felix. Pleasure to be here.
Dr. Felix Beacher: So, Dave, please introduce yourself to the listeners and explain a little bit about what it is that you do.
Dave Trier: Excellent. Yep. My name is Dave Trier, Senior Vice President of Product at a company called ModelOp. We focus on helping large healthcare and other organizations to scale the use of AI safely, effectively, and rapidly.
3. Realistic Perspectives on Medical AI
Dr. Felix Beacher: Now, I’ve given a fairly fanciful, impressionistic overview of what medical AI might be like in some kind of future. Could you describe, from your own point of view and perhaps in a more realistic way, what you think the promise of medical AI is?
Dave Trier: Yeah. So I’ll first start with that of the patient—or the consumer, if you will—and what that looks like. And I don’t believe it’s very far off from what you mentioned: being able to have real-time information, whether it’s CGMs or others, as well as combining some of that with your history, your past, and also your genetics.
And getting an understanding of everything about you from a health perspective, but more importantly, being able to quickly understand, “Alright, how does this relate to current conditions that I might have, as well as current treatments that might be asked of me?” So I’m working through current treatment plans.
How do I decipher some of that “Greek,” for lack of a better term, that’s around different treatments, diseases, and scenarios? How do I turn that into something I can understand—relating it back to my current information to give a more tailored, personalized understanding of what exactly I need to do?
So for me, from that patient or consumer perspective, it’s insight and understanding of what’s happening—and really a very casual, easy-to-understand view of exactly what I need to do at that point in time.
If I turn to that of, say, a provider: for a provider, it’s all about helping and assisting with making clinical and other decisions. Making sure that we are processing large amounts of data—deciphering things such as radiology PACS imaging, as well as notes that might have been provided from other clinicians—to help inform decision-making. It’s about making that life easier, taking stress and burden off the day-to-day drudge work, so providers can focus on a truly patient-centered view.
Then lastly, from that of the pharmaceutical or med devices industry: it’s about getting groundbreaking new devices or medicines into the market faster, because we have just a wealth of evidence. Being able to combine what we’ve seen in clinical studies with research on related medicines or devices in the past. For them, it’s about speeding time to market, but still in a way that is safe and effective.
4. Stakeholder Perspectives on Medical AI
Dave Trier: So I just wanted to give a couple different views from different stakeholders and personas. Hopefully, that makes sense, Felix.
Dr. Felix Beacher: Yeah, absolutely. Now, I try not to be too cynical. But people could be forgiven for thinking, “Well, hang on. Back in 2020, we’d already had machine learning systems around for a few years, and not very much had changed as far as going to the doctor, joining a waiting list, and so on. And nothing much has changed in the five years since.” You can forgive people for being a little cynical about the pace of progress of AI systems in healthcare. Could you comment on that?
5. Barriers to AI Integration in Healthcare
Dave Trier: Yeah, absolutely. So if you think about the problems we’re looking at, they’re pervasive—I’ve seen them throughout years of working with healthcare systems.
One, you just have some general organization and process challenges. Most healthcare provider systems, as well as pharmaceutical and med device companies, are very large and complex. There are a lot of different departments, teams, and stakeholders involved. So just overcoming some of the process and organizational challenges is an area that has to be addressed in order to truly leverage AI throughout the organization.
Then there are technology barriers. A lot of those are being ironed out in other industries, so medical AI should be able to take advantage of lessons from outside healthcare.
But the biggest one to me, Felix, is trust and acceptance. How can we trust that AI is actually making the proper decisions? That it’s not hallucinating or providing misinformation or misguided information? That’s a major challenge, both internally within healthcare organizations and across the patient and consumer population in general.
6. Trust and Acceptance of AI in Healthcare
Dr. Felix Beacher: Okay, well, why don’t we take some of those factors in turn? Let’s focus a little bit more on the trust and acceptance issue. What’s your sense about how that’s changing over time in healthcare?
Dave Trier: Yeah. I think especially with generative AI and things such as ChatGPT becoming more pervasive with the general population, consumers at large are starting to use it more. They use it in their everyday life. They start to get an understanding of what it can do—and what it can’t do.
So I think just in general, having that broader usage of generative AI is helping. It’s a step on the way, if you will, to helping people trust and accept. That’s not what it was like in the past. I know you talked about ML and the cynicism around, “Oh, we tried ML in the past.” But it’s different.
Machine learning and traditional AI were really just pockets of teams—data scientists and others—sitting in their labs, cooking up use cases. They were the only ones who understood it or used it, and it was kind of hidden from the general population.
What’s different now is generative AI—it’s everywhere. My grandma uses ChatGPT. So it’s just a different time. Everybody has been exposed to it, and that naturally helps build trust.
The second thing I would say, though, is rigorous testing and guardrails. Especially for generative AI in medical contexts, there need to be accepted patterns or processes for what a system can and can’t do. You don’t want it going off the rails, so to speak, or hallucinating.
So the second piece around trust and acceptance is just rigorous testing, validation, and approval capabilities. Making sure that we’re using these systems purposefully for specific use cases, with clear results, guardrails in place, and even negative testing to try and break it. That kind of rigor helps build confidence that the system will stay in line.
7. Differing Trust Levels Among Stakeholders
Dr. Felix Beacher: No, I think this is very important. Of course, the trust and acceptance issue is interesting because it differs for clinicians—who presumably know when machine learning or AI systems are being used—and the general public, who most of the time probably don’t even know when AI systems have been used. So I guess trust and acceptance is more important with clinicians than with the general public. Do you think that’s fair?
Dave Trier: I think it’s actually both. But certainly clinicians, being highly educated, will be much more cynical. They know the right answer, so they will want very deep details and a deep understanding of how an AI system came to a certain conclusion.
Oftentimes, what we see is link-backs. The system will answer a question, and then it will say, “Here’s where I got my answer from.” For clinicians especially, those sources need to be trusted—journals or other authoritative sources. Giving clinicians those details helps establish trust.
On the consumer side, they also want to know when an AI system is being used. This is where you see some of the regulatory frameworks—for example, Texas House Bill 2060 or the California Attorney General’s requirements—that organizations must publish which AI systems are being used and notify patients when AI is involved behind the scenes.
So yes, consumers do want to know. Many are becoming more astute, asking: “Where did this information come from? Was it from a trusted and reliable source?” Clinicians and consumers differ in the depth of expectation, but both groups want understanding and trust.
8. Commercialization Challenges for Medical AI
Dr. Felix Beacher: Okay, so trust and acceptance is a key issue for commercialization of medical AI. What else would you highlight in this area?
Dave Trier: It’s organization and process. We work with a lot of healthcare providers, pharmaceuticals, and other large organizations, and you’d be surprised how few of the challenges are around technical integration, Felix, and how many are about alignment.
In reality, even for one AI system, on average there are about ten different stakeholders who touch it before it ever sees the light of day—medical teams, data science, IT, security, legal, risk, compliance, and so on. There are also typically between five and ten different systems involved in the process: security, data, infrastructure, etc.
So there’s the people side, the technology side, and the process side. And each one can be a barrier. You might have a system that has shown great results, but then it gets stopped because of an organizational or process roadblock.
We see this all the time—teams trying to get innovations out the door quickly, only to grind to a halt because of the sheer complexity of organizational processes in large healthcare systems.
9. Litigation Concerns in AI Adoption
Dr. Felix Beacher: Now, the US is a somewhat litigious society, as people tend to be aware. And in medicine, of course, we are dealing constantly with issues of life and death. Given that, I would imagine that the fear of litigation could be especially pronounced. How much is this a factor in generating a kind of conservatism when it comes to adoption of new technologies like AI?
Dave Trier: Yeah, it’s high—especially for clinical decision-making. This is why a lot of health organizations start with back-office optimization use cases, like nurse triage optimization. They’re safer ground compared to high-stakes clinical uses.
But when it comes to areas like radiology, organizations need much more trust, evidence, testing, verification, and sometimes even independent third-party reviews before they’ll adopt AI.
And often, if they’re purchasing an AI system from a vendor, they’ll require liability agreements. Legal and procurement teams become deeply involved to ensure terms and conditions, liability coverage, and all that are ironed out before adoption.
The challenge is that most large organizations apply a one-size-fits-all process. Whether it’s a simple back-office tool that never touches patient data or a high-impact clinical system, the process is the same—and that slows things down unnecessarily.
10. Challenges of IT Infrastructure in Healthcare
Dr. Felix Beacher: Now, could you also talk about challenges with IT infrastructure in healthcare settings in the US?
Dave Trier: Sure. In short—it’s a challenge. In the US, certainly, but really everywhere.
Healthcare has a combination of legacy infrastructure alongside modern cloud-based systems. The bigger issue, though, is the data. You’ve got siloed datasets across the organization: some in legacy databases or warehouses, some in vendor-based systems, PACS, EHRs, and so on.
And AI doesn’t work without data. You need access to it, you need to train and fine-tune AI on it, and then you need to use it for inferences, predictions, and decision-making. So while infrastructure is part of the challenge, it’s really the data problem that’s the biggest barrier to commercializing AI.
11. Disruptions in Clinical Workflows Due to AI
Dr. Felix Beacher: Well yes, because data, as you say, is the food of AI systems. They can’t exist without it. But at the same time, when a new AI system is introduced, it’s potentially a major disruption to established clinical workflows. Based on your experience, can you think of an example of that going very badly wrong?
Dave Trier: Yeah, a good example is something as simple as helping to summarize clinician notes. On the surface, that sounds great—just summarizing notes from an office visit.
But it can go badly wrong. EHRs are already complex, and when you add an untrained or poorly integrated process on top, it can create chaos. For instance, rolling out a note-summarization tool without enough training or education often leads to clinicians not knowing how to use it properly. They might record things unintentionally, or interact with it in ways that produce atrocious results.
Clinicians are strapped for time. So if they try it once or twice and it fails, they quickly give up and dismiss the tool as worthless.
Without proper change management and training, even a useful system goes unused. That results not just in wasted effort but also negative feedback—clinicians saying, “AI is not what it’s built up to be.”
Dr. Felix Beacher: And as LLMs get integrated further into healthcare, I expect to see even stranger challenges—like the legal world has seen, where lawyers presented fake case law generated by an LLM. In healthcare, we could easily imagine an LLM inventing new diagnostic categories or conditions that don’t exist.
Dave Trier: That’s why, in practice, healthcare organizations are very cautious. They typically use controlled, deterministic systems that may call on LLMs for specific tasks—like searching clinical case studies—but they don’t let AI drive the whole process. Guardrails and repeatable patterns are essential.
12. Positive Examples of AI in Radiology
Dr. Felix Beacher: Sure, okay. Now, I like to ask about disasters and worst-case scenarios because, firstly, they’re fun and secondly, they’re instructive. But why don’t we try to be more positive? I know you have experience with radiology AI systems. Could you talk about examples of radiology AI systems you think are really good use cases of how these technologies have been commercialized?
Dave Trier: Yeah. I talked about some of the challenges—organizational processes, infrastructure, trust issues. So let me give you an example of how it went well in radiology.
We worked with a large healthcare provider in their radiology department. They had a vendor AI technology they wanted to bring in, to help with studies like pulmonary embolisms.
In the past, bringing in a new AI system was a long, rigorous process involving ten different teams and systems, taking six to twelve months or more. But using our software to streamline that process, we reduced it to nine to twelve weeks.
Here’s how it worked: the radiology department registered the AI system as required by best practice and regulatory guidance. The system identified it as clinical, flagged it as higher risk, and triggered the appropriate steps: legal review, procurement checks, vendor testing results, case studies, and validations.
The AI committee then reviewed it, cataloged risks, and approved usage. Finally, the system monitored performance—tracking whether the AI was producing results comparable or better than radiologists.
So instead of a slow, frustrating, ad hoc process, it became streamlined, consistent, and transparent, with trust and oversight at every stage. And that made it possible to get value from the AI much faster while keeping safeguards intact.
13. Key Lessons for AI Commercialization in Healthcare
Dr. Felix Beacher: Okay, so if we just wrap up and tie all of that together, what would you say to leaders in the sector, or government, or the general public about the biggest lessons from your experience overall?
Dave Trier: I think the biggest lessons are:
First, in order to commercialize AI, you need to balance innovation and oversight. You can’t go so far into oversight that it slows innovation to a halt. It’s about finding the “Goldilocks” level of governance—not too much, not too little—so that you have trust, oversight, and visibility, but without stifling progress.
Second, there’s a lot of buzz around AI, and technologists have a natural tendency to chase every shiny new thing. Leaders in large organizations need to focus on vetting use cases carefully—balancing potential impact against risk—and making sure resources go toward the most impactful areas in the short term, rather than chasing distractions.
And third, you need a consistent, repeatable process that guides ideas through vetting, analysis, approvals, usage, and eventually retirement. That way people aren’t left guessing, and expectations are clear across all the teams and stakeholders involved.
14. Innovative AI Applications in Genomics
Dr. Felix Beacher: Apart from lessons about deployment and commercialization, what is your favorite specific medical AI system out there?
Dave Trier: Mine is definitely in the genomics area—using AI, including LLMs, to help with genomics research. I have a personal story around this, so anything that can help drive innovation in genetic diseases is near and dear to my heart.
I’m continuing to watch how generative AI and AI as a whole are evolving and speeding up genomics and genetic-based research.
Dr. Felix Beacher: That’s definitely one to watch. Dave, thank you so much. It’s been very, very interesting talking to you, and best of luck in the future.
Dave Trier: Thanks so much for having me, Felix.
Dr. Felix Beacher: Okay, bye-bye then.
Dave Trier: Have a good one.