Transcript
Introductions and ModelOp Overview
Evan Kirstel: Hey everybody, fascinating chat today on AI governance with a true innovator in the space at ModelOp. Jim, how are you?
Jim Olsen: Doing good. Doing good today. How are you doing?
Evan Kirstel: I’m doing great. Thanks so much for joining from your off-the-grid location. I see the solar in the background. You’ve got Starlink going—really intriguing. But before all that fun stuff, maybe introduce yourself and what’s the big idea behind ModelOp.
Jim Olsen: Sure. I’m Jim Olsen. I’m the Chief Technology Officer of ModelOp and actually did the architecture and design of the original system, so I know a lot about the space. Some of my colleagues and I have been working in this area for 10-plus years. We have a lot of knowledge about not only newer generative AI but also traditional AI, statistical techniques, and how they affect business.
That’s how we created the ModelOp solution—to bring in full lifecycle management of all kinds of models, everything from an Excel spreadsheet to an LLM foundational model, and now Agentic AI solutions as well. We put that together to make the process a lot easier because we found a lot of companies are struggling to get these technologies into production.
AI Governance Benchmark Report
Evan Kirstel: Fantastic mission, and you released a governance benchmark report on AI recently—ideal for this audience. Let’s start with the big picture. What was the big idea or motivation behind this benchmark study, and what did it tell us?
Jim Olsen: A lot of it was understanding where companies are at with their AI solutions. There’s a lot of disparate information and a lot of articles. If you listen to the Silicon Valley digital-native companies, everybody’s using it for absolutely everything—it’s the next big thing. Then you talk to some enterprises, and they’re more hesitant: How does this impact my business? What am I willing to put into place?
There wasn’t a lot of clarity about what the plans for enterprise businesses actually are in different spaces. What we found is a lot of companies are struggling to build trust within their organizations about these solutions because they don’t have the insight.
We’re seeing a lot of IT departments pushing back because they’re finding shadow AI. We’ve seen things where hospitals were pasting patient data into ChatGPT to get summarizations—obviously that’s a huge risk, breaking several laws. So, how do they get these processes in place? That’s where we saw companies struggling with these concepts.
In the report you can see a lot of findings: a lot of people are playing with AI but not getting solutions out quickly, so they’re losing business value. Understandably, they also want to make sure they have trust in these solutions.
Evan Kirstel: Well done. And one of the headline stats from the report: 56% of GenAI projects take 6 to 18 months to reach production. How do we get out of this kind of quagmire?
Jim Olsen: That’s the foundation of why we built the company. Much like software in the 90s, people would develop at their desks and just throw it into production. There weren’t really processes, and things broke. Programming is deterministic in nature, so you could put processes in place. Now no company would operate without a CI/CD pipeline—it’s common practice. Back then, those didn’t exist.
We found those same kinds of processes for enabling efficient deployment and providing insight and reproducibility didn’t exist for AI models, foundational models, etc. So, we created a process where you can automate a lot of this to make it easier. Software is deterministic; AI models are the exact opposite.
What can you put in place to provide those insights and build trust so you don’t hit all these red flags? When we deploy our solution, customers are easily cutting that time in half or more, depending on how sophisticated they were to start. We’re creating a formalized repository where people can see who’s using what for which use cases, what’s approved, what they can leverage, and so on.
Having that centralized inventory and an automated lifecycle process drives these solutions out to production.
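The centralized inventory described above can be pictured as a simple registry keyed by model, tracking owners, use cases, and approval status. This is a hypothetical sketch for illustration—the class and field names are invented here, not ModelOp's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    # Illustrative inventory entry; fields are assumptions, not ModelOp's schema.
    name: str
    model_type: str                      # e.g. "LLM", "statistical", "spreadsheet"
    use_cases: list = field(default_factory=list)
    owner: str = ""
    approved: bool = False

class ModelInventory:
    """Minimal centralized registry: who's using what, and what's approved."""
    def __init__(self):
        self._records = {}

    def register(self, record: ModelRecord):
        self._records[record.name] = record

    def approved_for(self, use_case: str):
        # Which approved models can a team leverage for a given use case?
        return [r.name for r in self._records.values()
                if r.approved and use_case in r.use_cases]

inventory = ModelInventory()
inventory.register(ModelRecord("doc-summarizer", "LLM",
                               use_cases=["doc-summarization"],
                               owner="data-science", approved=True))
inventory.register(ModelRecord("churn-model", "statistical",
                               use_cases=["churn-prediction"],
                               owner="marketing"))       # not yet approved
print(inventory.approved_for("doc-summarization"))       # ['doc-summarizer']
```

Even a registry this minimal answers the questions Jim raises: what exists, who owns it, and which models are cleared for which use cases.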
Sector Differences and AI Governance Challenges
Evan Kirstel: You surveyed a number of sectors—financial services, pharma, manufacturing. Did any particular vertical stand out in terms of maturity or challenges with AI governance?
Jim Olsen: Oddly enough, the ones that faced the most challenges early are now the most mature, because they were forced into it. Some of our very first customers were financial institutions—heavily regulated. You can’t have models making trades or predictions about loans without their being well scrutinized and understood. That was the first place with big challenges because they could be audited constantly and had to have everything documented.
I don’t remember which bank it was, but there was a multi-billion-dollar fine for not doing this properly. So, there’s a lot at stake. Necessity is the mother of invention—you’d see a lot of homegrown processes that weren’t always effective because they weren’t stepping back from their business to develop them. Our solution is more neutral and takes all these ideas into account to create a more efficient approach.
We have several firms in the financial sector. What we’re starting to see now is AI coming into healthcare, where we’re literally talking about life-or-death decisions. I see that space as having even more challenges because they weren’t born and bred in the statistical rigor of finance, where things are well laid out and regulated.
Healthcare has patchwork legislation across different states about what AI can be used for, and of course, there are very real concerns. Nobody wants these models to blow up and stain their reputation. For example, there was a case in healthcare where long-term care was less recommended for minority groups than non-minority groups, leading to a lawsuit. These are real-world situations that come up in life-or-death contexts.
Evan Kirstel: Yeah, we all saw the challenges IBM Watson faced taking an early stab at the healthcare space. I think we’ve matured since then, but there’s still a lot of work to do. You also mentioned the report shows 50-plus generative AI use cases in many companies, but only a handful make it to production. What’s the disconnect? Why the drop-off?
Jim Olsen: To be fair, there’s always a natural drop-off. Everyone has great ideas, but bringing them to production is another story—there’s always revision there. But what’s really driving it now is the lack of trust.
People are skeptical because these solutions are non-deterministic. If you have a model that predicts whether a cell is cancerous, that’s fairly testable and verifiable with known labeled data. But if you ask it to summarize a patient record into a recommendation, or a company prospectus into something actionable for investing, that’s not deterministic.
People are skeptical because they can’t be sure. Foundational models sound very professional and intelligent but aren’t always factual—they’re convincing when wrong. One bad recommendation is harder to overcome than a thousand correct ones. People remember where it went wrong.
So, how do you build trust? You need to look holistically at the foundational model, its applicability to your use case, and the risks and mitigations you’ve put in place. Tying all the models, agents, tools, and resources into a single pane-of-glass inventory—like we provide—helps give clarity.
You can see where else it’s been used successfully. You can build trust when you know it’s been reviewed, risks were identified, and mitigations were put in place. But you can’t have that story spread across Excel files, SharePoint, or Jira tickets. That doesn’t work.
Our software helps pull all of that together into one place with documents, risks, findings, and so on. By automating it and making it readily available, you make the model lifecycle manageable. Otherwise, it becomes overwhelming.
Fragmentation, Silos, and Technical Debt
Evan Kirstel: Amazing—spreadsheets for AI governance. What's this, 1999? Come on. We need to up our game a bit. And that’s for you, healthcare, with your fax machines and email—it’s like a zombie that just won’t die. The other challenge in the enterprise is fragmentation: lots of silos, lots of technical debt. What does that look like in the real world in terms of impacting AI at scale?
Jim Olsen: Obviously, if you have different people taking entirely different approaches, using different technologies without any consistency, it creates a big burden—not just for getting things deployed but also for reviewing those technologies.
A lot of this is because these efforts are grassroots. It’s often not centralized at a higher level. Water finds its own level—the individual groups pick their best-of-breed tools and run with them, often without knowing what other teams are doing because they can’t find them.
In very large companies, that’s just a reality. Cross-business unit collaboration is a challenge. That’s why you want a centralized process and understanding—and the ability to automatically generate findings like, “Hey, have you thought about this or that? We’ve already seen this be an issue elsewhere.” Or, “Who’s the responsible person to talk to?”
Without some kind of centralized, understandable, and automated process, there’s inconsistency even in the process itself, which frustrates teams. You’re not standing on the shoulders of giants within your own company—you’re all trying to forge your own way, and that never works out as well.
Regulatory Risks, Compliance, and Brand Impact
Evan Kirstel: Let’s talk risks. There are still lots of landmines out there on the regulatory side—compliance risk, fines, and other challenges. What do you advise customers to be aware of when it comes to real-world exposure?
Jim Olsen: It’s not just regulation. Regulations are important, obviously—if you’re not compliant, it’s pretty cut and dried that you’re going to get in trouble. How much trouble depends on how much process you can show.
Nobody’s going to be perfect. If you did nothing and ignored it, they’ll be much harsher on you than if you tried your best. Even then, things can still go wrong. If it goes wrong because of a black swan event, you’re probably not going to get in too much trouble from a regulatory standpoint.
But more importantly, there’s your brand. It’s not just about fines. Especially in consumer products—and we work with several in that space—your whole value is in the customer’s perception.
Think about toilet paper brands: there are differences, sure, but you make money on brand recognition and trust in quality. If you put out an AI solution that blunders, that can damage your brand.
Look at McDonald’s with its automated ordering system. There were videos of people saying, “I’ll take one fry,” and it added 11 fries. “Remove that—I only wanted one,” and it went to 12. It made them look foolish. Did it destroy McDonald’s? No. But it hurt the brand.
These hits have real financial impacts, even if they’re hard to measure. They can cause even more problems in the long term than a government fine.
Competitive Landscape and Agentic AI
Evan Kirstel: You’re in a very hot space right now. A third of companies are evidently budgeting $1 million or more annually on AI governance software. Congratulations on being in a hot market segment. Maybe talk to us about your space in general—where it’s headed and how you see yourself competing versus other players out there.
Jim Olsen: One of the biggest things for us is staying ahead. Just building a RAG architecture or a foundational model—that’s kind of yesterday’s news. Yes, it’s still highly relevant for organizations, and we still focus on that, but all the buzz now is around Agentic AI.
Agentic AI has even bigger implications. You’re literally giving autonomy to foundational models to make decisions that actually change data in your database, send emails, and so on. That’s what we’ve been working on specifically—how do we bring Agentic AI solutions into the model lifecycle process?
We’ve done a lot of work there. We even have webinars on our website about it. You can start to manage these things with tools like MCP. Anthropic’s MCP (Model Context Protocol) kind of won the tool war for how LLMs communicate with things that can effect change or access data.
We’ve incorporated Agentic tools right into our solution so you can use Agentic AI for model governance itself. More importantly, we also provide ways to approve MCP tools for use, know which use cases are allowed, and set filters—like PII protection. For example, if a model has access to PII data, you can block any PII data from coming out of it.
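A PII guardrail like the one described here can be sketched as a filter wrapped around an approved tool's output. This is a hypothetical illustration using simple regex redaction—the function names and patterns are invented for this sketch, and a real deployment would use a vetted PII-detection library rather than two regexes:

```python
import re

# Hypothetical PII patterns for illustration only.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_pii(text: str) -> str:
    """Redact PII before a tool's response reaches the model or user."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

def guarded_tool_call(tool, *args):
    # Wrap an approved tool so every response passes the PII filter.
    return redact_pii(tool(*args))

def lookup_patient(record_id):
    # Stand-in for a real MCP tool with access to sensitive records.
    return "Patient 42, contact jane@example.com, SSN 123-45-6789"

print(guarded_tool_call(lookup_patient, 42))
# Patient 42, contact [REDACTED-EMAIL], SSN [REDACTED-SSN]
```

The point of the wrapper is that the filter sits in the governance layer, not in each agent: any tool a model is approved to call gets the same protection automatically.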
We’ve been building for that space, knowing these Agentic AI solutions are going to change the landscape. As companies put these in place, they need to know what they’re doing, where it’s used, and how to prevent issues—like an agent with full autonomy suddenly deciding to sell all of your stock.
Evan Kirstel: Got it. For sure. Including personal danger—I’m thinking about getting into these robo-taxis now all the time. I’m always scratching my head about how that’s going to go, but I digress. When you talk to a customer who’s skeptical or uncertain about where to start, how to prioritize this journey, what’s your advice?
Jim Olsen: We suggest what we call Minimal Viable Governance. It’s the minimum you need to do. If you try to start by doing it all, you’re never going to get there.
It’s just like coding. We use an iterative approach now instead of the old waterfall design. Same with governance: get started. Start small with the essentials. That will vary by business. If you’re a financial institution, your minimum level is a little higher than if you’re just protecting your brand.
Get those processes in place, understand what’s there, and then iterate—continue to grow and add. Our solution supports a configurable approach to model lifecycles that doesn’t require writing code or changing the product itself. That enables an iterative process that can evolve.
If a new regulation comes out tomorrow, you can plug it in. The key is: don’t wait. The problem’s only going to get worse. Get started now. Even having any process means you know what’s going on, versus burying your head in the sand and waiting until it bites you.
It’s much harder to unravel it later when there’s a whole bunch of these models out there, versus getting started now when, as the report says, only so many are in production. You want the process in place to help govern that backlog and push good efforts into production.
Personal Background and Closing
Evan Kirstel: Great advice. We’re halfway through the year—I can hardly believe it! What are you up to in the second half? Any travel or events beyond the summer? What’s on your radar?
Jim Olsen: We’re attending a whole bunch of different things. Honestly, I don’t even know all of them because I don’t go to every single one. We just recently went to the CHAI conference at Stanford, talking about AI usage in healthcare.
We’ve got CDAO conferences we’re going to, constantly doing webcasts, and we do our own webinars. I just presented one last week on Agentic AI and what we’re doing there. A lot is still virtual nowadays, but we’re also doing in-person events with conferences that are starting to pick up.
We’re really participating in a lot of different things. This is such a fast-moving space that things come up—you never know where you’re going to go next week.
Evan Kirstel: Exactly. Speaking of virtual, I’m admiring your real background, not a virtual one. What’s up for the summer in Colorado? Any hiking, fishing, hunting, or birdwatching? What do you get up to there in the woods?
Jim Olsen: Yeah, there’s the wildlife we get to watch right from the deck. We see moose, elk, marmots—everything comes right up. We have 14 acres here, and there are a lot of beetle-kill trees, so I’m always working on cleaning that up.
I don’t need a gym membership—I get my workout moving trees around. We also get out into the woods for hiking, take our UTVs around, and just enjoy nature where we can.
Evan Kirstel: Fantastic. Well, thanks for joining and taking some time away from all that. Congratulations on all the success—onwards and upwards!
Jim Olsen: Absolutely. Thank you for taking the time to talk with me today. I really appreciate it.
Evan Kirstel: And thanks everyone for listening, watching, and checking out our new TV show at techimpact.tv, now on Bloomberg and Fox Business. All right, take care.