Good Decisions: A Monthly Webinar for Enterprise AI Governance Insights

AI's Time-to-Market Quagmire: Why Enterprises Struggle to Scale AI Innovation

Enterprises are pouring millions into AI governance—but many are still struggling to scale their initiatives. In this 30-minute session, ModelOp's experts break down the key findings from the 2025 Benchmark Report and share actionable strategies to overcome the biggest roadblocks—from fragmented systems and manual processes to training gaps and unclear ownership.

Key Takeaways

Organizations are experiencing a massive imbalance between AI ambition and execution. While there's an overwhelming number of proposed AI use cases and initiatives, very few actually make it to production. Most remain stuck in the proposal phase, creating a significant bottleneck that prevents companies from realizing AI's value at scale.

What’s holding enterprise AI back? The data tells a clear story.

The 2025 AI Governance Benchmark Report revealed a growing commitment to responsible AI—with 36% of enterprises budgeting over $1M for governance software. But investment alone isn’t enough. In this exclusive webinar replay, ModelOp’s experts dig into the report’s most surprising—and actionable—findings, including:

  • Why fragmented systems and manual processes remain the #1 blockers to governance adoption
  • The critical training gap that’s slowing AI scale across industries
  • How top-performing organizations are aligning governance with business value

Download the full slide deck to review the stats, charts, and takeaways shared during the session.

Want more expert insights like this? Register for the Good Decisions webinar series to stay ahead of the curve with monthly 30-minute sessions designed for enterprise AI leaders.

Transcript

Introduction to AI Time to Market Challenges

Welcome. I'm Jay Combs, VP of marketing at ModelOp, and joining me is Dave Trier, VP of product. Today, we're gonna unpack what we're calling the AI time-to-market quagmire.

And so these are the challenges you have to navigate when taking AI initiatives from idea to production, achieving ROI, and scaling. And this is based on the findings from our 2025 AI Governance Benchmark Report. There's a lot to cover, so we're gonna go quickly. But again, we'll be sending out this presentation and recording afterwards.

So the idea for doing this report came from the question of how enterprises are actually keeping pace with the accelerating rate of AI and the AI ecosystem: third-party vendor models, agents, software embedded with AI. Beyond just the tech changes, there's the change to business models. Agents are now often consumption-based. And for me in marketing, I have dozens of gen AI or agent use cases that I could potentially use.

But even for initiatives with low risk, like marketing, there are still issues that need to be controlled. Consider, say, an Agentforce sales agent on your website. That could get throttled, and you could potentially rack up massive bills with Salesforce. Now scale that up to higher-risk use cases in HR, in health care, and in financial services, and that lack of control can be really scary.

1. Assessing Risk vs. Innovation in AI

So that leads us to the question: are you scaling risk, or are you scaling innovation?

And this is what our report dives into. So we partnered with Corinium Global Intelligence, and we surveyed 100 senior AI and data executives across a variety of industries to get their insights on both their ambitions related to AI and their ability to execute in bringing those initiatives to market.

So this is the report. You can download it via the link here, or if you really wanna download it right now, you can go to the website. But again, we'll send over the recording after the webinar.

Alright.

2. Key Findings from the AI Governance Report

So a couple of key findings, but the biggest insight is really that everyone's invested in AI. Duh. But, you know, few are realizing the value at scale. And there's actually important nuance here, so we're gonna talk through that. We're gonna talk through the current state of AI initiatives and the failure of operationalization, where enterprises are struggling and why, and what strategic shifts can be made to overcome these challenges.

So I'm gonna check the chat real quick. I see something in the chat. K. Great. Love it. Cool.

Alright. So let's go to the first finding.

So there are many, many use cases being proposed, and what you're looking at is a chart that shows the different life cycle stages that an AI initiative or model can go through, from proposal to production. And you can see it's very heavily weighted in terms of the number of ideas or use cases in the proposal phase, whereas if you look at production, there's only a handful in production now. And some of that is, you know, obviously, when new technology comes out, enterprises generally take a little longer to do things. And so this just might be the natural progression of ideas coming to market.

But I think there are some friction points that might be worth talking about or diving into, and this is one of the reasons I have Dave on the call. He's talking to customers at Fortune 500 companies every day on this topic. So, Dave, I'd love your insight into why you think there are so many proposed AI use cases or initiatives, but so few in production right now.

3. Challenges in Moving AI from Proposal to Production

Yeah. Thank you, Jay. Appreciate it, and welcome, everybody. Thanks for joining. So to answer your question: first off, when it comes to AI or ML or just data science in general, there are always going to be a number of use cases that are proposed that may not make it through.

That's just part of the innovation cycle. But the staggering number that are stuck on the left-hand side, if you will, brings to bear the friction points Jay mentioned. Oftentimes, this comes down to one major question, which is this: how do we have a consistent way to ensure that we move from idea into production in a way that all the different stakeholders can trust?

That if we go through this process, we can ensure that this AI is going to be used effectively, that it's not gonna cause any undue risk, etcetera.

So, Jay, really at the highest level, it's just making sure that there's a consistent blueprint, if you will, to move from idea through the reviews and QA and productionization into usage, one that all the different parties, legal, risk, compliance, data, IT, and security, can trust, so that if we go through this process, we are assured that we are doing everything we can to mitigate any risk around it.

Yeah. I think a good phrase for that is operational readiness.

Like, how prepared are we to do this?

And I do think, on top of that, there's another idea that Skip McCormick shares.

4. Identifying Profitable AI Use Cases

So Skip McCormick is the current CTO of Cornerstone Technologies, but he also spent a lot of time as a managing director at Bank of New York Mellon. And, you know, he's raising the question of how do you know which AI use cases to accelerate? How do you know which ones to retire? Which ones are gonna be the most profitable?

Which ones do customers even care about?

And, actually, the Wall Street Journal, just around the time we released this report, had a very similar article that came out that kinda dove into that question of why companies are struggling to drive return on investment for AI initiatives. And it really came down to a couple of things, but one of those things was the productivity paradox. Right? Like, you're getting maybe some incremental savings from AI initiatives, but to really get the value of it, you've gotta scale them up, and that's gonna take some time.

5. Operational Readiness for AI Initiatives

So while only a few companies have said that they've actually scaled their AI investments, 43% report that they're still in the pilot stage. So that operational readiness point that Dave pointed out is really, really critical to being prepared to scale so that you can capture that ROI as you start identifying those key use cases. But the other thing is being able to track and align your initiatives to key KPIs and metrics so you know what is working and what isn't. So that ability to track and monitor not just the performance of the models, but the value of those models, is really, really important.

Alright. Let's keep rolling here.

6. Long Lead Times in Generative AI Deployment

So the second key finding that we came across was the long lead times it takes to bring generative AI initiatives to market.

For most folks, you know, 56%, it can take anywhere from six to eighteen months to go from idea intake to being put into production and used. And that seems like a really long time for a technology that is evolving so quickly.

And so I know there are a lot of things that have to happen at the enterprise level. Right? At large enterprises, it just takes time to do some of this stuff. There's use case review. There's risk tiering. There's data and asset traceability, reviews for security and privacy, production approval, and the ongoing monitoring and auditing.

7. Friction Points in AI Implementation

But, Dave, taking a deeper dive: are there some friction points that folks need to be thinking about, whether it's manual processes, fragmented systems, or other friction points that maybe might not be so obvious when you've got a great AI idea and wanna bring it to market?

Absolutely. I think the biggest one, where it starts, is just the unknown. Right? The unknown of the different technologies, the unknown of what different processes we need to go through, the unknown of what stakeholders need to be involved, what systems we need to touch.

So as I mentioned previously, it just comes down to this: well, if you don't know, naturally everything's gonna slow down, and it becomes a manual process. You're going to have to reach out to five different people. They say, well, I'm not sure, reach out to these other five people.

Okay, well, you need this review. Oh, wait a second, it's a vendor model, so you need to get procurement involved.

So it's that uncertainty, that unknown in the process, that really is the start of why it takes longer. Because of that, it results in manual processes. But from there, especially for the higher-risk or even medium-risk models, you do need to involve different reviews, different stakeholders, different systems as part of it. If it touches PII or any customer or consumer data, you need to make sure that you're actually doing a proper review of the usage of that data, the proper security around it, etcetera.

So, again, depending on the situation you run into and, really, the risk tiering of the model, you'll run into additional steps that you need to take. And if you don't have a fully defined, end-to-end life cycle that takes the appropriate steps at the right points in the process, reaching out to the right stakeholders and the right systems involved, it becomes an unwieldy, manual, and therefore slow process. So, again, it first starts with nailing down that blueprint that you want to shepherd any new AI solution through, the full process from inception all the way through usage and eventual retirement.

8. The Importance of a Defined AI Lifecycle

So it starts there, and then it's aligning the different stakeholders and connecting the different systems to make something that is automated, consistent, and, of course, auditable, which I'm sure we'll talk about soon.
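To make Dave's point a bit more concrete, here is a minimal, purely illustrative sketch of what such an end-to-end lifecycle blueprint could look like if written down as a simple data structure. The stage names, required reviews, and evidence items below are hypothetical examples for illustration only, not ModelOp's actual schema or product behavior.

    # Purely illustrative sketch: a hypothetical end-to-end AI lifecycle "blueprint"
    # expressed as a simple Python data structure. Stage names, reviews, and
    # evidence items are made-up examples, not ModelOp's actual schema.

    LIFECYCLE_BLUEPRINT = [
        # Each stage lists the stakeholder sign-offs and artifacts required
        # before a use case is allowed to advance to the next stage.
        {"stage": "Proposal",     "reviews": ["business owner"],         "evidence": ["use case intake form"]},
        {"stage": "Risk tiering", "reviews": ["risk", "compliance"],     "evidence": ["risk questionnaire"]},
        {"stage": "Development",  "reviews": ["data owner", "security"], "evidence": ["data lineage", "PII review"]},
        {"stage": "Validation",   "reviews": ["model validation"],       "evidence": ["test results", "bias report"]},
        {"stage": "Production",   "reviews": ["IT", "legal"],            "evidence": ["deployment approval"]},
        {"stage": "Monitoring",   "reviews": ["model owner"],            "evidence": ["monitoring plan"]},
        {"stage": "Retirement",   "reviews": ["business owner", "IT"],   "evidence": ["decommission record"]},
    ]

    def can_advance(stage, completed_reviews, attached_evidence):
        """A use case may leave a stage only once every required review and artifact is present."""
        return set(stage["reviews"]) <= set(completed_reviews) and \
               set(stage["evidence"]) <= set(attached_evidence)

The point is not these particular stages, but that the path, its gates, and its required evidence are written down once and shared by every stakeholder.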

Yep. Yeah. We definitely will.

But hold on to that auditability thought.

Skip has actually joined the webinar, and he made the comment that, you know, projects can get stuck in perpetual beta, quote, unquote, because the governance situation is still so dodgy. Maybe the operational readiness isn't there. And, you know, again, for me, even looking at low-risk use cases around marketing, the potential to run up massive bills or other concerns related to data, that can be really, really unnerving. And so sometimes it is easier to just continue in that testing or beta phase before really ramping it up, just due to the risks and the lack of that operational readiness, or maybe you don't know where to go next. Dave, I don't know if you've heard anything like that before, but interesting point from Skip.

Yeah. The perpetual beta comes back to my original comment: not all the different teams are sure that we can trust and have assurance that this is not gonna cause any issues. So they say, you know what? We're good to keep it in a pilot or beta stage, but we're not quite ready to sign on the dotted line that, yes, you can proceed out into production. So, again, it just comes back to alignment, consistency, and collection of the relevant inputs, evidence, process, and auditability around it, to make sure that everybody can, again, trust that this is something we can go ahead and use, and that for any of the risks that may occur, we have a mitigation plan around them.

Yep.

One of the other things, you know, kind of backs up some of these ideas of why the operational readiness isn't there and why it's a challenge to get there.

9. Barriers to Effective AI Governance

There are some perceived hurdles to getting to governance.

And that was one of the questions we asked in the survey.

And, you know, the top four are really interesting: trying to integrate fragmented systems, replacing or scaling manual processes, administrative burdens, and regulatory compliance hurdles. You know, all the typical change management concerns that I think make it challenging for folks to tackle big projects or get to that operational readiness. It's not really a lack of buy-in or prioritization; although, you know, those kinda came up if you look on the right side, those really aren't the big reasons.

It's really kind of those structural challenges that have been in place. Like, we've been doing things a certain way for a while, and now we've got all this new technology and new risk that we have to handle. We've gotta change our structure and thinking to do it.

10. Urgency in Achieving Operational Readiness

And that is the challenge. So those hurdles, I think, are very real in getting to that operational readiness, but it's something that can't wait.

You know, Dave, I don't know if you have any anecdotes on just the urgency around this, or kind of the fear of, hey, maybe we can push this off to next year. But from the headlines I'm seeing in the Wall Street Journal, from a variety of, you know, financial services or health care companies, the urgency is now. Like, this is something that can't wait till next year.

Absolutely. Again, really, the thing with generative AI that's very impressive, if you will, is that it touches every aspect of every single organization. In the past, it was basically that when you wanted to use AI or ML, it was part of a COE, a data science COE or AI COE. But now, with generative AI, everybody wants to use it.

Sales, marketing, finance, HR, everybody wants to use it. So the demand, the pressure to use it, is really necessitating that you have something that is, again, consistent and scalable, with the right level of governance. Mike Dillon actually just made a great comment that you can have too much governance depending on the situation. So it's the right level of governance, the Goldilocks of governance, as I like to say: not too much, not too little, but the right amount based on the scenario that you have.

For high-risk models, you need something a little more arduous. For low risk, you can take a more streamlined path, if you will. So, again, it's making sure that you have something that's defined and consistent and that has the right level of governance based on the scenario. But, ultimately, it has to be there.

Otherwise, again, with all this demand, it's just going to slow your innovation to a halt. And with the demand across all those different teams, that's just not possible anymore. You have to address this.

Yeah. And, Michael Dillon, thank you for that comment. That's really, really important. Thanks for bringing that up.

Okay.

11. Fragmentation in AI Use Case Intake Processes

So one of those perceived barriers was fragmented systems. Let's start with one of the first systems or elements of the life cycle process: use case intake, or idea intake, for the AI life cycle. So we asked folks how many different tools they're using for use case intake, and the answer was, you know, multiple. On average, respondents said they have at least 2.4 different systems or methods for use case intake. And so at the very beginning of the life cycle, we've got this kind of fragmented approach that might have downstream impacts on visibility and orchestration later on.

Dave, when you're talking to folks in the market, Fortune 500 companies, at the very beginning of the life cycle process, what impact does fragmentation in the approach have this early in the process? How does that impact the downstream flow or operational readiness for life cycle management?

Yeah. We see this all the time. First and foremost, it causes confusion. Right? So do I go to ServiceNow to put in an application as part of my AI use case?

Do I go to a SharePoint site? Do I go to a Confluence site? So first off, it just causes confusion. Where am I supposed to go?

If I have this great idea for a generative AI use case, I wanna get going, and I wanna get going quickly, but I don't know where to go to fill it out. And then, even worse, I fill it out in one system and then have to duplicate a bunch of information into another one and then a third one. I'm just questioning, like, why do I need to do all this?

Right? What's the point? So first off, it causes confusion. Second, it causes duplicative information that is often difficult to keep in sync and/or becomes out of date and stale.

So then you're trying to report on this, and you don't know where to go. And then the third real point around this is that now you have multiple management points. Right? So you have to have teams keeping these different areas, these different sites, up to date, and it's a manual process as part of it.

And as Skip rightfully points out in the chat just now, thank you, Skip, when an audit is called, whether it's internal or external, where do you go? What's your actual system of record? Is it your CMS? Is it your ITSM?

Where do you go? Great comment, Skip. So, again, think about the confusion, the duplicative efforts, the management and overhead around it, and lastly, the audit question of where you go for that system of record.

Yeah. Yep. Really challenging.

12. Ensuring AI Performance Assurance

I told you we'd come back to the assurance piece that you mentioned earlier, Dave. So another question we asked the group of leaders was: how does the organization demonstrate that your gen AI initiatives are performing as expected, i.e., assurance? Right? How do you make sure they're performing within the bounds of what you'd expect them to do?

It was a multiple-choice question, and there were a couple of options. One is, hey, this can be done at the business or team level, kind of fragmented across different teams, for testing and documentation and reporting. Or maybe the testing and documentation is done at the team or business level and reporting is all done at the enterprise level, or maybe it's all done at the enterprise level.

13. Fragmented Testing and Reporting Approaches

There were a couple of other options, but everybody came back with one of those three approaches. And the most common one was that testing and documentation are done at the line-of-business or team level, but reporting is done at the enterprise level. And so this is kind of an interesting, fragmented, tiered approach, Dave. Like, what do you think about this particular approach that the survey respondents are commonly taking?

Are there pros and cons to doing it that way?

Is it better than the other ways? Like, what are you seeing in the market?

Right. So if you think about the ultimate goal here, first and foremost, what you need at the enterprise level is an understanding of the risks. Everybody needs to understand what risks are involved, what the performance of a given AI system is, and, again, whether it's driving value. Right?

You wanna have visibility into what's being used, the risk, and whether it's actually driving value. So that understanding needs to be at an enterprise level. Now, the testing and monitoring, as a model owner or a solution owner, you may wanna do at a lower level. That can be done at a department or team level just to understand, you know, the real mechanics and the day-to-day metrics around this AI solution.

So that I generally see done at a team level. But, again, if you think about what you need at the enterprise level, you need the visibility, you need the visibility into the risk, and whether it's driving value. So you need to make sure that you're filtering up the most important metrics from testing, monitoring, and documentation.

Those do need to be surfaced at an enterprise level so that you can truly have an AI portfolio view of what I'm using, what the risks are, and where I'm getting the most value. So I do see that second bucket as fairly common, but the most important thing that is often missed is making sure that you flow up to the enterprise level the key metrics that drive the inherent risk, to make sure those risks aren't turning into issues, I should say, and that you have, again, the right mitigation approach if they do actually come to fruition.

Yeah.

14. Challenges in Terminology and Vocabulary

On top of that, I think Nancy raises a really good point around terminology and vocabulary, what terms can mean across different teams or geos or business units, and normalizing that to make sure everybody's comparing apples to apples. Like, that is a real challenge, regardless of whether you're dealing with gen AI, with anything across a large enterprise. And especially so with technology that's moving so quickly and metrics from different technologies that you're trying to compare, it can be really, really challenging. So, Nancy, thanks for raising that point.

Yeah. And that's part of that blueprint that I mentioned, Jay. And, Nancy, that's an excellent point. The blueprint is about everybody having a common understanding of what's necessary at the enterprise level.

That starts with terminology. It starts with how you're doing your risk tiering. It starts with how you're categorizing things for reporting purposes across teams and departments and categories of AI solutions, if you will. And then it moves into the process side, making sure you have that blueprint for how you move something from inception to usage, inclusive of what stakeholders, what reviews, what evidence and artifacts, etcetera, are needed.

So totally agree, Nancy.

15. The Importance of Traceability

Alright. Let's keep rolling here. Moving into the last few minutes, there's the question of traceability. There's the quote on the previous slide from Skip saying, hey, it's hard to get data scientists to document what's going on, what data they based everything on, what tests they ran. And so I think when you're doing those things, you're creating evidence.

But also, as you put these use cases into production, you wanna be able to tie that use case back to the source code, the prompt templates, the guardrails, the tests that were run. That can be really, really challenging. And, you know, the interesting thing in the data we have here is that, regardless of what level of traceability you have, less than 50% have high confidence or complete confidence, which means most have moderate or limited confidence.

And I think in these kinds of surveys, it's always kinda tough to gauge what that means. Everybody might have a little bit different definition of it. But the fact that most are in that moderate or limited confidence bucket raises, you know, some concerns about how challenging it is to get this traceability or auditability. Dave, what kind of challenges are you seeing around traceability in the market?

Just a complete lack of traceability, to be quite honest.

So there are some that focus on, alright, I just wanna have an entry or record that says here's an AI solution, and then here's a document.

You know, that's a good start, but what happens when an audit comes along and says, okay, well, I need you to prove, in a reproducible fashion, what datasets you used to test this, to show that there wasn't any undue bias and that you put it through its paces. I want to see how you were able to show that stress testing was done. Right?

So how do you do that backwards-looking view to say, okay, well, here's the exact version of this solution that was running on August 1, 2024, here are the datasets we used in testing, here's who signed off on it, and here's the security review. So having that level of traceability, both forward and backward, is often missed.

And it's actually a very difficult problem to solve when you think about it, now with generative AI, but even before with AI and ML, you had a variety of different technologies being used. So you need to be able to trace back across the different data science tools, the different frameworks, languages, and data systems that are part of the common heterogeneous enterprise right now. Right? How do you ensure you have the traceability to answer those simple questions: what model or solution was running on August 1, 2024, and show me the evidence that you tested it?

What were those test results and approvals thereof? So like I said, Jay, it's actually something that is just a gap in most markets. And it's getting worse with the variety of technologies that most enterprises are adopting.
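To make the traceability gap concrete, here is a minimal, purely illustrative sketch of the kind of point-in-time record that could answer the auditor's question Dave describes. The field names and values are hypothetical examples for illustration only, not ModelOp's actual data model.

    # Purely illustrative sketch: a hypothetical point-in-time traceability record
    # that could answer "what exact version of this solution was running on
    # August 1, 2024, what data was it tested on, and who signed off?"
    # Field names and values are made-up examples, not ModelOp's actual data model.

    TRACEABILITY_RECORD = {
        "use_case": "customer-support-agent",
        "as_of_date": "2024-08-01",
        "model_version": "2.3.1",
        "source_code_ref": "git commit abc1234",           # code, prompt templates, guardrails
        "prompt_template_version": "support-prompt-v7",
        "test_datasets": ["support_tickets_eval_2024q2"],  # datasets used for bias and stress testing
        "test_results": {"bias_check": "pass", "stress_test": "pass"},
        "approvals": [
            {"review": "model validation", "approved_by": "J. Smith", "date": "2024-07-18"},
            {"review": "security",         "approved_by": "A. Lee",   "date": "2024-07-22"},
        ],
    }

The specific fields matter less than the ability to reproduce this snapshot for any solution, on any date, across whatever mix of tools and frameworks produced it.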

Yeah. You can see that in the data for sure.

Alright. And so let's keep rolling here into the last five minutes.

So this was a really interesting response to the question of who owns AI governance, specifically who's controlling the program and budgets and who's ultimately accountable for AI initiatives.

16. Ownership of AI Governance

And I think when you hear the term governance, a lot of the time we think legal, compliance, and risk teams. But, you know, that response was only 10%. And looking at the percentages here, if you did the addition, it's well more than 100%, which means there are multiple ownership areas, which means there's a multidisciplinary approach here. But the big buckets of ownership here are chief innovation officer, chief data and analytics officer, and chief information officer, with a smattering of cross-functional committees or centers of excellence or, you know, AI councils. And so there's that multidisciplinary approach, but starting to see accountability sit with those CIO, data and analytics, and innovation areas is really, really interesting, and maybe something where most folks might have had a different view of governance in the past. That shift toward innovation was a really interesting finding that we got from the report.

So not that the lines-of-business owners and other folks aren't still really, really important or part of it, but clearly we're seeing some consolidation up at that innovation and CIO level.

And then finally, budget.

17. Budgeting for AI Governance Initiatives

So, around how much specific budget is allocated for AI governance in the upcoming year, the answer is yes, everybody has budgeted for AI governance. And over a third have budgeted more than a million dollars annually for it.

So it's just to say, hey, this is on our minds, this is becoming a priority or is a priority, and we need to put the dollars behind it to make it happen.

So that shift to innovation, shift in budgets is a really interesting finding that we've seen this year.

And then that kinda wraps it up. You know, we've talked about some of the prescriptions for change that we've seen and what we talked about in here. The first one: at the beginning of the report, we talked about how long it takes to bring AI initiatives to production and the operational readiness issues. There's rapid change.

There's the pressure to, you know, get things to market for competitive advantage.

But with those challenges in bringing it to market, you know, six to eighteen months, and that backlog of proposed use cases, clearly something needs to be done to help bring those to market more effectively. And you can start in some simple areas. One is just simplifying that use case intake process that we talked about: consolidating some of those systems, anticipating governance challenges early on, and making sure you're doing that governance and control work upfront to reduce the time to market and bring ROI forward.

The second thing is applying life cycle management at the enterprise level. Right? Considering and enforcing assurance, making sure there's that enterprise common language and blueprint, so everybody in the organization is clear on how that's working and where there's clear ownership and accountability. That can help streamline the process, but having that blueprint at the enterprise level is really, really important, and it's what we're seeing leaders start to do more and more. And then finally, having leaders take the reins on AI life cycle automation and governance, ensuring that leadership is strong with clear ownership.

If there is a multidisciplinary team, make sure there's an understanding of how the council or center of excellence works, and assess whether your teams have the capacity to monitor and control use cases as they go live and to prepare for audits and reports, so that there aren't, like Dave I think said early on, audit search parties when you put something in production and forget about it. So making sure that there's clear accountability and folks are taking the lead on it is really important.

18. ModelOp's Role in AI Lifecycle Management

And then this wouldn't be a ModelOp webinar without a quick little word from ModelOp.

You know, we just talked about a lot of the challenges and then what can be done to make that strategic shift.

And ModelOp can help you make that strategic shift. It's the AI control tower for enterprises and, you know, can help you with traditional AI and ML, gen AI, agents, and beyond. We're always preparing and looking around the corner for what's coming next.

ModelOp is the leading AI life cycle automation and governance software. We can help you tackle the challenges that we covered in this webinar in three different ways. Number one is visibility. Right? Executives and leaders need a clear view into every model, every use case, where it lives, what it does, and ultimately what its value is. ModelOp provides that.

Number two is control: optimizing that life cycle automation to your internal and external policies, whether you're in health care or financial services, dealing with SR 11-7 or implementing frameworks like the NIST AI RMF, we help you consistently enforce those policies across the entire life cycle. And finally, assurance: making sure that your models are consistently and continuously monitored, showing they're performing compliantly, they're auditable, and they're delivering the right value.

So with that, that's it. Thank you, guys. One minute over. I will send out the recording and the presentation so you have them for your records. I encourage you to download the report. Please read it, and we'll have another webinar in June. We'll be announcing that shortly.

And in the meantime, have a great rest of May, and thanks for joining. Bye, everybody.
