
AI Needs to Break Free from “Frozen” Processes

4 Minute Read
By Stu Bailey

There is no disputing that artificial intelligence (AI) has had a massive impact on a broad range of human activities, an impact that has been widely publicized.

Accounts like this one from WIRED magazine are impressive. But then frustration creeps in, because I know AI could have an even greater impact, scaled to a wider range of applications, if it were not held back by manual, inefficient processes.

I visualize the current state of affairs as an iceberg, with only a small portion visible above water and a much larger portion hidden underneath.

Beneath the impact that analytical models deliver once deployed at scale lies a huge amount of effort by every stakeholder involved in model creation, deployment, monitoring, and governance. First, they must develop truly useful models, then operationalize them, and, afterward, make sure those models continue to deliver value.

You might think what’s needed is an icebreaker – a way to crash through process inefficiencies. And that’s what many companies opt for initially: they use brute force to get models into production and keep them updated.

But this approach won’t scale. What’s needed is a redesign of the entire process — exactly what the discipline we call ModelOps provides.

Frustration all along the line

Today, a line-of-business manager with a thorny problem first must convince the organization’s analytics team that existing reports and recommendation engines do not adequately address the problem. Then data science experts are engaged to develop, train, and test a new or updated analytical model.

But data scientists are not experts at deploying such models within operational systems and may need to call on DevOps specialists. Even if a data science workbench facilitates deployment, the company’s IT organization still needs to find the resources to integrate the model’s code with its computing environment.

Setting aside the likelihood that data science, DevOps, and IT teams may have a backlog of projects, the back-and-forth between all parties is time-consuming and, typically, frustrating for all. And who is responsible for monitoring the lifecycle of this and other analytical models? That’s critical – and, in many organizations, pretty much up for grabs.

Here are some situations I often observe:

  • The line-of-business manager has to re-budget and re-balance resources to operationalize the AI model lifecycle and meet C-level commitments to leverage AI for top-line growth and bottom-line efficiency. What the data scientists developed provided clear business value, but it was developed and trained in a way that could not be supported in the enterprise’s execution environment, nor was it meant to be, initially. As the number of deployed models grows, governing them through spreadsheets or simple repositories just won’t work.
  • The analytics team is also concerned about model governance. If model development is scattered across business units, the organization will find it difficult to audit models’ performance and trace their lineage. Models created with different tools or in different languages further complicate the task. On the other hand, if governance is centralized, the team risks lock-in to a single data science platform, which can be costly and cuts the organization off from broader innovation in the market.
  • Data scientists resist using a single tool to develop models. Their expertise leads them to favor certain tools or languages for certain types of problems. They also know that newer languages hold great promise for model creation and want to be able to use them. And while some data science workbenches do support model deployment, they do not support operationalizing the full model lifecycle at enterprise scale.
  • The DevOps team can’t help with governance either. The tools they work with are designed to identify and fix defects in software, not errors in PMML, Python, or the other languages used in model creation. Maintaining the model lifecycle across the enterprise is simply not in their skill set.
  • IT would welcome a way to bring greater efficiency to the process of deploying models in the array of applications that business units rely upon. But IT very likely has responsibility for managing a complex, heterogeneous computing environment, and the CIO is understandably worried about incurring greater technical debt and Shadow AI.
  • The rest of the C-suite, however, sees the competitive advantage rivals are obtaining through AI and is being pressured by investors and the board to scale the use of AI across a wider array of internal and market-facing applications and processes.

We are living in “the age of AI,” and its effects are all around us, from wearable devices that track our heart rates and sleep patterns to the recommendation engines behind our favorite shopping sites and streaming services. AI is being employed in virtually every industry, from agriculture to medicine to software development. It helps financial institutions spot money laundering and other potentially fraudulent activity more quickly. It has helped US veterans transition to civilian life, improved monitoring of at-risk newborns, and saved lives by making weather forecasting far more accurate. But AI can do much more, and Operational Enterprise AI, the ultimate goal of ModelOps, is the most challenging automation and governance problem the enterprise has ever faced.
