
Benefits and Risks of Artificial Intelligence: 3 Lessons on Managing AI Risk and Reward

by Dave Trier, VP Product at ModelOp

Dave is charged with defining and executing on strategy for ModelOp, which helps customers solve their AI Governance and operations challenges, ultimately unlocking the value of AI.


How enterprises balance the risk and reward of generative AI

Over the past year, I had the pleasure of speaking about AI with over 100 executives from some of the largest companies in the world. Two resounding themes arose from these conversations: 

1. Rapid adoption of AI (especially Generative AI) was being demanded by the C-suite in the vast majority of companies.
2. Concern about AI risks was unparalleled among all executives.

In this post, I’ll share three particularly noteworthy conversations to help executives and AI leaders who are weighing the untapped potential of AI against its risks. In each case, it’s worth highlighting how the executive recognized the urgency of implementing AI governance in order to deliver on the business’s demands for AI while still protecting the organization from the inherent risks of the technology.

Also, I will touch on the simple steps each organization undertook to get started. Spoiler alert: AI governance does not require a multi-year program; rather, there are a few targeted steps that an enterprise can, and should, take immediately to protect itself from undue customer, financial, brand, and IP risk.

 

The time to safeguard AI was yesterday 

Before jumping into the specific case studies, we need to understand the overarching context of why AI Governance is necessary. In my 20+ years of working in technology for large enterprises, I’ve never seen global Fortune 500 companies move as quickly as they have to embrace a new technology. 

For example, in select conversations, enterprise executives have said the following about the demand for Generative AI:

• “Across the enterprise, the business teams have proposed over 75 Generative AI use cases.”
• “We are investing $20M+ in Generative AI.”
• “In our last Board of Directors meeting, we spent over 50% of the time talking about Generative AI.”

These are big numbers, and they mirror the broader market: H2 2023 saw a record number of S&P 500 companies citing AI on earnings calls (source: FactSet).

Indeed, the demand for Generative AI by business teams is real and here now. However, AI also carries inherent risk. There have been countless articles and conversations about the risks of AI, mostly focusing on ethical concerns, which are very real, but there are also financial, intellectual property, brand, and regulatory risks associated with its use.

Headlines such as the following demonstrate the real challenge of “AI that goes wrong”:

• Intellectual Property Risk: “Samsung employees unknowingly uploaded source code to GPT, leaking confidential intellectual property.”
• Regulatory Risk: State-level regulations, such as the California Attorney General’s Office AI inquiry and Texas House Bill 2060, are requiring immediate cataloging of AI systems at healthcare providers.
• Financial Risk: “LLM chatbot was tricked into agreeing to give away a free Chevy.” (source: X)

With this risk-versus-demand context in mind, let’s dive into a few of these executive conversations.

 

Lesson 1: “There are too many requests for new AI initiatives and tools for an enterprise to handle manually”

Context: In this first example, a healthcare executive started by explaining how multiple teams had already adopted AI for patient care as well as internal operations. Despite being a conservative organization, the demand from numerous teams to benefit from AI necessitated that they adopt these technologies. Soon, however, the organization was drowning in a myriad of requests to use different AI technologies, and the head of AI could no longer keep tabs on which AI tools were being explored for which AI initiatives. Furthermore, these initiatives leveraged not only internally developed AI models, but also AI models from third parties, as well as software systems with AI embedded in them. Finally, regulations at both the federal and state level were taking shape quickly, many of which would require that any “agency” (which typically includes healthcare providers that are part of a state university and/or obtain funding from the government) register all AI systems in use across the organization.

The Need: The executive was very clear: we cannot wait any longer to put AI governance in place, because manual efforts will no longer suffice.

“We cannot wait any longer to put AI governance in place; manual efforts will no longer suffice.”

Start Smart: Given the urgency, we talked through a targeted action plan they could use to start immediately, in as little as 90 days. Their immediate goals were twofold:

1. Obtain visibility into the specific initiatives or “systems” using AI throughout the organization.
2. Give the C-suite confidence that an initial AI governance system was in place.

To accomplish this, the organization started by implementing an AI Governance inventory that systematically collected the requisite information for each AI system, regardless of whether it was internally developed, a purchased AI model (e.g., GPT-4), or AI embedded into an existing software capability (e.g., Zebra Med-AI).
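
To make that concrete, here is a minimal sketch of the kind of information such an inventory record might capture, written in Python purely for illustration. The AISystem class, its fields, and the example values are hypothetical assumptions on my part, not ModelOp’s actual schema.

from dataclasses import dataclass, field
from enum import Enum

class Origin(Enum):
    INTERNAL = "internally developed"         # built by in-house data science teams
    THIRD_PARTY = "purchased model"           # e.g., GPT-4 accessed via API
    EMBEDDED = "embedded in vendor software"  # e.g., AI inside a clinical imaging product

@dataclass
class AISystem:
    name: str                       # e.g., "patient intake summarization"
    business_owner: str             # accountable business sponsor
    technical_owner: str            # team responsible for the model itself
    origin: Origin                  # internal, purchased, or embedded
    use_case: str                   # the decision or task the AI supports
    data_categories: list[str] = field(default_factory=list)   # e.g., ["PHI", "PII"]
    regulatory_flags: list[str] = field(default_factory=list)  # e.g., state registration duties
    risk_tier: str = "unassessed"   # assigned during risk review
    status: str = "proposed"        # proposed, approved, in production, retired

# A real implementation would persist these records and report on them;
# a plain list is enough to show the shape of the inventory.
inventory: list[AISystem] = []

Even a record this lightweight answers the question the head of AI could no longer answer manually: which teams are using which AI, from which source, touching which data, and under which regulatory obligations.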

Initially, the executive had the perception that this would be a multi-year effort. However, I talked through how modern AI Governance software (like ModelOp) is specifically designed to streamline the onboarding and management of all AI models through out-of-the-box integrations and a powerful automation engine. As such, a typical implementation of a “phase one” AI Governance system could be accomplished in 90 days, giving the executive team visibility into AI usage across the organization and instilling confidence that the AI systems were being governed.

 

Lesson 2: Enterprises can’t afford to keep living in the AI Wild West

Context: In the second example, a CPG executive explained that his global organization had hundreds of AI and ML use cases, many of which were driving tremendous, quantifiable value to the bottom line. To date, the executive explained, it had been “the wild west of AI,” meaning that governance oversight of the AI/ML projects had been inconsistent and ad hoc. While this was acceptable in the early days, these AI/ML systems were now making real-time decisions on behalf of the marketing and other organizations, so there were real and substantial dollars at play. The enterprise could, quite literally, no longer afford to continue these programs without a proper AI/ML governance program.

“Given that these AI/ML systems were making real-time decisions on behalf of the marketing and other organizations, there were real and substantial dollars at play.”

The Need: The executive was transparent about the challenge: their very small team of model governance professionals could no longer keep pace with the rapid growth of AI/ML use cases; a more systematic approach was needed.

Start Smart: Given the current state, we talked through how they could use automated governance workflows to:

1. Drive a lightweight but consistent set of controls across all of the disparate teams and the varying AI/ML technologies in use.
2. Substantially lessen the manual burden on governance officers by leveraging self-service and automation. 

As with the prior example, the perception was that such an effort would require a major overhaul of their current processes and a major replatforming of their existing technologies. However, I again talked through how modern AI Governance software like ModelOp has native governance workflow capabilities that automate the end-to-end lifecycle of AI systems, from idea inception to risk review, productionization, usage, and eventual retirement. Furthermore, software like ModelOp ships with best-in-class governance workflow templates and out-of-the-box integrations that typically provide 80% of what an organization needs. Thus, again, the implementation time is as little as 90 days (and not 3+ years as originally perceived). Through this AI Governance software approach, the executive was confident that they could enforce the basic governance controls without needing to exponentially scale a governance team.
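
As a rough illustration of what “lightweight but consistent controls” can look like, here is a small Python sketch of a lifecycle workflow with gate checks at each stage. The stage names follow the lifecycle described above, but the gate names and the advance function are my own hypothetical simplification, not ModelOp’s workflow engine.

# Hypothetical sketch of a governance workflow: each lifecycle stage has a
# small set of gate checks that must pass before a model can advance.
LIFECYCLE = ["inception", "risk_review", "production", "in_use", "retired"]

GATES = {
    "risk_review": ["use_case_documented", "data_sources_approved", "risk_tier_assigned"],
    "production":  ["validation_report_attached", "monitoring_plan_defined", "owner_signoff"],
    "retired":     ["replacement_or_sunset_plan", "final_value_report"],
}

def advance(model: dict, evidence: set[str]) -> dict:
    """Move a model to the next lifecycle stage if its gate checks are satisfied."""
    current = LIFECYCLE.index(model["stage"])
    next_stage = LIFECYCLE[current + 1]
    missing = [check for check in GATES.get(next_stage, []) if check not in evidence]
    if missing:
        raise ValueError(f"{model['name']} blocked at {next_stage}: missing {missing}")
    model["stage"] = next_stage
    return model

model = {"name": "churn_model_v2", "stage": "inception"}
advance(model, {"use_case_documented", "data_sources_approved", "risk_tier_assigned"})
print(model["stage"])  # risk_review

The point is the pattern rather than the code: when the required checks live in a system instead of a governance officer’s inbox, teams can self-serve their way through the lifecycle and the small governance team stops being the bottleneck.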

 

Lesson 3: Enterprises need to quickly measure and report the value of AI initiatives

Context: The third example touches on an area that is not often discussed in the context of “AI Governance” but is nevertheless crucial to the success of an enterprise AI program. In this conversation, a retail executive enumerated the countless AI/ML projects that were underway. While each one showed excellent promise, there was little-to-no understanding of the financial (or, more generally, business) returns of each initiative. The executive didn’t expect every single AI/ML project to reap multi-million-dollar returns; however, he did want accountability for ensuring that the most impactful investments were being made. If a project was not showing business benefits, it should be retired and focus directed elsewhere. Failing fast was particularly important because the enterprise had corporate-level goals it needed to meet to maintain its competitive advantage; it did not have the luxury of a “slow-burn” approach to determining whether AI/ML projects would be fruitful.

“Given the strategic corporate goals, the enterprise did not have the luxury of a “slow‑burn” approach to determining if AI/ML projects would be fruitful.”

The Need: The enterprise-level strategic goals necessitated that all AI/ML projects be aligned with, and drive benefits toward, these goals. In the current state, however, the executive had no visibility into the ROI of each initiative. All “reporting” was done manually via anecdotal conversations; there was no consistency or investment-level view for executives. Given the strategic urgency, a more comprehensive, “governed” approach was crucial.

Start Smart: With this context, we discussed two primary goals:

1. Establishing a consistent approach for identifying and tracking the business benefit of an AI/ML project.
2. Providing visibility to executives on which AI/ML projects were most impactful.

As mentioned above, most of the buzz around AI Governance centers on ethical fairness; business impact tracking is typically not mentioned. In my opinion, this is extremely odd given that the sole purpose of AI is to drive top- and bottom-line business benefits. In the simplest terms, the purpose of “Governance” is to ensure that all teams are “doing the right thing for the company, its customers, and the community.” Thus, Governance should include ensuring that AI projects are driving value for the company and its customers.

Luckily, modern AI Governance software like ModelOp provides:

1. A framework that allows enterprises to capture and track the business value of a given model.
2. Executive dashboards that surface the key performance indicators (KPIs) to multiple stakeholders and executives.

As a result, I talked the executive through a few easy steps to get started with lightweight Business Value Governance, once again giving the executives confidence that they could implement a more comprehensive and visible approach to rationalizing their AI/ML projects.
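
For readers who want a concrete picture of “Business Value Governance,” here is a minimal Python sketch of capturing a value statement per model and rolling it up into a portfolio view. The ValueRecord fields, model names, and dollar figures are invented for illustration; they are not from the executive’s organization or from ModelOp.

# Illustrative sketch: attach a simple value statement to each model and
# roll the results up into an executive-level view. Numbers are made up.
from dataclasses import dataclass

@dataclass
class ValueRecord:
    model_name: str
    kpi: str                 # the business KPI the model is meant to move
    baseline: float          # KPI value before the model
    actual: float            # measured KPI value with the model in production
    annual_run_cost: float   # what it costs to keep the model running

    @property
    def net_benefit(self) -> float:
        # net benefit = KPI improvement minus the cost of running the model
        return (self.actual - self.baseline) - self.annual_run_cost

portfolio = [
    ValueRecord("demand_forecast", "inventory savings ($)", 0, 1_200_000, 150_000),
    ValueRecord("promo_targeting", "incremental revenue ($)", 0, 300_000, 400_000),
]

for r in sorted(portfolio, key=lambda rec: rec.net_benefit, reverse=True):
    flag = "KEEP" if r.net_benefit > 0 else "REVIEW / RETIRE"
    print(f"{r.model_name:>16}: net {r.net_benefit:>12,.0f}  -> {flag}")

A ranking like this is enough to support the fail-fast conversation the executive wanted: models that are not moving their KPI surface quickly, and investment can be redirected toward the initiatives that are.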

 

AI Governance will prevent you from becoming the next ‘Rogue AI’ headline

The above are just three examples out of hundreds of similar conversations I’ve had with executives who are struggling to keep pace with the business’s demand for AI without introducing untenable financial, brand, IP, and/or regulatory risk to the organization.

The common theme across all of these examples is that AI Governance is needed now: there is too much at risk to the organization to continue using manual methods in the age of Generative AI. Rest assured, though, that there are a few simple steps an enterprise can take to quickly implement an AI Governance capability. So why wait?

*Please note that, for the purposes of this article, all company names and personas have been anonymized.
