April 25, 2024

Who’s Accountable in the Enterprise for AI and its Risks?

AI has an accountability problem

“Who’s currently accountable for AI, especially if AI goes wrong?”

“That’s a great question - I guess I’m responsible if there is a security leak, but I’m not sure who is accountable for AI as a whole.” – Global 2000 CISO

This is a very common scenario I encounter when speaking with enterprise executives: they don’t know who is ultimately accountable for AI and its risks. Given all of the promises the C-suite is making to boards of directors and shareholders (check out page 3 of this deck for a FactSet chart that shows how often AI is mentioned in earnings calls; spoiler alert, it’s a lot), I’m perplexed and concerned that most enterprise CEOs and boards have not assigned an officer who is responsible and accountable for the use of AI. For every other corporate initiative with external shareholder visibility, there are detailed execution plans and an officer or senior leader assigned to drive the work. Furthermore, the risks of AI are substantial, which again raises the question, “Why isn’t there a designated leader accountable for AI?”

In this post, I’ll cover:

  1. The current state of AI accountability across enterprises
  2. Key considerations when designing an enterprise AI accountability structure
  3. Three recommendations for ensuring AI accountability

AI regulations are rising, standards are lacking, high-risk tech is already in the market, and ownership is ambiguous

From mitigating risks to maximizing opportunities, the stakes are high when it comes to navigating the ethical, legal, and operational challenges posed by AI. More importantly, those stakes are rising at an accelerating pace. Consider:

The number of AI regulations in the United States is sharply increasing
The recently published AI Index 2024 Annual Report* shows that AI-related regulations in the U.S. rose significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.

*Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.

The US Office of Management and Budget (OMB) is mandating that all US Federal Agencies have AI Governance implemented by December 1, 2024
On March 28, 2024, the OMB issued a memo to department and agency leaders on new rules for how the Federal Government can use AI. The memo requires all 400+ Federal Agencies to designate a Chief AI Officer by May 27, 2024, to have a plan for compliance with the issued rules by September 24, 2024, and to implement AI safeguards by December 1, 2024.

Robust and standardized evaluations for LLM responsibility are seriously lacking
The AI Index 2024 Annual Report* goes on to reveal a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.

Companies are currently using “high risk” and possibly “unacceptable risk” AI systems in employee hiring practices
AI is transforming the hiring process for enterprises. Employers increasingly rely on layers of technology to streamline the recruiting, application, and interview processes for job candidates. AI-based pre-employment screening tests are becoming more common, and these tests aim to quantify the previously “intangible” but extremely important “fit” characteristics: personality, attitudes, integrity, and emotional intelligence. However, these systems present a myriad of ethical and bias issues and will likely be considered “high risk,” or potentially “unacceptable risk,” under the recently approved EU AI Act. Furthermore, in 2023, the nation’s first-ever law requiring employers to conduct annual bias audits of automated employment decision-making tools took effect in New York City. Months later, the U.S. Equal Employment Opportunity Commission issued guidance for employers about how to audit AI for employment discrimination. Illinois and Maryland have similar laws.

The number of companies with a designated head of AI position has almost tripled globally in the past five years
The Financial Times reported on the rise of the Chief AI Officer and found a massive surge in demand for this role but a limited talent pool. Even with the increase in CAIOs, the analyst firm Gartner says that AI generally remains the domain of Chief Technology Officers and Chief Information Officers, who take the lead on AI initiatives in 23% of organizations, and that only 21% of companies have plans to create a CAIO position.

These market dynamics are rife with risk, and everyone from large enterprises to small startups is betting heavily on Generative AI to drive top-line revenue and operational efficiencies. These transformations are of the same magnitude as those of the internet era, which wholly shifted business strategies for many companies. Again, given the gravity of such changes, how can there not be clear accountability for AI and its risks within an enterprise?

This accountability vacuum has parallels to the 2008 financial crisis, before banks and insurance companies ushered in the era of the Chief Data Officer. Too little attention was paid to data, which may have contributed to missed signals about the coming financial meltdown. So who is now paying attention to AI in enterprises with the same level of diligence that CDOs apply to data?

Ambiguity in AI ownership slows both innovation and risk mitigation

ModelOp recently conducted a poll that reinforces the finding that AI ownership is fragmented. We asked a group of enterprise leaders which C-Suite executive is currently accountable for all AI-related risks in their respective organizations. While Chief Legal or Compliance Officers led the way with 27.3% of the responses, 21.2% of the respondents said they “don’t know.” Only 3% said CAIO.

With these findings as context, there are several key considerations an organization needs to take into account when framing its AI accountability structure:

Committee vs. Designated Leader
In many conversations with large organizations, leaders have shared that they have an “AI committee,” which typically reviews the different AI initiatives. However, the lines of ownership and accountability are still unclear. Ask the AI committee, and they state the business sponsor is ultimately accountable. Ask the business sponsor, and they point to the AI committee as the entity tasked with safeguarding the use of AI technology.

While an AI committee absolutely has its purpose, committees have their challenges:

  1. They are consensus-driven, which often delays decisions on what should or should not be done; in the meantime, interpretation is left to the business teams, which leads to varying levels of oversight and accountability.
  2. It is difficult for the CEO or BoD to ask “a committee” about the organization’s current AI risk posture, or to have the other regular conversations a CEO needs in order to stay informed on such a critical strategic initiative.

Given the pivotal role of Generative AI in such a corporate initiative, a designated officer or senior leader is the most plausible way to ensure the program is designed and executed to plan, to manage the risks around the use of AI, and to be accountable for the issues that will inevitably arise throughout the program.

Enter the Chief AI Officer
To address the challenges listed above, some organizations have created the role of the Chief AI Officer, who is responsible and accountable for all aspects of AI, including the usage of AI and its ever-critical governance.

While the role is still evolving, a recent Chief AI Officer job post suggests an effective officer needs to have cross-discipline experience in computer science and business, and priority responsibilities include transformation, technology advocacy, strategy alignment, and governance. Here’s the list of responsibilities from a recent post on LinkedIn:

  • Lead the AI/ML Center of Excellence, which acts as the enabling function for the enterprise; develop and retain critical talent in this highly competitive space.
  • Work with learning and development organizations to develop training and communication plans to bring the enterprise up the curve with respect to the art of the possible.
  • Evangelize AI/ML approach and vision internally and externally with clients and key stakeholders.
  • Partner with the Chief Data Officer to build a cohesive data strategy and roadmap that enables the enterprise to realize its AI/ML vision.
  • The AI/ML COE will lead the design, development, deployment, and ongoing optimization of AI/ML models and solutions.
  • The AI/ML COE will partner with the CIO/CTO organization to develop a technology roadmap and champion business case development for ongoing investment and returns.
  • Work with the risk, legal, and compliance organizations to ensure compliance with all risk policies, regulations, and laws with respect to models, their implementation, and use cases.

The US Government is leading the way — mark Dec 1, 2024 on your calendar

As a result of President Biden’s Executive Order, the White House Office of Management and Budget (OMB) published Memorandum M-24-10 on March 28, 2024, which specifically and legally requires that all 400+ federal agencies designate a Chief AI Officer:

“Within 60 days of the issuance of this memorandum, the head of each agency must designate a CAIO. To ensure the CAIO can fulfill the responsibilities laid out in this memorandum, agencies that have already designated a CAIO must evaluate whether they need to provide that individual with additional authority or appoint a new CAIO.”

Specifically, the OMB M-24-10 calls out managing the risks associated with the use of AI, further laying the groundwork for AI accountability:

“Executive Order 14110 tasks CAIOs with primary responsibility in their agencies, in coordination with other responsible officials, for coordinating their agency’s use of AI, promoting AI innovation, managing risks from the use of AI, and carrying out the agency responsibilities defined in Section 8(c) of Executive Order 13960 and Section 4(b) of Executive Order 14091.”

The timeline for following this order is eye-popping. Federal agencies must designate a CAIO by May 27, have a compliance plan by September 24, and implement AI Governance by December 1, 2024. This is an amazingly tight timeline even by private-sector standards, let alone for the Government, where it qualifies as ludicrous speed, so a Cannonball Run-style race to hire, plan, and implement is on. This Federal mandate for the Chief AI Officer and AI safeguards will further bolster support for similar roles and responsibilities within the private sector.

Here’s the detailed timeline for the OMB memo and the requirements that must be met by each date:

  • Within 60 days (by May 27, 2024):
    • Each agency must designate a Chief AI Officer (CAIO)
    • Each CFO Act agency (which includes the DoD and DoE) must convene an AI Governance board
  • Within 180 days (by September 24, 2024):
    • Each agency must report a plan for compliance with the memo (and do so every two years thereafter until 2036)
  • Agencies have until Dec 1, 2024 to implement AI safeguards, including:
    • Risk management and termination of non-compliant AI
    • Minimum practices for either safety-impacting or rights-impacting AI, including:
      • Complete an AI impact assessment
      • Test AI for performance in a real-world context
      • Independently evaluate AI
      • Conduct ongoing monitoring
      • Regularly evaluate risks from the use of AI
      • Mitigate emerging risks to rights and safety
      • Ensure adequate human training and assessment
      • Provide additional human oversight, intervention, and accountability
      • Provide public notice and plain-language documentation
      • Identify and assess AI’s impact on equity and fairness, and mitigate algorithmic discrimination
      • Consult and incorporate feedback from affected communities and the public
      • Conduct ongoing monitoring and mitigation for AI-enabled discrimination
      • Notify negatively affected individuals
      • Maintain human consideration and remedy processes
      • Maintain options to opt out of AI-enabled decisions
  • On an annual basis:
    • Agencies must individually inventory and report all of their AI use cases (a sketch of what such an inventory record might look like follows this list)
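
To make the annual inventory and the December 1 safeguard checklist more concrete, here is a minimal sketch of how an agency or enterprise might track each AI use case against those requirements. It is an illustrative assumption only: the AIUseCase record, the SAFEGUARDS list, and the compliance_report helper are hypothetical names, not the official OMB schema or reporting format.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical safeguard checklist, paraphrased from the OMB items above;
# this is not the official M-24-10 schema.
SAFEGUARDS = [
    "impact_assessment_completed",
    "real_world_testing_completed",
    "independent_evaluation_completed",
    "ongoing_monitoring_in_place",
    "human_oversight_assigned",
    "public_documentation_published",
    "opt_out_available",
]

@dataclass
class AIUseCase:
    name: str
    accountable_owner: str                     # e.g., the CAIO or a named delegate
    rights_or_safety_impacting: bool
    safeguards: dict[str, bool] = field(default_factory=dict)

    def open_items(self) -> list[str]:
        """Return the safeguards not yet satisfied for this use case."""
        return [s for s in SAFEGUARDS if not self.safeguards.get(s, False)]

def compliance_report(use_cases: list[AIUseCase], deadline: date = date(2024, 12, 1)) -> None:
    """Print rights- or safety-impacting use cases that still have open safeguards."""
    print(f"{(deadline - date.today()).days} days until the {deadline} safeguard deadline")
    for uc in use_cases:
        if uc.rights_or_safety_impacting and uc.open_items():
            print(f"- {uc.name} (owner: {uc.accountable_owner}): missing {', '.join(uc.open_items())}")

if __name__ == "__main__":
    screening = AIUseCase(
        name="AI-based pre-employment screening",
        accountable_owner="Chief AI Officer",
        rights_or_safety_impacting=True,
        safeguards={"impact_assessment_completed": True},
    )
    compliance_report([screening])
```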

Is AI Governance implementation really possible in 2024? Yes.

Really. AI Governance can be implemented within the timelines mandated by the OMB, given the right accountability, minimum viable governance, and supporting AI expertise and governance structures.

Based on my experience implementing AI Governance with Fortune 500 companies, here are three straightforward steps that will help any organization meet the December 1 deadline:

  1. Identify who will be accountable for and who will drive AI governance across the organization - consider a Chief AI Officer (a simple ownership check is sketched after this list).
  2. Begin implementing Minimum Viable Governance (MVG) immediately (check out my MVG framework here).
  3. In parallel, develop additional organizational accountability structures, such as AI Centers of Excellence and Governance Committees.
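
As a rough illustration of step 1, here is a minimal sketch of the kind of ownership check an organization could run against its portfolio of AI initiatives. The Initiative record, the UNASSIGNED sentinel, and the example initiatives are hypothetical assumptions for illustration; they are not part of the MVG framework linked above.

```python
from dataclasses import dataclass

# "don't know" mirrors the most worrying answer in the ModelOp poll cited earlier.
UNASSIGNED = "don't know"

@dataclass
class Initiative:
    name: str
    accountable_executive: str = UNASSIGNED  # e.g., "Chief AI Officer"

def ownership_gaps(initiatives: list[Initiative]) -> list[str]:
    """Return the names of initiatives with no designated accountable executive."""
    return [i.name for i in initiatives if i.accountable_executive == UNASSIGNED]

if __name__ == "__main__":
    portfolio = [
        Initiative("Customer support copilot", accountable_executive="Chief AI Officer"),
        Initiative("AI-based pre-employment screening"),
    ]
    print("Initiatives without an accountable owner:", ownership_gaps(portfolio))
```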

Conclusion: The Chief AI Officer Cometh

AI gone wrong is a ticking time bomb, and there needs to be one throat to choke. The best way to balance innovation and risk mitigation is with a Chief AI Officer.
