Is the best practice for Enterprise AI to keep ModelOps independent from data science in both practice and platform? [Part 2]

By Stu Bailey, Co-founder and Chief Enterprise AI Architect at ModelOp

Back again to our question from my previous article: “Is the best practice for Enterprise AI that ModelOps must be independent from data science in both practice and platform?”

The comments and discussion on the previous post have been very interesting and consistent with our experience.

We at ModelOp work with many very large, non-digital-native enterprises that are grappling with how to govern and scale their AI initiatives, so we are on the front lines of this struggle. Here is what we’re seeing:

  • The status of data science investments is becoming a board-level concern, and boards are asking hard questions, e.g., “Are we capable today of conducting a full audit of all models in production, covering cost, performance, provenance, and lineage across business, risk, security, technology, and data science metrics?”
  • While initial data science investments have shown great ability to deliver tangible value, serious challenges have surfaced around operational complexity and risk management at scale.
  • Organizations have made large, multi-year investments to support AI, including building vast data lakes and pipelines, hiring data scientists, and acquiring multiple data science development and execution platforms.
  • Models are recognized as the concrete, identifiable enterprise assets that result from data science investments.
  • ModelOps is increasingly recognized as a foundational pillar of an Enterprise AI strategy [“Market Guide for AI Trust, Risk and Security Management,” Avivah Litan et al., September 1, 2021].

In light of these dynamics, the conversation around ModelOps is intensifying. One trend we’re seeing is that DIY ModelOps projects are losing support to vendor-provided solutions, because few enterprises can build and maintain a competitive ModelOps platform at any reasonable cost. Meanwhile, the number of commercially available offerings is growing. Virtually every data science platform vendor now claims it will be able to deliver all the necessary operational (non-data science) governance and support across all enterprise functions for ALL models – someday. Today, however, virtually all such offerings are limited to MLOps capabilities and to models developed by data scientists within that specific platform. Even if data science platform vendors did bring comprehensive ModelOps capabilities to market, would it be a good idea to use them? From what we see, the answer is decidedly “NO,” for several reasons:

  • Data science is evolving rapidly, and no one platform does it all. Data scientists need the freedom to use the best tools and techniques for each use case. That’s why large enterprises report using at least 4 data science platforms, with some using 7 or more – and there’s no evidence that this trend will reverse. Tying ModelOps to a particular data science platform is practically an invitation to vendor lock-in down the road.
  • Machine learning models are very important, but they are by no means the only type of model in use, nor will they be. Our customers and prospects recognize that “models” can be machine learning, deep learning, natural language processing, big-data-centric, small-data, rules-based, or legacy models – as well as model types yet to be discovered. This further distinguishes ModelOps, an enterprise capability, from MLOps, a tool for data science.
  • Data science is about innovation, freedom and exploration.  ModelOps is about codifying and automating enterprise templates for policy, process and accountability.  There’s fundamental dissonance in trying to serve both of those objectives with a single system.
  • Enterprises need a single, centralized ModelOps platform that is equally effective for all models from all sources – that is, completely agnostic to where models come from and where they’re deployed. It’s hard to imagine a vendor of a data science development/execution platform setting aside its own business objectives (i.e., market dominance) and being equally friendly to all models and all deployment environments.
  • Risk management requires enforcing compliance, which in turn requires separating those doing development – data scientists – from those managing risk. Using data scientists and their platforms to enforce compliance is like letting students grade their own papers.

So, back again to our question: “Is the best practice for Enterprise AI that ModelOps must be independent from data science in both practice and platform?” Everything in our experience says yes – keeping ModelOps independent means data science freedom and lower enterprise risk, at a fraction of the cost.
