By Stu Bailey, Co-founder and Chief Enterprise AI Architect at ModelOp
In the Enterprise, there is currently a lot of focus on operationalizing AI. Numerous organizations report debilitating challenges in getting their data science models out of the Business Units, where most model development takes place, and into Enterprise production with Enterprise-level visibility, operational efficiency, repeatability, risk management, and governance at scale. These challenges are mounting just as senior management becomes more insistent on deep visibility into the data science process and the returns on data science investments. As a result, vendors of data science platforms across the board – including SageMaker, DataRobot, Domino Data Lab, AzureML, GoogleAI and others – have been racing to add “MLOps” features to their products and platforms (where ML stands for “machine learning”).
On the surface this may seem like a positive development. A closer look, however, reveals two issues:
- MLOps focuses only on machine learning (ML) models and thus excludes the many other model types in use in large enterprises; and
- Unlike ModelOps, MLOps does not adequately address the Enterprise risk, business, or technical challenges for any type of model, including ML models.
As a result, the moves by data science platform vendors to address operationalization with MLOps features are confusing Enterprises that need to make critical decisions about how to close widening operational gaps. [See here for a discussion of MLOps vs ModelOps.]
The 800-pound gorilla in the room is a question: Will any given data science platform ever provide the best solution for meeting Enterprise AI production requirements across technology, risk, and business needs? Or is the best practice for Enterprise AI to keep ModelOps independent from data science in both practice and platform?
BTW, if you are not sure what ModelOps is, you can see here, here, and here what Gartner and others are saying. Put simply, ModelOps is an enterprise capability that enables organizations to scale and govern their AI initiatives. A ModelOps capability achieves this by giving business, risk, and technology teams at the Enterprise level the ability to define, automate, and audit every model’s entire production life cycle as those models drive decisions and value while consuming resources and bearing risk in the Enterprise. While ModelOps certainly touches data science and model creation, ModelOps focuses exclusively on models that make it out of the Business Units and into Enterprise production. In that light, MLOps is definitely not ModelOps.
Back to the question posed above: Is the best practice for Enterprise AI to keep ModelOps independent from data science in both practice and platform?
It’s an important question, for several reasons:
- Lots of big organizations are just now asking the question as their focus shifts from data science initiatives within Business Units to getting models into Enterprise production with appropriate Enterprise-level technical, business, and risk controls;
- The answer touches every constituency in the enterprise tied to Enterprise AI initiatives, which is pretty much everyone;
- Significant investments continue to be made in data science initiatives and data science platforms, but often without first answering the question of Enterprise AI operation and control; and
- Few organizations have a coherent strategy around who is responsible for ModelOps and how the function is structured.
That last point is DRAMATIC, especially for non-digital-native companies. The FANGs (Facebook, Amazon, Netflix, Google) and their cousins in industries like ecommerce and media have a much easier time operationalizing AI than more traditional large enterprises do, in large part because, as the song says, the digital natives were “Born This Way”.
So, back again to our question: “Is the best practice for Enterprise AI that ModelOps must be independent from data science in both practice and platform?”
What do you think? Please take some time to really consider the question before next week’s post, where we will dig into the answer.