Where You Are Now
Once initial pilot projects have shown promise, teams need to put them into production. At this phase, wave two, data scientists typically rely on a range of manual approaches for deploying models.
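To make "manual deployment" concrete, here is a hypothetical sketch of what it often amounts to in practice: serializing a trained model to a file by hand and loading that file on the serving side. The ToyModel class, file name, and weights below are invented for illustration; a real team might pickle a trained scikit-learn estimator instead.

```python
import pickle
import tempfile
from pathlib import Path

class ToyModel:
    """Stand-in for a trained model (invented for this sketch)."""
    def __init__(self, weight: float, bias: float):
        self.weight = weight
        self.bias = bias

    def predict(self, x: float) -> float:
        return self.weight * x + self.bias

# "Deployment" is often just serializing the model and copying the file by hand.
model = ToyModel(weight=2.0, bias=1.0)
artifact = Path(tempfile.gettempdir()) / "churn_model_v1.pkl"
artifact.write_bytes(pickle.dumps(model))

# On the serving side, someone loads the same file and calls predict().
restored = pickle.loads(artifact.read_bytes())
print(restored.predict(3.0))  # 2.0 * 3.0 + 1.0 = 7.0
```

Every step here, from choosing a file name to copying the artifact to a server, is a manual, per-model effort, which is exactly why this approach stops scaling as model counts grow.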
Soon, teams realize that the number of models being developed, and ultimately needing deployment, can grow rapidly. Often, one or two business units in particular will push to increase the number of models in production.
For many teams, manual deployment processes become a significant obstacle to progress. Onboarding a single model can stretch into months, and the longer that process takes, the longer it takes to start realizing value and recouping investments.
These manual efforts also put a hard ceiling on the number of models that can be deployed. It's also critical to underscore that the work isn't done once models are deployed: teams need to monitor them continuously to ensure performance, compliance, and integrity issues don't arise. Over time, redundant manual effort consumes an ever-larger share of the model development budget, and at some point these costs become unsustainable.
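As a minimal sketch of what that ongoing monitoring involves, the function below compares a model's recent accuracy against a threshold and flags it for review when performance degrades. The threshold value and the sample predictions are invented for illustration; production monitoring would also cover drift, fairness, and compliance checks.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_model_health(predictions, labels, threshold=0.90):
    """Flag the model for review when recent accuracy drops below the threshold."""
    acc = accuracy(predictions, labels)
    return {"accuracy": acc, "needs_review": acc < threshold}

# A week where the model starts to drift: 7 of 10 recent predictions are correct.
report = check_model_health(
    predictions=[1, 0, 1, 1, 0, 0, 1, 1, 0, 1],
    labels=[1, 0, 1, 0, 0, 1, 1, 1, 1, 1],
)
print(report)  # accuracy is 0.7, so needs_review is True
```

Run manually and per model, checks like this quickly become the kind of redundant effort the text describes; automating them is part of what later waves address.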
How to Prepare
Within each business unit, teams search for tools that enable more rapid model deployment and support ongoing, data science-focused performance monitoring after deployment. At this phase, many teams turn to machine learning operations (MLOps) solutions and other tools from hyperscalers and data science vendors, including Amazon SageMaker, Dataiku, DataRobot, Fiddler, Google AI, Iguazio, and Microsoft Azure.
These offerings can help data scientists during model development and facilitate some of the effort of deploying models into production. The problem is that these tools work at the department or team level; they're not equipped to provide a unified view across the enterprise. Once models are in production, this lack of visibility and standardization across the organization can create a range of problems.
Left ungoverned, models can introduce significant risks, including non-compliance with industry regulations and failure to meet consumers' expectations for how AI models should be used. This lack of visibility also makes it difficult for executives to answer even basic questions about the models in production: where they're running, how they're performing, and so on. As a result, leaders are ill-equipped to track, let alone optimize, the return their AI investments are delivering.
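To make the visibility gap concrete, here is a hypothetical sketch (all model names, business units, and owners invented) of the kind of enterprise-wide model inventory that department-level tools typically lack: a single record of what is deployed, where, and who owns it.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in a central, enterprise-wide model inventory."""
    name: str
    business_unit: str
    environment: str  # e.g. "dev", "staging", "production"
    owner: str

# Illustrative inventory spanning several business units.
inventory = [
    ModelRecord("churn_model_v1", "Marketing", "production", "alice"),
    ModelRecord("fraud_score_v3", "Risk", "production", "bob"),
    ModelRecord("upsell_ranker", "Sales", "staging", "carol"),
]

def production_models(records):
    """Answer the basic executive question: what is running, and where?"""
    return [(r.name, r.business_unit, r.owner)
            for r in records if r.environment == "production"]

print(production_models(inventory))
```

Without a shared record like this, each team answers only for its own models, and no one can answer for the enterprise as a whole.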
For these reasons, teams looking to move to wave three begin evaluating ModelOps platforms, which enable them to track, govern, and scale AI model operations across the entire enterprise.