Technical Challenges of Model Deployment

Deploying analytic models can be a long, slow process with many obstacles along the way. Many models are abandoned before they ever reach production because of inefficiencies that slow down or halt the process. To overcome the challenges of model deployment, we first need to identify the problems and understand what causes them. Some of the top technical challenges organizations face when deploying a model into production are:


1. The model is not compatible with the production environment.

The first deployment challenge we will cover is compatibility between the environment where a model is created and the production environment. Data scientists today use a variety of different tools to solve critical business problems. While this variety empowers the data science team, each new tool and language they adopt must also be supported by IT in order to deploy the model. This often results in models being recoded into a different language because the original one cannot run in production, which lengthens cycle times and introduces potential inconsistencies between the original and translated models. Monolithic platforms simplify some of this challenge and help IT, but they may keep the data science team from adopting certain techniques. It’s a fine line between keeping the process efficient and limiting what the data science team can achieve.

Solution: The challenge of model compatibility across the analytic lifecycle can be handled with an agnostic scoring engine. Agnostic scoring engines take models created in any language and deploy them into production without constraint.
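The idea behind an agnostic engine is that it only depends on a small execution contract, not on how the model was built. Here is a minimal sketch of that idea; the `action` contract and the `load_model` helper are hypothetical illustrations, not the interface of any particular product.

```python
# Hypothetical sketch of an "agnostic scoring engine" contract:
# the engine only requires that a model expose an action(record)
# callable, regardless of the language or library used to train it.

def load_model(model):
    """Validate that a model conforms to the engine's minimal contract."""
    if not callable(model.get("action")):
        raise ValueError("model must define an action(record) callable")
    return model

# A toy model: it could equally have been exported from R, Java, etc.,
# as long as the exported artifact honors the same interface.
toy_model = {
    "action": lambda record: {"score": 2 * record["x"] + 1},
}

model = load_model(toy_model)
print(model["action"]({"x": 3}))  # {'score': 7}
```

Because the engine validates only the contract, swapping in a model written in another language is a packaging problem rather than a recoding problem.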


2. The model is not portable.

Another challenge of model deployment is lack of portability, whether you’re moving between environments during the deployment process or shifting applications and workloads to the cloud. Often a problem with legacy analytic systems, a lack of portability limits where businesses can deploy their models. Without the capability to easily migrate a software component to another host environment and run it there, organizations can become locked into a particular platform. This, again, creates barriers for data scientists when creating models.

Solution: Containerization technologies, such as Docker, can help solve the application portability challenge. Containerized analytic engines capture all of the environmental dependencies for the analytic workload, providing a portable, lightweight “image” that can be deployed anywhere.
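As a concrete illustration, a containerized scoring service might be described by a Dockerfile like the following. The base image, file names, and port are hypothetical placeholders, not specifics of any vendor’s engine:

```dockerfile
# Hypothetical image for a containerized scoring service.
# Everything the workload needs (runtime, libraries, model code)
# is captured in the image, so it runs the same anywhere.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY model.py scoring_server.py ./
EXPOSE 8080
CMD ["python", "scoring_server.py"]
```

Once built, the same image can be run on a laptop, an on-premises cluster, or any cloud that supports containers, which is what removes the platform lock-in described above.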


3. The organization has a monolithic architecture.

Since models are constantly evolving, the way we deploy them should be able to evolve, too. Monolithic, locked-in platforms often limit what organizations are able to do, or bundle services they don’t need. Businesses should be able to apply the microservices that fit their specific needs and avoid the ones that don’t. Monolithic architectures also constrain the options companies have for deploying models. Avoiding a monolithic approach gives organizations more freedom in the models they can put into production and allows them to interchange applications when necessary.

Solution: Containerization technologies provide a microservices infrastructure, allowing organizations to adopt native microservice software to meet their changing needs. A microservices architecture also limits any service failure to an isolated component, and enables an organization to leverage the on-demand, distributed nature of modern cloud applications.


4. The model does not scale.

The next challenge of deploying models is making sure they can scale to meet increases in performance and application demand in production. Data used in the model creation environment is relatively static and at a manageable scale. As the model moves into production, it is typically exposed to larger data volumes and different data transport modes. The application and IT teams will need tools to both monitor and address the performance and scalability issues that emerge over time.

Solution: Questions of scalability can be addressed by adopting a consistent, microservices-based approach to production analytics. Teams should be able to quickly move models from batch to on-demand to streaming execution via simple configuration changes, matching analytic execution to application requirements. Similarly, teams should have ways to scale compute and memory footprints to support more complex workloads. Finally, the production environment should allow monitoring of all of these operational details, enabling informed decisions to meet SLAs.
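The "batch to on-demand via configuration" point can be sketched as follows. The `mode` key and the dispatch logic are hypothetical illustrations of the pattern, not a real product configuration:

```python
# Sketch: switching a model between batch and on-demand execution
# through configuration alone, with the scoring logic unchanged.

def score(record):
    """The model's scoring function, identical in every mode."""
    return {"score": record["x"] * 0.5}

def run(config, records=None):
    """Dispatch the same model according to a (hypothetical) mode setting."""
    if config["mode"] == "batch":
        # score a whole dataset in one pass
        return [score(r) for r in records]
    if config["mode"] == "on-demand":
        # return a handler that scores one request at a time
        return lambda record: score(record)
    raise ValueError(f"unknown mode: {config['mode']}")

batch_results = run({"mode": "batch"}, [{"x": 2}, {"x": 4}])
print(batch_results)  # [{'score': 1.0}, {'score': 2.0}]

handler = run({"mode": "on-demand"})
print(handler({"x": 10}))  # {'score': 5.0}
```

Because only the configuration changes between modes, the model itself never has to be rewritten as application demand shifts.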


Although the challenges of model deployment can seem overwhelming, the good news is that all of these issues can be resolved. At Open Data Group, we help companies achieve model deployment efficiency and overcome these issues with our containerized analytic engine, FastScore. FastScore solves all of these common challenges, as well as problems that are specific to certain organizations. To learn more about FastScore, visit our product page.
