
Model Assessment and Model Traceability

Model assessment and model traceability are two crucial steps when deploying a machine learning model. Model assessment ensures that the model is running accurately and efficiently, while model traceability deals with the history of the machine learning model. These two components are critical for deployment, but how can we make sure we are successful in these areas? Today we are going to dive into model assessment and model traceability and discuss why these processes matter.

 

Let’s begin with model assessment. There are two main pieces to assessing a machine learning model. The first is model performance: in essence, whether the math is doing its job. The data science team verifies this in a couple of different ways. One way is with Receiver Operating Characteristic (ROC) curves. An ROC curve plots the true positive rate against the false positive rate across classification thresholds, and the Area Under the Curve (AUC) it encloses measures the model’s degree of separability: the higher the AUC, the better the model distinguishes between classes.
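As a minimal sketch of the idea, AUC can be computed with no libraries at all using the rank-based (Mann–Whitney) formulation: it is the probability that a randomly chosen positive example scores higher than a randomly chosen negative one. The function name `roc_auc` and the toy labels below are our own illustration, not from the post:

```python
def roc_auc(y_true, scores):
    """AUC as the probability that a random positive outranks a random negative.

    y_true: iterable of 0/1 class labels; scores: model scores per example.
    """
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    # Count pairwise wins; a tied pair counts as half a win.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

A perfect classifier would score every positive above every negative and return 1.0; a random one hovers around 0.5.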

 

Another way to ensure the math is doing its job is with a confusion matrix. Confusion matrices are only suitable for classifiers, not regressors; in other words, a confusion matrix can only be used for models that predict class labels, not numbers. A confusion matrix shows how often the model is right and how often it is wrong, and it can be used to compute the model’s true positive and false positive rates. These checks allow data scientists to confirm the model is predicting what it should, and give them high confidence when scoring a new line of data.
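To make the tallying concrete, here is a small, assumption-laden sketch for the binary case (label 1 taken as the positive class; the function names and toy labels are ours, not from the post):

```python
def confusion_matrix(y_true, y_pred):
    """Tally a 2x2 confusion matrix for binary labels (1 = positive class)."""
    counts = {"tp": 0, "fp": 0, "fn": 0, "tn": 0}
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 1:
            counts["tp"] += 1        # predicted positive, actually positive
        elif t == 0 and p == 1:
            counts["fp"] += 1        # predicted positive, actually negative
        elif t == 1 and p == 0:
            counts["fn"] += 1        # predicted negative, actually positive
        else:
            counts["tn"] += 1        # predicted negative, actually negative
    return counts

def rates(c):
    """True positive rate (recall) and false positive rate from the counts."""
    tpr = c["tp"] / (c["tp"] + c["fn"])
    fpr = c["fp"] / (c["fp"] + c["tn"])
    return tpr, fpr

c = confusion_matrix([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(c, rates(c))
```

The same four counts also yield accuracy, precision, and the other familiar classification metrics.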

 

The second piece of model assessment is model telemetry, which speaks to the true operational characteristics of a model: its latency and throughput, how much memory it uses, whether it is limited by data throughput or other constraints, and if so, whether it can be improved with additional computational resources. Answering these questions helps assure the IT and application teams that the model is running efficiently.
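The numbers above can be gathered with nothing more than the standard library. This is a rough sketch, not a production telemetry system: `profile_model`, the dummy scoring function, and the chosen metrics are all our own placeholders.

```python
import statistics
import time
import tracemalloc

def profile_model(predict, inputs):
    """Measure per-call latency, overall throughput, and peak memory
    while running `predict` over a batch of inputs."""
    latencies = []
    tracemalloc.start()
    start = time.perf_counter()
    for x in inputs:
        t0 = time.perf_counter()
        predict(x)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "median_latency_s": statistics.median(latencies),
        "throughput_per_s": len(latencies) / elapsed,
        "peak_memory_bytes": peak_bytes,
    }

# A stand-in "model": score a feature vector by its sum of squares.
stats = profile_model(lambda x: sum(v * v for v in x), [[0.1] * 100] * 500)
print(stats)
```

In practice these figures would be exported to a monitoring system rather than printed, but the same three questions (how fast, how many, how much memory) are what the IT team needs answered.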

 

Next, we are going to dive into what it means to trace a model. Many words are used to describe model traceability, such as history, lineage, and metadata. Model traceability refers not only to the history of the model (who created it and when), but also to how the model was trained, the context in which it was used, and any aspects of errors, acceptance, and performance that need to be captured and understood.

 

Model traceability is about being able to trace exactly where a model has been and what it has been doing, both backwards in time and across the various environments it may pass through, such as creation, pre-production, and production. You can think of it as a “history” or “audit trail” for the model. This capability is mainly useful for understanding what changes a model has undergone, which is especially valuable for long-running or frequently updated models. Model traceability is also useful for legal and compliance reasons, and for related questions of security and ownership. For example, traceability lets you know exactly which actions specific people took on the machine learning model.
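An audit trail of this kind boils down to an append-only log of who did what to the model, when, and in which environment. The following is a hypothetical sketch (the class name `ModelAuditTrail`, the actors, and the event fields are all our own invention, not any particular product’s API):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelAuditTrail:
    """Append-only event log: who did what to the model, when, and where."""
    model_name: str
    events: list = field(default_factory=list)

    def record(self, actor, action, environment):
        self.events.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "environment": environment,
        })

trail = ModelAuditTrail("churn-model")
trail.record("alice", "trained on Q1 data", "creation")
trail.record("bob", "promoted after acceptance tests", "pre-production")
print([(e["actor"], e["environment"]) for e in trail.events])
```

A real system would persist these events immutably and attach them to a specific model version, but even this simple log answers the compliance question of which actions specific people took on the model.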

 

Proper model assessment and traceability capabilities lead to a more organized and efficient model deployment process. For enterprise-grade model operations, systems must provide insight into each of these areas, integrating data and insights across the Model Development Life Cycle. It’s imperative to build systems for model assessment and traceability that are agnostic to modeling environments and can capture data from the myriad systems used in the deployment of models. FastScore offers microservices such as FastScore Engine and Manage that incorporate model assessment and traceability into the deployment process natively, making the model easy to keep track of throughout production. On top of this, FastScore adds efficiency to the entire model operations workflow, allowing machine learning models to be deployed and optimized 10x faster than typical enterprise processes.
