Good Decisions: A Monthly Webinar for Enterprise AI Governance Insights

AI Nutrition Labels in Action: How ModelOp Operationalizes Model Cards

Model cards are the new nutrition labels for AI—learn how to move from transparency to action in this 30-minute session. You'll discover how healthcare organizations are using model cards to improve transparency, manage risk, and scale AI safely—with insights from ModelOp experts.

Download the slide deck.

Key Takeaways

Just like nutrition labels transformed food transparency by providing standardized information about ingredients and nutrients, model cards serve as "nutrition labels" for AI systems. They provide essential transparency into what AI models contain, how they work, their intended use cases, limitations, and potential risks, enabling stakeholders to make informed decisions about AI deployment.

Model cards are a breakthrough for AI transparency—offering critical insights into how models were trained, their intended use, and their limitations. But transparency alone isn’t enough. To drive real-world results in healthcare, organizations need to move beyond reading model cards to actively using them for governance, compliance, and risk management.

Is your organization ready to turn model card transparency into operational advantage?

Join Dave Trier, VP of Product at ModelOp, and Amanpreet Kaur, Ph.D., Implementation Engineer, for a 30-minute session focused on bridging the gap between documentation and action. You’ll learn how leading healthcare organizations are importing model cards, integrating them into compliance workflows, and automating governance processes to scale AI safely and efficiently. See how to turn transparency into a practical, repeatable process that accelerates responsible AI deployment.

🔍 What You’ll Learn:

  • How to import and evaluate model cards—including those from CHAI and other healthcare standards bodies
  • Why transparency isn’t enough—and how to use model cards in real governance workflows
  • How to centralize model tracking for compliance, audits, and risk reporting
  • Ways to automate governance processes without slowing down AI adoption
  • What leading healthcare teams are doing to scale AI responsibly and efficiently

Register for the series.

Transcript

1. Introduction to AI Nutrition Labels

Welcome, everyone, and thanks for joining today's session, AI Nutrition Labels in Action: How ModelOp Operationalizes Model Cards. With us as guests from ModelOp today are Dave Trier and Amanpreet Kaur, who are experts with model cards and have a lot of great knowledge to share.

I'm excited to talk about model transparency today.

It's a very timely topic, and it matters now more than ever. We are seeing increasing pressure from a variety of stakeholders to explain how AI models are built, how they behave, and who they impact.

And without standardization, every model explanation can turn into a custom, manual effort that can slow down teams and introduce risk. So that's why today we're gonna talk about model cards, think of them as AI nutrition labels, and how they can help your teams consistently communicate what a model is, what it does, and how it should be used.

Today, we're gonna use examples from health care because model cards are very instrumental in health care. But this concept applies across all industries.

So if your organization is using AI to make decisions, model cards are a great way to bring transparency and trust to scale.

2. Agenda Overview

Alright. Here's our agenda. It's pretty straightforward. We're gonna talk about why model cards are important, what they are, and best practices. And most importantly, we're gonna provide a demo of how model cards can be operationalized with ModelOp. And then without further ado, Dave Trier is going to kick us off and talk about model cards.

Awesome. Thank you so much. Audio okay, Jay?

Audio is great, Dave.

Excellent. Thank you so much, and thanks all for joining. Good morning. Good evening. Hopefully, you are having a fantastic Wednesday.

As Jay mentioned, model cards are a very hot topic. We are gonna talk a bit around the health care side, but they are absolutely applicable to all different industries. Now before we get into model cards, which many of you may have heard of, I thought it would be good to start with an analogy, and that is that of nutrition labels. You see these every single day on the food that you eat, the different types of both packaged goods and others, and they're really helping to describe what is in the food that you're about to consume.

Now if you think about it, everybody's seen nutrition labels for years and years. Right? So you probably can't even imagine a world before nutrition labels. But just imagine, before they actually came about, how did consumers understand what they were about to put into their bodies?

Right? How did they understand what the ingredients were, what the key nutrients were, etcetera? That did not exist.

And so as consumers started to say, alright, I really want to know what it is I'm about to eat, that's when these nutrition labels first came about, to establish transparency first. So you hear us talk about this: they provided transparency into, alright, here are all the different ingredients that are going into this particular food I'm about to eat. Here are the key nutrients that are part of it.

Right? Here are the different serving sizes and how they relate to what I should consume. So it gives you, again, that transparency around what is in these different foods. But then second, think about it: there are tens of thousands of different foods.

Right? There are tens of thousands of different companies that are producing these different types of food. And if you are trying to create these nutrition labels to get transparency, but you've got tens of thousands of different companies trying to create and give you this information, you need to drive consistency.

So that's the second key fact around nutrition labels: transparency, and then driving consistency, so that it doesn't matter whether a food is produced by vendor one or vendor two, you know exactly what is part of it and what the breakdown is.

And by doing that, by giving transparency and consistency to consumers, then you can help them establish intentionality.

Consumers can have a very well-intentioned understanding of, yes, I do want to eat this food at this time because I know exactly what the breakdown is, the ingredients. It's aligned to my dietary restrictions or my regimen overall, and I can make that conscious decision, that intentionality, to say, yes, this is indeed aligned with what I'm trying to do. So nutrition labels went a long way to help establish that transparency, consistency, and intentionality for consumers around food.

3. Transparency in AI Systems

Well, AI is in the same place. Right? There are, believe it or not, tens of thousands of different AI systems, models, solutions, software, etcetera. There's a lot of different types of AI out there, and we need first and foremost transparency into what that AI entails, what data is part of it, and what some of the key metrics are.

Was it tested? Was it tested for bias? Everything that you want to know about it in order to have an understanding of what it's doing, you need to have that transparency so that you are comfortable and can trust in this AI solution before you use it. So, again, AI needs first that transparency.

But then second, as I mentioned, there are tens of thousands of different AI solutions out there. Right? And so how do you have a consistent way to understand everything that's going into that AI solution, etcetera.

And then lastly is the intentionality. Getting that transparency and consistency allows the different users, the deployers or consumers of this AI, to make the conscious decision to say, yes, this AI solution meets my needs. I can trust it because I know everything about it, and I'm going to use it for the intended usages that I have in mind for my particular organization, department, process, etcetera.

So, again, just as nutrition labels help to set up that transparency, consistency, and intentionality, model cards have been paramount in helping to establish that same transparency, consistency, and intentionality for AI overall. Now if you go to the next one, this is important. I know I kept saying transparency, transparency. But if you look at this, this was a study done by the Coalition for Health AI, and kudos to that group, they're called CHAI, for helping to drive some of this transparency in model cards.

4. The Need for Model Cards

But one of the key stats that they reported is that over ninety percent of their members said that transparency was paramount, incredibly important to help adopt AI. Right? And a key mechanism to help establish this transparency is using these model cards. You can see that top metric is calling out, alright:

We should make it mandatory to have model cards to help document AI performance.

So as a background, as a metric, again, transparency is paramount to trust and adopt AI. And this model card, which we'll talk about in a second, is really a key, critical initiative to help establish that transparency across all the different vendors and types of AI. So if you take a look at the next one, what is this model card, Dave? What is this model card that you speak of?

5. Understanding Model Cards

Well, again, think of it as a nutrition label for AI. It helps to establish: here are the key ingredients within it. Here are the key metrics as part of it. Here are the tests that we have run.

Here's what we called out in terms of the risk and limitations so that you have that understanding of everything about this AI system that you're about to use. The top portion gives you a little bit of an overview around what is in this particular AI solution. What's the purpose? What's the description?

That sort of thing.

The second area is intended use. That's that intentionality that I mentioned. Right? So here is the specific scenario where this AI solution is meant to be used.

That's incredibly important in AI. Yes, there's lots of talk around, okay, can we have general purpose AI?

But at the end of the day, an AI solution was developed with a specific purpose in mind. It's not gonna be great at other purposes. It's just not. That isn't what it was designed and intended to be used for.

So it's important to understand that intended use.

The third section is going into bias and risk. Right? Obviously, there's been lots and lots of commentary and discussion around bias. It's absolutely something that needs to be understood: is there any partiality, any disparity amongst protected classes, in an AI solution?

So those need to be known, documented, thoroughly tested, and flagged with any warnings as they relate to bias or other risks. And then you get into the center section, some more of the technical information, if you will. So what is some of the key metadata? Think about the model type:

Is it an LLM, or is it, you know, an NLP model or a regression, if you will? Then into the data facet. Right? So understanding what data is being used: what data is being used to train it if it's a trained model, what data is being used or needs to be used as the input to the model execution, and what does the output look like?

What do you get back from it? It's imperative to understand the relation between the AI model and the data that's part of it. And, again, that data summary helps provide that information. But as you think about, especially with LLMs and generative AI, there are some restrictions around data.

Some of the broader-based LLMs are trained upon public data. Right? And you don't know if there are particular trademarks or other, you know, sensitivities around that particular data. So it's important to understand what the lineage of the data going into it is, from both an upfront and an ongoing perspective.

Security and maintenance: as you can appreciate, you wanna know everything about the AI solution. So you wanna understand the way that you are securing this, if you will, and the way that you're gonna maintain it on an ongoing basis. And then that bottom section is focused on metrics.

Metrics around performance, fairness, safety, reliability, some of the common terms that you'll hear over and over as you think about responsible or trustworthy AI. Now this is just one example of a model card. This is taken from the Coalition for Health AI, CHAI, if you will, because they've done a very nice job of trying to drive some of that consistency across the different AI solution developers and the users, the health care providers, etcetera, that are attempting to understand what these AI solutions are and then use them appropriately. This is one example of a model card.

It's just meant to represent a kind of baseball card view, a quick view, of what the AI solution is about. But the general concept of a model card doesn't have to be just one flavor like the Coalition for Health AI's. Actually, model cards came about from a paper put out by Google, and Hugging Face and some of the other AI efforts have now developed their own model cards, if you will.

But the point is, it's meant to present a singular view, one kinda quick snapshot view, of everything about that AI solution.

Alright?
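
To make those sections a bit more concrete, here is a minimal sketch of how a model card's contents might be represented in code. The field names simply mirror the sections Dave walked through (overview, intended use, bias and risk, metadata, data, security and maintenance, metrics); they are illustrative only, not the CHAI schema or ModelOp's format, and the example values are placeholders.

```python
# Illustrative only: a possible in-code shape for a model card, mirroring the
# sections described above. Not the CHAI schema or ModelOp's internal format.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    overview: str                                       # purpose and description
    intended_use: str                                   # the specific scenario it is meant for
    bias_and_risks: list = field(default_factory=list)  # known limitations and warnings
    metadata: dict = field(default_factory=dict)        # e.g., model type: LLM, NLP, regression
    data_summary: dict = field(default_factory=dict)    # training data, inputs, outputs, lineage
    security_and_maintenance: str = ""                  # how it is secured and maintained
    metrics: dict = field(default_factory=dict)         # performance, fairness, safety, reliability

# Placeholder values drawn loosely from the examples in this webinar.
card = ModelCard(
    name="Inpatient Risk of Fall",
    overview="Predicts the probability that a patient is at risk of a fall.",
    intended_use="Decision support for inpatient care teams; not for other settings.",
    bias_and_risks=["Document any disparities across protected classes here."],
    metadata={"model_type": "regression"},
    data_summary={"inputs": "patient health data", "output": "fall-risk probability"},
    security_and_maintenance="Describe access controls and the update cadence here.",
    metrics={"performance": "TBD", "fairness": "TBD"},
)

print(json.dumps(asdict(card), indent=2))
```

Whatever shape you choose, the point is the same as the nutrition label: one consistent structure that every AI solution fills out the same way.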

6. Who Benefits from Model Cards?

So you might ask me, Dave, if we go to the next one, alright, who uses these model cards? Well, there are actually a couple of different personas that could benefit from model cards.

It first starts with the solution developers and the solution owners. So if you are creating a certain AI solution, it helps you to document everything about this AI solution so that others can trust in it. Right? It comes back to the very beginning around transparency and trust.

If you provide this information, if you include all of the metrics, the data lineage, the testing that was done, etcetera, it helps them to trust what you have developed. And then it helps the solution owners that are going to implement it into your provider or other business process to, again, trust that they understand what it is and can put it into that process.

From the executive, line of business, or compliance perspective, it helps to provide that quick visibility into where AI is being used, what its purpose is, its limitations, its restrictions, and its known risks. Right? So, again, think of it as a quick way to establish a consistent understanding of the different AI solutions across the organization.

And then from a user perspective, it's about that transparency. Right? The clear understanding of that intended usage, the risks, the limitations, helping to instill that trust, and also helping with how to safely use it. Right?

So there are typically instructions around the safe usage of that AI solution overall. So again, model cards are a great construct for a number of different personas, helping to enable them to use the AI, but also enabling collaboration across those different groups. Now if you go to the next one, where does this often play in? Where do you use it in your process? Well, as I mentioned before, there can be a variety of different model sources.

7. Integrating Model Cards in Processes

You can develop your own internal models. You can purchase vendor models. You can have software with AI embedded; Epic in the health care space and many others are just some examples.

They are developing model cards. The Coalition for Health AI, or CHAI, if you will, is helping to drive some consistency around that model card. So you have some inputs, if you will, around where you can understand the particular information about each of these models. Now what happens is that if you're an organization and you want to use an AI solution, model cards are actually very helpful at different points throughout that process.

First is even at that initial intake. Right? So you have a business or solution owner that has this great idea: I'm going to use this AI solution to help with nurse triage, as an example. Fantastic.

As part of the intake process, you're able to identify exactly what your challenge is and what this intended AI solution is going to do. And from there, you can even start to collect the inputs that you need, which start to fill out some of that model card. From there, you move into more of the development phase where you actually have the model implementation.

Here's how I'm gonna implement it. As I said, you can build it. You can buy it. You can have software with AI embedded.

But this is where you start to actually further document the intended usages, the actual data that's involved, the metadata, and the risks. From there, very often, you go to a testing or validation phase. Right? You always wanna make sure that the AI solution is going to fit and work within your specific organization.

Most of them are not one size fits all. You need to do a little bit of tweaking in order to fit it to your specific organization. Now as you're doing those updates, as you're doing your validation and testing, you wanna have some of those results then further filling out that model card. Right?

And then as you move forward and get to a verified, tested, ready-to-promote-to-production state, you'll produce a final model card that states, okay, here's exactly what we're gonna be moving into production. Again, here are all those elements that I went through before. So again, those model cards are very helpful as the solution goes through that process, not only through production but also ongoing iterations, as the models are being updated or your vendor sends updates to the model, so that you can make sure those are populated in a consistent fashion.
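
One way to picture that progression is as a single record that each stage appends to: intake adds the business problem and intended use, development adds the implementation details and risks, validation adds test results, and promotion marks it production-ready. Here's a small sketch of that idea; the stage names and fields are assumptions for illustration, not a prescribed ModelOp workflow.

```python
# Hypothetical sketch: one model-card record enriched at each lifecycle stage.
# Stage names and fields are illustrative, not a prescribed ModelOp workflow.

def at_intake(card: dict) -> None:
    card.update({
        "stage": "intake",
        "business_problem": "Reduce manual effort in nurse triage",
        "intended_use": "Assist triage decisions; not a replacement for clinical judgment",
    })

def at_development(card: dict) -> None:
    card.update({
        "stage": "development",
        "metadata": {"model_type": "vendor model"},
        "data_summary": "De-identified patient intake data",
        "known_risks": ["Validate for the local patient population before use"],
    })

def at_validation(card: dict, results: dict) -> None:
    card.update({"stage": "validation", "test_results": results})

def at_production(card: dict) -> None:
    card.update({"stage": "production", "approved": True})

card = {"name": "Nurse Triage Assistant"}
at_intake(card)
at_development(card)
at_validation(card, {"bias_review": "completed", "performance_review": "completed"})
at_production(card)
print(card)
```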

If you go to the last one, so next slide, where a solution like ModelOp, or model cards in general, is particularly helpful is that, you know, it's very difficult to keep the pace when you have hundreds or thousands of use cases out there. So model cards, again, help to provide that consistent way to track everything about these AI solutions, to provide the transparency into here are all the ingredients, here are all the metrics, here are the risks, the limitations, etcetera. So it helps to ensure that you have that transparency for those different personas and then ultimately leads you back to intentionality, making sure that the solution or business owners that are adopting these AI solutions can make that conscious decision to say, yes.

This indeed helps to satisfy exactly what I'm trying to do from an AI solution perspective. So these model cards are very helpful in establishing that consistent, transparent, and intentional behavior throughout your organization.

8. Demonstration of ModelOp

So we're gonna set this up to show you a brief demonstration of how a solution like ModelOp can help to incorporate model cards into your day to day. So with that, I'm gonna turn it to my colleague, Aman, who's gonna walk you through a brief demonstration of how model cards can be enabled in a solution like ModelOp within seconds and help you take advantage of all those key benefits that we just discussed. With that, I'll turn it over to Aman.

Thank you, Dave, so much.

So hi. This is Aman, Amanpreet Kaur. I'm an implementation engineer at ModelOp.

And, like Dave said, he explained all the information about model cards and why they are important. So let's just jump right into the demo. If you look at this slide here, we are going to go through these five steps.

The first one is to orchestrate the intake process. The second one is we'll talk about the data a little bit, and the third is to actually generate the model card. The fourth and fifth we'll talk about after we generate the model card. So let's go into this process.

Jay, can I please share the screen?

Sure can. Yep.

Thank you. So can you see my screen?

Yep, I see your screen. Just head over to the right tab, and it'll be good.

Okay. Perfect.

Alright. So let's take a look at ModelOp Center, which is our model life cycle and governance automation software.

And what you're seeing here is an example use case from Epic, and it's inpatient risk of fall. This model predicts the probability of whether a patient will have a risk of fall based on certain health data. A use case is where all the information about this model goes in. So I'm going to switch to the overview tab here, and this is where you add all the information about it. You provide the description, and there can be any additional information. And this is the intake process that we were talking about as step one.

9. Intake Process Overview

You can do this manually. Like, you can have as many questions as you want. You can import it. There are different ways of adding all this information in here.

So that's going back to step one. This is the intake process. Very, very straightforward, simple. And now let's talk about the actual data, which is coming from the Epic model.

So I'm going back to the same tab here, and we are going to look at something called implementations. An implementation is nothing but the technical model. Instead of navigating from this tab, I already have the implementation tab open here.

So this is where the actual data comes in from this particular model. We attach something called a monitor, which is nothing but Python code that takes this data and interprets it in the way we want to see the results. Then I'm going to show you some of the results that we created using the data that came from this implementation, or technical model. In this case, we just have biweekly data for the month of February, and we recorded how many times this particular model was run. In other words, this is the total number of inferences.
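
Since the monitor is described as ordinary Python code that summarizes the implementation's data, here's a minimal, generic sketch of that idea using pandas. The function name, input column, and two-week bucketing are assumptions for illustration; this is not ModelOp's actual monitor interface.

```python
# Minimal sketch of an inference-count monitor: given one row per model
# execution, count how many times the model ran in each two-week window.
# Illustrative only; not ModelOp's monitor API.
import pandas as pd

def inference_count_monitor(df: pd.DataFrame, timestamp_col: str = "timestamp") -> pd.DataFrame:
    df = df.copy()
    df[timestamp_col] = pd.to_datetime(df[timestamp_col])
    counts = (
        df.set_index(timestamp_col)
          .resample("2W")                 # biweekly buckets
          .size()
          .rename("total_inferences")
          .reset_index()
    )
    return counts

# Toy example: a handful of February executions
toy = pd.DataFrame({"timestamp": ["2025-02-03", "2025-02-10", "2025-02-18", "2025-02-25"]})
print(inference_count_monitor(toy))
```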

10. Data Reporting and Insights

And here, we can also do a breakdown in the data report, so whatever you want to capture. So this is important information. It is sitting in the implementation part of the use case, and the use case also has the intake information. Now what we are going to do is fetch all that information, the way we want it and whatever information we want, into a model card. So I'm going to the tab called reporting, and then I'm going to say generate a model card. The step here is we have a markdown template, based on the CHAI template that Dave just showed previously, which goes into different parts of the use case and implementations to fetch the right information.

11. Generating Model Cards

So all we do is upload that markdown file, which is what I was talking about. This is the Epic one; this is the CHAI template. You just click next, and now you select that implementation, the technical model, and you select the snapshot, which is basically the latest version of that model.

And then you say next step, and this was the test result I showed you, which had all this information. You can put all this information in your model card, but for this demonstration, I didn't put all of it. I just kept some of the information and put it in the model card. So now all you do is click on next step here, and then you can rename it to whatever you want.

So let's call it inpatient risk. Right? And let's generate that. That's all. This is our model card right now.

You can view it in full screen here. So you see that all the information from the intake data, the metadata, and the use case was captured here. The key metrics, everything was there. And then at the end, we fetched the model test results from that implementation.

So we captured the information on the month, the week, and the total number of times the model was run. And like I said, you can add the table and any other information, but this is just a one-page view of all the information related to that use case.
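
The generation step boils down to filling a markdown template's placeholders from the use case and the monitor's test results. Here's a rough approximation with plain Python string templating; the placeholder names, template text, and example values are invented for illustration and are not the actual CHAI or ModelOp template.

```python
# Rough sketch: fill a markdown model-card template from use-case metadata
# and monitor test results. Placeholder names and values are illustrative only.
from string import Template

CARD_TEMPLATE = Template("""\
# Model Card: $name

## Overview
$description

## Intended Use
$intended_use

## Test Results
$test_results
""")

def generate_model_card(use_case: dict, test_results: dict) -> str:
    results_lines = "\n".join(
        f"- {period}: {count} inferences" for period, count in test_results.items()
    )
    return CARD_TEMPLATE.substitute(
        name=use_case["name"],
        description=use_case["description"],
        intended_use=use_case["intended_use"],
        test_results=results_lines,
    )

print(generate_model_card(
    {"name": "Inpatient Risk of Fall",
     "description": "Predicts the probability that a patient is at risk of a fall.",
     "intended_use": "Inpatient decision support based on certain health data."},
    {"February, weeks 1-2": 120, "February, weeks 3-4": 134},
))
```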

12. Iterating on Model Cards

So those are the three steps that we already went through. And now the fourth one is that we can iterate on these model cards through the review process. Basically, let's say you now look at this model card and say, oh, I want to add some more information or I want to edit some information. You go into the use case.

You edit that information. You don't change the template. You don't manually edit the model card. You don't sit here and manually write the model card.

So you are just using the template. You update your use case with the latest information, you update your data, and you just regenerate from the template to pull in the right information.

13. Scaling Model Management

So that's all we do here. It's not a manual effort; it's very automated that way. And then the last point, which is the most important part. Right? We want to scale this to thousands of models, and we already went through this: since it's just a template, you can apply it to a thousand use cases, ten thousand use cases, as many as you want, because you are not sitting and manually typing all the information.

It's all just embedded in one particular place. And you can fetch information from so many different places, and you can show it or share it with stakeholders, like Dave said, with whichever community you want to share it. So that's all.
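
And the scaling point is really just that same template applied in a loop: one template, many use cases, one generated card each. A small sketch, again with invented records, template text, and file paths:

```python
# Sketch: apply one markdown template across many use cases and write a card
# for each. Use-case records and output paths are illustrative only.
from pathlib import Path

CARD_TEMPLATE = "# Model Card: {name}\n\n## Intended Use\n{intended_use}\n"

use_cases = [
    {"name": "Inpatient Risk of Fall", "intended_use": "Inpatient fall-risk decision support"},
    {"name": "Nurse Triage Assistant", "intended_use": "Assisting nurse triage prioritization"},
    # ...hundreds or thousands more, pulled from your model inventory
]

out_dir = Path("model_cards")
out_dir.mkdir(exist_ok=True)

for uc in use_cases:
    filename = uc["name"].lower().replace(" ", "_") + ".md"
    (out_dir / filename).write_text(CARD_TEMPLATE.format(**uc))

print(f"Generated {len(use_cases)} model cards in {out_dir}/")
```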

14. Transition to ModelOp

Back to you, Jay.

Awesome.

And thank you so much. I'm gonna try to just reshare my screen here. If you can stop sharing.

Mhmm. And I will go back to sharing.

Alright. Here we go. Hopefully, you can see this comparison chart here in a second.

There we go. Yeah. Awesome.

So, yeah, model cards, regardless of how you implement them, are really, really valuable for transparency and a best practice that, you know, we hope folks are following.

However, when you combine them with ModelOp's evergreen inventory that Aman was walking through, it can serve as a backbone to keep all the model information up to date throughout the entire model life cycle and allow you to generate model cards on demand, whenever you need to.

So typically, if you're working without ModelOp, you might have to create this by hand or in PowerPoint, relying on data scientists to manually gather metadata, metrics, risk reviews, and approvals across teams, and that's really time consuming. You probably don't wanna have your data scientists spend all their time getting their documentation together because it is a lot of work. Right? And it can be inconsistent and nearly impossible to scale, as Aman was talking about, across, you know, hundreds or thousands of models.

So with ModelOp, model cards really become a living artifact. And Dave talked a lot about, you know, the CHAI model card, and even when you're pulling Hugging Face models or, you know, Google's templates, that's really just the starting point. And we know that your organization is gonna have custom metadata and need to build upon that, and ModelOp really allows you to do that.

It allows you to create that living artifact, automatically generate it as part of your full model life cycle, and populate that key documentation, including metadata, updating it over time with the KPIs for performance, bias, and usage metrics. So, ultimately, you know, it allows teams to enforce policy, and executives can see what's going on and get the transparency they need to help scale AI.

15. Conclusion and Future Steps

Bottom line, ModelOp helps you transform model cards from, you know, a static tool into an operational power tool that drives faster approvals, clearer oversight, and more responsible AI at scale.

Alright. And so that's the conclusion. I'm gonna just wrap it up here. And, yes, we will share the presentation. We'll make the recording available. We'll send that out to everybody who registered and post it to the ModelOp website as well. But to wrap up, model cards are a powerful way to scale transparency and accountability across your entire AI portfolio.

But the real value comes when they're automated, standardized, and embedded into your model life cycle processes, which is exactly what ModelOp enables. So if you're ready to move from ad hoc documentation to a system that creates and updates model cards as part of your production workflow at scale, let's talk. Please reach out to us directly. We'd love to help you bring your model transparency to the next level, or you can reach out to schedule a demo with our team to dive deeper into the product and see how ModelOp can work for you.

And that's it. Again, we'll send out the presentation and recording shortly. Thanks, everyone. We'll announce the May webinar soon, and have a great rest of the week.

Bye bye.


Get started with ModelOp’s AI Governance software — automated visibility, controls, and reporting — in 90 days

Talk to an expert about your AI and governance needs

Contact Us