
ModelOp Governance Inventory Module

The ModelOp Governance Inventory Module provides centralized visibility and management of all AI/ML use cases, model implementations, and monitors across your enterprise—regardless of model type, development platform, or deployment status.

Understanding the Inventory Structure

ModelOp distinguishes between three key concepts that form the foundation of comprehensive AI governance:

Use Cases: The business problem being solved to drive tangible business outcomes. Use cases represent the "what" and "why" of AI initiatives.

Model Implementations: The technical solution (model, algorithm, or system) used to solve the business use case. Implementations represent the "how" and contain all technical details including source code, training/test data, configuration files, and test results. A single use case can be addressed by multiple implementations.

Monitors: Tests and monitoring capabilities that can be applied to implementations to calculate metrics and assess performance, stability, fairness, drift, and other critical factors.

Example: The "Fraud Detection" use case could be solved using a rules-based implementation OR a neural network implementation. ModelOp tracks the governance details for each approach separately.

Learn more about Inventory Key Concepts

Core Inventory Capabilities

Navigating the Inventory

Access the complete Governance Inventory by clicking "Inventory" from the main menu. Toggle between three views:

  • Use Cases: All registered business problems being addressed with AI/ML
  • Implementations: All technical models and solutions in the system
  • Monitors: All available tests and monitoring capabilities

Search and Filter

Search Bar: Find use cases, implementations, or monitors by name or tag.

Standard Filters: Filter by deployment status, organization, risk tier, stage (environment), PII/PHI classification, and more.

Custom Filters: Search by enterprise-specific custom metadata fields configured for your organization.

Clear Filters: Remove all active filters to return to the complete inventory view.

Learn more about Using the Inventory

Lineage Visualization

Click the "Expand" icon on any row to reveal relationships:

  • Use cases and their associated implementations
  • Implementations and their snapshots (versions)
  • Complete hierarchical view of your model portfolio

All items are clickable for immediate navigation to detailed pages.

Inventory Extraction

Download current inventory data as CSV for offline reporting and analysis. The export respects active filters, enabling custom inventory extracts for specific organizations, risk tiers, or deployment stages.
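
Once exported, the CSV can be analyzed with any standard tooling. The sketch below uses pandas to roll the extract up by risk tier and deployment status; the column names are illustrative assumptions, since the actual headers depend on your configured metadata fields.

    # Illustrative only: column names depend on your inventory configuration.
    import pandas as pd

    # Load the CSV exported from the Inventory page
    inventory = pd.read_csv("inventory_export.csv")

    # Example roll-up: implementations per (assumed) risk tier and deployment status
    summary = (
        inventory
        .groupby(["riskTier", "deploymentStatus"])   # hypothetical column names
        .size()
        .unstack(fill_value=0)
    )
    print(summary)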

Champion/Challenger Comparison

Enable the "Champion/Challenger" toggle to compare competing models or versions:

  • Select multiple implementations using checkboxes
  • Compare performance metrics side-by-side
  • Evaluate model strategies or version updates
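
The same side-by-side view can be reproduced offline from exported or retrieved metrics, for example when preparing a model review. The sketch below is purely illustrative; the model names, metric names, and values are hypothetical.

    import pandas as pd

    # Hypothetical metrics for a champion and a challenger implementation
    champion   = {"model": "fraud-rules-v3", "auc": 0.87, "f1": 0.71, "psi": 0.04}
    challenger = {"model": "fraud-nn-v1",    "auc": 0.91, "f1": 0.78, "psi": 0.09}

    comparison = pd.DataFrame([champion, challenger]).set_index("model")
    print(comparison)          # metrics side by side, one row per implementation
    print(comparison.diff())   # challenger minus champion, per metric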

Learn more about Champion/Challenger Model Comparison

My Work Dashboard

The personalized My Work page surfaces critical items requiring attention:

Statistics: Summary counts for use cases, models, production issues, active risks, and tasks.

Open Tasks and Issues: Breakdown by severity level for quick prioritization.

My Group's Tasks: Running list of all open items with priority, model name, task description, due date, and type.

Filters: Narrow focus by model organization, risk tier, stage, or owning group.

Learn more about Managing My Work

Use Case Management

Adding Use Cases

  1. Click "Add Use Case" from the Inventory page
  2. Complete required fields (marked with asterisks)
  3. Fill custom form fields if your organization requires additional metadata
  4. Review and submit
  5. Optionally add an implementation immediately

Use Case Details Page

The Use Case page serves as the central governance hub for model owners and governance officers:

Open Items: All risks, issues, and tasks for the use case and its implementations, including items from external systems like Jira with direct ticket links.

Governance Score: Automated calculation measuring adherence to AI governance policies based on:

  • Required information and metadata collection
  • Asset completeness (source code, artifacts, configurations)
  • Evidence collection (tests, job completion, documentation)
  • Other controls (attestations, approvals, change controls)

Click "see all" to view detailed pass/fail breakdown for each governance criterion.

Metrics: Visualize health and performance over time with:

  • Traditional ML metrics: performance, stability, fairness, drift, normality, linearity, autocorrelation
  • NLP metrics: PII detection, sentiment analysis, top words by parts of speech, SBERT similarity
  • LLM metrics: prompt validation, fact checking, accuracy assessment, rails compliance, bias detection
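
As one concrete example of how a drift metric can be computed, the sketch below implements the Population Stability Index (PSI) with numpy. This is a generic formulation for illustration only; ModelOp's bundled monitors may use different binning or metric definitions.

    import numpy as np

    def population_stability_index(baseline, current, bins=10):
        """Generic PSI between a baseline and a current sample of one feature."""
        # Bin edges derived from the baseline distribution
        edges = np.histogram_bin_edges(baseline, bins=bins)
        base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
        curr_pct = np.histogram(current, bins=edges)[0] / len(current)
        # Floor the percentages to avoid division by zero and log(0)
        base_pct = np.clip(base_pct, 1e-6, None)
        curr_pct = np.clip(curr_pct, 1e-6, None)
        return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 5_000)   # training-time feature distribution
    current = rng.normal(0.3, 1.2, 5_000)    # shifted production distribution
    print(f"PSI: {population_stability_index(baseline, current):.3f}")  # > 0.2 often flags drift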

Overview: Three-tier metadata structure:

  • Basic Information: Standard ModelOp fields (name, description, owning group, risk tier, organization)
  • Additional Information: Custom enterprise-specific fields configured via Custom Form Administration
  • Detailed Metadata: Advanced technical or custom metadata beyond standard forms

Implementations: All model implementations associated with the use case, with direct navigation to implementation and snapshot details.

Documentation: Complete listing of all documents across the use case, implementations, and snapshots.

Approvals: All approvals with status and links to external ticketing systems.

Notifications: All notifications with severity indicators and detail views.

Reporting: Generate Model Cards using industry-standard formats (Hugging Face extended template) automatically populated with use case details, test results, and metadata.

Learn more about Use Cases

Learn more about Governance Score Administration

Implementation (Model) Management

Adding Implementations

Multiple registration methods support diverse workflows:

Via UI Wizard: Five registration paths based on implementation type:

  • Ensemble: Register collections of models (common for GenAI) with comprehensive tracking of all child models
  • Git Model: Import from Git repository with automatic asset creation
  • AWS SageMaker: Import from SageMaker with complete job and endpoint details
  • Vendor/Generic Model: Create records for vendor or third-party models
  • Existing Implementation: Associate previously registered models to use cases

Via Jupyter Plugin: Register directly from Jupyter notebooks with cell selection, git configuration, model function identification, and asset attachment.

Via CLI: Command-line registration with schema support and attachment capabilities.

Learn more about Adding an Implementation

Standard Model Definition

ModelOp's extensible model definition ensures consistent management across all languages, frameworks, and platforms.

Overview Tab:

  • Snapshots (Versions): All model versions with deployment status, stage, modification dates
  • Associated Use Cases: Links to business problems solved by this implementation
  • Notifications: All model-related notifications with external ticket links
  • Production Status: Business value and status heatmap for deployed models

Details Tab:

  • Basic Information: Standard ModelOp fields
  • Additional Information: Custom enterprise metadata (configurable via Custom Form Administration)
  • Detailed Metadata: Advanced technical metadata
  • Functions: Entry points for initialization, training, metrics, and scoring for the ModelOp Runtime (see the sketch below)
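
For Python implementations executed on the ModelOp Runtime, entry-point functions are typically identified with smart-tag comments above plain Python functions. The sketch below is a minimal illustration of that pattern; the artifact filename, column names, and function bodies are hypothetical.

    import pickle

    import pandas as pd

    # modelop.init
    def begin():
        # Load the trained artifact once when the runtime starts (hypothetical filename)
        global model
        model = pickle.load(open("model_artifact.pkl", "rb"))

    # modelop.score
    def action(record):
        # Score a single input record and yield it back with the prediction attached
        frame = pd.DataFrame([record])
        record["prediction"] = float(model.predict(frame)[0])
        yield record

    # modelop.metrics
    def metrics(df):
        # Compute monitoring metrics over a labeled dataframe (hypothetical column names)
        yield {"accuracy": float((df["prediction"] == df["label"]).mean())}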

Compliance Tab: Generate detailed audit reports with complete model details, tickets, test results, documentation, assets, deployments, and MLC instances. Download as PDF.

Assets: All source code, trained artifacts, configuration files, data, documents, and decision tables.

Repository: Git repository details, including last sync information, or connections to Artifactory, SageMaker, or Dataiku repositories.

Schemas: Input and output schemas for data validation and testing.
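
Schemas follow the AVRO record format (ModelOp layers additional properties on top of standard AVRO, which are omitted here). The sketch below writes a minimal input schema to input_schema.avsc; the record and field names are hypothetical.

    import json

    # Minimal, standard AVRO record schema for the model's input.
    # Field names are illustrative; ModelOp's schema extensions are not shown.
    input_schema = {
        "type": "record",
        "name": "fraud_input",
        "fields": [
            {"name": "transaction_amount", "type": "double"},
            {"name": "merchant_category",  "type": "string"},
            {"name": "label",              "type": ["null", "int"], "default": None},
        ],
    }

    with open("input_schema.avsc", "w") as f:
        json.dump(input_schema, f, indent=2)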

Learn more about Managing an Implementation

Model Assets

ModelOp supports comprehensive asset management across multiple storage platforms:

Supported Storage Technologies:

  • AWS S3 or S3-compatible storage
  • Azure Blob Storage
  • GCP Storage Buckets
  • HDFS
  • SQL data sources

Core Asset Types:

Trained Model Artifacts: Model binaries and weights files stored in S3, Azure Blob, GCP Storage, or Artifactory.

Data Assets: Training data, test data, comparator data, baseline data, known questions, and guardrail testing data for various job types including metrics testing, performance evaluation, drift detection, bias detection, and LLM testing.

Configuration Files:

  • requirements.txt: Model dependencies automatically installed by ModelOp Runtime
  • metadata.json: Model metadata imported upon registration
  • external_assets.json: External asset references added during import
  • input_schema.avsc / output_schema.avsc: Extended AVRO schemas for data validation
  • required_assets.json: Asset requirements for monitors and tests

Learn more about Inventory Key Concepts

Metadata Management

ModelOp provides a flexible metadata architecture supporting three model states:

Stored Model: Work-in-progress with latest information (source code, metadata, assets).

Deployable Model (Snapshot/Version): Point-in-time snapshot of a stored model, captured for all batch jobs to maintain execution history.

Deployed Model: A deployable model that is actually installed in production, including its complete lifecycle lineage.

Each state supports custom metadata fields configured via JSON, allowing enterprise-specific governance requirements.

Adding Custom Metadata:

  • Via UI: During registration or through model details pages
  • Via API: POST/PATCH endpoints for stored, deployable, and deployed models
  • Via MLC: Automated metadata collection during lifecycle processes
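
As a rough sketch of the API route: the endpoint path, payload shape, and authentication below are assumptions for illustration only; consult your ModelOp Center API documentation for the actual contract.

    import requests

    MOC_URL = "https://modelop.example.com"    # hypothetical ModelOp Center instance
    MODEL_ID = "0f9c2a1e-example-model-id"     # hypothetical stored-model identifier

    # Hypothetical custom metadata fields defined by your organization
    payload = {"customMetadata": {"businessUnit": "Retail Banking", "dataRetentionYears": 7}}

    response = requests.patch(
        f"{MOC_URL}/api/storedModels/{MODEL_ID}",  # assumed endpoint path
        json=payload,
        headers={"Authorization": "Bearer <token>"},
        timeout=30,
    )
    response.raise_for_status()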

Learn more about Inventory Metadata

Supported Languages & Frameworks

ModelOp Center supports virtually any model language, framework, and development platform, including:

  • Python (scikit-learn, TensorFlow, PyTorch, XGBoost, etc.)
  • R (caret, randomForest, glmnet, etc.)
  • Java/Scala (Spark MLlib, H2O, etc.)
  • SAS
  • PMML
  • Vendor models and third-party solutions
  • Large Language Models and GenAI ensembles

All are unified under ModelOp's standard model definition for consistent governance.

Integration Points

The Inventory Module integrates seamlessly with other ModelOp capabilities:

  • Model Lifecycle Management: Automated workflows triggered from inventory items
  • Monitoring & Reporting: Tests and monitors applied to implementations
  • Operations: Runtime deployment and job orchestration
  • Compliance: Governance scoring and audit reporting
