The ModelOp Governance Orchestration Module automates and standardizes the entire model lifecycle through the MLC (Model Life Cycle) Manager—enabling rapid productionization, continuous monitoring, and compliance enforcement across hundreds or thousands of models with diverse business requirements and deployment pathways.
Core Architecture
MLC Manager
The MLC Manager is a low-code automation framework built on Camunda, the leading Java-based Business Process Model and Notation (BPMN) platform. It executes, monitors, and manages all MLC Processes while automatically capturing metadata and tracking each model's journey through defined workflows.
Key Benefits:
- Accelerates time-to-production by defining consistent methodologies for moving models through required steps
- Ensures compliance by validating that production models operate within defined compliance rules and produce acceptable results
- Scales enterprise operations by controlling critical tasks and processes across hundreds or thousands of models
MLC Processes
An MLC Process encodes and automates a set of lifecycle steps—from registration through productionization, continuous testing, and retirement. Processes apply to individual models or model sets based on common criteria like business unit, language, or framework.
MLC Processes are defined as BPMN files using any BPMN-compliant editor (such as Camunda Modeler) and leverage standard Camunda elements plus custom ModelOp delegates for complex orchestration.
Core Process Elements:
Signal Events: Initiate processes or trigger actions based on model changes, timers, or external events.
Tasks:
- User Tasks: Manual activities (approvals, reviews) that pause workflow until completed
- External Service Calls: Integration with third-party systems
- Script Tasks: Custom code execution (inline Groovy) using variables and model metadata
- ModelOp Center Calls: Automated interactions including batch jobs and model deployments
Gateways: Decision logic controlling flow based on process information (model metadata, test results, etc.).
KPI Collection: Automated metrics capture for tracking time-to-production, approval cycles, and other key performance indicators.
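Tying these elements together, a trimmed BPMN fragment for a hypothetical promotion process might look like the following sketch (the process id, signal name, and variable names are illustrative, not part of any shipped MLC; signal declarations and diagram elements are omitted):

```xml
<bpmn:process id="illustrativePromotionMLC" isExecutable="true">
  <!-- Signal start event: fires when an external event (e.g., a snapshot
       being created) broadcasts the referenced signal -->
  <bpmn:startEvent id="start">
    <bpmn:signalEventDefinition signalRef="snapshotCreated"/>
  </bpmn:startEvent>
  <bpmn:sequenceFlow id="f1" sourceRef="start" targetRef="riskReview"/>
  <!-- User task: pauses the process until a reviewer completes the approval -->
  <bpmn:userTask id="riskReview" name="Model Risk Review"/>
  <bpmn:sequenceFlow id="f2" sourceRef="riskReview" targetRef="approved"/>
  <!-- Exclusive gateway: routes on a process variable set during review -->
  <bpmn:exclusiveGateway id="approved" name="Approved?"/>
  <bpmn:sequenceFlow id="yes" sourceRef="approved" targetRef="deployed">
    <bpmn:conditionExpression>${approvalResult == 'approved'}</bpmn:conditionExpression>
  </bpmn:sequenceFlow>
  <bpmn:sequenceFlow id="no" sourceRef="approved" targetRef="rejected">
    <bpmn:conditionExpression>${approvalResult != 'approved'}</bpmn:conditionExpression>
  </bpmn:sequenceFlow>
  <bpmn:endEvent id="deployed"/>
  <bpmn:endEvent id="rejected"/>
</bpmn:process>
```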
Learn more about Model Lifecycle Management
Common MLC Process Scenarios
Use Case Registration
Orchestrate data collection for new AI/ML initiatives, ensuring required inputs, reviews, and documentation are captured consistently across the enterprise for visibility and reporting.
Model Governance - Productionization
Automate complex governance pathways including:
- QA runtime deployment
- Automated test suites
- Security scanning
- Review documentation generation
- Multi-level approvals
- Production deployment
MLC Processes automatically locate compatible runtimes or target specific runtime groups via tags.
Model Refresh & Retraining
Automate retraining on schedules or when new labeled data arrives:
- Execute retraining jobs
- Compare candidate models against deployed models using Champion/Challenger analysis
- Automate change management (re-testing, approvals)
- Deploy validated updates
Approval & Task Management
Integrate with existing systems (Jira, ServiceNow) to:
- Direct reviews to specific team members or roles
- Inject model-specific metadata for context
- Track approval status and history
- Manage external tickets automatically
Continuous Monitoring
Schedule or trigger batch jobs based on data availability:
- Calculate statistical performance metrics
- Detect ethical bias in predictions
- Apply decision criteria for automated actions
- Generate alerts for ModelOp support teams
Learn more about Model Lifecycle Management
Model Operationalization
Deployment Terminology
Model Execution Modes:
Batch (Ephemeral) Deployment: Model pushed to runtime for job execution, then cleared to free runtime for other jobs.
Online (Persistent) Deployment: Model deployed with persistent input/output endpoints for continuous scoring.
Model Runtime: Environment for executing models or metrics (Python Docker containers, Spark, SageMaker, Azure ML Studio, Dataiku, Domino Data Lab).
Runtime Endpoints: Connection points for consuming applications (REST endpoints for synchronous communication, Kafka topics for streaming).
Runtime Environments ("Stages"): Development, testing, staging, production, and failover environments through which models progress.
Model Service Tags: Identifiers tracking where each model version runs in production, supporting multiple consumption modes and business processes simultaneously.
Learn more about Operationalizing Models
REST Deployment
Prerequisites:
- Runtime environments defined for each stage (Development, SIT, UAT, Production)
- Operationalization MLC configured for team/model requirements
- Governance and technical requirements incorporated
Deployment Steps:
- Prepare Runtimes:
- Add runtime endpoints for online models
- Set runtime "Stage" matching MLC environment requirements
- Add "Model Service Tags" identifying target runtimes
- Create Snapshot:
- Open model in Inventory and click "Create Snapshot"
- Confirm name, access group, description, and tags
- Enter runtime selection criteria (tags or runtime name)
- Review and submit
- Automated Orchestration:
- MLC automatically begins deployment process
- Model deployed as REST to designated runtimes
- Notifications appear in Snapshot Overview tab
- External tickets raised as configured (Jira, ServiceNow)
Learn more about REST Deployment
Batch Deployment
Batch Job Types:
Scoring Job: Executes scoring function for predictions (testing or production batch scoring).
Metrics Job: Executes metric function against labeled test data for efficacy metrics, bias detection, and interpretability.
Training Job: Executes training function to train or retrain models, producing trained artifacts.
Batch Job Scenarios:
| Scenario | Job Type | Description |
| --- | --- | --- |
| Testing a Model | Scoring | Score test data for functional, performance, or system testing |
| Model Back-Test/Evaluation | Metrics | Generate evaluation metrics (F1, confusion matrix, ROC curve, AUC) |
| Ethical Fairness Detection | Metrics | Detect ethical fairness issues in labeled data |
| Re-Training/Refresh | Training | Create new trained artifacts from new labeled data |
Input/Output Data Options:
- Upload files directly
- Reference S3, Azure Blob Storage, Google Cloud Storage, or HDFS locations
- Provide REST URLs
- Specify SQL statements
Creating Batch Jobs:
Via UI Job Creation Wizard:
- Select model and snapshot
- Propose runtime or allow MLC to select
- Choose job type (Training, Scoring, Metrics)
- Optionally enable input/output schemas
- Add input assets (upload, reference external storage, or use existing)
- Specify output assets (external location, SQL statement, or embedded)
- Review and run
Via CLI:
```bash
moc job create [batchjob | testjob | trainingjob] <model-uuid> input.json output.json
moc job result <job-uuid>
```
ModelOp Runtime Batch Deployment:
Prerequisites and steps are similar to REST deployment, with these differences:
- Runtimes should NOT have endpoints assigned
- Model Service Tags coordinate deployment across stages
- MLC orchestrates testing, approvals, and promotion
Spark Runtime Batch Deployment:
Additional considerations:
- Define Spark runtime environments/stages
- Select "Apache Spark" runtime type during snapshot creation
- Target specific Spark runtimes via tags or names
- MLC handles Spark-specific deployment requirements
Learn more about Batch Deployment
Creating and Deploying MLC Processes
Development Workflow
- Download Camunda Modeler (version 4.8.1+ or 5.x.x)
- Create BPMN using Camunda modeling tools
- Deploy to MLC Manager:
- Use Deploy icon in Camunda Modeler
- Target URL: http://<moc-base-url>/mlc-service/rest
- Provide bearer token for OAuth2-secured environments
- Verify Registration in ModelOp Center UI (Models section)
- For Camunda 5.x.x: Select "Camunda Platform 7" for new BPMN files
OAuth2 Authentication:
```bash
curl --location --request POST 'http://<moc-base-url>/gocli/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'username=<user>' \
--data-urlencode 'password=<password>'
```
Retrieve access_token from the response and provide it in the Camunda Modeler Token field.
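If you script the token step, the access_token can be pulled out of the JSON response without extra tooling. A minimal sketch, in which the canned RESPONSE stands in for the body returned by the token endpoint:

```bash
# Canned example of the token endpoint's JSON response; in practice this
# would be captured from the curl call shown above.
RESPONSE='{"access_token":"example-token-value","token_type":"bearer"}'

# Extract the access_token value with sed (assumes the token itself
# contains no escaped quotes)
TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$TOKEN"   # prints: example-token-value
```

The extracted value can then be pasted into the Camunda Modeler Token field or passed as a bearer token in scripted requests.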
Reference: Camunda BPMN Documentation
Process Triggers
MLC Processes initiate based on various events:
- Model marked ready for productionization
- Time-based schedules
- New data arrival
- Notification receipt
- Manual user intervention
- UI actions (creating snapshots, triggering jobs)
Integration with ModelOp Modules
Orchestration integrates seamlessly with:
- Inventory Module: Processes triggered from use cases, implementations, and snapshots
- Monitoring Module: Automated test and monitor execution
- Operations Module: Runtime management and job orchestration
- Compliance Module: Automated documentation generation and approval tracking
Key Advantages
Flexibility: Support diverse pathways to production across business units with different requirements.
Consistency: Standardized governance enforcement regardless of model complexity or regulatory oversight level.
Scalability: Manage hundreds or thousands of models with automated orchestration.
Future-Proofing: Easily integrate new AI development technologies, data platforms, and execution environments while maintaining governance requirements.
Visibility: Complete lifecycle tracking with automated KPI collection and metadata capture.