MLOps smart audit¶
Access this AI accelerator on GitHub
This accelerator outlines a workflow to create an application that provides an interactive dashboard for analyzing the MLOps configuration across multiple machine learning deployments. The application examines each deployment for enabled capabilities (e.g., data drift detection, accuracy monitoring, notifications, etc.) and produces a summarized, interactive view. This helps MLOps administrators assess deployment quality, identify gaps, prioritize improvements, and check compliance scores.
The key features of the accelerator are outlined below:
- Deployment overview: Quickly see which MLOps functions are enabled or disabled across your deployments.
- Quality and compliance assessment: Each deployment is assigned a quality score based on the percentage of enabled capabilities and a compliance score based on a set of mandatory functions and model risk levels.
- Advanced filtering and search: Use sidebar filters to refine deployments by type, owner, capabilities, or score ranges.
- LLM-based insights (optional): When enabled, Azure OpenAI provides natural language summaries and recommendations.
- Capability governance: View governance rules for capabilities categorized by importance (Critical, High, Moderate, Low) for both predictive and generative models.
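To make the scoring idea concrete, the sketch below shows one way the quality and compliance scores described above could be computed. The capability names and the mandatory set are hypothetical placeholders, and the real accelerator may weight scores by model risk level; this is a minimal illustration, not the accelerator's actual implementation.

```python
# Hypothetical capability flags for a single deployment; the real
# accelerator inspects many more MLOps functions per deployment.
MANDATORY_CAPABILITIES = {"data_drift", "accuracy_monitoring"}  # assumed mandatory set


def quality_score(capabilities: dict) -> float:
    """Quality score: percentage of capabilities that are enabled."""
    if not capabilities:
        return 0.0
    return 100.0 * sum(capabilities.values()) / len(capabilities)


def compliance_score(capabilities: dict, mandatory=MANDATORY_CAPABILITIES) -> float:
    """Compliance score: percentage of mandatory capabilities enabled.

    Missing capabilities are treated as disabled.
    """
    if not mandatory:
        return 100.0
    enabled = sum(capabilities.get(name, False) for name in mandatory)
    return 100.0 * enabled / len(mandatory)


caps = {
    "data_drift": True,
    "accuracy_monitoring": False,
    "notifications": True,
    "challenger_models": False,
}
print(quality_score(caps))     # 2 of 4 enabled -> 50.0
print(compliance_score(caps))  # 1 of 2 mandatory enabled -> 50.0
```

A dashboard would compute these per deployment and surface low compliance scores first, since a missing mandatory function (e.g., drift detection on a high-risk model) matters more than an optional capability being disabled.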