DataRobot MLOps provides a central hub to deploy, monitor, manage, and govern all your models in production, regardless of how they were created or when and where they were deployed. MLOps helps improve and maintain model quality with health monitoring that adapts to changing conditions through continuous, automated model competitions (challenger models). It also ensures that production machine learning processes across your organization operate under a robust governance framework, sharing the burden of production model management.
With MLOps, you can deploy any model to your production environment of choice. By instrumenting an existing production model with the MLOps agent, you can monitor its behavior and performance live from a single, centralized machine learning operations system. MLOps makes it easy to deploy models written in any open-source language or library and to expose a production-quality REST API that supports real-time or batch predictions. MLOps also offers built-in write-back integrations with systems such as Snowflake and Tableau.
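As an illustration, a real-time scoring call against a deployed model's REST endpoint can be assembled as in the sketch below. The host name, deployment ID, API token, endpoint path, and feature names are all placeholder assumptions rather than values from this document, and the request is only constructed here, not sent.

```python
import json

# Placeholder values -- substitute your own host, deployment ID, and credentials.
API_HOST = "https://example.datarobot.com"   # hypothetical prediction server host
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"         # hypothetical deployment identifier
API_TOKEN = "YOUR_API_TOKEN"                 # hypothetical API token


def build_prediction_request(rows):
    """Assemble the URL, headers, and JSON body for a real-time scoring call.

    The pieces are returned rather than sent so the sketch stays
    self-contained; in practice you would hand them to an HTTP client.
    """
    url = f"{API_HOST}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    }
    body = json.dumps(rows)  # one JSON object per row to score
    return url, headers, body


url, headers, body = build_prediction_request([{"feature_a": 1.2, "feature_b": "x"}])
```

Batch predictions follow the same pattern with a larger `rows` payload; the endpoint path shown is an assumption and should be taken from your deployment's integration snippet.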
MLOps provides constant monitoring and production diagnostics to improve the performance of your existing models. Automated best practices enable you to track service health, accuracy, and data drift to explain why your model is degrading. You can build your own challenger models or use DataRobot's Automated Machine Learning to build them for you and test them against your current champion model. This process of continuous learning and evaluation enables you to avoid surprise changes in model performance.
The tools and capabilities of every deployment are determined by the data available to it: training data, prediction data, and outcome data (also referred to as actuals).
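Outcome data is typically matched back to earlier predictions through a shared row identifier (an association ID), so accuracy can be computed once actuals arrive. The sketch below shows that join with hypothetical field names and labels; it is an illustration of the idea, not DataRobot's accuracy pipeline.

```python
def accuracy_from_actuals(predictions, actuals):
    """Join predictions to later-arriving actuals and score them.

    `predictions` maps association_id -> predicted label;
    `actuals` maps association_id -> observed outcome.
    Predictions without a reported outcome yet are excluded
    from the denominator. Returns None if nothing matched.
    """
    matched = [(pred, actuals[aid])
               for aid, pred in predictions.items() if aid in actuals]
    if not matched:
        return None  # no outcomes reported yet
    correct = sum(1 for pred, actual in matched if pred == actual)
    return correct / len(matched)


preds = {"order-1": "churn", "order-2": "retain", "order-3": "churn"}
acts = {"order-1": "churn", "order-2": "churn"}  # order-3's outcome unknown so far
```

Because actuals often lag predictions by days or weeks, keeping this join incremental is what lets accuracy tracking run continuously rather than as a one-off backtest.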