
Performance monitoring

To trust a model to power mission-critical operations, users need confidence in every aspect of its deployment. Model monitoring is the close tracking of the performance of ML models in production, used to identify potential issues before they impact the business. Monitoring ranges from confirming that the service returns predictions promptly and without errors to ensuring that the predictions themselves remain reliable.

The predictive performance of a model typically begins to diminish as soon as it is deployed. For example, someone might be making live predictions on a dataset of customer data, but the customers' behavioral patterns might have changed due to an economic crisis, market volatility, a natural disaster, or even the weather. Models trained on older data that no longer represents current reality may be not just inaccurate but irrelevant, leaving the prediction results meaningless or even harmful. Without dedicated production model monitoring, the user or business owner has no way to detect when this happens. If model accuracy declines undetected, the results can harm the business, expose it to risk, and erode user trust.

DataRobot automatically monitors model deployments and offers a central hub for detecting errors and model accuracy decay as soon as possible. For each deployment, DataRobot provides a status banner; model-specific information is also available on the Deployments inventory page.
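
The same deployment inventory can be read programmatically. The following is a minimal sketch using the `datarobot` Python client; the endpoint, API token, and attribute names shown are placeholders, and exact method availability may vary by client version.

```python
# Minimal sketch: list deployments with the DataRobot Python client.
# The endpoint URL and API token are placeholders you must supply yourself.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

for deployment in dr.Deployment.list():
    # Each entry corresponds to a row on the Deployments inventory page.
    print(deployment.id, deployment.label)
```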

The following tools are available to monitor model deployments:

| Task | Tool | Data required |
|------|------|---------------|
| View deployments inventory | Deployments | N/A |
| Track model-specific deployment latency, throughput, and error rate | Service Health | Prediction data |
| Monitor model accuracy based on data distribution | Data Drift | Prediction and training data |
| Analyze performance of a model over time | Accuracy | Training data, prediction data, and actuals |
| Monitor remote models | MLOps agent | Requires a remote model and an external model package deployment |
| Compare model performance post-deployment | Challenger Models | Prediction data |
| Track attributes for segmented analysis of training data and predictions | Segmented analysis | Prediction data (training data also required to track data drift or accuracy) |
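
Several of these statistics can also be retrieved for an individual deployment through the Python client. The sketch below assumes the `get_service_stats`, `get_feature_drift`, and `get_accuracy` methods are available in your client version, and that drift and accuracy tracking have already been set up with the required training, prediction, and actuals data; the deployment ID is a placeholder.

```python
# Minimal sketch: pull monitoring statistics for a single deployment.
# Assumes drift and accuracy tracking are already configured for the deployment.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
deployment = dr.Deployment.get("YOUR_DEPLOYMENT_ID")

# Service Health: latency, throughput, and error-rate metrics.
service_stats = deployment.get_service_stats()
print(service_stats.metrics)

# Data Drift: per-feature drift scores against the training distribution.
for feature in deployment.get_feature_drift():
    print(feature.name, feature.drift_score)

# Accuracy: performance metrics computed once actuals have been uploaded.
accuracy = deployment.get_accuracy()
print(accuracy.metrics)
```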

Updated November 16, 2021