# Service Health and Accuracy history

> Service Health and Accuracy history allow you to compare the current model
> with previous models in one place, on the same scale.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.574251+00:00` (UTC).

## Primary page

- [Service Health and Accuracy history](https://docs.datarobot.com/en/docs/classic-ui/mlops/mlops-preview/pp-deploy-history.html): Full documentation for this topic (HTML).

## Sections on this page

- [Service Health history](https://docs.datarobot.com/en/docs/classic-ui/mlops/mlops-preview/pp-deploy-history.html#service-health-history): In-page section heading.
- [Accuracy history](https://docs.datarobot.com/en/docs/classic-ui/mlops/mlops-preview/pp-deploy-history.html#accuracy-history): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [MLOps](https://docs.datarobot.com/en/docs/classic-ui/mlops/index.html): Linked from this page.
- [MLOps preview features](https://docs.datarobot.com/en/docs/classic-ui/mlops/mlops-preview/index.html): Linked from this page.
- [Service Health](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/service-health.html): Linked from this page.
- [Accuracy](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/deploy-accuracy.html): Linked from this page.
- [Deployments](https://docs.datarobot.com/en/docs/classic-ui/mlops/manage-mlops/deploy-inventory.html): Linked from this page.
- [setting up accuracy for deployments](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/accuracy-settings.html): Linked from this page.
- [actuals](https://docs.datarobot.com/en/docs/reference/glossary/index.html#actuals): Linked from this page.

## Documentation content

# Service Health and Accuracy history

> [!NOTE] Availability information
> Deployment history for service health and accuracy is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.
> 
> Feature flag: Enable Deployment History

When analyzing a deployment, [Service Health](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/service-health.html) and [Accuracy](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/deploy-accuracy.html) can provide critical information about the performance of current and previously deployed models. However, comparing these models can be a challenge as the charts are displayed separately, and the scale adjusts to the data. To improve the usability of the service health and accuracy comparisons, the [Service Health > History](https://docs.datarobot.com/en/docs/classic-ui/mlops/mlops-preview/pp-deploy-history.html#service-health-history) and [Accuracy > History](https://docs.datarobot.com/en/docs/classic-ui/mlops/mlops-preview/pp-deploy-history.html#accuracy-history) tabs (now available for preview) allow you to compare the current model with previously deployed models in one place, on the same scale.

## Service Health history

The [Service Health](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/service-health.html) page displays metrics you can use to assess a deployed model's ability to respond to prediction requests quickly and reliably. In addition, on the History tab, you can access visualizations representing the service health history of up to five of the most recently deployed models, including the currently deployed model. This history is available for each metric tracked in a model's service health, helping you identify bottlenecks and assess capacity, which is critical to proper provisioning. For example, if a deployment's response time seems to have slowed, the Service Health page for the model's deployment can help diagnose the issue. If the service health metrics show that median latency increases with an increase in prediction requests, you can then check the History tab to compare the currently deployed model with previous models. If the latency increased after replacing the previous model, you could consult with your team to determine whether to deploy a better-performing model.
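The median and percentile latency statistics mentioned above can be sketched locally. The sample data below is hypothetical, illustrating only how the Median/90th/95th/99th percentile views on the History tab summarize response times:

```python
from statistics import median, quantiles

# Hypothetical response times (ms) for two deployed models, standing in for
# what the Service Health > History tab would compare on one scale.
previous_model_ms = [110, 120, 125, 130, 140, 150, 160, 170, 180, 400]
current_model_ms = [180, 190, 200, 210, 220, 240, 260, 300, 350, 900]

def latency_summary(samples_ms):
    """Median plus the 90th/95th/99th percentile cut points of response times."""
    # quantiles(n=100) returns the 1st..99th percentile cut points.
    pct = quantiles(samples_ms, n=100)
    return {
        "median": median(samples_ms),
        "p90": pct[89],
        "p95": pct[94],
        "p99": pct[98],
    }

for name, samples in [("previous", previous_model_ms), ("current", current_model_ms)]:
    print(name, latency_summary(samples))
```

Comparing the two summaries side by side mirrors the purpose of the History tab: the same statistic, computed the same way, for the current and previous models.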

To access the Service Health > History tab:

1. Click **Deployments** and select a deployment from the inventory.
2. On the selected deployment's **Overview**, click **Service Health**.
3. On the **Service Health > Summary** page, click **History**. The **History** tab tracks the following metrics:

   | Metric | Reports |
   | --- | --- |
   | Total Predictions | The number of predictions the deployment has made. |
   | Requests over *x* ms | The number of requests where the response time was longer than the specified number of milliseconds. The default is 2000 ms; click in the box to enter a time between 10 and 100,000 ms or adjust with the controls. |
   | Response Time (ms) | The time (in milliseconds) DataRobot spent receiving a prediction request, calculating the request, and returning a response to the user. The report does not include time due to network latency. Select the **Median** prediction request time or **90th percentile**, **95th percentile**, or **99th percentile**. The display reports a dash if you have made no requests against the deployment or if it's an external deployment. |
   | Execution Time (ms) | The time (in milliseconds) DataRobot spent calculating a prediction request. Select the **Median** prediction request time or **90th percentile**, **95th percentile**, or **99th percentile**. |
   | Data Error Rate (%) | The percentage of requests that result in a 4xx error (problems with the prediction request submission). This is a component of the value reported as the Service Health Summary in the **Deployments** page top banner. |
   | System Error Rate (%) | The percentage of well-formed requests that result in a 5xx error (problem with the DataRobot prediction server). This is a component of the value reported as the Service Health Summary in the **Deployments** page top banner. |
4. To view the details for a data point in a service health history chart, hover over the related bin on the chart.
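The Data Error Rate and System Error Rate metrics in the table above can be illustrated with a short sketch. The status codes below are hypothetical; the point is only how 4xx and 5xx responses are turned into the two percentages:

```python
from collections import Counter

# Hypothetical HTTP status codes returned for ten prediction requests.
status_codes = [200, 200, 200, 422, 200, 500, 200, 200, 400, 200]

def error_rates(codes):
    """Data Error Rate (4xx) and System Error Rate (5xx), as percentages."""
    total = len(codes)
    # Group by status class: 2 for 2xx, 4 for 4xx, 5 for 5xx.
    counts = Counter(code // 100 for code in codes)
    return {
        "data_error_rate_pct": 100 * counts[4] / total,
        "system_error_rate_pct": 100 * counts[5] / total,
    }

print(error_rates(status_codes))  # two 4xx and one 5xx out of ten requests
```

With this sample, the deployment would report a 20% Data Error Rate and a 10% System Error Rate for the window.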

## Accuracy history

The [Accuracy](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/deploy-accuracy.html) page analyzes the performance of model deployments over time using standard statistical measures and visualizations. Use this tool to analyze a model's prediction quality to determine if it is decaying and if you should consider replacing it. In addition, on the History tab, you can access visualizations representing the accuracy history of up to five of the most recently deployed models, including the currently deployed model, allowing you to compare model accuracy directly. These accuracy insights are rendered based on the problem type and its associated optimization metrics.

> [!NOTE] Note
> Accuracy monitoring is not enabled for deployments by default. To enable it, first upload the data that contains predicted and actual values for the deployment collected outside of DataRobot. For more information, see the documentation on [setting up accuracy for deployments](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/accuracy-settings.html) by adding [actuals](https://docs.datarobot.com/en/docs/reference/glossary/index.html#actuals).

To access the Accuracy > History tab:

1. Click **Deployments** and select a deployment from the inventory.
2. On the selected deployment's **Overview**, click **Accuracy**.
3. On the **Accuracy > Summary** page, click **History**. The **History** tab tracks the following:

   | Metric | Reports |
   | --- | --- |
   | Accuracy Over Time | A line graph visualizing the change in the selected accuracy metric over time for up to five of the most recently deployed models, including the currently deployed model. The available accuracy metrics depend on the project type. |
   | Predictions vs Actuals Over Time | A line graph visualizing the difference between the average predicted values and average actual values over time for up to five of the most recently deployed models, including the currently deployed model. For classification projects, you can display results per class. |

   The Accuracy Over Time chart plots the selected accuracy metric for each prediction range along a timeline. You can select an accuracy metric from the **Metric** dropdown list; the metrics available depend on the type of modeling project used for the deployment:

   | Project type | Available metrics |
   | --- | --- |
   | Regression | RMSE, MAE, Gamma Deviance, Tweedie Deviance, R Squared, FVE Gamma, FVE Poisson, FVE Tweedie, Poisson Deviance, MAD, MAPE, RMSLE |
   | Binary classification | LogLoss, AUC, Kolmogorov-Smirnov, Gini-Norm, Rate@Top10%, Rate@Top5%, TNR, TPR, FPR, PPV, NPV, F1, MCC, Accuracy, Balanced Accuracy, FVE Binomial |

   The Predictions vs Actuals Over Time chart plots the average predicted value next to the average actual value for each prediction range along a timeline. In addition, the volume chart below the graph displays the number of predicted and actual values corresponding to the predictions made within each plotted time range. The shaded area represents the number of uploaded actuals, and the striped area represents the number of predictions without corresponding actuals. The timeline and bucketing work the same for classification and regression projects; however, for classification projects, you can use the **Class** dropdown to display results for a selected class.
4. To view the details for a data point in an accuracy history chart, hover over the related bin on the chart.
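The per-bucket view behind the two accuracy charts can be sketched for a regression deployment. The records below are hypothetical (time bucket, predicted value, actual value) triples; the sketch computes RMSE per bucket, as in Accuracy Over Time, alongside the average predicted and actual values, as in Predictions vs Actuals Over Time:

```python
import math
from collections import defaultdict

# Hypothetical (time bucket, predicted, actual) records for one model.
records = [
    ("2026-01", 10.0, 9.0),
    ("2026-01", 12.0, 14.0),
    ("2026-02", 11.0, 15.0),
    ("2026-02", 13.0, 8.0),
]

def accuracy_over_time(rows):
    """Per-bucket RMSE plus average predicted and actual values."""
    buckets = defaultdict(list)
    for bucket, predicted, actual in rows:
        buckets[bucket].append((predicted, actual))
    summary = {}
    for bucket, pairs in sorted(buckets.items()):
        n = len(pairs)
        summary[bucket] = {
            "rmse": math.sqrt(sum((p - a) ** 2 for p, a in pairs) / n),
            "avg_predicted": sum(p for p, _ in pairs) / n,
            "avg_actual": sum(a for _, a in pairs) / n,
        }
    return summary

print(accuracy_over_time(records))
```

A widening gap between `avg_predicted` and `avg_actual` (or a rising RMSE) across buckets is the decay signal the History tab lets you compare across the current and previous models.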
