Service Health tab¶
The Service Health tab tracks metrics about a deployment's ability to respond to prediction requests quickly and reliably. This helps identify bottlenecks and assess capacity, which is critical to proper provisioning.
For example, if a model's response times seem to have slowed overall, the Service Health tab for the model's deployment can help. You might notice in the tab that median latency rises with an increase in prediction requests. If latency increases after a new model is switched in, you can consult with your team to determine whether to replace it with a model that offers better performance.
To access Service Health, select an individual deployment from the deployment inventory page and, from the resulting Overview page, choose the Service Health tab. The tab provides informational tiles and a chart to help assess the activity level and health of the deployment.
Time of Prediction
The Time of Prediction value can differ between the Data Drift and Accuracy tabs and the Service Health tab. On the Data Drift and Accuracy tabs, the Time of Prediction is the time you submitted the prediction request (i.e., the prediction timestamp). On the Service Health tab, the Time of Prediction is the time the prediction server processed the prediction request. The Service Health tab tracks prediction time this way to accurately represent the prediction service's health for diagnostic purposes.
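To illustrate the difference, this minimal sketch tags a single request with both timestamps and buckets it by day; the same request can land in different intervals on the Data Drift and Service Health tabs. The field names and values here are illustrative assumptions, not DataRobot internals:

```python
from datetime import datetime, timedelta

# Hypothetical request submitted just before midnight (client-side timestamp)...
submitted_at = datetime(2024, 1, 1, 23, 59, 30)
# ...and processed by the prediction server 45 seconds later, after midnight.
processed_at = submitted_at + timedelta(seconds=45)

# Data Drift and Accuracy bucket the request by submission time,
# while Service Health buckets it by server processing time.
drift_bucket = submitted_at.date()      # 2024-01-01
service_bucket = processed_at.date()    # 2024-01-02
```

Under a daily resolution, this request would count toward January 1 on the Data Drift tab but toward January 2 on the Service Health tab.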
Use the time range and resolution dropdowns¶
The controls—model version and data time range selectors—work the same as those available on the Data Drift tab. The Service Health tab also supports segmented analysis, allowing you to view service health statistics for individual segment attributes and values.
Understand the metric tiles¶
DataRobot displays informational statistics based on your current model and time frame settings. Tile values use the same interval as the one selected on the slider; for example, if the slider interval is weekly, the displayed tile metrics report weekly values. Clicking a metric tile updates the chart below.
Service Health reports on the following metrics:
| Statistic | Reports, for the selected time period... |
|-----------|------------------------------------------|
| Total Predictions | The number of predictions the deployment has made. |
| Total Requests | The number of prediction requests the deployment has received (a single request can contain multiple prediction rows). |
| Requests over... | The number of requests with a response time longer than the specified number of milliseconds. The default is 2000 ms; click in the box to enter a time between 10 and 100,000 ms, or adjust with the controls. |
| Response Time | The time (in milliseconds) DataRobot spent receiving a prediction request, computing the prediction, and returning a response to the user. The measurement excludes network latency. Select the median prediction request time or the 90th, 95th, or 99th percentile. The tile reports a dash if no requests have been made against the deployment or if the deployment is external. |
| Execution Time | The time (in milliseconds) DataRobot spent computing a prediction request. Select the median prediction request time or the 90th, 95th, or 99th percentile. |
| Median/Peak Load | The median and maximum number of requests per minute. |
| Data Error Rate | The percentage of requests that resulted in a 4xx error (problems with the prediction request submission). This is a component of the Service Health Summary reported in the banner at the top of the Deployments page. |
| System Error Rate | The percentage of well-formed requests that resulted in a 5xx error (problems with the DataRobot prediction server). This is a component of the Service Health Summary reported in the banner at the top of the Deployments page. |
| Consumers | The number of distinct users (identified by API key) who have made prediction requests against this deployment. |
| Cache Hit Rate | The percentage of requests served by a cached model (the model was recently used by other predictions). If a model is not cached, DataRobot must look it up, which can cause delays. The prediction server cache holds 16 models by default, dropping the least recently used model when the limit is reached. |
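The latency and error-rate tiles are straightforward aggregates over the requests in the selected interval. The sketch below computes several of them from a hypothetical request log; the log entries, status-code classification, and nearest-rank percentile method are illustrative assumptions, not DataRobot internals:

```python
from statistics import median

# Hypothetical request log: (response time in ms, HTTP status code).
request_log = [
    (120, 200), (95, 200), (2100, 200), (430, 422),
    (88, 200), (5000, 500), (150, 200), (77, 200),
]

def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

times = [ms for ms, _ in request_log]
total_requests = len(request_log)
slow_requests = sum(1 for ms in times if ms > 2000)             # "Requests over 2000 ms" tile
data_errors = sum(1 for _, s in request_log if 400 <= s < 500)  # 4xx: bad request submissions
system_errors = sum(1 for _, s in request_log if s >= 500)      # 5xx: prediction server problems

print("Total Requests:   ", total_requests)                     # 8
print("Requests > 2000ms:", slow_requests)                      # 2
print("Median (ms):      ", median(times))                      # 135
print("p95 (ms):         ", percentile(times, 95))
print("Data Error Rate:  ", f"{100 * data_errors / total_requests:.1f}%")
print("System Error Rate:", f"{100 * system_errors / total_requests:.1f}%")
```

Note that, as in the tiles, the Data Error Rate and System Error Rate are computed as percentages of requests in the interval, and the percentile selector (median, 90th, 95th, 99th) simply picks a different rank from the same distribution of response times.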
Understand the Service Health chart¶
The chart below the tiled metrics displays individual metrics over time, helping you identify patterns in the quality of service. Clicking a metric tile updates the chart to display that metric; you can also export the chart. Adjust the data range slider to zoom in on a specific period:
Some charts will display multiple metrics: