Notifications and monitoring¶
DataRobot provides automated monitoring with a notification system. You can configure notifications to alert you when service health, data drift status, model accuracy, or fairness falls outside the acceptable levels you define.
You can change deployment notification preferences from the Settings tab. The actions you can control depend on your role (Owner or User). Both roles can set the type of notifications to receive; Owners can also configure the monitoring schedules and thresholds that determine when notifications are sent.
Note that those with the role of Consumer only receive notifications when a deployment is shared with them and when a previously shared deployment is deleted. They are not notified about other events.
Notifications trigger emails. They are off by default but can be enabled by a deployment Owner. Keep in mind that notifications only control whether emails are sent to subscribers. If notifications are disabled, monitoring of service health, data drift, accuracy, and fairness statistics still occurs.
Configure notifications¶
To set the types of notifications you want to receive:
- Select a deployment, then select Settings > Notifications.
- Choose whether to send email notifications and, if so, whether to send them for all events or only critical events.
- Specify a delivery schedule on the Monitoring tab.
Notifications are delivered as emails and must be set for each deployment you want to monitor.
Note
You can also schedule deployment reports on the Notifications tab.
Schedule monitoring¶
Deployment Owners can schedule how frequently service health, data drift, accuracy, and fairness email notifications are sent.
- From your deployment, select Settings > Monitoring.
Note
Only Owners of a deployment can modify monitoring settings. Users can, however, configure the conditions under which notifications are sent to them. Consumers cannot modify monitoring or notification settings.
The following table lists the scheduling options. All times are displayed in the user's configured time zone:
| Frequency | Description |
|-----------|-------------|
| Hour | Service Health: each hour, on the 0 minute. Data Drift: not available. |
| Day | Each day at the configured hour* |
| Week | Configurable day and hour |
| Month | Configurable date and hour |
| Quarter | Configurable number of days (1-31) past the first day of January, April, July, and October, at the configured hour |

* Note that the cadence setting applies across all selected days. In other words, you cannot set checks to occur every 12 hours on Saturday and every 2 hours on Monday.
- After updating the scheduling settings, click Save new settings at the top of the page. At the configured time, DataRobot sends emails to subscribers.
Set up service health monitoring¶
Service health tracks metrics about a deployment’s ability to respond to prediction requests quickly and reliably. You can visualize service health on the Service Health tab.
To set up monitoring of service health for a deployment:
- Under Service Health on the Monitoring tab, schedule notifications for monitoring service health.
- Click Save new settings at the top of the page.
Set up data drift monitoring¶
Data drift assesses how the distribution of data changes across all features over a specified range. The thresholds you set determine how much drift is allowed before a notification is triggered.
Use the Data Drift section of the Monitoring tab to set thresholds for drift and importance:
- Drift is a measure of how new prediction data differs from the original data used to train the model.
- Importance allows you to separate the features you care most about from those that are less important.
For both drift and importance, you can visualize the thresholds and how they separate the features on the Data Drift tab.
By default, the data drift status for a deployment is marked as "Failing" when at least one high-importance feature exceeds the set drift metric threshold, and as "At Risk" when no high-importance features, but at least one low-importance feature, exceeds the threshold.
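To make the drift metric concrete, here is a minimal sketch of how a Population Stability Index (PSI) can be computed for a single numeric feature by comparing a training (reference) sample against recent prediction data. It illustrates the standard PSI formula only; it is not DataRobot's internal computation, and the bin count, the epsilon value, and the 0.15 threshold mentioned in the comment are arbitrary choices for the example.

```python
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, n_bins: int = 10) -> float:
    """Population Stability Index between two samples of one numeric feature."""
    # Derive bin edges from the reference (training) distribution.
    edges = np.histogram_bin_edges(reference, bins=n_bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)

    # Convert counts to proportions; a small epsilon avoids log(0) and division by zero.
    eps = 1e-6
    ref_pct = np.clip(ref_counts / ref_counts.sum(), eps, None)
    cur_pct = np.clip(cur_counts / cur_counts.sum(), eps, None)

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
training_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)    # reference period
prediction_sample = rng.normal(loc=0.5, scale=1.0, size=5_000)  # shifted prediction data

print(f"PSI = {psi(training_sample, prediction_sample):.3f}")
# A value above your chosen drift threshold (for example, 0.15) would flag this feature as drifting.
```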
Deployment owners can customize the rules used to calculate the drift status for each deployment in several ways:
- Define or override the list of high- and low-importance features, so you can monitor the features that matter most to you and put less emphasis on those that matter less.
- Exclude features that are expected to drift from drift status calculation and alerting, so you do not receive false alarms.
- Customize what the "At Risk" and "Failing" drift statuses mean, tailoring the drift status of each deployment to your needs.
To set up monitoring of drift status for a deployment:
- Under Data Drift on the Monitoring tab, configure the settings for monitoring data drift:
| Element | Description |
|---------|-------------|
| Send notifications | Schedules notifications for monitoring data drift. |
| Range | Adjusts the time range of the Reference period, which compares training data to prediction data. Select a time range from the dropdown menu. |
| Drift metric and threshold | Configures the thresholds of the drift metric. The DataRobot UI only supports the Population Stability Index (PSI) metric. When drift thresholds are changed, the Feature Drift vs. Feature Importance chart updates to reflect the changes. For more information, see the note on drift metric support below. |
| Excluded features | Excludes features (including the target) from drift status calculations. A dialog box prompts you to enter the names of the features you want to exclude. Excluded features do not affect the drift status of the deployment but still display on the Feature Drift vs. Feature Importance chart. See an example. |
| Importance metric and threshold | Configures the thresholds of the importance metric, which measures the most impactful features in the training data. DataRobot only supports the Permutation Importance metric. When these thresholds are changed, the Feature Drift vs. Feature Importance chart updates to reflect the changes. See an example. |
| Starred features | Sets features to be treated as high importance even if they were initially assigned low importance. A dialog box prompts you to enter the names of the features to star. Once added, these features are assigned high importance; they ignore the importance thresholds but still display on the Feature Drift vs. Feature Importance chart. See an example. |
| "At Risk" / "Failing" thresholds | Configures the values that trigger the "At Risk" and "Failing" drift statuses. See an example. |
- After updating the data drift monitoring settings, click Save new settings at the top of the page.
Drift metric support
While the DataRobot UI only supports the Population Stability Index (PSI) metric, the API supports Kullback-Leibler Divergence, Hellinger Distance, Kolmogorov-Smirnov, Histogram Intersection, Wasserstein Distance, and Jensen–Shannon Divergence. In addition, using the Python API client, you can retrieve a list of supported metrics.
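As a rough sketch of what that looks like with the Python client, the example below retrieves target-level and feature-level drift for a deployment using a non-default metric. The method names (`Deployment.get_target_drift`, `Deployment.get_feature_drift`), the `metric` parameter, and the `"hellinger"` metric identifier are assumptions based on typical client usage; verify them against the documentation for your `datarobot` package version.

```python
import datarobot as dr

# Assumes credentials are available via environment variables or a client config file.
dr.Client()

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Target drift computed with a non-default metric (method name, parameter, and
# metric identifier are assumptions -- check your client version's documentation).
target_drift = deployment.get_target_drift(metric="hellinger")
print(target_drift.metric, target_drift.drift_score)

# Per-feature drift with the same metric.
for feature in deployment.get_feature_drift(metric="hellinger"):
    print(feature.name, feature.drift_score)
```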
Example of an excluded feature¶
In the example below, the excluded feature, which appears as a gray circle, would normally change the drift status to "Failing." Because it is excluded, the status remains "Passing."
Example of configuring the importance and drift thresholds¶
In the example below, the importance and drift thresholds have been adjusted (indicated by the arrows), resulting in more features marked "At Risk" and "Failing" than in the chart above.
Example of starring a feature to assign high importance¶
In the example below, the starred feature, which appears as a white circle, would normally cause the drift status to be "At Risk" due to its initially low importance. However, because it is assigned high importance, the feature changes the drift status to "Failing."
Example of setting a drift status rule¶
The following example configures the rule for a deployment to mark its drift status as "At Risk" if one of the following is true:
- The number of low-importance features above the drift threshold is greater than 1.
- The number of high-importance features above the drift threshold is greater than 3.
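The following minimal sketch expresses that example rule as code. It is purely illustrative and not DataRobot's implementation; the function and argument names are hypothetical, and the thresholds mirror the example above.

```python
def is_at_risk(n_low_above_threshold: int, n_high_above_threshold: int) -> bool:
    """Return True when the example rule above would mark the deployment "At Risk".

    n_low_above_threshold:  low-importance features above the drift threshold
    n_high_above_threshold: high-importance features above the drift threshold
    """
    return n_low_above_threshold > 1 or n_high_above_threshold > 3

# Two drifting low-importance features are enough to trigger "At Risk" under this rule.
print(is_at_risk(n_low_above_threshold=2, n_high_above_threshold=0))  # True
```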
Set up accuracy monitoring¶
For Accuracy, the notification conditions relate to a performance optimization metric for the underlying model in the deployment. Select from the same set of metrics that are available on the Leaderboard. You can visualize accuracy using the Accuracy over Time graph and the Prediction & Actual graph.
Accuracy monitoring is defined by a single accuracy rule. Every 30 seconds, the rule evaluates the deployment's accuracy. Notifications trigger when this rule is violated.
Before configuring accuracy notifications and monitoring for a deployment, set an association ID. If an association ID is not set, DataRobot displays a message indicating that one is required when you try to modify the accuracy notification settings.
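If you manage deployments with the Python client, the association ID can also be set programmatically before enabling accuracy monitoring. This is a minimal sketch: the `update_association_id_settings` method, its arguments, and the `"row_id"` column name are assumptions to verify against your `datarobot` client version, and the column must exist in your prediction data.

```python
import datarobot as dr

dr.Client()  # reads credentials from environment variables or a config file

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Associate each prediction with a row identifier so that actuals can later be
# matched to predictions for accuracy monitoring. Method and argument names are
# assumptions -- check the documentation for your client version.
deployment.update_association_id_settings(
    column_names=["row_id"],
    required_in_prediction_requests=True,
)
```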
Deployment owners can customize the rules used to calculate the accuracy status for each deployment.
To set up accuracy monitoring:
- Under Accuracy on the Monitoring tab, configure the settings for monitoring accuracy:
| Element | Description |
|---------|-------------|
| Send notifications | Schedules notifications for monitoring accuracy. |
| Metric | Evaluates accuracy for your deployment. The metrics available from the dropdown menu are the same as those supported by the Accuracy tab. |
| Measurement | Defines the unit of measurement for the accuracy metric and its thresholds. You can select value or percent from the dropdown. The value option measures the metric and its thresholds by specific values; the percent option measures by percent change. Percent is unavailable for model deployments that do not have training data. |
| "At Risk" / "Failing" thresholds | Sets the values or percentages that, when exceeded, trigger notifications. Two thresholds are supported: one for when the deployment's accuracy is "At Risk" and one for when it is "Failing." DataRobot provides default values for the thresholds of the first accuracy metric offered (LogLoss for classification deployments and RMSE for regression deployments) based on the deployment's training data. Deployments without training data populate default threshold values from their prediction data instead. If you change metrics, default values are not provided. |

- After updating the accuracy monitoring settings, click Save new settings at the top of the page.
Note
Only deployment Owners can change the definition of an accuracy rule, and each deployment supports no more than one accuracy rule. Deployment Users can view an explanation of the status by hovering over the accuracy status icon.
Examples of accuracy monitoring settings¶
Each combination of metric and measurement determines how the rule is expressed. For example, if you use the LogLoss metric measured by value, the rule triggers notifications when accuracy "is greater than" the configured threshold values of 5 or 10.
However, if you change the metric to AUC and the measurement to percent, the rule triggers notifications when accuracy "decreases by" the percentage set for each threshold.
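The two behaviors can be illustrated with a short sketch. It is not DataRobot's implementation: the function names are hypothetical, the value thresholds (5 and 10) come from the LogLoss example above, and the percent thresholds (5% and 10%) are arbitrary values chosen for the illustration.

```python
def status_by_value(logloss: float, at_risk: float = 5.0, failing: float = 10.0) -> str:
    """LogLoss measured by value: higher is worse, so a status triggers above a value."""
    if logloss > failing:
        return "Failing"
    if logloss > at_risk:
        return "At Risk"
    return "Passing"

def status_by_percent(baseline_auc: float, current_auc: float,
                      at_risk_pct: float = 5.0, failing_pct: float = 10.0) -> str:
    """AUC measured by percent: a status triggers when AUC decreases by more than X%."""
    decrease_pct = (baseline_auc - current_auc) / baseline_auc * 100
    if decrease_pct > failing_pct:
        return "Failing"
    if decrease_pct > at_risk_pct:
        return "At Risk"
    return "Passing"

print(status_by_value(logloss=7.2))                            # "At Risk"
print(status_by_percent(baseline_auc=0.90, current_auc=0.80))  # "Failing" (11.1% decrease)
```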
Set up fairness monitoring¶
Configure notifications to alert you when a production model is at risk of failing to meet, or has failed to meet, its predefined fairness criteria. You can visualize the fairness status on the Fairness tab.
Fairness monitoring uses a primary fairness metric and two thresholds—protected features considered to be "At Risk" and "Failing"—to monitor fairness. If not specified, DataRobot uses the default thresholds and the primary fairness metric defined in Settings > Data.
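To illustrate what a primary fairness metric measures, the sketch below computes a proportional-parity style score for each class of a protected feature and flags classes that fall below a fairness threshold. It is a simplified illustration rather than DataRobot's exact computation; the column names, the sample data, and the 0.8 threshold (the common "four-fifths" rule) are assumptions for the example.

```python
import pandas as pd

# Hypothetical prediction results containing a protected feature and a binary prediction.
predictions = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "prediction": [1, 0, 0, 1, 1, 0, 1, 1],  # 1 = favorable outcome
})

# Rate of favorable predictions for each protected class.
favorable_rate = predictions.groupby("gender")["prediction"].mean()

# Proportional-parity style score: each class's rate relative to the best-off class.
fairness_score = favorable_rate / favorable_rate.max()

threshold = 0.8  # assumed fairness threshold (the "four-fifths" rule)
for protected_class, score in fairness_score.items():
    status = "below threshold" if score < threshold else "ok"
    print(f"{protected_class}: score={score:.2f} ({status})")
```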
Deployment owners can customize the rules used to calculate the fairness status for each deployment:
- Under Fairness on the Monitoring tab, configure the settings for monitoring fairness:
| Element | Description |
|---------|-------------|
| Send notifications | Schedules notifications for monitoring fairness. |
| Primary fairness metric | Defines the statistical measure of parity constraints used to assess fairness. |
| Protected features | Displays the features specified as protected (read only). Click Edit Fairness Settings to open the Data tab, where you can modify the fairness settings, including protected features. |
| "At Risk" / "Failing" thresholds | Sets the values that, when exceeded, trigger notifications. Two thresholds are supported: "At Risk" and "Failing." The value in each field corresponds to the number of protected features below the bias threshold. |

- After updating the fairness monitoring settings, click Save new settings at the top of the page.
Save monitoring settings¶
After updating monitoring settings, click Save new settings at the top of the page.
Note
Changes to thresholds affect all time periods in which predictions were made, across the entire history of a deployment. The updated thresholds are reflected in the performance monitoring visualizations on the Data Drift, Accuracy, and Fairness tabs.
If you are not satisfied with the configuration or want to restore the default settings, click Reset to defaults.