Notifications

You can change deployment notification preferences from the Settings tab. The actions you can control depend on your role (Owner or User): both roles can choose the types of notifications to receive, while only Owners can set notification scheduling for service health and data drift status. Those with the Consumer role receive notifications only when a deployment is shared with them and when a previously shared deployment is deleted; they are not notified about other events.

Notifications trigger emails for service health and data drift reporting. They are off by default but can be enabled by a deployment Owner on the Monitoring tab. Keep in mind that disabling notifications does not affect monitoring of either of these statistics; it only controls whether emails are sent to subscribers.

Usage

Set the type(s) of notifications you want to receive from the Notifications tab. The delivery schedule is managed via the Monitoring tab and can be set by deployment Owners.

Notifications are delivered as emails and must be set for each deployment you want to monitor.

Scheduling

From the Monitoring tab, deployment Owners can schedule the frequency at which service health, data drift, and accuracy email notifications are sent.

Only Owners of a deployment can modify monitoring settings; Users can, however, configure the notification level they want to receive. Consumers cannot modify their monitoring settings.

The following table lists the setting options. All times are displayed in the user's configured time zone:

Frequency | Description
Hour | Service Health: each hour, at minute 0. Data Drift: not available.
Day | Each day at the configured hour*
Week | Configurable day and hour
Month | Configurable date and hour
Quarter | Configurable number of days (1-31) past the first day of January, April, July, or October, at the configured hour

* Note that the cadence setting applies across all days selected. In other words, you cannot set checks to occur every 12 hours on Saturday and every 2 hours on Monday.

At the configured time, DataRobot sends emails to subscribers.
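
The mapping from a frequency setting to a concrete send time can be illustrated with a short sketch. The helper below is hypothetical (DataRobot schedules these emails itself); it only shows how the Quarter option described above could resolve to the next send time for an illustrative configuration of 7 days past the start of the quarter, at hour 9.

    from datetime import datetime, timedelta

    def next_quarterly_notification(now, days_past_quarter_start, hour):
        """Illustrative only: resolve the 'Quarter' frequency (N days past the
        first day of January, April, July, or October, at a configured hour)
        to the next concrete send time."""
        quarter_starts = [datetime(now.year + dy, month, 1)
                          for dy in (0, 1)
                          for month in (1, 4, 7, 10)]
        for start in quarter_starts:
            send_time = start + timedelta(days=days_past_quarter_start - 1, hours=hour)
            if send_time > now:
                return send_time
        return None

    # Example: 7 days past the quarter start, at 09:00 in the user's time zone.
    print(next_quarterly_notification(datetime(2021, 11, 18, 12, 0), 7, 9))
    # -> 2022-01-07 09:00:00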

Accuracy notifications

To enable Accuracy notifications for a deployment, set an association ID. If one is not set, DataRobot displays a message prompting you to do so when you try to modify accuracy notification settings.

Accuracy monitoring is defined by a single accuracy rule. Every 30 seconds, the rule evaluates a deployment's accuracy. Notifications trigger when this rule is violated. Deployment owners must define the metric, measurement, and threshold components of the accuracy rule:

Field | Description
Metric | The optimization metric that evaluates accuracy for your deployment. The metrics available from the dropdown menu are the same as those supported by the Accuracy tab.
Measurement | Defines the unit of measurement for the accuracy metric and its thresholds. Select value or percent from the dropdown: the value option measures the metric and thresholds by specific values, and the percent option measures by percent changed. Percent is unavailable for model deployments that do not have training data.
Thresholds | Values or percentages that, when exceeded, trigger notifications. Two thresholds are supported: when the deployment's accuracy is "At Risk" and when it is "Failing". DataRobot provides default values for the thresholds of the first accuracy metric (LogLoss for classification deployments and RMSE for regression deployments) based on the deployment's training data. Deployments without training data populate default threshold values from their prediction data instead. If you change metrics, default values are not provided.

Each combination of metric and measurement determines the expression of the rule. For example, if you use the LogLoss metric measured by value, the rule triggers notifications when accuracy "is greater than" the configured threshold values (for example, 5 or 10).

However, if you change the metric to AUC and the measurement to percent, the rule triggers notifications when accuracy "decreases by" the percentages set for the thresholds.
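
These rule expressions amount to simple threshold checks. The sketch below is a minimal illustration, assuming the example thresholds of 5 and 10 from above (reused for the percent case purely for illustration) and a hypothetical baseline AUC; the function names are not part of DataRobot, and the platform performs this evaluation internally.

    def accuracy_status_by_value(logloss, at_risk=5, failing=10):
        """'Value' measurement with LogLoss: alert when the metric
        'is greater than' a threshold (a higher LogLoss is worse)."""
        if logloss > failing:
            return "Failing"
        if logloss > at_risk:
            return "At Risk"
        return "Passing"

    def accuracy_status_by_percent(current_auc, baseline_auc, at_risk=5, failing=10):
        """'Percent' measurement with AUC: alert when accuracy
        'decreases by' more than a threshold percentage relative to a baseline."""
        decrease_pct = (baseline_auc - current_auc) / baseline_auc * 100
        if decrease_pct > failing:
            return "Failing"
        if decrease_pct > at_risk:
            return "At Risk"
        return "Passing"

    print(accuracy_status_by_value(7.2))            # "At Risk"
    print(accuracy_status_by_percent(0.70, 0.80))   # 12.5% decrease -> "Failing"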

Owners can set no more than one accuracy rule per deployment, and only the Owner role can change the definition of an accuracy rule. However, deployment Users can view explained health status information by hovering over the accuracy status icon.

Fairness notifications

Configure notifications to alert you when a production model is at risk of or fails to meet predefined fairness criteria. Fairness monitoring uses a primary fairness metric and two thresholds—protected features considered to be "At Risk" and "Failing"—to monitor fairness. If not specified, DataRobot uses the default thresholds and the primary fairness metric defined in Settings > Data.

Field | Description
Primary fairness metric | The statistical measure of parity constraints used to assess fairness.
Thresholds | Values that, when exceeded, trigger notifications. Two thresholds are supported: "At Risk" and "Failing." The value in each field corresponds to the number of protected features below the bias threshold.

To open the Data tab and modify the fairness settings, click Edit Fairness Settings.
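
As an illustration of the threshold definition above, the following sketch counts the protected features whose fairness score falls below the bias threshold and compares that count with the configured "At Risk" and "Failing" values. The names and scores are hypothetical; DataRobot performs this evaluation internally.

    def fairness_status(per_feature_fairness, bias_threshold, at_risk_count, failing_count):
        """Illustrative only: count protected features whose fairness score is
        below the bias threshold, then compare that count to the configured
        'At Risk' and 'Failing' thresholds."""
        features_below = [feature for feature, score in per_feature_fairness.items()
                          if score < bias_threshold]
        if len(features_below) >= failing_count:
            return "Failing", features_below
        if len(features_below) >= at_risk_count:
            return "At Risk", features_below
        return "Passing", features_below

    # Example: two protected features fall below a 0.8 bias threshold.
    scores = {"gender": 0.72, "age_group": 0.95, "region": 0.78}
    print(fairness_status(scores, bias_threshold=0.8, at_risk_count=1, failing_count=3))
    # -> ("At Risk", ["gender", "region"])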

Prediction warnings

Prediction warnings help you confirm that deployments behave as expected in production. This feature detects when deployments produce predictions with outlier values, summarized in a report that returns with your predictions. Prediction warnings allow you to mitigate risk and make models more robust by identifying when predictions do not match their expected result.

Enable prediction warnings

To enable prediction warnings for a deployment, navigate to the Data tab within a deployment's Settings page. Under the Inference Data heading, turn on the toggle "Compute ranges for prediction warnings".

Once enabled, DataRobot presents computed thresholds derived from the Holdout partition of your model. These are the boundaries for outlier detection: DataRobot reports any prediction result outside of these limits. You can choose to accept the holdout-based thresholds or manually define the ranges instead. After making any desired changes, click Save ranges.
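
Exactly how DataRobot computes the suggested ranges from the Holdout partition is not detailed here, so the sketch below only illustrates the general idea: derive lower and upper bounds from holdout predictions and flag anything outside them. All names and values are hypothetical.

    def compute_prediction_ranges(holdout_predictions):
        """Illustrative only: derive outlier-detection bounds from holdout
        predictions (here, simply the observed minimum and maximum)."""
        return min(holdout_predictions), max(holdout_predictions)

    def is_outlier(prediction, lower, upper):
        """Flag any prediction value that falls outside the saved ranges."""
        return prediction < lower or prediction > upper

    lower, upper = compute_prediction_ranges([12.4, 18.9, 25.1, 33.7, 40.0])
    print(is_outlier(50.0, lower, upper))   # True: 50 exceeds the upper bound of 40
    print(is_outlier(27.5, lower, upper))   # False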

Note

Prediction warnings are not a retroactive feature. For example, if your upper-bound threshold for outliers is 40, then a prediction with a value of 50 made prior to setting up your thresholds is not retroactively detected as an outlier. Prediction warnings will only return with prediction requests made after the feature is enabled.

Use prediction warnings

After saving your settings, navigate to the deployment's Predictions > Prediction API tab. To generate a report of prediction warnings, check the Prediction Warnings box.

Once checked, copy the snippet and make predictions. Enabling prediction warnings modifies the snippet to report any detected outliers alongside your prediction results.

Every prediction result contains the isOutlierPrediction key, marked true for a detected outlier and false when none is detected.
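
The sketch below shows one way to read that key from a Prediction API response. It follows the standard DataRobot Prediction API pattern (endpoint path, Authorization and DataRobot-Key headers), but the snippet generated on your deployment's Prediction API tab should be preferred; the server URL, keys, and file name here are placeholders.

    import requests

    # Placeholders: use the values from your deployment's Prediction API tab.
    PREDICTION_SERVER = "https://example.datarobot.com"
    DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"
    API_KEY = "YOUR_API_KEY"
    DATAROBOT_KEY = "YOUR_DATAROBOT_KEY"

    url = f"{PREDICTION_SERVER}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions"
    headers = {
        "Content-Type": "text/plain; charset=UTF-8",
        "Authorization": f"Bearer {API_KEY}",
        "DataRobot-Key": DATAROBOT_KEY,
    }

    # Score a CSV file against the deployment.
    with open("scoring_data.csv", "rb") as f:
        response = requests.post(url, headers=headers, data=f)
    response.raise_for_status()

    # With prediction warnings enabled, each row of the response reports
    # whether the prediction was flagged as an outlier.
    for row in response.json()["data"]:
        print(row["prediction"], row.get("isOutlierPrediction", False))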

When DataRobot detects outlier predictions, consider applying the following solutions when interpreting your results (a minimal sketch follows the list):

  • Substitute the outlier with the minimum or maximum value of your target feature in the training dataset.
  • Substitute the outlier with the mean or median value of your target feature in the training dataset.
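
A brief sketch of the substitution approach described above, assuming you have the training dataset's target minimum and maximum available locally; substituting the mean or median works the same way. The helper name is hypothetical.

    def substitute_outlier(prediction, training_min, training_max):
        """Illustrative only: replace an outlier prediction with the nearest
        bound of the target feature observed in the training dataset."""
        if prediction > training_max:
            return training_max
        if prediction < training_min:
            return training_min
        return prediction

    # Example: clamp an outlier of 50 to a training-set maximum of 40.
    print(substitute_outlier(50.0, training_min=5.0, training_max=40.0))   # 40.0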

Updated November 18, 2021