Humility rules

MLOps allows you to create humility rules for deployments using the Humility tab.

Availability information

The Humility tab is only available for DataRobot MLOps users. Contact your DataRobot representative for more information about enabling this feature.

Humility rules enable models to recognize, in real time, when they make uncertain predictions or receive data they have not seen before. Unlike data drift, model humility does not deal with broad statistical properties over time; instead, it is triggered for individual predictions, allowing you to set desired behaviors with rules that depend on different triggers.

Using humility rules to add triggers and corresponding actions to a prediction helps mitigate risk for models in production. Humility rules help identify and handle data integrity issues during monitoring and make it easier to find the root cause of unstable predictions.

Note

To enable humility-over-time monitoring for a deployment (displayed on the Summary page), Data Drift monitoring must also be enabled.

Dedicated humility rules are available for multiseries projects. While they follow the same general workflow as AutoML projects, they have their own settings and options, described below.

Create a new humility rule

Create humility rules for a deployment from the Humility > Rules tab. If you have not yet enabled humility for a model, toggle on Enable humility. Then, create a new rule by selecting Add Rule. Alternatively, you can opt to use the recommended rules provided by DataRobot.

In the resulting dialog, click the pencil icon to provide a name for your rule.

Each rule, applied to the deployment's predicted values, is composed of two components: a trigger that detects a rule violation and an action to apply to the violating prediction.

Choose a trigger

Select a Trigger for the rule you want to create. There are three triggers available. Each trigger has a modal used to configure how it detects violations. See below for details on how each trigger is configured.

  • Uncertain Prediction: Detects whether a prediction's value violates the configured thresholds. You must set lower-bound and upper-bound thresholds for prediction values.
  • Outlying Input: Detects whether the input value of a numeric feature is outside of the configured thresholds.
  • Low Observation Region: Detects whether the input value of a categorical feature is not included in the list of specified values.

Uncertain Prediction

To configure the uncertain prediction trigger, set lower-bound and upper-bound thresholds for prediction values. You can either enter these values manually or click Calculate to use computed thresholds derived from the Holdout partition of the model (only available for DataRobot models). For regression models, the trigger detects any values outside of the configured thresholds. For binary classification models, the trigger detects any prediction probability that falls inside the thresholds. You can view the type of model for your deployment from the Settings > Data tab.
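For reference, the detection logic described above can be sketched as follows. This is a minimal illustration of the documented behavior, not DataRobot's implementation:

```python
def uncertain_prediction_triggered(value, lower, upper, model_type):
    """Sketch of the Uncertain Prediction trigger described above.

    Regression models: the trigger fires when the predicted value falls
    outside [lower, upper]. Binary classification models: the trigger
    fires when the positive-class probability falls inside the bounds,
    i.e., the model is not confidently near either class.
    """
    if model_type == "regression":
        return value < lower or value > upper
    if model_type == "binary":
        return lower < value < upper
    raise ValueError("Uncertain Prediction supports regression and binary models only")

# With bounds 0.4 and 0.6, a probability of 0.5 counts as uncertain.
assert uncertain_prediction_triggered(0.5, 0.4, 0.6, "binary")
# With bounds 1 and 50, a regression prediction of 75 counts as uncertain.
assert uncertain_prediction_triggered(75, 1, 50, "regression")
```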

Outlying Input

To configure the outlying input trigger, select a numeric feature and set the lower-bound and upper-bound thresholds for its input values. Enter the values manually or click Calculate to use computed thresholds derived from the training data of the model (only available for DataRobot models).

Low Observation Region

To configure the low observation region trigger, select a categorical feature and indicate one or more values. Any input value in a prediction request that does not match one of the indicated values triggers the action.
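The other two triggers follow simpler logic, sketched below under the same caveat (an illustration of the documented behavior, not DataRobot's code):

```python
def outlying_input_triggered(value, lower, upper):
    # Fires when a numeric feature's input value falls outside the bounds.
    return value < lower or value > upper

def low_observation_region_triggered(value, known_values):
    # Fires when a categorical input value is not in the configured list.
    return value not in known_values

assert outlying_input_triggered(105.0, lower=0.0, upper=100.0)
assert low_observation_region_triggered("purple", {"red", "green", "blue"})
```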

Choose an Action

Select an Action for DataRobot to apply if a trigger indicates a rule violation. There are three actions available.

  • Override prediction: Modifies the predicted value of rows violating the trigger, replacing it with the value configured for the action.
  • Throw error: Rows in violation of the trigger return a 480 HTTP error with the predictions, which also contributes to the data error rate on the Service Health tab.
  • No operation: No changes are made to the detected prediction value.

Override prediction

To configure the override action, set a value that will overwrite the returned value for predictions violating the trigger. For binary classification and multiclass models, the indicated value can be set to any of the model's class labels (e.g., "True" or "False"). For regression models, manually enter a value or use the maximum, minimum, or mean provided by DataRobot (only provided for DataRobot models).

Throw error

To configure the throw error action, use the default error message provided or specify your own custom error message. This error message is returned alongside the 480 HTTP error with the predictions.
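On the client side, a request that violates a rule with the throw error action returns HTTP status 480. Below is a minimal sketch of handling that status with Python's requests library; the URL, token, and file name are placeholders for your deployment's values, and the exact response schema is described in the Prediction API documentation:

```python
import requests

# Placeholder endpoint and credentials; substitute your deployment's values.
url = "https://example.datarobot.com/predApi/v1.0/deployments/<deployment_id>/predictions"
headers = {"Authorization": "Bearer <api_token>", "Content-Type": "text/csv"}

with open("scoring_data.csv", "rb") as f:
    response = requests.post(url, headers=headers, data=f)

if response.status_code == 480:
    # A humility rule with the Throw error action was triggered; the body
    # carries the default or custom error message configured for the rule.
    print("Humility rule violation:", response.text)
else:
    response.raise_for_status()
    print(response.json())
```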

Save rule configurations

When rule configuration is complete, a rule explanation displays below the rule, describing what happens for the configured trigger and its action.

Click Add to save the rule. To save all humility rule changes for the deployment, you must then click Submit. If you navigate away from the Humility tab without clicking Submit, your changes are not saved.

Once saved and submitted, DataRobot monitors the deployment using the new rule and any previously created rules. After a rule is created, the prediction response body includes a humility object. Refer to the Prediction API documentation for more information.
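As a rough sketch of consuming that response, the snippet below (continuing from the response object in the previous example) looks for the humility entry on each prediction row. The data and rowId keys follow the Prediction API's usual response shape, but treat the field names here as placeholders and confirm the actual schema in the Prediction API documentation:

```python
# Illustrative only: confirm key names against the Prediction API docs.
for row in response.json().get("data", []):
    humility = row.get("humility")  # populated once humility rules exist
    if humility:
        print("Row", row.get("rowId"), "triggered humility rules:", humility)
```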

Edit humility rules

Existing rules can be edited or deleted from the Setup tab by users with Owner permissions.

Note

Edits to humility rules can have a significant impact on deployment predictions, as prediction values can be overwritten with new values or can return errors, depending on the rules configured.

To edit a rule, select the pencil icon for that rule.

You can change the trigger, action, and any associated values for a rule. When you are finished, click Save Changes.

To delete a rule, select the trash can icon for that rule.

To reorder the rules, drag and drop them into the desired order.

After you have made any of the changes described above, click Submit to finalize your edits (in addition to having selected Save Changes after editing a rule). Submit is the only way to permanently save rule changes.

Warning

You cannot save a new or edited rule that duplicates an existing rule; a warning message notifies you if your changes result in a duplicate.

Rule application order

The displayed list order of your rules determines the order in which they are applied. Although every humility rule's trigger is evaluated, if multiple rules trigger for a prediction, DataRobot applies the first rule in the list that changes the prediction value. However, if any triggered rule has the Throw error action, that rule takes priority.

For example, consider a deployment with the following rules:

  • Rule 1: Uncertain Prediction trigger; action: override the prediction value to 55; thresholds: lower 1, upper 50.
  • Rule 2: Uncertain Prediction trigger; action: override the prediction value to 66; thresholds: lower 45, upper 50.

If a prediction returns the value 47, both rules will trigger, as the value falls within both rules' thresholds and is detected as an uncertain prediction. Rule 1 takes priority, so the prediction value is overwritten to 55. The action to overwrite the value to 66 (based on Rule 2) is ignored.

In another example, consider a deployment with the following rules:

  • Rule 1: Uncertain Prediction trigger; action: override the prediction value to 55; thresholds: lower 1, upper 55.
  • Rule 2: Uncertain Prediction trigger; action: throw an error; thresholds: lower 45, upper 60.

If a prediction returns the value 50, both rules will trigger. However, Rule 2 takes priority over Rule 1 because it is configured to return an error. The value is therefore not overwritten; the Throw error action takes priority over the rules' list order.
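The precedence behavior in both examples can be summarized in a short sketch. The Rule representation below is invented for illustration; only the ordering logic reflects the documented behavior:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    triggered: Callable[[float], bool]   # does the trigger fire for this value?
    action: str                          # "override", "error", or "noop"
    override_value: Optional[float] = None

def apply_rules(rules, value):
    fired = [r for r in rules if r.triggered(value)]
    # A triggered Throw error rule takes priority over list order.
    for rule in fired:
        if rule.action == "error":
            raise RuntimeError(f"{rule.name}: humility rule violation (HTTP 480)")
    # Otherwise, the first listed rule that changes the value wins.
    for rule in fired:
        if rule.action == "override":
            return rule.override_value
    return value

rules = [
    Rule("Rule 1", lambda v: 1 <= v <= 50, "override", 55.0),
    Rule("Rule 2", lambda v: 45 <= v <= 50, "override", 66.0),
]
print(apply_rules(rules, 47.0))  # 55.0: Rule 1 wins by list order
```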

View humility data over time

After configuring rules and making predictions with humility monitoring enabled, you can view the humility data collected for a deployment from the Humility > Summary tab.

The X-axis shows the time range over which predictions have been made for the deployment. The Y-axis shows the number of times humility rules triggered during each period.

The controls—model version and data time range selectors—work the same as those available on the Data Drift tab.

Prediction warnings

Prediction warnings allow you to mitigate risk and make models more robust by identifying when predictions do not match their expected result in production. This feature detects when deployments produce predictions with outlier values, summarized in a report that returns with your predictions.

Prediction warnings provide the same functionality as the Uncertain Prediction trigger that is part of humility monitoring. You may want to enable both, however, because prediction warning results are integrated into the Predictions over time chart on the Data Drift tab.

Enable prediction warnings

To enable prediction warnings for a deployment, navigate to the Data tab within a deployment's Settings page. Under the Inference Data heading, toggle on "Compute ranges for prediction warnings".

Once enabled, DataRobot presents computed thresholds derived from the Holdout partition of your model. These are the boundaries for outlier detection—DataRobot reports any prediction result outside of these limits. You can choose to accept the Holdout-based thresholds or manually define the ranges instead. After making any desired changes, click Save ranges.

Prediction warnings are also reported on the Predictions over time chart on the Data Drift tab.

Note

Prediction warnings are not retroactive. For example, if you set the upper-bound threshold for outliers to 40, a prediction with a value of 50, made prior to setting up thresholds, is not retroactively detected as an outlier. Prediction warnings will only return with prediction requests made after the feature is enabled.

Use prediction warnings

After saving your settings, navigate to the deployment's Predictions > Python Scoring Code tab. To generate a report of prediction warnings, check the Prediction Warnings box.

Once checked, copy the snippet and make predictions. Enabling prediction warnings modifies the snippet to report any detected outliers alongside your prediction results.

Every prediction result contains the isOutlierPrediction key, marked true for a detected outlier and false otherwise.

When DataRobot detects outlier predictions, consider substituting the outlier value (as shown in the sketch after this list) with:

  • the minimum or maximum target value from the training dataset.
  • the mean or median target value from the training dataset.
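A minimal sketch of that substitution is shown below. The results list mimics the isOutlierPrediction flag described above; TRAINING_TARGET_MEAN is a hypothetical value you would compute from your own training dataset (mean, median, minimum, or maximum, per the suggestions above):

```python
TRAINING_TARGET_MEAN = 23.7  # hypothetical; compute from your training data

# Mimics prediction results that include the isOutlierPrediction flag.
results = [
    {"rowId": 0, "prediction": 21.5, "isOutlierPrediction": False},
    {"rowId": 1, "prediction": 50.0, "isOutlierPrediction": True},
]

# Replace flagged outlier predictions with the substitute value.
for row in results:
    if row["isOutlierPrediction"]:
        row["prediction"] = TRAINING_TARGET_MEAN

print(results)
```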

Humility tab considerations

Consider the following when using the Humility tab:

  • You cannot define more than 10 humility rules for a deployment.
  • Humility rules can only be defined by owners of the deployment. Users of the deployment can view the rules but cannot edit them or define new rules.
  • The "Uncertain Prediction" trigger is only supported for regression and binary classification models.
  • Multiclass models only support the "Override prediction" action.

If you want DataRobot to recommend a set of rules for your deployment, click Use Recommended Rules when adding a new humility rule.

This option generates two automatically configured humility rules:

  • Rule 1: The Uncertain prediction trigger and the No operation action.
  • Rule 2: The Outlying input trigger for the most important numeric feature (based on Feature Impact results) and the No operation action.

Both recommended rules have automatically calculated upper- and lower-bound thresholds.

Multiseries humility rules

DataRobot supports multiseries blueprints that can derive features and make predictions using partial history or no history at all (series that were not seen in training and do not have enough points in the training dataset for accurate predictions). This is useful, for example, in demand forecasting: when a new product is introduced, you may want initial sales predictions. In conjunction with “cold start modeling” (modeling on a series for which there is not sufficient historical data), you can predict on new series while keeping accurate predictions for series with a history.

With the support in place, you can set up a humility rule that:

  • Triggers off a new series (unseen in training data).
  • Takes a specified action.
  • Optionally, returns a custom error message.

Note

If you replace a model within a deployment using a model from a different project, the humility rule is disabled. If the replacement is a model from the same project, the rule is saved.

Create a humility rule

  1. Select a deployment from your Deployments inventory and click Humility. Toggle Enable humility to on.

  2. Click + Add new rule to begin configuring the rule. Time series deployments can only have one humility rule applied; if you already have a rule, click it to make changes.

  3. Select a trigger. To include new series data, select New series as the trigger. This rule detects if a series is present that was not available in the training data and does not have enough history in the prediction data for accurate predictions.

  4. Select an action.

    Subsequent options depend on the selected action, as described in the following list:

    • No operation: DataRobot records the event, but the prediction is unchanged.
    • Use model with new series support: The prediction is overridden by the prediction from a selected model with new series support. Select a model that supports unseen series modeling; DataRobot preloads supported models in the dropdown.
    • Use global most frequent class (binary classification only): The prediction value is replaced with the most frequent class across all series.
    • Use target mean for all series (regression only): The prediction value is overridden by the global target mean for all series.
    • Override prediction: The prediction value is changed to the specified value. Enter a numeric value to replace the prediction value for any new series.
    • Return error: The default or a custom error message is returned with the 480 error. Use the default or click in the box to enter a custom error message.

When finished, click the Add button to create the new rule or save changes.

Select a model replacement

When you expand the Model with new series support dropdown, DataRobot provides a list of models available from the model registry, not the Leaderboard. Using models available from the registry decouples the model from the project and provides support for packages. In this way, you can use a backup model from any compatible project, as long as it uses the same target and has the same series available.

Note

If no packages are available, deploy a “new series support” model and add it to the model registry. You can identify qualifying models from the Leaderboard by the NEW SERIES OPTIMIZED badge. There is also a notification banner in the Make Predictions tab if you try to use a non-optimized model.


Updated November 5, 2021