

Governance lens

The Governance lens summarizes the social and operational aspects of a deployment, such as the deployment owner, how the model was built, the model's age, and the humility monitoring status. View the governance lens from the deployment inventory.

The following table describes the information available from the Governance lens:

| Category | Description |
|----------|-------------|
| Deployment Name | The name assigned to a deployment at creation, the type of prediction server used, and the project name (DataRobot models only). |
| Build Environment | The environment in which the model was built. |
| Owners | The owner(s) of each deployment. To view the full list of owners, click the names listed; a modal displays the owners with their associated email addresses. |
| Model Age | The length of time the current model has been deployed. This value resets every time the model is replaced. |
| Humility Monitoring | The status of prediction warnings and humility rules for each deployment. |
| Fairness Monitoring | The status of fairness rules, based on the number of protected features below the predefined fairness threshold for each deployment. |
| Actions | A menu of additional model management activities, including adding data, replacing the model, configuring data drift, and sharing or deleting the deployment. |

Build environments

The build environment indicates the environment in which the model was built.

The following table details the types of build environments displayed in the inventory for each type of model:

| Deployed model | Available build environments |
|----------------|------------------------------|
| DataRobot model | DataRobot |
| Custom model | Python, R, Java, or Other (if not specified). Custom models derive their build environment from the model's programming language. |
| External model | DataRobot, Python, Java, R, or Other (if not specified). Specify an external model's build environment from the Model Registry when creating a model package. |
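The table above amounts to a simple lookup from model type to permitted build environments. The following sketch illustrates that mapping; the names (`ALLOWED_BUILD_ENVIRONMENTS`, `build_environment_label`) are hypothetical and not part of any DataRobot client:

```python
from typing import Optional

# Hypothetical lookup reflecting the table above; not DataRobot code.
ALLOWED_BUILD_ENVIRONMENTS = {
    "datarobot": {"DataRobot"},
    "custom": {"Python", "R", "Java", "Other"},
    "external": {"DataRobot", "Python", "Java", "R", "Other"},
}

def build_environment_label(model_type: str, environment: Optional[str]) -> str:
    """Return the build-environment label the inventory would display."""
    allowed = ALLOWED_BUILD_ENVIRONMENTS[model_type]
    # "Other" is shown when no environment was specified.
    if environment is None:
        return "Other"
    if environment not in allowed:
        raise ValueError(f"{environment!r} is not valid for a {model_type} model")
    return environment
```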

Humility Monitoring indicators

The Humility Monitoring column provides an at-a-glance indication of how humility is configured for each deployment. To view more detailed information for an individual model, or to enable humility monitoring, click a deployment in the inventory list and navigate to the Humility tab.

The column indicates the status of two Humility Monitoring features: prediction warnings and humility rules.

In the deployment inventory, interpret the color indicators for each humility feature as follows:

| Color | Status |
|-------|--------|
| Blue | Enabled for the deployment. |
| Light gray | Disabled for the deployment. |
| Dark gray | Unavailable for the deployment. Humility monitoring is only available for non-time-aware regression models and custom regression models that provide holdout data. |
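The color coding above reduces to two booleans: whether humility monitoring is available for the deployment, and whether it is enabled. A minimal sketch (the function name is hypothetical, not DataRobot code):

```python
# Illustrative sketch of the humility indicator colors described
# above; not part of any DataRobot client.
def humility_indicator(available: bool, enabled: bool) -> str:
    """Map a deployment's humility state to the inventory color."""
    if not available:         # e.g. a time-aware model
        return "dark gray"    # Unavailable for the deployment
    return "blue" if enabled else "light gray"
```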

Fairness Monitoring indicators

The Fairness column provides an at-a-glance indication of how each deployment is performing against predefined fairness criteria. To view more detailed information for an individual model, or to enable fairness monitoring, click a deployment in the inventory list and navigate to the Settings tab.

In the deployment inventory, interpret the color indicators as follows:

| Color | Status |
|-------|--------|
| Light gray | Fairness monitoring is not configured for this deployment. |
| Green | All protected features are passing the fairness tests. |
| Yellow | The number of protected features failing the fairness tests has reached the "At Risk" threshold (default: 1). |
| Red | The number of protected features failing the fairness tests has reached the "Failing" threshold (default: 2). |

You can create rules for fairness monitoring in the Definition section of the Fairness > Settings tab. If no rules are specified, fairness monitoring uses the default values for "At Risk" and "Failing."
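Taken together, the color rules above can be expressed as a threshold check on the count of failing protected features. The following sketch assumes the default "At Risk" and "Failing" values; the function and constant names are hypothetical, not DataRobot code:

```python
# Illustrative sketch of the fairness indicator logic described
# above, using the documented defaults; not DataRobot code.
AT_RISK_DEFAULT = 1   # default "At Risk" threshold
FAILING_DEFAULT = 2   # default "Failing" threshold

def fairness_indicator(configured: bool, failing_count: int,
                       at_risk: int = AT_RISK_DEFAULT,
                       failing: int = FAILING_DEFAULT) -> str:
    """Map a deployment's fairness state to the inventory color."""
    if not configured:
        return "light gray"
    if failing_count >= failing:
        return "red"
    if failing_count >= at_risk:
        return "yellow"
    return "green"
```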


Updated April 11, 2023