# Console

> Console - The NextGen DataRobot Console provides critical management, monitoring, and governance
> features in a refreshed, modern user interface, familiar to users of MLOps features in DataRobot
> Classic.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-06T18:17:10.022547+00:00` (UTC).

## Primary page

- [Console](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html): Full documentation for this topic (HTML).

## Sections on this page

- [Dashboard and overview](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#dashboard-and-overview): In-page section heading.
- [Monitoring](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#monitoring): In-page section heading.
- [Predictions](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#predictions): In-page section heading.
- [Mitigation](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#mitigation): In-page section heading.
- [Activity log](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#activity-log): In-page section heading.
- [Settings](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#settings): In-page section heading.
- [Prediction environments](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#prediction-environments): In-page section heading.
- [Feature considerations](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#feature-considerations): In-page section heading.
- [Time series deployments](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#time-series-deployments): In-page section heading.
- [Multiclass deployments](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#multiclass-deployments): In-page section heading.
- [Challengers](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#challengers): In-page section heading.
- [Prediction results cleanup](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#prediction-results-cleanup): In-page section heading.
- [Managed AI Platform](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html#managed-ai-platform): In-page section heading.

## Related documentation

- [NextGen UI documentation](https://docs.datarobot.com/en/docs/workbench/index.html): Linked from this page.
- [Workbench](https://docs.datarobot.com/en/docs/get-started/day0/predai-start/wb-overview.html): Linked from this page.
- [Registry](https://docs.datarobot.com/en/docs/workbench/nxt-registry/index.html): Linked from this page.
- [Dashboard](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-overview/nxt-dashboard.html): Linked from this page.
- [Overview tab](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-overview/nxt-overview.html): Linked from this page.
- [Deployment actions](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-overview/nxt-deployment-actions.html): Linked from this page.
- [Deployment reports](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-reports.html): Linked from this page.
- [Service health](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-service-health.html): Linked from this page.
- [Data drift](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-data-drift.html): Linked from this page.
- [Accuracy](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-accuracy.html): Linked from this page.
- [Fairness](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-fairness.html): Linked from this page.
- [Usage](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-usage.html): Linked from this page.
- [Custom metrics](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-custom-metrics.html): Linked from this page.
- [Data exploration](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-data-exploration.html): Linked from this page.
- [Monitoring jobs](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-monitoring-jobs.html): Linked from this page.
- [Make predictions](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-predictions/nxt-make-predictions.html): Linked from this page.
- [Prediction API](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-predictions/nxt-pred-api-snippets.html): Linked from this page.
- [Monitoring](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-predictions/nxt-monitoring.html): Linked from this page.
- [Prediction intervals](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-predictions/nxt-prediction-intervals.html): Linked from this page.
- [Prediction jobs](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-predictions/nxt-prediction-jobs.html): Linked from this page.
- [Challengers](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-mitigation/nxt-challengers.html): Linked from this page.
- [Retraining](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-mitigation/nxt-retraining.html): Linked from this page.
- [Humility](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-mitigation/nxt-humility.html): Linked from this page.
- [MLOps events](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-activity-log/nxt-mlops-events.html): Linked from this page.
- [Governance](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-activity-log/nxt-governance.html): Linked from this page.
- [Agent events](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-activity-log/nxt-agent-events.html): Linked from this page.
- [Model history](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-activity-log/nxt-model-history.html): Linked from this page.
- [Standard output](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-activity-log/nxt-runtime-logs.html): Linked from this page.
- [Moderation](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-activity-log/nxt-moderation.html): Linked from this page.
- [Logs](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-activity-log/nxt-otel-logs.html): Linked from this page.
- [Comments](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-activity-log/nxt-comments.html): Linked from this page.
- [Set up service health monitoring](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-service-health-settings.html): Linked from this page.
- [Set up data drift monitoring](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-data-drift-settings.html): Linked from this page.
- [Set up accuracy monitoring](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-accuracy-settings.html): Linked from this page.
- [Set up fairness monitoring](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-fairness-settings.html): Linked from this page.
- [Set up custom metrics monitoring](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-custom-metrics-settings.html): Linked from this page.
- [Set up humility rules](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-humility-settings.html): Linked from this page.
- [Configure challengers](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-challengers-settings.html): Linked from this page.
- [Configure retraining](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-retraining-settings.html): Linked from this page.
- [Configure predictions settings](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-predictions-settings.html): Linked from this page.
- [Set up timeliness tracking](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-usage-settings.html): Linked from this page.
- [Enable data exploration](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-data-exploration-settings.html): Linked from this page.
- [Configure deployment notifications](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-notification-settings.html): Linked from this page.
- [Configure deployment resource settings](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-resource-settings.html): Linked from this page.
- [custom model assembly](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-model-workshop/nxt-create-custom-model.html#configure-custom-model-resource-settings): Linked from this page.
- [Add DataRobot Serverless prediction environments](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-prediction-environments/nxt-pred-env.html): Linked from this page.
- [Add external prediction environments](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-prediction-environments/nxt-ext-pred-env.html): Linked from this page.
- [Manage prediction environments](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-prediction-environments/nxt-pred-env-manage.html): Linked from this page.
- [Deploy a model to a prediction environment](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-prediction-environments/nxt-pred-env-deploy.html): Linked from this page.
- [Prediction environment integrations](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-prediction-environments/nxt-prediction-environment-integrations/index.html): Linked from this page.
- [file size requirements](https://docs.datarobot.com/en/docs/reference/data-ref/file-types.html): Linked from this page.
- [Make Predictions](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/predictions/predict.html): Linked from this page.
- [external deployment](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/deploy-methods/deploy-external-model.html): Linked from this page.
- [Prediction API](https://docs.datarobot.com/en/docs/api/reference/predapi/legacy-predapi/dr-predapi.html): Linked from this page.
- [batch predictions](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/batch-pred-ts.html): Linked from this page.
- [Enable cross-series feature generation](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/ts-reference/ts-adv-opt.html#enable-cross-series-feature-generation): Linked from this page.
- [integrated enterprise databases](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html): Linked from this page.
- [deployment approval workflow](https://docs.datarobot.com/en/docs/classic-ui/mlops/governance/dep-admin.html): Linked from this page.
- [deployment predictions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/index.html): Linked from this page.
- [batch prediction limits](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html#limits): Linked from this page.
- [Challengers](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html): Linked from this page.
- [Feature Discovery](https://docs.datarobot.com/en/docs/classic-ui/data/transform-data/feature-discovery/fd-overview.html): Linked from this page.
- [Unstructured custom inference](https://docs.datarobot.com/en/docs/api/code-first-tools/drum/unstructured-custom-models.html): Linked from this page.
- [organization](https://docs.datarobot.com/en/docs/platform/admin/admin-overview.html#what-are-organizations): Linked from this page.
- [owners](https://docs.datarobot.com/en/docs/reference/misc-ref/roles-permissions.html#deployment-roles): Linked from this page.
- [Data drift analysis](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html): Linked from this page.

## Documentation content

> [!NOTE] Premium
> Access to the management, monitoring, and governance features in Console requires MLOps functionality to be enabled.
> 
> Feature flag: Enable MLOps

The NextGen DataRobot Console provides a seamless transition from model experimentation in [Workbench](https://docs.datarobot.com/en/docs/get-started/day0/predai-start/wb-overview.html) and registration in the [Registry](https://docs.datarobot.com/en/docs/workbench/nxt-registry/index.html) to model monitoring and management through deployments in Console.

## Dashboard and overview

| Topic | Description |
| --- | --- |
| Dashboard | Navigate the deployment Dashboard, the central hub for deployment management activity. |
| Overview tab | Navigate and interact with the Overview tab, providing a model- and environment-specific summary that describes the deployment, including the information you supplied when creating the deployment and any model replacement activity. |
| Deployment actions | Manage a deployment with the settings and controls available in the actions menu. |
| Deployment reports | Generate a deployment report to summarize the details of a deployment, such as its owner, how the model was built, the model age, and the humility monitoring status. |

## Monitoring

| Topic | Description |
| --- | --- |
| Service health | Track model-specific deployment latency, throughput, and error rate. |
| Data drift | Monitor changes in a deployment's data distribution that can degrade model accuracy. |
| Accuracy | Analyze the performance of a model over time. |
| Fairness | Monitor deployments to recognize when protected features fail to meet predefined fairness criteria. |
| Usage | Track prediction processing progress for use in accuracy, data drift, and predictions over time analysis. |
| Custom metrics | Create and monitor custom business or performance metrics or add pre-made metrics. |
| Data exploration | Export a deployment's stored prediction data, actuals, and training data to compute and monitor custom business or performance metrics. |
| Monitoring jobs | Monitor deployments running and storing feature data and predictions outside of DataRobot. |
| Deployment reports | Generate a deployment report to summarize the details of a deployment, such as its owner, how the model was built, the model age, and the humility monitoring status. |

## Predictions

| Topic | Description |
| --- | --- |
| Make predictions | Make predictions with large datasets, providing input data and receiving predictions for each row in the output data. |
| Prediction API | Adapt downloadable DataRobot Python code to submit a CSV or JSON file for scoring and integrate it into a production application via the Prediction API. |
| Monitoring | Access monitoring snippets for agent-monitored external models deployed in Console. |
| Prediction intervals | For time series deployments, enable and configure prediction intervals returned alongside the prediction response of deployed models. |
| Prediction jobs | View and manage prediction job definitions for a deployment. |
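As a companion to the Prediction API row above, the sketch below shows how a real-time scoring request is typically assembled. The host, deployment ID, and credential values are placeholders, not working values; copy the exact snippet shown in Console for your deployment.

```python
# Hypothetical sketch of assembling a real-time scoring request for a
# deployment's Prediction API endpoint. Host, deployment ID, and
# credentials are placeholders -- use the values from the Prediction API
# snippets in Console for your deployment.
def build_prediction_request(host, deployment_id, api_token, datarobot_key=None):
    """Assemble the URL and headers for scoring a CSV payload."""
    url = f"https://{host}/predApi/v1.0/deployments/{deployment_id}/predictions"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "text/csv; charset=UTF-8",
    }
    if datarobot_key:  # required by some dedicated prediction servers
        headers["DataRobot-Key"] = datarobot_key
    return url, headers

url, headers = build_prediction_request(
    "example.orm.datarobot.com", "65f0c0ffee", "API_TOKEN", "DR_KEY"
)
# The request itself would then be sent with, for example:
#   requests.post(url, headers=headers, data=open("scoring.csv", "rb"))
```

The function only builds the request; sending it (and handling the JSON response) is left to your HTTP client of choice.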

## Mitigation

| Topic | Description |
| --- | --- |
| Challengers | Compare model performance post-deployment. |
| Retraining | Define the retraining settings and then create retraining policies. |
| Humility | Monitor deployments to recognize, in real-time, when the deployed model makes uncertain predictions or receives data it has not seen before. |

## Activity log

| Topic | Description |
| --- | --- |
| MLOps events | View important deployment events. |
| Governance | View a deployment's available governance log details, including an audit trail for any deployment approval policies triggered for the deployment. |
| Agent events | View management and monitoring events from the MLOps agents. |
| Model history | View a historical log of deployment events. |
| Standard output | View custom model runtime log events. |
| Moderation | View evaluation and moderation events. |
| Logs | View a deployment's OpenTelemetry log events. |
| Comments | View comments added during the deployment approval and configuration process. |

## Settings

| Topic | Description |
| --- | --- |
| Set up service health monitoring | Enable segmented analysis to assess service health, data drift, and accuracy statistics by filtering them into unique segment attributes and values. |
| Set up data drift monitoring | Enable data drift monitoring on a deployment's Data Drift Settings tab. |
| Set up accuracy monitoring | Enable accuracy monitoring on a deployment's Accuracy Settings tab. |
| Set up fairness monitoring | Enable fairness monitoring on a deployment's Fairness Settings tab. |
| Set up custom metrics monitoring | Enable custom metrics monitoring by defining the "at risk" and "failing" thresholds for the custom metrics you created. |
| Set up humility rules | Enable humility monitoring by creating rules that enable models to recognize, in real-time, when they make uncertain predictions or receive data they have not seen before. |
| Configure challengers | Enable challenger comparison by configuring a deployment to store prediction request data at the row level and replay predictions on a schedule. |
| Configure retraining | Enable Automated Retraining for a deployment by defining the general retraining settings and then creating retraining policies. |
| Configure predictions settings | Review the Predictions Settings tab to view details about your deployment's prediction data or, for deployed time series models, enable prediction intervals in the prediction response. |
| Set up timeliness tracking | Enable timeliness indicators to show whether the prediction or actuals upload frequency meets the standards set by your organization. |
| Enable data exploration | Enable data exploration to compute and monitor custom business or performance metrics. |
| Configure deployment notifications | Enable personal notifications to trigger emails for service health, data drift, accuracy, and fairness monitoring. |
| Configure deployment resource settings | For custom model deployments, view the custom model resource settings defined during custom model assembly. If the custom model is deployed on a DataRobot Serverless prediction environment and the deployment is inactive, you can modify the resource bundle settings. |

## Prediction environments

| Topic | Description |
| --- | --- |
| Add DataRobot Serverless prediction environments | Set up DataRobot Serverless prediction environments and deploy models to those environments to make predictions. |
| Add external prediction environments | Set up prediction environments on your own infrastructure, group prediction environments, and configure permissions and approval workflows. |
| Manage prediction environments | View, edit, delete, and share external prediction environments, or deploy models to external prediction environments. |
| Deploy a model to a prediction environment | Access a prediction environment and deploy a model directly to the environment. |
| Prediction environment integrations | Configure DataRobot-managed prediction environment integrations to deploy and replace DataRobot models. |

## Feature considerations

When curating a prediction request/response dataset from an external source:

- Include the 25 most important features.
- Follow the CSV file size requirements.
- For classification projects, classes must have a value of 0 or 1, or be text strings.

Additionally, note that:

- Self-Managed AI Platform only: By default, the 25 most important features and the target are tracked for data drift.
- For real-time deployment predictions, the maximum payload size is 50MB for both Dedicated and Serverless prediction environments.
- The Make Predictions tab is not available for external deployments.
- DataRobot deployments only track predictions made against dedicated prediction servers by `deployment_id`.
- The first 1,000,000 predictions per deployment per hour are tracked for data drift analysis and computed for accuracy. Once this limit is reached, further predictions within that hour are not processed for either metric; however, there is no limit on the number of predictions themselves.
- If you score larger datasets (up to 5GB), there will be a longer wait time for the predictions to become available, as multiple prediction jobs must be run. If you choose to navigate away from the predictions interface, the jobs will continue to run.
- After making prediction requests, it can take 30 seconds or so for data drift and accuracy metrics to update. Note that the speed at which the metrics update depends on the model type (e.g., time series), the deployment configuration (e.g., segment attributes, number of forecast distances), and system stability.
- DataRobot recommends that you do not submit multiple prediction rows that use the same association ID (an association ID is a unique identifier for a prediction row). If multiple prediction rows are submitted, only the latest prediction uses the associated actual value; all prior prediction rows are, in effect, unpaired from that actual value. Additionally, all predictions made are included in data drift statistics, even the unpaired prediction rows.
- If you want to write back your predictions to a cloud location or database, you must use the Prediction API.
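The association ID pairing rule above can be sketched in plain Python: when the same ID appears more than once, only the latest prediction keeps its actual value. The field names here are illustrative, not a DataRobot schema.

```python
# Plain-Python sketch of the actual-pairing rule: when the same
# association ID is submitted more than once, only the latest prediction
# is paired with the uploaded actual value. Field names are illustrative.
predictions = [
    {"association_id": "order-1001", "timestamp": "2024-01-01T10:00", "score": 0.42},
    {"association_id": "order-1001", "timestamp": "2024-01-01T11:00", "score": 0.57},
    {"association_id": "order-1002", "timestamp": "2024-01-01T10:30", "score": 0.18},
]

paired = {}
for row in predictions:
    current = paired.get(row["association_id"])
    if current is None or row["timestamp"] > current["timestamp"]:
        paired[row["association_id"]] = row

# Only the 11:00 prediction for "order-1001" is paired with its actual;
# the 10:00 row is unpaired, yet still counted in data drift statistics.
assert paired["order-1001"]["score"] == 0.57
```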

### Time series deployments

- To make predictions with a time series deployment, the amount of history required depends on the model used.
- ARIMA family and non-ARIMA cross-series models do not support batch predictions.
- Classic only: The ability to create a job definition for all ARIMA and non-ARIMA cross-series models is disabled when Enable cross-series feature generation is enabled.
- All other time series models support batch predictions. For multiseries, input data must be sorted by series ID and timestamp.
- There is no data limit for time series batch predictions on supported models except that a single series cannot exceed 50MB.
- When scoring regression time series models using integrated enterprise databases, you may receive a warning that the target database is expected to contain the following column, which was not found: `DEPLOYMENT_APPROVAL_STATUS`. The column, which is optional, records whether the deployed model has been approved by an administrator. If your organization has configured a deployment approval workflow, you can take action to record this status; after doing so, run the prediction job again, and the approval status appears in the prediction results. If you are not recording approval status, ignore the message, and the prediction job continues.
- To ensure DataRobot can process your time series data for deployment predictions, configure the dataset to meet the requirements for the scoring dataset, which also provides dataset examples.
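The multiseries sort order described above (series ID, then timestamp) can be sketched as follows. The column names (`series_id`, `date`) are illustrative, not required names.

```python
# Minimal sketch of preparing multiseries batch prediction input: rows
# must be sorted by series ID and then by timestamp. Column names
# ("series_id", "date") are illustrative, not DataRobot requirements.
rows = [
    {"series_id": "store_B", "date": "2024-01-02", "sales": 120},
    {"series_id": "store_A", "date": "2024-01-02", "sales": 95},
    {"series_id": "store_A", "date": "2024-01-01", "sales": 88},
    {"series_id": "store_B", "date": "2024-01-01", "sales": 110},
]

# ISO-8601 date strings sort correctly as plain strings.
rows.sort(key=lambda r: (r["series_id"], r["date"]))
# rows now lists all of store_A (oldest first), then all of store_B.
```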

### Multiclass deployments

- Multiclass deployments of up to 100 classes support monitoring for target, accuracy, and data drift.
- Multiclass deployments of up to 100 classes support retraining.
- Multiclass deployments created before Self-Managed AI Platform version 7.0 with feature drift enabled don't have historical data for feature drift of the target; only new data is tracked.
- DataRobot uses holdout data as a baseline for target drift. As a result, for multiclass deployments using certain datasets, rare class values could be missing in the holdout data and in the baseline for drift. In this scenario, these rare values are treated as new values.

### Challengers

- To enable Challengers and replay predictions against them, the deployed model must support target drift tracking and must not be a Feature Discovery or Unstructured custom inference model.
- To replay predictions against Challengers, you must be in the organization associated with the deployment. This restriction also applies to deployment owners.

### Prediction results cleanup

For each deployment, DataRobot periodically performs a cleanup job to delete the deployment's predicted and actual values from its corresponding prediction results table in Postgres. DataRobot does this to keep the size of these tables reasonable while allowing you to consistently generate accuracy metrics for all deployments and schedule replays for challenger models without the danger of hitting table size limits.

The cleanup job prevents a deployment from reaching its "hard" limit for prediction results tables; when the table is full, predicted and actual values are no longer stored, and additional accuracy metrics for the deployment cannot be produced. The cleanup job triggers when a deployment reaches its "soft" limit, serving as a buffer to prevent the deployment from reaching the "hard" limit. The cleanup prioritizes deleting the oldest prediction rows already tied to a corresponding actual value. Note that the aggregated data used to power data drift and accuracy over time are unaffected.
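The prioritization rule above (delete the oldest rows already paired with an actual, once the "soft" limit is crossed) can be sketched like this. The row shape and limit value are assumptions for illustration.

```python
# Illustrative sketch of the cleanup-prioritization rule: when a
# deployment's stored prediction rows exceed the "soft" limit, the
# oldest rows already paired with an actual value are deleted first.
# The row shape ('ts', 'has_actual') and limit are assumptions.
def rows_to_delete(rows, soft_limit):
    """rows: dicts with 'ts' and 'has_actual'; returns rows to drop."""
    excess = len(rows) - soft_limit
    if excess <= 0:
        return []  # under the soft limit; nothing to clean up
    # Oldest paired rows go first; unpaired rows are kept so accuracy
    # metrics can still be computed when their actuals arrive.
    paired = sorted((r for r in rows if r["has_actual"]), key=lambda r: r["ts"])
    return paired[:excess]
```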

### Managed AI Platform

Managed AI Platform users have the following hourly limitations. Each deployment is allowed:

- Data drift analysis: 1,000,000 predictions or, for each individual prediction instance, 100MB of total prediction requests. If either limit is reached, data drift analysis is halted for the remainder of the hour.
- Prediction row storage: the first 100MB of total prediction requests per deployment per each individual prediction instance. If the limit is reached, no prediction data is collected for the remainder of the hour.
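The hourly limits above can be expressed as a small client-side accounting check. The constants mirror the documented numbers; the bookkeeping function itself is illustrative, since DataRobot enforces these limits server-side.

```python
# Client-side sketch of the hourly drift-accounting limits listed above:
# 1,000,000 predictions per deployment, or 100MB of prediction requests
# per prediction instance, per hour. The function is illustrative only;
# DataRobot enforces these limits server-side.
HOURLY_ROW_LIMIT = 1_000_000
HOURLY_BYTE_LIMIT = 100 * 1024 * 1024  # 100MB per prediction instance

def drift_still_tracked(rows_this_hour, bytes_this_hour):
    """True while new predictions are still included in drift analysis."""
    return rows_this_hour < HOURLY_ROW_LIMIT and bytes_this_hour < HOURLY_BYTE_LIMIT
```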
