
MLOps and predictions (V9.2)

November 22, 2023

The DataRobot MLOps v9.2 release includes many new features and capabilities, described below. See additional details of Release 9.2 in the data and modeling and code-first release announcements.

New features and enhancements

Features grouped by capability
| Name | GA | Preview |
|------|:--:|:-------:|
| **Predictions and MLOps** | | |
| Monitoring support for generative models | ✔* | |
| Public network access for custom models | ✔* | |
| Model package creation workflow redesign | ✔ | |
| Versioning support in the new Model Registry | ✔ | |
| Extend compliance documentation with key values | ✔ | |
| Expanded prediction and monitoring job definition access | ✔ | |
| Custom model deployment status information | ✔ | |
| DataRobot API spooler type for the monitoring agent | ✔ | |
| Auto-sampling for client-side aggregation | ✔ | |
| Databricks JDBC write-back support for batch predictions | ✔ | |
| Real-time notifications for deployments | | ✔ |
| Batch monitoring for deployment predictions | | ✔ |
| Custom jobs in the Model Registry | | ✔ |
| Hosted custom metrics | | ✔ |
| Accuracy for monitoring jobs with aggregation enabled | | ✔ |
| **NextGen Predictions and MLOps** | | |
| NextGen training data predictions in Workbench | ✔ | |
| NextGen Registry | | ✔ |
| NextGen model workshop | | ✔ |
| NextGen jobs | | ✔ |
| NextGen Console | | ✔ |

* Premium feature

New premium features

Monitoring support for generative models

Now available as a premium feature, you can deploy generative large language models (LLMs) to make predictions; monitor service health, usage, and data drift statistics; and create custom metrics. DataRobot supports two deployment methods for LLMs.

After you deploy a generative model, you can view service health and usage statistics, export deployment data, create custom metrics, and identify data drift. On the Data Drift tab for a generative model, you can view the Feature Drift vs. Feature Importance, Feature Details, and Drift Over Time charts.
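
Once the deployment exists, you can also retrieve the same monitoring statistics programmatically. Here is a minimal sketch using the DataRobot Python client, assuming a deployed generative model; the endpoint, token, and deployment ID are placeholders:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

deployment = dr.Deployment.get(deployment_id="YOUR_DEPLOYMENT_ID")

# Service health and usage statistics for the deployment.
service_stats = deployment.get_service_stats()
print(service_stats.metrics)

# Per-feature drift, as visualized on the Data Drift tab.
for feature in deployment.get_feature_drift():
    print(feature.name, feature.drift_score)
```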

Feature flag OFF by default: Enable Monitoring Support for Generative Models

For more information, see the documentation.

Public network access for custom models

Now available as a premium feature, you can enable full network access for any custom model. When you create a custom model, you can allow it to access any fully qualified domain name (FQDN) in a public network so that the model can leverage third-party services. Alternatively, you can disable public network access to isolate a model from the network and block outgoing traffic, enhancing the model's security. To review this access setting for your custom models, on the Assemble tab, under Resource Settings, check the Network access setting.
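
The same setting can be supplied when creating a custom model with the Python client. This is a minimal sketch, assuming the client's custom model API accepts a network egress policy; the model name and target are hypothetical:

```python
import datarobot as dr
from datarobot.enums import NETWORK_EGRESS_POLICY, TARGET_TYPE

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

custom_model = dr.CustomInferenceModel.create(
    name="price-estimator",  # hypothetical model name
    target_type=TARGET_TYPE.REGRESSION,
    target_name="price",
    # PUBLIC enables access to any FQDN; NONE isolates the model.
    network_egress_policy=NETWORK_EGRESS_POLICY.PUBLIC,
)
```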

For more information, see the documentation.

New GA features

Model package creation workflow redesign

Now generally available, the improved model package creation workflow provides a clearer and more consistent path to model deployment with visible connections between a model and its associated model packages, listed as registered model versions in the updated Model Registry. Following this new workflow, when you deploy a model, you begin by providing model details and registering the model. Then, you can deploy the registered model version associated with the model package by adding the deployment information.

On the Leaderboard, select the model to use for generating predictions. DataRobot recommends a model with the Recommended for Deployment and Prepared for Deployment badges. Click Predict > Deploy. If the Leaderboard model you select doesn't have the Prepared for Deployment badge, DataRobot recommends you click Prepare for Deployment to run the model preparation process for that model.

On the Deploy model tab, provide the required model package information, and then click Register to deploy.

Then, you can add deployment information and create the deployment.
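
As an alternative to the UI flow, here is a minimal sketch of deploying a Leaderboard model with the Python client, assuming an existing project and a default prediction server; the IDs and label are placeholders:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

project = dr.Project.get("YOUR_PROJECT_ID")
model = project.get_models()[0]  # e.g., the top Leaderboard model

# Creates the deployment; a model package (registered model version)
# is created for you as part of the workflow.
deployment = dr.Deployment.create_from_learning_model(
    model_id=model.id,
    label="Churn model",  # hypothetical deployment label
    description="Deployed from the Leaderboard",
    default_prediction_server_id="YOUR_PREDICTION_SERVER_ID",
)
```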

For more information, see the documentation.

Versioning support in the new Model Registry

Now generally available, the new Model Registry is an organizational hub for the variety of models used in DataRobot. Models are registered as deployment-ready model packages. These model packages are grouped into registered models containing registered model versions, allowing you to categorize them based on the business problem they solve. Registered models can contain DataRobot, custom, external, challenger, and automatically retrained models as versions.

During this update, packages from the Model Registry > Model Packages tab are converted to registered models and migrated to the new Registered Models tab. Each migrated registered model contains a registered model version, and the original packages can be identified in the new tab by the model package ID (registered model version ID) appended to the registered model name.

Once the migration is complete, in the updated Model Registry, you can track the evolution of your predictive and generative models with new versioning functionality and centralized management. In addition, you can access both the original model and any associated deployments and share your registered models (and the versions they contain) with other users.

This update builds on the previous model package workflow changes, requiring the registration of any model you intend to deploy. To register and deploy a model from the Leaderboard, you must first provide model registration details:

  1. On the Leaderboard, select the model to use for generating predictions. DataRobot recommends a model with the Recommended for Deployment and Prepared for Deployment badges. The model preparation process runs feature impact, retrains the model on a reduced feature list, and trains on a higher sample size, followed by the entire sample (latest data for date/time partitioned projects).

  2. Click Predict > Deploy. If the Leaderboard model doesn't have the Prepared for Deployment badge, DataRobot recommends you click Prepare for Deployment to run the model preparation process for that model.

    Tip

    If you've already added the model to the Model Registry, the registered model version appears in the Model Versions list. You can click Deploy next to the model and skip the rest of this process.

  3. Under Deploy model, click Register to deploy.

  4. In the Register new model dialog box, provide the required model information.

  5. Click Add to registry. The model opens on the Model Registry > Registered Models tab.

  6. While the registered model builds, click Deploy and then configure the deployment settings.
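
After registration, you can also work with registered models and their versions programmatically. A minimal sketch, assuming a recent Python client with registry support; check the client documentation for the exact methods:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Browse the Registered Models tab programmatically.
for registered_model in dr.RegisteredModel.list():
    print(registered_model.name)
    for version in registered_model.list_versions():
        print("  version:", version.id)
```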

For more information, see the documentation.

Extend compliance documentation with key values

Now generally available, you can create key values to reference in compliance documentation templates. Adding a key value reference includes the associated data in the generated template, limiting the manual editing needed to complete the compliance documentation. Key values associated with a model in the Model Registry are key-value pairs containing information about the registered model package.

When you build custom compliance documentation templates, you can include string, numeric, boolean, image, and dataset key values.

Then, when you generate compliance documentation for a model package with a custom template referencing a supported key value, DataRobot inserts the matching values from the associated model package; for example, if the key value has an image attached, that image is inserted.
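
Here is a minimal sketch of attaching a key value to a registered model version through the Python client. The module path, enums, and argument names below are assumptions based on the client's key values API, so confirm the exact signatures in the client documentation:

```python
import datarobot as dr
from datarobot.enums import KeyValueCategory, KeyValueEntityType, KeyValueType  # assumed enums
from datarobot.models.key_values import KeyValue  # assumed module path

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Attach a numeric key value to a registered model version (model package).
KeyValue.create(
    entity_id="YOUR_REGISTERED_MODEL_VERSION_ID",
    entity_type=KeyValueEntityType.MODEL_PACKAGE,
    name="auc_external_validation",  # hypothetical key name
    category=KeyValueCategory.METRIC,
    value_type=KeyValueType.NUMERIC,
    value="0.87",
)
```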

For more information, see the documentation.

Expanded prediction and monitoring job definition access

This release expands role-based access control (RBAC) for prediction and monitoring jobs to align with deployment permissions. Previously, when deployments were shared between users, job definitions and batch jobs weren't shared alongside the deployment. With this update, the User role gains read access, and the Owner role gains read and write access, to the prediction and monitoring job definitions associated with any deployments shared with them. For more information on the capabilities of deployment Users and Owners, review the Roles and permissions documentation. Shared job definitions appear alongside your own; however, if you don't have access to the credentials associated with a prediction Source or Destination in the AI Catalog, the connection details appear as [redacted].
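
For example, definitions on deployments shared with you now appear when listing job definitions with the Python client. A minimal sketch with placeholder credentials:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

# Lists your own prediction job definitions plus those shared with you.
for job_definition in dr.BatchPredictionJobDefinition.list():
    print(job_definition.id, job_definition.name)
```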

For more information, see the documentation for Shared prediction job definitions and Shared monitoring job definitions.

Custom model deployment status information

Now generally available, when you deploy a custom model in DataRobot, deployment status information is surfaced through new badges in the Deployments inventory, warnings in the deployment, and events in the MLOps Logs.

After you add deployment information and deploy a custom model, the Creating deployment modal appears, tracking the status of the deployment creation process, including the application of deployment settings and the calculation of the drift baseline. You can monitor the deployment progress from the modal, allowing you to access the Check deployment's MLOps logs link if an error occurs.

In the Deployments inventory, you can see the following deployment status values in the Deployment Name column:

  • Launching: The custom model deployment process is still in progress. You can't currently make predictions through this deployment or access deployment tabs that require an active deployment.
  • Warning: The custom model deployment process completed with errors. You may be unable to make predictions through this deployment; however, if you deactivate this deployment, you can't reactivate it until you resolve the deployment errors. You should check the MLOps Logs to troubleshoot the custom model deployment.
  • Errored: The custom model deployment process failed, and the deployment is Inactive. You can't currently make predictions through this deployment or access deployment tabs that require an active deployment. You should check the MLOps Logs to troubleshoot the custom model deployment.

From a deployment with an Errored or Warning status, you can access the Service Health MLOps logs link from the warning on any tab. This link takes you directly to the Service Health tab.

On the Service Health tab, under Recent Activity, you can click the MLOps Logs tab to view the Event Details. In the Event Details, you can click View logs to access the custom model deployment logs to diagnose the cause of the error.

DataRobot API spooler type for the monitoring agent

Now generally available, DataRobot has added a new spooler type that lets the MLOps library report monitoring data using DataRobot's API. Configuring the DataRobot API spooler differs from typical spooler configuration. Usually, the MLOps library writes to a spooler, and the monitoring agent gathers that information and sends it to DataRobot MLOps. With the DataRobot API spooler, you don't actually connect to a spooler, and the calls you make to the MLOps library are unchanged; instead of going to a spooler and the monitoring agent, they go directly to DataRobot MLOps via HTTPS. In this case, you don't need to configure a spooler and monitoring agent at all.
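
A minimal sketch using the datarobot-mlops Python library: the reporting calls follow the library's documented builder interface, but treat the spooler type value as an assumption and confirm the exact settings in the spooler configuration documentation:

```python
import os

from datarobot.mlops.mlops import MLOps

# Select the DataRobot API spooler: the reporting calls below are unchanged,
# but records go directly to DataRobot MLOps over HTTPS instead of through
# a spooler and the monitoring agent.
os.environ["MLOPS_SPOOLER_TYPE"] = "API"  # assumed spooler type value

mlops = (
    MLOps()
    .set_deployment_id("YOUR_DEPLOYMENT_ID")
    .set_model_id("YOUR_MODEL_ID")
    .init()
)

# Identical to reporting with any other spooler configuration.
mlops.report_deployment_stats(num_predictions=100, execution_time_ms=250)
mlops.shutdown()
```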

For more information, see the Library and agent spooler configuration documentation.

Auto-sampling for client-side aggregation

Now generally available, large-scale monitoring with the monitoring agent supports automatic sampling of raw features, predictions, and actuals, enabling challenger analysis and accuracy tracking. To enable this feature when configuring large-scale monitoring, set the MLOPS_STATS_AGGREGATION_AUTO_SAMPLING_PERCENTAGE environment variable to the percentage of raw data to report to DataRobot using algorithmic sampling. In addition, you must define MLOPS_ASSOCIATION_ID_COLUMN_NAME to identify the column in the input data containing the data for sampling.
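
For example, set the variables before initializing the MLOps library; the percentage and column name below are illustrative assumptions:

```python
import os

# Report ~10% of raw features, predictions, and actuals via algorithmic sampling.
os.environ["MLOPS_STATS_AGGREGATION_AUTO_SAMPLING_PERCENTAGE"] = "10"

# Column in the input data containing the association IDs used for sampling.
os.environ["MLOPS_ASSOCIATION_ID_COLUMN_NAME"] = "transaction_id"
```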

For more information, see the documentation.

Databricks JDBC write-back support for batch predictions

With this release, Databricks is supported as a JDBC data source for batch predictions. For more information on supported data sources for batch predictions, see the documentation.
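
A minimal sketch of a batch prediction job writing results back over JDBC with the Python client, assuming a Databricks data store and credentials already registered in DataRobot; the IDs and table name are placeholders:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

job = dr.BatchPredictionJob.score(
    deployment="YOUR_DEPLOYMENT_ID",
    intake_settings={"type": "localFile", "file": "to_score.csv"},
    output_settings={
        "type": "jdbc",
        "data_store_id": "YOUR_DATABRICKS_DATA_STORE_ID",
        "credential_id": "YOUR_CREDENTIAL_ID",
        "table": "scored_results",  # write-back target table
        "statement_type": "insert",
    },
)
job.wait_for_completion()
```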

New preview features

Real-time notifications for deployments

DataRobot provides automated monitoring with a notification system, allowing you to configure alerts triggered when service health, data drift status, model accuracy, or fairness values deviate from your organization's accepted values. Now available for preview, you can enable real-time notifications for these status alerts, allowing your organization to quickly respond to changes in model health without waiting for scheduled health status notifications.

For more information, see the notifications documentation.

Feature flag OFF by default: Enable Real-time Notifications for Deployments

Batch monitoring for deployment predictions

Now available for preview, you can view monitoring statistics organized by batch, instead of by time. With batch-enabled deployments, you can access the Predictions > Batch Management tab, where you can create and manage batches. You can then add predictions to those batches and view service health, data drift, accuracy, and custom metric statistics by batch in your deployment. To create batches and assign predictions to a batch, you can use the UI or the API. In addition, each time a batch prediction or scheduled batch prediction job runs, a batch is created automatically, and every prediction from the job is added to that batch.

Feature flags OFF by default:

  • Enable Deployment Batch Monitoring
  • Enable Batch Custom Metrics for Deployments

For more information, see the preview documentation.

Custom jobs in the Model Registry

Now available as a preview feature, you can create custom jobs in the Model Registry to implement automation (for example, custom tests) for your models and deployments. Each job serves as an automated workload, and the exit code determines if it passed or failed. You can run the custom jobs you create for one or more models or deployments. The automated workload you define when you assemble a custom job can make prediction requests, fetch inputs, and store outputs using DataRobot's Public API.
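
A minimal sketch of a custom job's entry-point script follows. It assumes the job runtime exposes the DataRobot endpoint and API token through the environment variables shown (adjust the names to your environment) and uses the exit code to report pass or fail:

```python
import os
import sys

import requests

endpoint = os.environ["DATAROBOT_ENDPOINT"]  # e.g., https://app.datarobot.com/api/v2
token = os.environ["DATAROBOT_API_TOKEN"]

# Example automated check: confirm the deployment is reachable via the Public API.
response = requests.get(
    f"{endpoint}/deployments/YOUR_DEPLOYMENT_ID/",
    headers={"Authorization": f"Bearer {token}"},
)

# A zero exit code marks the job as passed; non-zero marks it as failed.
sys.exit(0 if response.ok else 1)
```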

For more information, see the documentation.

Feature flag OFF by default: Enable Custom Jobs

Hosted custom metrics

Now available as a preview feature, you can not only add up to five of your organization's custom metrics to a deployment, but also upload and host the code for those metrics using DataRobot Notebooks, making it easy to add them to other deployments. After you configure a custom metric, DataRobot loads a notebook containing the code for the metric. The notebook contains one custom metric cell, a unique type of notebook cell with Python code defining how the metric is exported and calculated, code for scoring, and code to populate the metric.

For more information, see the documentation.

Feature flags OFF by default:

  • Enable Hosted Custom Metrics
  • Enable Custom Jobs
  • Enable Notebooks Custom Environments

Accuracy for monitoring jobs with aggregation enabled

Now available for preview, monitoring jobs for external models with aggregation enabled can support accuracy tracking. Enable Use aggregation and configure the retention settings to indicate that data is aggregated by the MLOps library and to define how much raw data to retain for challengers and accuracy analysis. Then, to report the Actuals value column for accuracy monitoring, define the Predictions column and Association ID column.

Feature flag OFF by default: Enable Accuracy Aggregation

For more information, see the documentation.

New NextGen features

NextGen training data predictions in Workbench

Now generally available in the NextGen Experience, after you create an experiment and train models in Workbench, you can make predictions on training data from Model actions > Make predictions.

When you make predictions on training data, you can select one of the following options, depending on the project type:

| Project type | Training data options |
|--------------|-----------------------|
| AutoML | Validation, Holdout, or All data |
| OTV/Time Series | All backtests or Holdout |
In-sample prediction risk

Depending on the option you select and the sample size the model was trained on, predicting on training data can generate in-sample predictions, meaning that the model has seen the target value during training and its predictions do not necessarily generalize well. If DataRobot determines that one or more training rows are used for predictions, the Overfitting risk warning appears. These predictions should not be used to evaluate the model's accuracy.
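
Here is a minimal sketch of requesting training data predictions with the Python client, assuming an existing project and model; see dr.enums.DATA_SUBSET for the subsets matching the options above:

```python
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

project = dr.Project.get("YOUR_PROJECT_ID")
model = project.get_models()[0]

# Request holdout predictions; other subsets include ALL and ALL_BACKTESTS.
job = model.request_training_predictions(dr.enums.DATA_SUBSET.HOLDOUT)
training_predictions = job.get_result_when_complete()

df = training_predictions.get_all_as_dataframe()
print(df.head())
```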

For more information, see the documentation.

NextGen Registry

Now available in the NextGen Experience, the Registry is an organizational hub for the variety of models used in DataRobot. The Registry > Model Directory page lists registered models, each containing deployment-ready model packages as versions. These registered models can contain DataRobot, custom, and external models as versions, allowing you to track the evolution of your predictive and generative models and providing centralized management.

From the Registry, you can generate compliance documentation to provide evidence that the components of the model work as intended, manage key values for registered model versions, and deploy the model to production.

For more information, see the documentation.

Feature flag ON by default: Enable NextGen Registry

NextGen model workshop

Now available in the NextGen Experience, the model workshop allows you to upload model artifacts to create, test, register, and deploy custom models to a centralized model management and deployment hub. Custom models are pre-trained, user-defined models that support most of DataRobot's MLOps features. DataRobot supports custom models built in a variety of languages, including Python, R, and Java. If you've created a model outside of DataRobot and want to upload your model to DataRobot, define the model content and the model environment in the model workshop.

What are custom models?

Custom models are not custom DataRobot models. They are user-defined models created outside of DataRobot and assembled in the model workshop for access to deployment, monitoring, and governance. To support the local development of the models you want to bring into DataRobot through the model workshop, the DataRobot Model Runner (or DRUM) provides you with tools to locally assemble, debug, test, and run the model before assembly in DataRobot. Before adding a custom model to the workshop, DataRobot recommends reviewing the custom model assembly guidelines.

Feature flag ON by default: Enable NextGen Registry

NextGen jobs

Now available in the NextGen Experience, you can use jobs to implement automation (for example, custom tests) for models and deployments. Each job serves as an automated workload, and the exit code determines if it passed or failed. You can run the custom jobs you create for one or more models or deployments. The automated workloads defined through custom jobs can make prediction requests, fetch inputs, and store outputs using DataRobot's Public API.

Feature flag ON by default: Enable NextGen Registry

Feature flag OFF by default: Enable Custom Jobs

NextGen Console

The NextGen DataRobot Console provides important management, monitoring, and governance features in a refreshed, modern user interface, familiar to users of MLOps features in DataRobot Classic.

This updated user interface provides a seamless transition from model experimentation and registration—in the NextGen Workbench and Registry—to model monitoring and management through deployments in Console, all while maintaining the user experience you are accustomed to. This document provides links to the DataRobot Classic documentation for the features you can find in the NextGen experience.

For more information, see the documentation.

Feature flag ON by default: Enable Console
