
MLOps (V7.0)

March 15, 2021

The DataRobot MLOps v7.0 release includes many new features and capabilities, described below.

Release v7.0 provides updated UI string translations for the following languages:

  • Japanese
  • French
  • Spanish

New features and enhancements

This release introduces new deployment and model registry features, described in detail below. Features marked as public beta must be enabled before use; contact your DataRobot representative for information on enabling them.

New deployment features

Configure accuracy metrics for a deployment

DataRobot now allows you to configure the accuracy metrics displayed as tiles in the Accuracy tab for deployments. You can choose the number of metrics displayed (up to 10), and select from a variety of metrics specific to the modeling project type (regression or binary classification).
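As a minimal sketch of the kind of metrics these tiles surface for a regression deployment (the metric names here are common regression metrics chosen for illustration; the exact set DataRobot offers may differ):

```python
import math

def regression_metrics(actuals, predictions):
    """Compute a few regression accuracy metrics of the kind shown as
    Accuracy tab tiles. Illustrative only, not DataRobot's implementation."""
    n = len(actuals)
    errors = [p - a for a, p in zip(actuals, predictions)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mean_actual = sum(actuals) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((a - mean_actual) ** 2 for a in actuals)
    r_squared = 1 - ss_res / ss_tot
    return {"RMSE": rmse, "MAE": mae, "R Squared": r_squared}

tiles = regression_metrics([3.0, 5.0, 7.0], [2.5, 5.0, 8.0])
```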

Now GA: Download the Portable Prediction Server image

You can now access and download the Portable Prediction Server (PPS) image from the Developer Tools page. The PPS image is an all-in-one dockerized solution that runs a fully functional Prediction API server with full model monitoring support via the MLOps Agent. Download the image and configure it to run prediction jobs outside of DataRobot and report prediction statistics back to a DataRobot deployment.
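A prediction request against a locally running PPS might be sketched as follows; the CSV request body is the main portable piece, while the URL, port, and endpoint path shown in the comment are assumptions for a default local run, not documented values:

```python
import csv
import io

def to_csv_payload(rows):
    """Serialize feature rows (list of dicts) to a CSV request body of the
    kind a Prediction API server accepts for scoring."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

payload = to_csv_payload([{"feature_a": 1.2, "feature_b": "x"}])

# The payload could then be POSTed to the running PPS container, e.g.
# (hypothetical local URL and path, shown for illustration):
#   requests.post("http://localhost:8080/predictions", data=payload,
#                 headers={"Content-Type": "text/csv"})
```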

Now GA: Create and manage prediction environments

Now generally available, you can manage prediction environments for deployments running outside of DataRobot, letting you represent, from within DataRobot, the external platforms in your organization that run your models. Prediction environments support deploying and monitoring a model in an external environment (via the Portable Prediction Server or with Scoring Code). Create a prediction environment, add it to DataRobot, and deploy a DataRobot model with the environment specified to establish a deployment scenario for models running outside of DataRobot.

Now GA: Create external time series deployments

Now generally available, you can create a time series model, deploy that model external to DataRobot, and report prediction statistics back to DataRobot using the MLOps agent. This allows you to develop a time series model in DataRobot, but also export it in an easily usable form while maintaining DataRobot's deployment monitoring and management functionality.
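The report-back pattern can be sketched with a stub in place of the agent library; the `StubReporter` class below is hypothetical and exists only to illustrate the buffer-and-forward flow, not the MLOps agent's actual API:

```python
import json

class StubReporter:
    """Hypothetical stand-in for MLOps agent reporting: buffers prediction
    statistics locally, then flushes them as JSON records that an agent
    process could forward to a DataRobot deployment."""

    def __init__(self, deployment_id):
        self.deployment_id = deployment_id
        self.buffer = []

    def report_predictions(self, forecast_point, predictions):
        # Record summary statistics for one scoring call.
        self.buffer.append({
            "deployment_id": self.deployment_id,
            "forecast_point": forecast_point,
            "num_predictions": len(predictions),
        })

    def flush(self):
        # Hand off buffered records as serialized lines and clear the buffer.
        records, self.buffer = self.buffer, []
        return [json.dumps(r) for r in records]

reporter = StubReporter("ts-deployment-123")
reporter.report_predictions("2021-03-01T00:00:00Z", [10.2, 11.0, 9.8])
lines = reporter.flush()
```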

Now GA: Time series deployment support for the Make Predictions tab

Now generally available, you can use the Make Predictions interface to efficiently score datasets with a deployed time series model. The interface allows you to see information about the model’s feature derivation window and forecast rows, ensuring that the data you are trying to score meets the proper requirements. You can also configure the time series options for the dataset you want to score so that you can make predictions for a specific forecast point without having to modify the dataset.
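The window requirements can be illustrated with a sketch that partitions a scoring dataset around a forecast point; the window sizes and row layout below are illustrative assumptions, not DataRobot defaults:

```python
from datetime import datetime, timedelta

def split_rows(rows, forecast_point, fdw_days=35, horizon_days=7):
    """Partition timestamped rows around a forecast point: rows inside the
    feature derivation window supply history for deriving features; rows
    inside the forecast window are the ones to be predicted.
    Window sizes here are illustrative."""
    fdw_start = forecast_point - timedelta(days=fdw_days)
    horizon_end = forecast_point + timedelta(days=horizon_days)
    history = [r for r in rows if fdw_start <= r["date"] <= forecast_point]
    forecast = [r for r in rows if forecast_point < r["date"] <= horizon_end]
    return history, forecast

fp = datetime(2021, 3, 15)
rows = [{"date": datetime(2021, 3, d)} for d in range(1, 21)]
history, forecast = split_rows(rows, fp)
```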

Now GA: Download Scoring Code with MLOps agent integration

Now generally available, you can download the MLOps agent packaged with Scoring Code directly from a deployment. This allows you to quickly integrate the agent and report model monitoring statistics back to DataRobot for models running outside of DataRobot. Once configured, at prediction time the MLOps agent automatically starts, reports metrics, and shuts down without additional setup. You can download the Scoring Code and MLOps agent package from a deployment's Actions menu.

New public beta deployment features

Beta: Challenger models now available for external deployments

Now available as a public beta feature, deployments in remote prediction environments can use the Challengers tab. A remote model can serve as the champion, and you can compare it to DataRobot and custom model challengers. To replace the champion, select a DataRobot or custom challenger model and deploy the new champion to your remote prediction environment.

Beta: Improved monitoring support for multiclass deployments

Now available as a public beta feature, you can deploy multiclass models (including custom models) with improved monitoring capabilities. Multiclass deployments can now report accuracy and data drift statistics with proper configuration. Additionally, multiclass models deployed to remote prediction environments can be monitored by the MLOps agent. Multiclass deployments offer class-based configuration to modify the data displayed on the graphs of the Accuracy and Data Drift tabs. By default, the graphs display the five most common classes in the training data. These monitoring capabilities are currently limited to ten classes.
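The default class selection described above (five most common classes in the training data) can be sketched as:

```python
from collections import Counter

def default_display_classes(training_labels, limit=5):
    """Pick the classes shown by default on the Accuracy and Data Drift
    graphs: the most common classes in the training data, up to `limit`.
    A sketch of the selection rule, not DataRobot's implementation."""
    return [cls for cls, _ in Counter(training_labels).most_common(limit)]

labels = (["cat"] * 40 + ["dog"] * 30 + ["bird"] * 15 +
          ["fish"] * 10 + ["frog"] * 4 + ["newt"] * 1)
top = default_display_classes(labels)
```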

Beta: Integrate the Tableau Analytics Extension with DataRobot deployments

You can now use the Tableau Analytics Extension to integrate DataRobot predictions into your Tableau projects. The extension supports the Tableau Analytics API, which lets you create endpoints that send data to and receive data from Tableau. Using the extension, you can visualize prediction information and refresh it based on prediction responses from DataRobot. Establish a connection between DataRobot and Tableau, access the code snippet from your deployment, and make predictions from Tableau via the DataRobot Prediction API. Additionally, you can configure the code snippet from your deployment to re-map any features.

Beta: Feature Discovery deployments support governance workflow to manage secondary datasets

With this release, you can manage updates to secondary datasets in Feature Discovery deployments using the governance workflow. After an admin sets up the “Secondary dataset configuration changed” approval policy trigger in User Settings > Approval Policies, any changes to a secondary dataset will prompt a change request that must go through an approval process. The creator of the change request can view its status under History in Deployments > Overview, and reviewers will see a pending changes notification requesting that they review the update.

New model registry features

Custom model deployment logs

When you deploy a custom model, it generates unique log reports that allow you to debug custom code and troubleshoot prediction request failures from within DataRobot. You can access two types of logs:

  • Runtime logs are captured from the Docker container running the custom model. Use them to troubleshoot failed prediction requests. The logs are cached for 5 minutes after you make a prediction request.

  • Deployment logs are automatically captured if the custom model fails while deploying. The logs are stored permanently as part of the deployment.

Now GA: Create unstructured custom inference models

Now generally available, DataRobot supports unstructured custom models that do not use the conventional regression or binary classification target types. Unstructured models do not need to conform to a specific input/output schema like regression and binary classification models do. They may use any arbitrary data for their inputs and outputs. This allows you to deploy and monitor any type of model with DataRobot, regardless of the target type, and affords you more control over how you read the data from a prediction request and response.
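A minimal unstructured model might look like the sketch below. The hook names follow the DRUM custom-model template for unstructured models, but the exact signatures may differ by version, and the model logic itself is a toy example:

```python
def load_model(code_dir):
    """DRUM-style hook: return any object that represents the model.
    Here the "model" is just a configuration dict (toy example)."""
    return {"uppercase": True}

def score_unstructured(model, data, **kwargs):
    """DRUM-style hook for unstructured models: accept arbitrary bytes or
    text and return arbitrary output. No regression/binary input-output
    schema is enforced."""
    text = data.decode("utf-8") if isinstance(data, bytes) else data
    return text.upper() if model["uppercase"] else text

model = load_model(".")
result = score_unstructured(model, b"hello unstructured world")
```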

New public beta model registry features

Beta: Custom Models now support portable predictions

You can now deploy custom models to their own Portable Prediction Server (PPS). Generate a downloadable bundle containing the custom model, a custom environment, and the MLOps agent, and use it to build and launch a PPS image. Once started, the custom model PPS serves predictions via a REST API, and the MLOps agent can be configured to report prediction statistics back to a DataRobot deployment for the custom model.

Beta: GitHub Enterprise and Bitbucket Server integration for custom models

Users can now register GitHub Enterprise and Bitbucket Server repositories in the Model Registry to pull artifacts into DataRobot and build custom inference models. Integrating either of these repositories allows you to directly transfer between a governed, code-centric machine learning development environment and a governed MLOps environment.

Beta: Custom inference anomaly detection models

Now available as a public beta feature, you can create a custom inference model for anomaly detection problems. When creating a custom model, select "Anomaly Detection" as the target type; a DRUM template for anomaly detection models is also available. Note that deployed custom inference anomaly detection models do not support the following functionality:

  • Data drift
  • Accuracy and association IDs
  • Challenger models
  • Humility rules
  • Prediction intervals
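The kind of scoring logic such a model wraps can be sketched with a toy z-score detector; this is purely illustrative of the anomaly-detection target type (one score per row, no ground-truth labels), not DataRobot's or DRUM's implementation:

```python
import statistics

def score_anomaly(values, threshold=3.0):
    """Toy anomaly scorer of the kind a custom inference model might wrap:
    flag points whose z-score against the batch mean exceeds a threshold.
    Illustrative only; threshold is an arbitrary assumption."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [abs(v - mean) / stdev > threshold for v in values]

flags = score_anomaly([10, 11, 9, 10, 10, 50], threshold=2.0)
```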

All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them.


Updated October 24, 2023