

MLOps (V7.3)

December 13, 2021

The DataRobot MLOps v7.3 release includes many new features and capabilities, described below.

Release v7.3 provides updated UI string translations for the following languages:

  • Japanese
  • French
  • Spanish
  • Korean

New features and enhancements

See details of new features below:

  • New deployment features
  • New prediction features
  • New Model Registry features
  • New governance features
  • Preview features

New deployment features

Release v7.3 introduces the following new deployment features.

Automated Retraining

To maintain model performance after deployment, DataRobot provides automatic retraining for deployments, eliminating extensive manual work.

To set up automatic retraining, provide a retraining dataset and define up to five retraining policies on each deployment, each consisting of a trigger, a modeling strategy, modeling settings, and a replacement action. When triggered, retraining produces a new model based on these settings and notifies you to consider promoting it.

Learn how to set up automatic retraining.
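As a rough sketch of how a policy's pieces fit together, the following creates a schedule-triggered retraining policy through the public API. The /retrainingPolicies endpoint path and the payload field names are assumptions based on the deployment API's conventions, not confirmed by this release note; consult the API reference for your version.

```python
# Hedged sketch: define a retraining policy for a deployment via the REST API.
# The endpoint path and field names are assumptions; verify them against the
# API reference for your DataRobot version.
import requests

ENDPOINT = "https://app.datarobot.com/api/v2"
API_TOKEN = "YOUR_API_TOKEN"          # placeholder
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"  # placeholder

policy = {
    "name": "Weekly retraining",
    # Trigger: run every Monday at midnight.
    "trigger": {
        "type": "schedule",
        "schedule": {"minute": ["0"], "hour": ["0"], "dayOfWeek": ["1"],
                     "dayOfMonth": ["*"], "month": ["*"]},
    },
    # Modeling strategy and settings: rerun Autopilot, keep its recommendation.
    "modelSelectionStrategy": "autopilotRecommended",
    # Replacement action: surface the new model as a challenger for review.
    "action": "createChallenger",
}

resp = requests.post(
    f"{ENDPOINT}/deployments/{DEPLOYMENT_ID}/retrainingPolicies/",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created retraining policy:", resp.json()["id"])
```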

New MLOps agent channel: Azure Event Hubs

The MLOps agent now supports Microsoft Azure Event Hubs as a channel, in addition to the previously supported channels: File, AWS SQS, Google Pub/Sub, RabbitMQ, and Kafka.

To support Azure Event Hubs as a tracking agent spooler type, DataRobot leverages the existing Kafka spooler type.

This release also adds the ability to authenticate with Event Hubs using Azure Active Directory.

For more details, see the Azure Event Hubs spooler configuration documentation.
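Because the channel rides on the Kafka spooler type, configuring it looks like Kafka configuration pointed at the Event Hubs Kafka-compatible endpoint (port 9093). The sketch below sets the spooler through environment variables; the MLOPS_KAFKA_* names follow the agent's Kafka spooler settings and are assumptions here, so verify them against the spooler configuration documentation.

```python
import os

# Point the existing Kafka spooler type at the Event Hubs Kafka-compatible
# endpoint. The variable names follow the agent's Kafka spooler settings and
# are assumptions -- verify against the Azure Event Hubs spooler documentation.
os.environ["MLOPS_SPOOLER_TYPE"] = "KAFKA"
os.environ["MLOPS_KAFKA_BOOTSTRAP_SERVERS"] = "<namespace>.servicebus.windows.net:9093"
os.environ["MLOPS_KAFKA_TOPIC_NAME"] = "mlops-agent"  # the event hub's name
```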

mTLS support for MLOps agent

The RabbitMQ MLOps agent channel now supports mutual Transport Layer Security (mTLS) authentication—ensuring that traffic is secure in both directions between the client and server. With mTLS, the server originating a message and the server receiving it exchange certificates from a mutually trusted certificate authority (CA). See the RabbitMQ configuration documentation for details on configuring the spooler.

Public availability of MLOps agent Python libraries

You can now download the MLOps agent Python libraries from the public Python Package Index (PyPI). Download and install the DataRobot MLOps metrics reporting library and the DataRobot MLOps Connected Client; each library's PyPI page includes installation instructions.
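As a minimal sketch of the reporting library in use (the datarobot-mlops package name and builder-style API follow the library's published examples; confirm the exact interface on PyPI for your version):

```python
# pip install datarobot-mlops
from datarobot.mlops.mlops import MLOps

# Initialize reporting against a filesystem spooler that a monitoring agent
# watches; the deployment and model IDs are placeholders.
mlops = (
    MLOps()
    .set_deployment_id("YOUR_DEPLOYMENT_ID")
    .set_model_id("YOUR_MODEL_ID")
    .set_filesystem_spooler("/tmp/mlops-spool")
    .init()
)

# Report that 100 predictions took 25 ms; the agent forwards this to DataRobot.
mlops.report_deployment_stats(100, 25.0)
mlops.shutdown()
```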

Portable batch predictions scoring support

Portable batch predictions (PBP) now support scoring with time series and Visual AI models. For an example illustrating the job definition fields required for PBP time series scoring, see Time series scoring over Azure Blob; a brief sketch also follows below. No new fields are required for PBP Visual AI scoring.
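For orientation, the time series portion of a PBP job definition might look like the sketch below, expressed as a Python dict. The field names mirror the batch prediction API's timeseries_settings and are assumptions here; take the exact schema from the linked example.

```python
# Sketch of a PBP job definition for time series scoring over Azure Blob.
# Field names mirror the batch prediction API's timeseries_settings and
# should be verified against the linked example.
job_definition = {
    "intake_settings": {
        "type": "azure",
        "url": "https://<account>.blob.core.windows.net/input/scoring.csv",
    },
    "output_settings": {
        "type": "azure",
        "url": "https://<account>.blob.core.windows.net/output/predictions.csv",
    },
    "timeseries_settings": {
        "type": "forecast",                        # score from a forecast point
        "forecast_point": "2021-12-01T00:00:00Z",  # placeholder timestamp
    },
}
```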

MLOps agent channel dequeuing

You can now configure the MLOps agent to wait until processing is complete before dequeuing a message. The dequeuing operation behaves as follows in the different channels:

  • In SQS: Deletes the message.
  • In RabbitMQ and Pub/Sub: Acknowledges the message as complete.
  • In Kafka and Filesystem: Moves the offset.

This feature ensures that messages are not dropped when there are connectivity issues: if a connection error occurs before processing completes, the message remains in the channel and can be re-sent. To configure the feature, set the MLOPS_SPOOLER_DEQUEUE_ACK_RECORDS environment variable to true, as shown below. Enabling this feature is highly recommended.

Learn how to enable the dequeuing feature.
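A minimal example of enabling the setting from a Python process that uses the MLOps library follows; on the agent host, setting the same variable in the agent's environment has the same effect.

```python
import os

# Require an explicit acknowledgment before a record is dequeued, so that a
# record survives connection errors and can be re-sent.
os.environ["MLOPS_SPOOLER_DEQUEUE_ACK_RECORDS"] = "true"
```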

New prediction features

Release v7.3 introduces the following new prediction features.

Prediction Explanations in Scoring Code

Prediction explanations provide a quantitative indicator of the effect variables have on predictions. You can now receive Prediction Explanations anywhere you deploy a model: in DataRobot, with the Portable Prediction Server, and in Java Scoring Code (including when executed in Snowflake). Enable Prediction Explanations on the Portable Predictions tab when downloading a model as Scoring Code.

BigQuery adapter for batch predictions

Now generally available, BigQuery support for batch predictions lets you ingest data from and export data to BigQuery while scoring. You can use the BigQuery REST API to export data from a table into Google Cloud Storage (GCS) as an asynchronous job, score the data with the GCS adapter, and bulk update the BigQuery table with a batch loading job, as sketched below.

BigQuery batch prediction example
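A sketch of that round trip, using the google-cloud-bigquery package and the DataRobot Python client (bucket, table, and ID values are placeholders):

```python
import datarobot as dr
from google.cloud import bigquery

bq = bigquery.Client()

# 1. Export the BigQuery table to GCS as an asynchronous job.
extract_job = bq.extract_table("my-project.my_dataset.scoring_data",
                               "gs://my-bucket/scoring/data.csv")
extract_job.result()  # wait for the export to finish

# 2. Score the exported file with the GCS adapter.
job = dr.BatchPredictionJob.score(
    deployment="DEPLOYMENT_ID",
    intake_settings={"type": "gcs", "url": "gs://my-bucket/scoring/data.csv",
                     "credential_id": "CREDENTIAL_ID"},
    output_settings={"type": "gcs", "url": "gs://my-bucket/scored/predictions.csv",
                     "credential_id": "CREDENTIAL_ID"},
)
job.wait_for_completion()

# 3. Bulk-load the predictions back into BigQuery with a batch loading job.
load_job = bq.load_table_from_uri(
    "gs://my-bucket/scored/predictions.csv",
    "my-project.my_dataset.predictions",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV, autodetect=True),
)
load_job.result()
```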

Oracle write-back in batch predictions

Batch predictions now support write-back to Oracle databases. See the complete list of data sources supported for batch predictions, as well as the intake and output adapter documentation.
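With the DataRobot Python client, Oracle write-back is a JDBC output adapter pointed at a registered Oracle data connection. A hedged sketch follows; the data store and credential IDs are placeholders, and the field names should be checked against the output adapter documentation.

```python
import datarobot as dr

# Score a local file and write predictions back to an Oracle table through the
# JDBC output adapter; IDs and names below are placeholders.
job = dr.BatchPredictionJob.score(
    deployment="DEPLOYMENT_ID",
    intake_settings={"type": "localFile", "file": "to_score.csv"},
    output_settings={
        "type": "jdbc",
        "data_store_id": "DATA_STORE_ID",  # a registered Oracle data connection
        "credential_id": "CREDENTIAL_ID",
        "table": "PREDICTIONS",
        "schema": "ANALYTICS",
        "statement_type": "insert",
    },
)
job.wait_for_completion()
```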

Prediction job enhancements

Batch prediction jobs have been enhanced as follows:

  • When creating a prediction job on the Deployments page, you can now select BigQuery as a Prediction source and a Prediction destination. You can also now select the AI Catalog as a Prediction source (see the sketch after this list).

  • When setting up JDBC connections as prediction sources and destinations, you can edit existing connections rather than configuring them from scratch.

  • You can now save and run a prediction job immediately when you create the job definition, rather than locating it on the Job Definitions tab and running it from there.
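For instance, with the DataRobot Python client, an AI Catalog item can serve as the prediction source through the dataset intake type (IDs below are placeholders):

```python
import datarobot as dr

# Score an AI Catalog dataset against a deployment and save results locally.
job = dr.BatchPredictionJob.score(
    deployment="DEPLOYMENT_ID",
    intake_settings={"type": "dataset", "dataset": dr.Dataset.get("DATASET_ID")},
    output_settings={"type": "localFile", "path": "predictions.csv"},
)
job.wait_for_completion()
```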

New Model Registry features

Release v7.3 introduces the following new model registry features.

Model Registry compliance documentation

You can now generate Automated Compliance Documentation for models from the Model Registry, accelerating pre-deployment review and sign-off that might be necessary for your organization.

DataRobot automates many critical compliance tasks associated with developing a model, decreasing time-to-deployment in highly regulated industries. For each model, you can generate individualized documentation that provides comprehensive guidance on what constitutes effective model risk management, then download the report as an editable Microsoft Word document (DOCX). The generated report includes the level of information and transparency required by regulatory compliance demands.

See how to generate compliance documentation from the Model Registry.

Add files to a custom model with GitLab

When you add a custom model to the Custom Model Workshop, you can now pull artifacts from GitLab Cloud and GitLab Enterprise repositories and use them to build custom models. Register and authorize a repository to add its files to a custom model.

New governance features

Release v7.3 introduces the following feature.

Fairness monitoring and alerting for deployments

With MLOps, you can now monitor deployed production models for fairness. Configure tests that recognize, in real time, when protected features in the dataset fail to meet predefined fairness conditions; when a test fails, an alert is triggered so you can investigate bias as soon as it is detected.

The Fairness tab for individual models provides two interactive and exportable charts—Per-Class Bias and Fairness Over Time—helping you understand why a deployment is failing fairness tests and which features are below the predefined fairness threshold.

Per-Class Bias uses the fairness threshold and fairness score of each class to determine whether certain classes experience bias in the model's predictive behavior. Fairness Over Time illustrates how the distribution of a protected feature's fairness scores has changed over time.

Preview features

Champion and challenger comparisons

Now available as a preview feature, Champion/Challenger Comparison allows you to compare the composition, reliability, and behavior of models using powerful visualizations. Choose two models to go head-to-head so that you can be sure that the currently deployed champion model is the best model for your purposes.

The Model Comparison page allows you to select models for comparison and provides valuable data and visualizations under Model Insights:

  • The Accuracy tab lets you compare metrics.

  • The Dual lift tab lets you compare how the models over- or under-predict along the distribution of predictions.

  • The Lift tab lets you compare how well the models predict the target.

  • The ROC tab provides visualizations for comparing classification models.

  • The Prediction Difference tab lets you compare predictions of the models on a row-by-row basis.

If the challenger outperforms the current champion, you can promote the challenger to champion directly from the Model Comparison page.

See the preview documentation for details.

Association ID support in the AI App Builder

Now available as a preview feature, you can create applications from deployments that have an association ID. When the selected deployment has an association ID, it is added as a field to the Add New Row widget for single predictions.

Data drift and accuracy are tracked for all single and batch predictions made using the application; however, they are not tracked for synthetic predictions made in the What-If and Optimizer widgets.

For details, see Association ID support in the AI App Builder.

All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them.

