March 14, 2022
Release v8.0 also provides updated UI string translations.
New features and enhancements¶
See details of new features below:
New deployment features
- Cancel retraining policies
- DataRobot MLOps library and third-party spooler types
- Challenger accuracy
- Challenger insights
- Database integrations removed from integrations tab
New prediction features
- Enhancements to Prediction Batch Job Definitions
- Leaderboard Scoring Code enhancements
- Scoring Code in Snowflake
- Batch Prediction write support for Presto
New governance features
Public preview features
New deployment features¶
Cancel retraining policies¶
To manage the automatic retraining of deployed models, you set up retraining policies. The policies can be triggered manually or in response to a schedule, drift status, or accuracy status. Now you can cancel policy runs that are in progress or scheduled. You cannot cancel a run if it has finished successfully, has failed, has a status of "Creating challenger" or "Replacing model," or has already been cancelled.
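The cancellation rules above can be sketched as a simple status check. This is an illustrative helper, not part of the DataRobot client; the status strings are taken from the note, and their exact spelling in the API is an assumption.

```python
# Hypothetical sketch: decide whether a retraining policy run can still
# be cancelled, per the rules in the release note. Status strings are
# assumptions about how the statuses described above might be spelled.

NON_CANCELLABLE_STATUSES = {
    "finished successfully",
    "failed",
    "creating challenger",
    "replacing model",
    "cancelled",
}

def can_cancel_run(status: str) -> bool:
    """Return True if a policy run with this status may still be cancelled."""
    return status.lower() not in NON_CANCELLABLE_STATUSES

print(can_cancel_run("scheduled"))          # True: scheduled runs are cancellable
print(can_cancel_run("Replacing model"))    # False: terminal status
```

In-progress and scheduled runs pass the check; anything in a terminal or model-swapping state does not.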
DataRobot MLOps library and third-party spooler types¶
The datarobot-mlops library no longer includes AWS (SQS) and RabbitMQ dependencies by default. If you are using these spooler types, you must install the spooler-specific dependencies. See the documentation on installing the DataRobot MLOps metrics reporting library for details.
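A pre-flight check like the following can confirm the optional spooler dependencies are present before configuring the library. The mapping of spooler type to Python package (boto3 for SQS, pika for RabbitMQ) is an assumption; consult the MLOps library documentation for the exact packages.

```python
# Sketch: check for the spooler-specific dependencies that are no longer
# bundled with datarobot-mlops. Package names below are assumptions.
import importlib.util

SPOOLER_DEPS = {
    "sqs": "boto3",       # AWS SQS client (assumed dependency)
    "rabbitmq": "pika",   # RabbitMQ client (assumed dependency)
}

def missing_spooler_deps(spooler_type: str) -> list:
    """Return the packages still needed for this spooler type, if any."""
    package = SPOOLER_DEPS.get(spooler_type.lower())
    if package is None:
        return []  # filesystem and other built-in spoolers need no extras
    return [] if importlib.util.find_spec(package) else [package]

print(missing_spooler_deps("filesystem"))   # [] -- no extra packages needed
```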
Challenger accuracy¶
On a deployment's Challengers tab, the Deployment Challengers overview now includes an Accuracy column for the champion and every challenger. This column reports a model's accuracy score for the selected date range and, for challenger models, a comparison with the champion's accuracy score. You can use the Accuracy metric dropdown menu to compare different metrics.
For more information on challenger accuracy comparison, see Challenger models overview.
Challenger insights¶
Now generally available, the Model Insights on the Model Comparison tab allow you to compare the composition, reliability, and behavior of champion and challenger models using powerful visualizations. Choose two models to go head-to-head to determine if a challenger model outperforms the current champion and should replace the champion model in production.
After you select two models, DataRobot computes the following model comparisons for those models:
The Accuracy list contains two columns to report accuracy metrics for each model. Highlighted numbers represent favorable values. In this example, the champion, Model 1, outperforms Model 2 for most metrics shown:
A dual lift chart is a visualization comparing how two selected models underpredict or overpredict the actual values across the distribution of their predictions.
A lift chart depicts how well a model segments the target population and how capable it is of predicting the target, allowing you to visualize the model's effectiveness.
The ROC tab is only available for binary classification projects.
An ROC curve plots the true-positive rate against the false-positive rate for a given data source. Use the ROC curve to explore classification, performance, and statistics for the models you're comparing.
The Predictions Difference histogram shows the percentage of predictions that fall within the match threshold you specify in the Prediction match threshold field (along with the corresponding numbers of rows).
The list below the histogram shows the 1000 most divergent predictions (in terms of absolute value). The Difference column shows how far apart the predictions are.
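The computation behind this insight can be sketched as follows. This is an illustrative reimplementation, not DataRobot's code: pair the two models' predictions, count the share within the match threshold, and sort the rest by absolute difference (the UI lists up to the 1000 most divergent).

```python
# Sketch: partition paired predictions by a match threshold, as in the
# Predictions Difference insight. Illustrative only.

def prediction_differences(preds_a, preds_b, match_threshold):
    """Return (match_pct, divergent), where divergent rows are sorted by
    absolute difference, largest first."""
    diffs = [(i, abs(a - b)) for i, (a, b) in enumerate(zip(preds_a, preds_b))]
    matched = sum(1 for _, d in diffs if d <= match_threshold)
    match_pct = 100.0 * matched / len(diffs)
    divergent = sorted((pair for pair in diffs if pair[1] > match_threshold),
                       key=lambda pair: pair[1], reverse=True)
    return match_pct, divergent[:1000]   # the UI caps the list at 1000 rows

champion = [0.10, 0.55, 0.80, 0.42]
challenger = [0.12, 0.59, 0.35, 0.41]
pct, worst = prediction_differences(champion, challenger, match_threshold=0.05)
print(pct)        # 75.0 -- three of four rows fall within the threshold
print(worst[0])   # row 2 diverges most
```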
For more information on these challenger insights, see Challenger model comparisons.
Database integrations removed from integrations tab¶
To simplify the prediction database integration process, the Database section on the Settings > Integrations tab is now fully deprecated. This functionality is replaced by the Prediction > Job Definitions tab.
For more information on setting up prediction sources, see Schedule recurring batch prediction jobs.
New prediction features¶
Enhancements to Prediction Batch Job Definitions¶
Disable a job definition¶
In previous releases, you could disable a job definition only by editing it and turning off the Run this job automatically on a schedule toggle. Now, you can disable the definition by selecting Disable definition in the action menu for a job definition. Jobs scheduled from that definition will cease to run. Select Enable definition to resume them.
Clone a job definition¶
You can now create a copy of an existing job definition and update it by selecting Clone definition in the action menu for a job definition. Update the fields as needed, and click Save prediction job definition. Note that the Jobs schedule settings are turned off by default.
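The clone behavior amounts to a deep copy with the schedule switched off. The dictionary layout below is illustrative; real definitions are managed through the UI or the Batch Prediction API.

```python
# Sketch: clone a job definition with its schedule disabled by default,
# mirroring the Clone definition action. Field names are illustrative.
import copy

def clone_definition(definition: dict, new_name: str) -> dict:
    clone = copy.deepcopy(definition)
    clone["name"] = new_name
    clone["enabled"] = False  # schedule settings are turned off by default
    return clone

original = {"name": "daily-scoring", "enabled": True,
            "schedule": {"hour": [2], "minute": [0]}}
cloned = clone_definition(original, "daily-scoring-copy")
print(cloned["enabled"])   # False -- must be re-enabled explicitly
```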
Prediction source configurations¶
When you set a data source for a prediction job, DataRobot validates that the data is applicable for the deployed model. DataRobot also displays the user who configured the prediction source, the modification date, and a badge that represents the type of the source (in this case, STATIC).
Select the default prediction instance¶
Now when you create a batch job definition, you can use the default prediction instance. The advanced options now include a Use default prediction instance toggle:
DataRobot checks that the default or previously selected prediction instance is accessible and valid, and if not, displays an error message.
If you turn off the toggle, you can select a different prediction instance:
Snowflake and Synapse connection improvements¶
The following improvements have been made to the prediction source and destination configurations for Snowflake and Synapse connections.
The Use external stage options for Snowflake and Synapse are now optional. Toggling them off updates the connection to use a JDBC adapter directly.
You can now switch between a JDBC Snowflake connection setting and a Snowflake top-level connection without losing the connection details.
JDBC-Snowflake and JDBC-Synapse connections now display as top-level connections, with the Use external stage option toggled off.
Batch job filtering¶
From the Prediction Jobs tab, you can now filter by prediction job ID, in addition to the existing filters: status, job type (based on the method used to generate the job), job start and end time, deployment, job definition ID, and prediction environment.
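Conceptually, each filter restricts the job list by one field; combined filters intersect. The sketch below illustrates this with plain dictionaries; the field names are illustrative, not the API's.

```python
# Sketch of the filtering available on the Prediction Jobs tab:
# keep only the jobs matching every given criterion. Illustrative only.

def filter_jobs(jobs, **criteria):
    """Return jobs whose fields equal every criterion value."""
    return [job for job in jobs
            if all(job.get(field) == value for field, value in criteria.items())]

jobs = [
    {"id": "job-1", "status": "COMPLETED", "deployment": "fraud"},
    {"id": "job-2", "status": "RUNNING", "deployment": "fraud"},
    {"id": "job-3", "status": "COMPLETED", "deployment": "churn"},
]
print(filter_jobs(jobs, id="job-2"))                 # the single matching job
print(len(filter_jobs(jobs, status="COMPLETED")))    # 2
```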
Leaderboard Scoring Code enhancements¶
The Leaderboard Scoring Code functionality on the Portable Predictions page has been updated for consistency with the Portable Predictions page for deployments. The page now also includes the option to include Prediction Explanations in the Scoring Code.
See Download Scoring Code from the Leaderboard for details.
Scoring Code in Snowflake¶
Now GA, you can use Scoring Code as a UDF in Snowflake. Bringing Scoring Code inside of the Snowflake database removes the need to extract and load data, resulting in a significant decrease in the time to score large data sets on comparable infrastructure.
See how to generate UDF Scoring Code.
Batch Prediction write support for Presto¶
You can now write prediction data to Presto. To do so, set up Presto as a JDBC data connection. In your batch prediction job definition (Predictions > Job Definitions), select JDBC as the Prediction destination:
From the list of connectors, select the Presto connector:
Select the schema:
Select the output table or create a new table.
Presto requires the use of auto commit: true for many of the underlying connectors, which can delay writes.
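Putting the steps above together, a Presto destination in a job definition could look roughly like the following. The keys follow the Batch Prediction API's output-settings conventions, but the exact field names should be checked against the API documentation, and all connection details here are placeholders.

```python
# Sketch (assumed field names): a JDBC output configuration for writing
# batch predictions to Presto. Values are placeholders.

presto_output_settings = {
    "type": "jdbc",                                 # Prediction destination: JDBC
    "dataStoreId": "<presto-data-connection-id>",   # the Presto JDBC connection
    "schema": "analytics",                          # selected schema
    "table": "scored_transactions",                 # existing or new output table
    "statementType": "insert",
    # Note: Presto requires auto commit for many underlying connectors,
    # which can delay writes.
}
print(sorted(presto_output_settings))
```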
New governance features¶
Data drift separated into target monitoring and feature tracking¶
To provide more granular control of data drift, accuracy, and fairness monitoring, the Enable data drift tracking setting on a deployment’s Settings > Data tab is now divided into two settings: Enable target monitoring and Enable feature drift tracking.
You need to enable target monitoring to track accuracy (Accuracy tab) and fairness (Bias and Fairness tab). Feature tracking must be enabled to monitor for data drift (Data Drift tab). These settings are enabled by default. If you turn off either setting, you can still view historical data in the visualizations on the corresponding tabs.
For more details, see Deployment settings.
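The dependency between the two settings and the monitoring tabs can be summarized in a few lines. The mapping comes from the text above; the function and argument names are illustrative.

```python
# Sketch: which monitoring tabs each of the split settings enables.
# Illustrative names; the mapping follows the release note.

def available_monitoring(target_monitoring: bool, feature_drift_tracking: bool):
    tabs = []
    if target_monitoring:
        tabs += ["Accuracy", "Bias and Fairness"]   # require target monitoring
    if feature_drift_tracking:
        tabs += ["Data Drift"]                      # requires feature tracking
    return tabs

print(available_monitoring(True, False))   # ['Accuracy', 'Bias and Fairness']
print(available_monitoring(False, True))   # ['Data Drift']
```

Both settings default to on; turning either off stops new tracking but leaves historical data visible.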
Public preview features¶
MLOps agent event log¶
Now available for public preview, on a deployment's Service Health tab, you can view MLOps agent Management events (e.g., deployment actions) and Monitoring events (e.g., spooler channel events). Using Monitoring Spooler Channel error events, you can quickly diagnose and fix spooler configuration issues.
To view Monitoring events, you must provide a predictionEnvironmentID in the agent configuration file (conf\mlops.agent.conf.yaml). If you haven't already installed and configured the MLOps agent, see the Installation and configuration guide.
For more information on enabling and reading the MLOps agent event log, see the documentation.
Multipart upload for batch prediction API¶
Now available for public preview, multipart upload for the batch prediction API allows you to upload scoring data through multiple files to improve file intake for large datasets. The multipart upload process calls for multiple PUT requests followed by a POST request (finalizeMultipart) to finalize the upload manually.
This feature adds two new endpoints to the batch prediction API:
- Upload scoring data in multiple parts to the URL specified by
- Finalize the multipart upload process. Make sure each part of the upload has finished before finalizing.
The feature adds two new intake settings for the local file adapter:
For more information on the multipart upload for batch predictions process, see the public preview documentation.
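The upload sequence described above can be simulated locally. This toy stand-in models the server-side state only; the real flow issues one PUT request per part and a finalizeMultipart POST against the batch prediction API.

```python
# Sketch: the multipart intake sequence -- upload parts, then finalize.
# A local simulation; not the batch prediction API itself.

class MultipartUpload:
    """Toy stand-in for server-side multipart upload state."""
    def __init__(self):
        self.parts = {}
        self.finalized = False

    def put_part(self, number: int, data: bytes):
        """One PUT per part; parts may arrive in any order."""
        if self.finalized:
            raise RuntimeError("upload already finalized")
        self.parts[number] = data

    def finalize(self) -> bytes:
        """The finalizeMultipart step: every part must be uploaded first."""
        self.finalized = True
        return b"".join(self.parts[n] for n in sorted(self.parts))

upload = MultipartUpload()
for i, chunk in enumerate([b"id,amount\n", b"1,9.99\n", b"2,4.50\n"]):
    upload.put_part(i, chunk)
print(upload.finalize())   # the reassembled scoring CSV
```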
All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them.