
Make predictions before deploying a model

This section describes the Leaderboard Make Predictions tab used for models that are not yet deployed. For predictions on deployed models, use the Make Predictions tab available under Deployments.

This section also describes making predictions on an external dataset, predicting on training data, and how DataRobot uses stacked predictions.

Tip

When working with time series predictions, the Make Predictions tab works slightly differently than with traditional modeling. Continue on this page for a general description of using Make Predictions; see the time series documentation for details unique to time series modeling.

Workflow for predictions

Use the following steps to generate predictions on a new dataset. See below for details on making predictions on an external dataset or using your training data.

Tip

A particular upload method may be disabled on your cluster. If a method is not available, the corresponding ingest option will be greyed out (contact your system administrator for more information, if needed).

Note that there are slight differences in the Make Predictions tab depending on your project type. Binary classification projects include a prediction threshold setting that is not applicable to regression projects.

  1. From the Make Predictions tab, upload your test data to run against the model. You can drag and drop a file onto the screen, click Import data from to upload a local file (browse), or specify a URL. You can also use a configured data source or create a new one; if you choose the Data Source option, you are prompted for database login credentials.

    The image below shows importing data for a binary classification project. In a regression project, there is no need to set a prediction threshold (the value that determines a cutoff for assignment to the positive class) so the field does not display.

  2. Once the file is uploaded, click Compute Predictions for the selected dataset. The button changes to Computing predictions... and the job status appears in the Worker Queue in the right sidebar.

  3. When the prediction has finished running, you can append up to five columns to the prediction dataset by clicking the dropdown arrow next to Optional Features (0 of 5). You can only append a column that was present in the original dataset, although the column does not have to have been part of the feature list used to build the model.

    Click in the box Optional Features (0 of 5):

    Enter the column name, then click Add to append the column.

    Note

    The Optional Features (0 of 5) feature is not available via the API.

  4. Click Download to save prediction results to a CSV file. To upload and run predictions on additional datasets, use the Import data from dropdown to begin the process again.
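The same steps can be scripted with the DataRobot Python client. The sketch below is a minimal outline, assuming an existing project and model; the API token, project ID, model ID, and file names are placeholders.

```python
# Sketch of the prediction workflow with the DataRobot Python client.
# API token, project ID, model ID, and file names are placeholders.
import datarobot as dr

dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")

project = dr.Project.get("YOUR_PROJECT_ID")
model = dr.Model.get(project.id, "YOUR_MODEL_ID")

# Upload the prediction dataset (analogous to Import data from in the UI).
dataset = project.upload_dataset("new_data.csv")

# Request predictions and wait for the job to finish
# (analogous to Compute Predictions).
predict_job = model.request_predictions(dataset.id)
predictions = predict_job.get_result_when_complete()  # pandas DataFrame

# Mirror the Download step: save the results as a CSV file.
predictions.to_csv("predictions.csv", index=False)
```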

Make predictions on an external dataset

To better evaluate model performance, you can upload any number of additional test datasets after project data has been partitioned and models have been trained. An external test dataset is one that is not part of the original dataset (you didn't train on any part of it) but that does have actuals (values for the target). This allows you to compare the model's predictions against known values to measure accuracy.

By uploading an external dataset and using the original model's dataset partitions, you can compare metric scores and visualizations to ensure consistent performance prior to deployment. Select the external test set as if it were a partition in the original project data. Support for external test sets is available for all project types except supervised time series (unsupervised time series is supported).

To use the feature:

  1. Upload new test data in the same way you would upload a prediction dataset. For supervised learning, the external set must contain the target column and all columns present in the training dataset (although additional columns can be added). The workflow is slightly different for unsupervised projects.

  2. Once uploaded, you'll see the label EXTERNAL TEST next to the entry. Click Run external test to calculate predicted values and compute statistics comparing actual target values to predicted values.

  3. To view external test scores, from the Leaderboard menu select Show External Test Column.

    The Leaderboard now includes an External test column.

  4. From the External test column, choose the test data to display results for or click Add external test to return to the Make Predictions tab to add additional test data.

    You can now sort models by external test scores or calculate scores for more models.
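If you prefer to script this workflow, recent versions of the DataRobot Python client expose external test calls. The sketch below is an outline with placeholder IDs and file names; check the client reference for your version, and note that it assumes a configured client connection.

```python
# Sketch: running an external test with the DataRobot Python client.
# Project ID, model ID, and file name are placeholders; assumes dr.Client
# has already been configured.
import datarobot as dr

project = dr.Project.get("YOUR_PROJECT_ID")
model = dr.Model.get(project.id, "YOUR_MODEL_ID")

# The external test set must include the target column (the actuals).
external = project.upload_dataset("external_test.csv")

# Analogous to clicking Run external test in the UI.
job = model.request_external_test(external.id)
job.wait_for_completion()

# Retrieve the metric scores computed against the actuals; attribute names
# may vary slightly by client version.
scores = dr.ExternalScores.list(project.id, model_id=model.id)
print(scores)
```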

Supply actual values for unsupervised projects

In unsupervised projects you must set an actuals column that identifies the outcome or future results to compare to predicted results. This provides a measure of accuracy for the event you are predicting on. The prediction dataset must contain the same columns as those in the training set with at least one column for known anomalies. Select the known anomaly column as the Actuals value.
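For the unsupervised case, the sketch below assumes the client accepts the known-anomaly column as an actual_value_column argument; treat that argument name as an assumption and confirm it against the Python client reference for your version.

```python
# Hedged sketch for an unsupervised (anomaly detection) project.
# The actual_value_column argument is an assumption -- verify the parameter
# name for your client version. Assumes dr.Client has been configured.
import datarobot as dr

project = dr.Project.get("YOUR_PROJECT_ID")
model = dr.Model.get(project.id, "YOUR_MODEL_ID")

external = project.upload_dataset("external_anomalies.csv")
job = model.request_external_test(external.id, actual_value_column="known_anomaly")
job.wait_for_completion()
```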

Compare insights with external test sets

Expand the Data Selection dropdown to select an external test set as if it were a partition in the original project data.

This option is available when using the following insights:

Note the following:

  • If a dataset has fewer than 10 rows, insights are not computed, but metric scores are still computed and displayed on the Leaderboard.
  • The ROC Curve is disabled for binary classification projects that have only a single class.

Predict on training data

Less commonly, you may want to download predictions for your original training data, which DataRobot imports automatically. From the dropdown, select the partition(s) to use when generating predictions.

For small datasets, predictions are calculated using stacked predictions and can therefore use all partitions. Because those calculations are too computationally expensive to run on large datasets (750MB and larger by default), predictions for large datasets are based on the validation and/or holdout partitions, as long as that data wasn't used in training.

Dropdown option behavior for small and large datasets:

All data
  • Small datasets: Predictions are calculated by doing stacked predictions on training, validation, and holdout partitions, regardless of whether they were used for training the model or if holdout has been unlocked.
  • Large datasets: Not available.

Validation and holdout
  • Small datasets: Predictions are calculated using the validation and holdout partitions. If validation was used in training, this option is disabled.
  • Large datasets: Predictions are calculated using the validation and holdout partitions. If validation was used in training or the project was created without a holdout partition, this option is not available.

Validation
  • Small and large datasets: If the project was created without a holdout partition, this option replaces the Validation and holdout option.

Holdout
  • Small datasets: Predictions are calculated using the holdout partition only. If holdout was used in training, this option is not available (only the All data option is valid).
  • Large datasets: Predictions are calculated using the holdout partition only. If holdout was used in training, predictions are not available for the dataset.

Note

For OTV projects, holdout predictions are generated using a model retrained on the holdout partition. If you upload the holdout as an external test dataset instead, the predictions are generated using the model from backtest 1. In this case, the predictions from the external test will not match the holdout predictions.

Select Compute predictions to generate predictions for the selected partition on the existing dataset. Select Download predictions to save results as a CSV.
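If you work through the API instead, the Python client's training predictions interface maps onto the same dropdown choices. The sketch below assumes the holdout subset and uses placeholder IDs; confirm the details against the client reference for your version.

```python
# Sketch: requesting training predictions for a chosen partition with the
# DataRobot Python client. Project and model IDs are placeholders; assumes
# dr.Client has already been configured.
import datarobot as dr
from datarobot.enums import DATA_SUBSET

project = dr.Project.get("YOUR_PROJECT_ID")
model = dr.Model.get(project.id, "YOUR_MODEL_ID")

# DATA_SUBSET mirrors the dropdown options described above, for example
# ALL, VALIDATION_AND_HOLDOUT, or HOLDOUT.
job = model.request_training_predictions(DATA_SUBSET.HOLDOUT)
training_preds = job.get_result_when_complete()

# Equivalent of Download predictions: save the results as a CSV file.
training_preds.download_to_csv("training_predictions.csv")
```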

Note

The Partition field of the exported results indicates the source partition name or the fold number of the cross-validation partition. The value -2 indicates the row was "discarded" (not used in training, validation, or holdout). This can happen when the target was missing, the partition column (for Date/Time-, Group-, or Partition Feature-partitioned projects) was missing, or smart downsampling was enabled and the row was discarded from the majority class as part of downsampling.
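As a quick sanity check on an exported file, the short pandas sketch below tallies rows by partition and filters out the discarded ones. The file name is a placeholder, and the Partition column name follows the description above; adjust both to match your export.

```python
# Sketch: inspect the Partition field of a downloaded predictions file.
# "training_predictions.csv" is a placeholder for whatever you exported.
import pandas as pd

preds = pd.read_csv("training_predictions.csv")

# Count rows per partition; -2 marks rows discarded from training,
# validation, and holdout (missing target, missing partition column,
# or rows dropped by smart downsampling).
print(preds["Partition"].astype(str).value_counts(dropna=False))

# Keep only rows that were actually assigned to a partition.
used = preds[preds["Partition"].astype(str) != "-2"]
```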

Stacked predictions

Without some kind of manipulation, predictions from training data would appear to have misleadingly high accuracy. To address this, DataRobot uses a technique called stacked predictions for the training dataset.

With stacked predictions, DataRobot builds multiple models on different subsets of the data. The prediction for any row is made using a model that excluded that data from training. In this way, each prediction is effectively an "out-of-sample" prediction. See the data partitioning overview for more detail explaining partitions and stacked predictions.
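DataRobot's stacked-prediction machinery is internal to the platform, but the general out-of-fold idea can be illustrated with scikit-learn's cross_val_predict, which likewise predicts each row with a model that never trained on that row.

```python
# Illustration only: out-of-fold predictions with scikit-learn. This is not
# DataRobot's implementation; it just demonstrates the stacked-prediction
# idea that every row is predicted by a model that did not train on it.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

X, y = make_regression(n_samples=500, n_features=10, random_state=0)

# With cv=5, each prediction comes from the fold in which the row was
# held out, so no prediction is made on data the model saw in training.
out_of_fold = cross_val_predict(Ridge(), X, y, cv=5)
print(out_of_fold[:5])
```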

Consider a sample of downloaded predictions:

DataRobot clearly labels the holdout partition; the validation partition is labeled as 0.

Why use training data for predictions?

Although less common, there are occasions when you want to make predictions on your original training dataset. The most common application is for large datasets: because running stacked predictions on large datasets is often too computationally expensive, the Make Predictions tab lets you download predictions using data from the validation and/or holdout partitions (as long as they weren't used in training).

Some sample use cases:

Clark the software developer needs to know the full distribution of his predictions, not just the mean. His dataset is large enough that stacked predictions are not available. Because he models weekly using the R API, he downloads holdout and validation predictions to his local machine and loads them into R to produce the report he needs.

Lois the data scientist wants to verify that she can reproduce model scores in DataRobot exactly as well as with an in-house metric. She partitions the data, specifying holdout during modeling. After modeling completes, she unlocks holdout, selects the top model, and computes and downloads predictions for just the holdout set. She then compares the predictions from that brief exercise to the results of her previous, months-long project.


Updated November 5, 2021