
Model recommendation process

As a result of the Autopilot modeling process (both full and Quick), one of the models—the most accurate individual, non-blender model—is selected and then prepared for deployment. Accuracy is based on the up-to-validation sample size (typically 64%). The resulting prepared model is marked with the Recommended for Deployment and Prepared for Deployment badges. You can also select any model from the Leaderboard and initiate the deployment preparation process.

The following describes the preparation process for projects that are not date/time partitioned; the time-aware recommendation process differs slightly.

  1. First, DataRobot calculates feature impact for the selected model and uses it to generate a reduced feature list.

  2. Next, the app retrains the selected model (typically 64% sample size) on the reduced feature list. If the new model performs better than the original model, DataRobot uses the new model for the next stage. Otherwise, the original model is used.

  3. DataRobot then retrains the selected model at an up-to-holdout sample size (typically 80%). As long as the sample is under the frozen threshold (1.5GB), the stage is not frozen.

  4. Finally, DataRobot retrains the selected model as a frozen run (hyperparameters are not changed from the up-to-holdout run) using a 100% sample size and selects it as Recommended for Deployment.

  5. If the project was run using Quick mode, the Recommended for Deployment model is also computed at a 16% sample size, allowing the Learning Curves graph to show the model across all preset sample sizes.

Depending on the size of the dataset, the insights for the recommended model are based either on the up-to-holdout model or, if DataRobot can use out-of-sample predictions, on the 100% recommended model.
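The staged process above can be sketched as a short, self-contained Python function. This is a toy illustration of the control flow described in steps 1–4 (feature reduction, a keep-the-better-model comparison, an up-to-holdout retrain, and a final frozen 100% run), not DataRobot's internal code; the `train`, `score`, and `feature_impact` callables are stand-ins.

```python
def recommend_for_deployment(train, score, feature_impact, all_features):
    """train(features, sample_pct, frozen) -> model dict; score(model) -> higher is better."""
    # Stage 1: compute feature impact and build a reduced feature list.
    impacts = feature_impact(all_features)
    reduced = [f for f in all_features if impacts[f] > 0]

    # Stage 2: retrain at the same (up-to-validation) sample size on the
    # reduced list; keep whichever model scores better.
    original = train(all_features, sample_pct=64, frozen=False)
    candidate = train(reduced, sample_pct=64, frozen=False)
    best = candidate if score(candidate) > score(original) else original

    # Stage 3: retrain the winner at the up-to-holdout sample size.
    holdout_model = train(best["features"], sample_pct=80, frozen=False)

    # Stage 4: frozen run at 100% -- hyperparameters carry over from the
    # up-to-holdout run rather than being re-tuned.
    return train(holdout_model["features"], sample_pct=100, frozen=True)


# Toy stand-ins: "training" just records its inputs; scoring prefers fewer
# features so the reduced list wins; impact flags "noise" features as useless.
def train(features, sample_pct, frozen):
    return {"features": list(features), "sample_pct": sample_pct, "frozen": frozen}

def score(model):
    return -len(model["features"])

def feature_impact(features):
    return {f: (0 if f.startswith("noise") else 1) for f in features}

final = recommend_for_deployment(train, score, feature_impact,
                                 ["age", "income", "noise_1", "noise_2"])
print(final)  # trained at 100%, frozen, on ["age", "income"]
```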

Prepare a model for deployment

Although Autopilot recommends and prepares a single model for deployment, you can initiate the Autopilot recommendation and deployment preparation stages for any Leaderboard model. To do so, select a model from the Leaderboard and navigate to Predict > Deploy.

Click Prepare for Deployment. DataRobot begins running the recommendation stages described above for the selected model (progress is shown in the right panel). In other words, DataRobot runs feature impact, retrains the model on a reduced feature list, retrains at a higher sample size, and then at the full sample size (for non date/time partitioned projects) or on the most recent data (for time-aware projects).

Once the process completes, DataRobot marks the new, final model built at 100% with the Prepared for Deployment badge. (The originally recommended model also maintains its badge.) From the Deploy tab of the original model, click Go to model to see the prepared model on the Leaderboard.

Click the new model's blueprint number to see the new feature list and sample sizes associated with the process:

If you return to the model that you made the original request from (for example, the 64% sample size) and access the Deploy tab, you'll see that it is now linked to the prepared model.
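The recommended model can also be retrieved programmatically. The sketch below assumes the `datarobot` Python client package is installed and that you have an API token and endpoint; the class and method names follow the public client, but verify them against your client version.

```python
def get_prepared_model(project_id, token, endpoint):
    """Fetch the model DataRobot marked Recommended for Deployment."""
    import datarobot as dr  # assumes the `datarobot` client package is installed

    dr.Client(token=token, endpoint=endpoint)
    recommendation = dr.ModelRecommendation.get(project_id)
    return recommendation.get_model()
```

Because this requires a live DataRobot endpoint, the function is shown as a definition only.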

Notes and considerations

  • When retraining the final Recommended for Deployment model at 100%, it is always executed as a frozen run. This makes model retraining faster, and also ensures that the 100% model uses the same settings as the 80% model.

  • If the model that is recommended for deployment has been trained into the validation set, DataRobot unlocks and displays the Holdout score for this model, but not for the other Leaderboard models. Holdout can be unlocked for the other models from the right panel.

  • If the model that is recommended for deployment has been trained into the validation set, or the project was created without a holdout partition, the ability to compute predictions using validation and holdout data is not available.

  • The heuristic logic of automatic model recommendation may differ across project types. For example, retraining a model on non-redundant features is implemented for regression and binary classification projects, while retraining a model at a higher sample size is implemented for regression, binary classification, and multiclass projects.

  • If you cancel a model that is being retrained at a higher sample size, or that retraining does not finish successfully, the model will not be a candidate for the Recommended for Deployment model.
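The first note above (frozen retraining at 100%) can be made concrete with a minimal sketch: an unfrozen run searches for hyperparameters, while a frozen run reuses the settings already found at 80%. The `tune` and `fit` functions are illustrative stand-ins, not DataRobot internals.

```python
def tune(grid):
    # Stand-in hyperparameter search: pick the largest depth in the grid.
    return {"max_depth": max(grid["max_depth"])}

def fit(sample_pct, params=None, grid=None):
    # A frozen run passes params explicitly; an unfrozen run tunes them,
    # which is the slow part the frozen run skips.
    if params is None:
        params = tune(grid)
    return {"sample_pct": sample_pct, "params": params}

tuned = fit(sample_pct=80, grid={"max_depth": [3, 5, 7]})    # unfrozen: search runs
frozen = fit(sample_pct=100, params=tuned["params"])          # frozen: settings carried over
assert frozen["params"] == tuned["params"]
```

This is why the 100% model is guaranteed to use the same settings as the 80% model: no search is re-run on the larger sample.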

Deprecated badges

Projects created prior to v6.1 may also have been tagged with the Most Accurate and/or Fast & Accurate badges. With improvements made to Autopilot automation, these badges are no longer necessary, but they remain visible on pre-v6.1 projects to which they were assigned. Contact your DataRobot representative for code snippets that can help transition automation built around the deprecated badges.

  • The model marked Most Accurate is typically, but not always, a blender. As the name suggests, it is the most accurate model on the Leaderboard, determined by a ranking of validation or cross-validation scores.

  • The Fast & Accurate badge, applicable only to non-blender models, is assigned to the model that is both the most accurate and is the fastest to make predictions. To evaluate, DataRobot uses prediction timing from:

    • a project’s holdout set.
    • a sample of the training data for a project without holdout.

    Not every project has a model tagged Fast & Accurate; no badge is assigned when no model's prediction time meets the minimum speed threshold determined by an internal algorithm.
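The deprecated badge rules above can be summarized in a short sketch: Most Accurate is the best-scoring model overall (often a blender), while Fast & Accurate is the best-scoring non-blender among models that meet a speed threshold, and may not exist. The threshold value and ranking here are illustrative, not DataRobot's actual algorithm.

```python
def assign_badges(models, speed_threshold_ms):
    """Each model: {"name", "score" (higher is better), "blender", "pred_ms"}."""
    most_accurate = max(models, key=lambda m: m["score"])
    fast_candidates = [m for m in models
                       if not m["blender"] and m["pred_ms"] <= speed_threshold_ms]
    # Fast & Accurate may be absent: no non-blender may meet the threshold.
    fast_accurate = max(fast_candidates, key=lambda m: m["score"], default=None)
    return most_accurate, fast_accurate

leaderboard = [
    {"name": "GBM Blender", "score": 0.91, "blender": True,  "pred_ms": 40},
    {"name": "XGBoost",     "score": 0.89, "blender": False, "pred_ms": 12},
    {"name": "ElasticNet",  "score": 0.84, "blender": False, "pred_ms": 3},
]
best, fast = assign_badges(leaderboard, speed_threshold_ms=20)
print(best["name"], fast["name"])  # GBM Blender XGBoost
```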

Updated December 11, 2021