Manage experiments¶
After modeling begins, DataRobot builds a model Leaderboard to help you explore and understand models and the data used to build them. From the Leaderboard you can also add or retrain models and create custom blueprints from an existing blueprint using built-in tasks and custom Python or R code.
Tiles on the left side of the experiment Leaderboard provide the tools for managing predictive experiments. They are described in the following table:
| Name | Description |
|---|---|
| Experiment setup | Opens the experiment setup summary page. |
| Data preview | Displays a more visual representation of the features in your dataset, including frequent values. |
| Features | Displays features in a table format alongside feature importance and summary statistics. Select specific features to view more detailed data insights than those shown on the Data preview tile. |
| Feature lists | Allows you to create new feature lists, manage existing ones, and retrain all the models in an experiment on a different feature list. |
| Data insights | Helps you track and visualize associations within your data using the Feature Associations insight. |
| Blueprint repository | Opens the library of modeling blueprints available for the selected experiment. |
| Model Leaderboard | Opens a list of all built models with overview information for each. Provides access to each model's available insights. |
| Experiment insights | Opens experiment-level insights for all models. |
| Model comparison | Opens a tool for comparing compatible models within or across experiments. Model comparison is not available for time-aware experiments; however, for Use Cases that include non-time-aware experiments, you can initiate a comparison from within a time-aware experiment. |
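Conceptually, the Model Leaderboard is a list of built models ranked by a validation metric (best-first). A minimal, self-contained sketch of that ranking, using hypothetical model records rather than the DataRobot API:

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Hypothetical overview record for one built model."""
    name: str                # model/blueprint label
    validation_score: float  # e.g., LogLoss, where lower is better

def rank_leaderboard(models, lower_is_better=True):
    """Return models ordered best-first by their validation score."""
    return sorted(models,
                  key=lambda m: m.validation_score,
                  reverse=not lower_is_better)

# Example: three models scored on the validation partition.
models = [
    ModelRecord("Light Gradient Boosting", 0.412),
    ModelRecord("Elastic-Net Classifier", 0.455),
    ModelRecord("Random Forest Classifier", 0.430),
]

for rank, m in enumerate(rank_leaderboard(models), start=1):
    print(f"{rank}. {m.name}: {m.validation_score}")
```

For metrics where higher is better (e.g., AUC), pass `lower_is_better=False` to reverse the ordering.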
Updated March 26, 2025