
Hyperparameter optimization workflow

Access this AI accelerator on GitHub

In machine learning, hyperparameter tuning is the practice of adjusting the "settings" (referred to as hyperparameters) of a machine learning blueprint or pipeline. Adjustable hyperparameters might include the learning rate of an XGBoost model, the activation function in a neural network, or the grouping limits in one-hot encoding for categorical features. Many tuning methods exist, the simplest being a brute-force search over a wide range of possible parameter value combinations. While this requires little effort from the human, it is extremely time-consuming for the machine, since each distinct combination of hyperparameter values requires fitting the blueprint again. For this reason, practitioners strive for more efficient ways to search for the best combination of hyperparameters.
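
To make that cost concrete, the short sketch below counts how many model fits a brute-force grid search would require. The grid is purely illustrative (a made-up XGBoost-style parameter grid), not one used in the Accelerator:

```python
import itertools

# A hypothetical search grid for an XGBoost-style learner (illustrative values only).
grid = {
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "max_depth": [3, 5, 7, 9],
    "subsample": [0.6, 0.8, 1.0],
    "colsample_bytree": [0.6, 0.8, 1.0],
}

# Every combination in the Cartesian product means refitting the blueprint once.
combinations = list(itertools.product(*grid.values()))
print(f"Brute-force search requires {len(combinations)} model fits")  # 4 * 4 * 3 * 3 = 144
```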

This AI Accelerator shows how to leverage open source optimization modules to further tune parameters in DataRobot blueprints. It builds on DataRobot's native hyperparameter tuning functionality by integrating the hyperopt module into the API workflow. The hyperopt module provides a particular Bayesian approach to parameter tuning known as the Tree-structured Parzen Estimator (TPE), though more generally this Accelerator should be seen as an example of how to use DataRobot's API to integrate with any parameter tuning optimization framework.
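
As a point of reference for how hyperopt frames the problem, the minimal sketch below defines a small search space and minimizes a stand-in objective with TPE. The parameter names, ranges, and evaluation budget are illustrative assumptions rather than values from the Accelerator; in the actual workflow, the objective would retrain a DataRobot model with the candidate values and return its validation score.

```python
from hyperopt import fmin, hp, tpe, Trials, STATUS_OK

# Illustrative search space; parameter names and ranges are assumptions.
space = {
    "learning_rate": hp.loguniform("learning_rate", -5, -1),  # roughly 0.007 to 0.37
    "max_depth": hp.quniform("max_depth", 3, 10, 1),
}

def objective(params):
    # In the Accelerator, this is where a DataRobot model would be retrained
    # with the candidate hyperparameters and its validation score returned.
    # A stand-in quadratic keeps this sketch self-contained and runnable.
    loss = (params["learning_rate"] - 0.1) ** 2 + (params["max_depth"] - 6) ** 2
    return {"loss": loss, "status": STATUS_OK}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=25, trials=trials)
print(best)
```

Because TPE proposes each new candidate based on the results of previous trials, it typically needs far fewer model fits than an exhaustive grid to reach a comparable score.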

You will explore the following:

  • Identify specific blueprints from a DataRobot project and review their hyperparameters through the API.
  • Define a search space and optimization algorithm with hyperopt.
  • Tune hyperparameters with hyperopt's Tree-structured Parzen Estimator (Bayesian) approach, as sketched after this list.
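
A condensed sketch of how these steps might fit together with the DataRobot Python client and hyperopt is shown below. The endpoint, token, project ID, parameter names, search ranges, and the assumption that a lower metric value is better are all placeholders, and the shape of the advanced-tuning responses is an assumption about the client's interface; the Accelerator notebook on GitHub is the authoritative version of this workflow.

```python
import datarobot as dr
from hyperopt import fmin, hp, tpe, Trials, STATUS_OK

# Connect to DataRobot (placeholder endpoint/token) and fetch an existing project and model.
dr.Client(endpoint="https://app.datarobot.com/api/v2", token="YOUR_API_TOKEN")
project = dr.Project.get("YOUR_PROJECT_ID")
model = project.get_models()[0]  # e.g., the top model on the Leaderboard

# Review the hyperparameters this blueprint exposes for advanced tuning.
tuning_info = model.get_advanced_tuning_parameters()
for param in tuning_info["tuning_parameters"]:
    print(param["parameter_name"], param["current_value"], param["constraints"])

# Illustrative hyperopt search space; the names must match parameters the
# blueprint actually exposes (the ones below are assumptions).
space = {
    "learning_rate": hp.loguniform("learning_rate", -5, -1),
    "colsample_bytree": hp.uniform("colsample_bytree", 0.5, 1.0),
}

def objective(params):
    # Run one advanced tuning session with the candidate values and return
    # the resulting validation error for hyperopt to minimize.
    session = model.start_advanced_tuning_session()
    for name, value in params.items():
        session.set_parameter(parameter_name=name, value=value)
    job = session.run()
    tuned_model = job.get_result_when_complete()
    metric = project.metric  # e.g., "LogLoss"; assumes lower is better
    loss = tuned_model.metrics[metric]["validation"]
    return {"loss": loss, "status": STATUS_OK}

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=10, trials=Trials())
print("Best hyperparameters found:", best)
```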

Updated April 23, 2024