# MLFlow experiment tracking

> MLFlow experiment tracking - Automate machine learning experimentation using DataRobot, MLFlow, and
> Papermill for tracking experiments and logging results.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-06T18:17:09.577278+00:00` (UTC).

## Primary page

- [MLFlow experiment tracking](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/exp-track-and-tune/mlflow.html): Full documentation for this topic (HTML).

## Related documentation

- [Developer documentation](https://docs.datarobot.com/en/docs/api/index.html): Linked from this page.
- [Developer learning](https://docs.datarobot.com/en/docs/api/dev-learning/index.html): Linked from this page.
- [AI accelerators](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/index.html): Linked from this page.
- [Experiment tracking and tuning](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/exp-track-and-tune/index.html): Linked from this page.

## Documentation content

[Access this AI accelerator on GitHub](https://github.com/datarobot-community/ai-accelerators/tree/main/advanced_ml_and_api_approaches/MLFLOW_w_datarobot_experiments/)

Experimentation is a core part of any machine learning developer's day-to-day work. For time series projects, the number of parameters and settings to tune in pursuit of the best model constitutes a vast search space on its own.

Many of the experiments in time series use cases are common and repeatable, so tracking them and logging their results is a task worth streamlining. Manual errors and time constraints can otherwise lead to the selection of suboptimal models, leaving better candidates undiscovered in the search space.

Integrating the DataRobot API, Papermill, and MLFlow automates machine learning experimentation, making it easier to run, more robust, and simpler to share.
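Inside the experiment notebook, results can be pushed to an MLflow tracking server along these lines. This is a minimal sketch assuming `mlflow` is installed; the experiment name, parameter keys, and metric keys are illustrative placeholders, not the accelerator's actual values:

```python
def log_experiment(params: dict, metrics: dict) -> None:
    """Record one DataRobot experiment's settings and scores in MLflow."""
    import mlflow  # deferred import; requires `pip install mlflow`

    # Illustrative experiment name; group all permutations under one experiment.
    mlflow.set_experiment("datarobot-ts-experiments")
    with mlflow.start_run():
        mlflow.log_params(params)    # e.g. the notebook's injected parameters
        mlflow.log_metrics(metrics)  # e.g. backtest error scores from DataRobot
```

Logging every permutation to one MLflow experiment lets you compare runs side by side in the MLflow UI instead of scanning notebook outputs by hand.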

You will use the [orchestration notebook](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced_ml_and_api_approaches/MLFLOW_w_datarobot_experiments/orchestration_notebook.ipynb) to design and run the [experiment notebook](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced_ml_and_api_approaches/MLFLOW_w_datarobot_experiments/experiment_notebook.ipynb), with the permutations of parameters handled automatically by DataRobot. At the end of the experiments, a copy of the experiment notebook, with its outputs, is available for each permutation for collaboration and reference.
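The orchestration pattern described above can be sketched as follows. This assumes Papermill is installed and the notebook paths exist; the parameter names and values are hypothetical placeholders, not the accelerator's actual settings:

```python
from itertools import product

# Hypothetical search space; the accelerator's real parameters differ.
param_grid = {
    "feature_derivation_window": [-14, -28],
    "forecast_distance": [7, 14],
}

# Expand the grid into one parameter dict per experiment permutation.
permutations = [
    dict(zip(param_grid, values)) for values in product(*param_grid.values())
]

def run_experiments(permutations):
    """Execute one parameterized copy of the experiment notebook per permutation."""
    import papermill as pm  # deferred import; requires `pip install papermill`

    for i, params in enumerate(permutations):
        pm.execute_notebook(
            "experiment_notebook.ipynb",
            f"runs/experiment_run_{i}.ipynb",  # saved copy with executed outputs
            parameters=params,
        )
```

Each output notebook keeps its executed cells, which is how the accelerator produces a shareable, reviewable copy per permutation.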

You can review [the dependencies](https://github.com/datarobot-community/ai-accelerators/blob/main/advanced_ml_and_api_approaches/MLFLOW_w_datarobot_experiments/requirements.txt) for the accelerator.

This accelerator covers the following activities:

- Acquiring a training dataset.
- Building a new DataRobot project.
- Deploying a recommended model.
- Scoring via Spark using DataRobot's exportable Java Scoring Code.
- Scoring via DataRobot's Prediction API.
- Reporting monitoring data to the MLOps agent framework in DataRobot.
- Writing results back to a new table.
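For the Prediction API step, scoring a deployment is an HTTP POST to the deployment's prediction endpoint. The sketch below uses only the standard library; the server hostname, deployment ID, and keys are placeholders you would replace with your own, and the endpoint path follows DataRobot's documented `predApi` format:

```python
import json
import urllib.request

def prediction_url(prediction_server: str, deployment_id: str) -> str:
    """Build the DataRobot Prediction API endpoint for a deployment."""
    return f"{prediction_server}/predApi/v1.0/deployments/{deployment_id}/predictions"

def score(rows, prediction_server, deployment_id, api_key, datarobot_key=None):
    """POST a JSON batch of rows to the deployment and return the predictions."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    if datarobot_key:  # required on DataRobot-managed prediction servers
        headers["DataRobot-Key"] = datarobot_key
    req = urllib.request.Request(
        prediction_url(prediction_server, deployment_id),
        data=json.dumps(rows).encode("utf-8"),
        headers=headers,
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["data"]
```

For large batches, DataRobot's Batch Prediction API or the exportable Scoring Code (as in the Spark step above) is generally a better fit than row-by-row real-time scoring.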
