# Fine-tune & deploy LLMs

> Fine-tune & deploy LLMs - Review an end-to-end workflow for fine-tuning and deploying an LLM using
> features of Hugging Face, Weights and Biases (W&B), and DataRobot.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-06T18:17:09.580923+00:00` (UTC).

## Primary page

- [Fine-tune & deploy LLMs](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/model-building-tuning/finetune-codespace.html): Full documentation for this topic (HTML).

## Sections on this page

- [Considerations](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/model-building-tuning/finetune-codespace.html#considerations): In-page section heading.

## Related documentation

- [Developer documentation](https://docs.datarobot.com/en/docs/api/index.html): Linked from this page.
- [Developer learning](https://docs.datarobot.com/en/docs/api/dev-learning/index.html): Linked from this page.
- [AI accelerators](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/index.html): Linked from this page.
- [Model building and fine-tuning](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/model-building-tuning/index.html): Linked from this page.

## Documentation content

[Access this AI accelerator on GitHub](https://github.com/datarobot-community/ai-accelerators/blob/main/generative_ai/fine-tuning-in-codespaces/Fine-tuning%20in%20DataRobot%20Codespaces.ipynb)

This accelerator illustrates an end-to-end workflow for fine-tuning and deploying an LLM using features of Hugging Face, Weights and Biases (W&B), and DataRobot.

Specifically, the accelerator walks you through the following steps:

- Downloading an LLM from the Hugging Face Hub.
- Acquiring a dataset from Hugging Face.
- Leveraging DataRobot codespaces, notebooks, and GPU resources to facilitate fine-tuning via Hugging Face and W&B.
- Leveraging DataRobot MLOps to register and deploy a model as an inference endpoint.
- Leveraging DataRobot's RAG playground to evaluate and compare your fine-tuned LLM against available LLMs.
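
The download and fine-tuning steps above can be sketched in Python. The dataset, prompt template, and training settings below are illustrative assumptions rather than the accelerator's exact choices, and the MLOps registration and playground comparison happen in DataRobot itself, so they are omitted:

```python
MODEL_ID = "meta-llama/Llama-3.2-1B"  # the small model this accelerator fine-tunes

def format_example(example):
    """Fold an instruction-style record into one training string (hypothetical template)."""
    return {"text": f"### Instruction:\n{example['instruction']}\n### Response:\n{example['response']}"}

def fine_tune(dataset_id="databricks/databricks-dolly-15k"):
    # Imports are deferred so this sketch only needs the ML stack when actually run.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    # Acquire a dataset from Hugging Face and fold each record into a prompt string.
    ds = load_dataset(dataset_id, split="train").map(format_example)
    ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                batched=True)

    args = TrainingArguments(
        output_dir="./out",
        num_train_epochs=1,
        report_to="wandb",  # stream training metrics to Weights and Biases
    )
    Trainer(
        model=model,
        args=args,
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()
```

Note that downloading Llama models from the Hugging Face Hub requires accepting the model's license and authenticating first.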

The accelerator uses Hugging Face as a common example that you can modify to fit your needs. It uses Weights and Biases to track your experiments: W&B lets you visualize training loss in real time and log prompt results for review during fine-tuning. If you decide to do hyperparameter tuning, you can do so with W&B Sweeps.
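
If you use Sweeps, a sweep is defined by a plain configuration dictionary and launched through the `wandb` client. The metric name and search ranges below are illustrative assumptions, not the accelerator's actual settings:

```python
# Hypothetical W&B sweep configuration: Bayesian search over two
# fine-tuning hyperparameters, minimizing training loss.
sweep_config = {
    "method": "bayes",
    "metric": {"name": "train/loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-5, "max": 5e-4},
        "per_device_train_batch_size": {"values": [2, 4, 8]},
    },
}

# To launch (requires a W&B login, and a `train` function that reads
# hyperparameters from wandb.config):
# import wandb
# sweep_id = wandb.sweep(sweep_config, project="llm-fine-tuning")
# wandb.agent(sweep_id, function=train, count=10)
```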

## Considerations

This accelerator has been tested in a DataRobot codespace with a GPU resource bundle. `requirements.txt` pins the versions of the required libraries.

Notebook images in DataRobot have limited writable space (about 20 GB). Therefore, checkpointing models during fine-tuning is discouraged; if you do checkpoint, limit the number of checkpoints you keep. This accelerator opts to fine-tune Llama-3.2-1B since it is on the smaller side.
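
One way to cap checkpoint disk usage is through Hugging Face `TrainingArguments`; the settings below are a sketch with illustrative values, not the accelerator's exact configuration:

```python
# Hypothetical checkpoint settings to stay under the ~20 GB writable cap.
checkpoint_kwargs = dict(
    output_dir="./checkpoints",
    save_strategy="steps",   # or "no" to skip checkpointing entirely
    save_steps=500,          # checkpoint infrequently
    save_total_limit=1,      # delete older checkpoints, keeping only the newest
)
# Usage: TrainingArguments(**checkpoint_kwargs, ...)
```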

Use Weights and Biases to track the experiment. The W&B API key is available in `.env`. If you don't have a W&B account, create one at the [W&B sign-up page](https://www.wandb.ai/).
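
To read the key without extra dependencies, a few lines of standard-library Python can parse the `.env` file; the `WANDB_API_KEY` variable name below is an assumption about how the key is stored:

```python
def parse_dotenv(text):
    """Parse KEY=VALUE lines from .env-style text into a dict,
    skipping blanks and comments and stripping surrounding quotes."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip("'\"")
    return env

# In the codespace, read the key and log in (hypothetical variable name):
# import wandb
# creds = parse_dotenv(open(".env").read())
# wandb.login(key=creds["WANDB_API_KEY"])
```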
