# GenAI workflow overview

> GenAI workflow overview - Review a generalized discussion of the generative LLM building
> workflow—vector databases, building and comparing LLM blueprints, and adding evaluation metrics.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-06T18:17:09.941115+00:00` (UTC).

## Primary page

- [GenAI workflow overview](https://docs.datarobot.com/en/docs/get-started/day0/genai-start/genai-workflow.html): Full documentation for this topic (HTML).

## Sections on this page

- [Get started](https://docs.datarobot.com/en/docs/get-started/day0/genai-start/genai-workflow.html#get-started): In-page section heading.
- [Create a vector database](https://docs.datarobot.com/en/docs/get-started/day0/genai-start/genai-workflow.html#create-a-vector-database): In-page section heading.
- [Build LLM blueprints](https://docs.datarobot.com/en/docs/get-started/day0/genai-start/genai-workflow.html#build-llm-blueprints): In-page section heading.
- [Chat and compare LLM blueprints](https://docs.datarobot.com/en/docs/get-started/day0/genai-start/genai-workflow.html#chat-and-compare-llm-blueprints): In-page section heading.
- [Use LLM evaluation tools](https://docs.datarobot.com/en/docs/get-started/day0/genai-start/genai-workflow.html#use-llm-evaluation-tools): In-page section heading.
- [Deploy an LLM](https://docs.datarobot.com/en/docs/get-started/day0/genai-start/genai-workflow.html#deploy-an-LLM): In-page section heading.
- [What's next?](https://docs.datarobot.com/en/docs/get-started/day0/genai-start/genai-workflow.html#whats-next): In-page section heading.

## Related documentation

- [Get started](https://docs.datarobot.com/en/docs/get-started/index.html): Linked from this page.
- [First time here?](https://docs.datarobot.com/en/docs/get-started/day0/index.html): Linked from this page.
- [Start with GenAI](https://docs.datarobot.com/en/docs/get-started/day0/genai-start/index.html): Linked from this page.
- [GenAI how-to](https://docs.datarobot.com/en/docs/get-started/how-to/genai-walk-basic.html): Linked from this page.
- [full documentation](https://docs.datarobot.com/en/docs/agentic-ai/index.html): Linked from this page.
- [creating a Use Case](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/usecases/build-usecase.html): Linked from this page.
- [adding a RAG playground](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/playground-overview.html#add-a-playground): Linked from this page.
- [add a vector database](https://docs.datarobot.com/en/docs/agentic-ai/vector-database/index.html): Linked from this page.
- [Vector databases can be versioned](https://docs.datarobot.com/en/docs/agentic-ai/vector-database/vector-versions.html#create-a-version): Linked from this page.
- [create an LLM blueprint](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/build-llm-blueprints.html): Linked from this page.
- [comparison tool](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/compare-llm.html): Linked from this page.
- [best practices for prompt engineering](https://docs.datarobot.com/en/docs/reference/gen-ai-ref/prompting-reference.html#best-practices-for-prompt-engineering): Linked from this page.
- [metrics and compliance tests](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/playground-eval-metrics.html): Linked from this page.
- [send it](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/deploy-llm.html): Linked from this page.
- [Registry workshop](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-model-workshop/index.html): Linked from this page.
- [Console](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html): Linked from this page.

## Documentation content

This section provides a generalized discussion of the generative LLM building workflow, which can include:

- Creating and versioning vector databases.
- Creating LLM blueprints.
- Chatting with and comparing LLM blueprints.
- Applying evaluation metrics and creating compliance tests.
- Preparing LLM blueprints for deployment.

> [!TIP]
> For a hands-on experience, try the [GenAI how-to](https://docs.datarobot.com/en/docs/get-started/how-to/genai-walk-basic.html).

See the [full documentation](https://docs.datarobot.com/en/docs/agentic-ai/index.html) for information on using your own data and LLMs, working with code instead of the UI, and working with NVIDIA NIM.

## Get started

It all begins by [creating a Use Case](https://docs.datarobot.com/en/docs/workbench/nxt-workbench/usecases/build-usecase.html) and [adding a RAG playground](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/playground-overview.html#add-a-playground). A playground is a dedicated LLM-focused experimentation environment within Workbench, where you can build, review, compare, evaluate, and deploy.

## Create a vector database

Once your playground is set up, optionally [add a vector database](https://docs.datarobot.com/en/docs/agentic-ai/vector-database/index.html). The role of the vector database is to enrich the prompt with relevant context before it is sent to the LLM. When creating a vector database, you:

- Choose a provider.
- Add data.
- Set a basic configuration and text chunking details.

[Vector databases can be versioned](https://docs.datarobot.com/en/docs/agentic-ai/vector-database/vector-versions.html#create-a-version) to make sure the most up-to-date data is available to ground LLM responses.
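
To make the retrieval step concrete, the sketch below shows what a vector database does for a prompt. It is a toy illustration only, not DataRobot's implementation: a bag-of-words "embedding" and cosine similarity stand in for a real embedding model and vector store, and the chunking is a simple fixed word count.

```python
import math
import re
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size word chunks (real chunkers also respect overlap and structure)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector. Real vector databases use neural embedding models."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Vector database": every chunk stored alongside its embedding (illustrative sample documents).
documents = [
    "Refund policy: refunds are accepted within 30 days of purchase with a receipt.",
    "Shipping normally takes 3 to 5 business days within the continental US.",
]
index = [(c, embed(c)) for doc in documents for c in chunk(doc)]

def enrich_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most similar chunks and prepend them as context before the prompt reaches the LLM."""
    q = embed(question)
    top = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)[:k]
    context = "\n".join(c for c, _ in top)
    return f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

print(enrich_prompt("What is the refund policy?"))
```

Versioning the data behind this index is what keeps the retrieved context (and therefore the grounded responses) current.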

## Build LLM blueprints

An LLM blueprint represents the full context needed to generate a response from an LLM; the resulting output is what you then compare within the playground.

When you click to [create an LLM blueprint](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/build-llm-blueprints.html), the playground opens. Select an LLM to get started and then set the configuration options.

In the configuration panel, optionally add a vector database and set the prompting strategy.

After you save, the new LLM blueprint is listed on the left.
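
Conceptually, an LLM blueprint is a named bundle of settings. The dataclass below is a hypothetical sketch of what such a bundle contains; the class name, field names, and values are illustrative, not DataRobot's schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMBlueprintConfig:
    """Illustrative only: the kind of settings an LLM blueprint bundles together."""
    name: str
    llm: str                                   # which foundation model to call
    vector_database_id: Optional[str] = None   # optional grounding data
    system_prompt: str = "You are a helpful assistant."
    temperature: float = 0.7
    max_completion_tokens: int = 512

# Two variants of the same use case, ready to be compared side by side in the playground.
grounded = LLMBlueprintConfig(
    name="support-bot-grounded",
    llm="example-llm",            # placeholder model identifier
    vector_database_id="vdb-123", # hypothetical vector database ID
    temperature=0.2,
)
ungrounded = LLMBlueprintConfig(name="support-bot-baseline", llm="example-llm")
```

Saving several such configurations is what gives the comparison step, described next, something to compare.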

## Chat and compare LLM blueprints

Once the LLM blueprint configuration is saved, try sending it prompts (RAG chatting) to determine whether further refinements are needed.

Then, add several blueprints and use the [comparison tool](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/compare-llm.html) to test them against the same prompt. This helps you pick the best LLM blueprint for deployment.

See the [best practices for prompt engineering](https://docs.datarobot.com/en/docs/reference/gen-ai-ref/prompting-reference.html#best-practices-for-prompt-engineering) when chatting and doing comparisons.
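
Comparison amounts to sending one prompt through several saved configurations and lining the answers up. A minimal, self-contained sketch, assuming a placeholder `generate()` function and illustrative configuration fields that stand in for calling each blueprint's LLM:

```python
# Each entry stands in for a saved LLM blueprint configuration (illustrative fields only).
blueprints = {
    "support-bot-grounded": {"llm": "example-llm", "temperature": 0.2, "vector_database_id": "vdb-123"},
    "support-bot-baseline": {"llm": "example-llm", "temperature": 0.7, "vector_database_id": None},
}

def generate(config: dict, prompt: str) -> str:
    """Placeholder: in practice this calls the configured LLM (and vector database, if any)."""
    return f"[{config['llm']} @ temp={config['temperature']}] answer to: {prompt}"

# Send the same prompt through every blueprint and line up the answers for review.
prompt = "How do I reset my password?"
for name, config in blueprints.items():
    print(f"--- {name} ---")
    print(generate(config, prompt))
```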

## Use LLM evaluation tools

Using [metrics and compliance tests](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/playground-eval-metrics.html), DataRobot monitors how models are used in production, intervening to block problematic outputs.

Add metrics before or after configuring LLM blueprints:

**Before:**

![Add evaluation metrics before configuring LLM blueprints](https://docs.datarobot.com/en/docs/images/gen-fund-9.png)

**After:**

![Add evaluation metrics after configuring LLM blueprints](https://docs.datarobot.com/en/docs/images/gen-fund-10.png)


Add evaluation datasets, or generate a synthetic dataset from within DataRobot, to create a systematic assessment of how well the model performs for its intended tasks.

Combine evaluation metrics and an evaluation dataset to automate the detection of compliance issues through test prompt scenarios. Use DataRobot-supplied evaluations or create your own.
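
To picture what an evaluation dataset plus a metric provides, the sketch below scores every prompt/expected-answer pair and flags failures. The keyword-recall metric, dataset contents, and pass threshold are illustrative stand-ins for the evaluations DataRobot supplies or that you define yourself.

```python
# Evaluation dataset: prompts paired with the facts a good answer must mention (illustrative).
eval_dataset = [
    {"prompt": "What is the refund window?", "expected_keywords": ["30 days", "receipt"]},
    {"prompt": "Who do I contact for support?", "expected_keywords": ["support@example.com"]},
]

def keyword_recall(answer: str, expected_keywords: list[str]) -> float:
    """Toy metric: fraction of expected facts that appear in the answer."""
    answer = answer.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in answer)
    return hits / len(expected_keywords)

def run_evaluation(generate, threshold: float = 0.8) -> None:
    """Score every test prompt and flag the ones that fall below the pass threshold."""
    for case in eval_dataset:
        answer = generate(case["prompt"])
        score = keyword_recall(answer, case["expected_keywords"])
        status = "PASS" if score >= threshold else "FAIL"
        print(f"{status}  score={score:.2f}  prompt={case['prompt']!r}")

# Any callable that turns a prompt into an answer can be evaluated, e.g. a stubbed responder:
run_evaluation(lambda prompt: "Refunds are accepted within 30 days with a receipt.")
```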

## Deploy an LLM

Once you are satisfied with the LLM blueprint, you can [send it](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/deploy-llm.html) to the Registry's workshop from the playground.

The [Registry workshop](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-model-workshop/index.html) is where you test the LLM custom model and ultimately deploy it to [Console](https://docs.datarobot.com/en/docs/workbench/nxt-console/index.html), a centralized hub for monitoring and model management.
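
Once deployed, the LLM is typically consumed over HTTP. The snippet below is a hypothetical sketch of querying a deployment with a chat-style payload; the URL, headers, and request shape shown here are placeholders, and the real values come from the deployment's integration instructions in DataRobot and vary by installation.

```python
import os
import requests  # standard HTTP client; everything deployment-specific below is a placeholder

# Hypothetical values: copy the actual endpoint and authentication details from your
# deployment's integration instructions in DataRobot.
DEPLOYMENT_URL = "https://example.datarobot.com/deployments/<deployment-id>/chat"
API_TOKEN = os.environ.get("DATAROBOT_API_TOKEN", "<your-api-token>")

response = requests.post(
    DEPLOYMENT_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={"messages": [{"role": "user", "content": "How do I reset my password?"}]},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```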

## What's next?

- Try it
- Watch it
- Build your own
