# Add a text generation NVIDIA NIM to a Playground

> Add a deployed text generation NVIDIA NIM to a blueprint in the playground to access an array of
> comparison and evaluation tools.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-06T18:17:09.558179+00:00` (UTC).

## Primary page

- [Add a text generation NVIDIA NIM to a Playground](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/add-deployed-nvidia-nim.html): Full documentation for this topic (HTML).

## Related documentation

- [Agentic AI](https://docs.datarobot.com/en/docs/agentic-ai/index.html): Linked from this page.
- [RAG workflows](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/index.html): Linked from this page.
- [Vector database](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/build-llm-blueprints.html#add-a-vector-database): Linked from this page.

## Documentation content

> [!NOTE] Premium
> The use of NVIDIA Inference Microservices (NIM) in DataRobot requires access to premium features for GenAI experimentation and GPU inference. Contact your DataRobot representative or administrator for information on enabling the required features.

In a Use Case, you can add NVIDIA Inference Microservices (NIM) to the playground for prompting, comparison, and evaluation. A [playground](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/index.html) is a Use Case asset for creating and interacting with LLM blueprints. LLM blueprints represent the full context for what is needed to generate a response from an LLM, captured in the LLM blueprint settings. Within the playground you compare LLM blueprint responses to determine which blueprint to use in production for solving a business problem.

> [!NOTE] Text generation NVIDIA NIM support in the playground
> The following text generation models aren't supported in the playground:
> 
> - llama-2-70b-chat
> - llama-3-swallow-70b-instruct-v0.1
> - llama-3-taiwan-70b-instruct
> - llama3-70b-instruct
> - llama-3.1-nemotron-ultra-253b-v1
> - llama-3.2-90b-vision-instruct
> - mixtral-8x22b-instruct-v01
> - nemotron-3-super-120b-a12b

To add a deployed text generation NVIDIA NIM to the playground:

1. In **Workbench**, select a Use Case from the **Use Case directory**, and open or create a playground on the **Playgrounds** tab.
2. On the **LLM blueprints** tab within a playground, click **Create LLM blueprint** to add a new blueprint. Then, from the playground's blueprint **Configuration** panel, in the **LLM** dropdown, click **Add deployed LLM**.
3. In the **Add deployed LLM** dialog box, enter a deployed LLM **Name**, then select a DataRobot deployment in the **Deployment name** dropdown. Enter the **Chat model ID** to set the `model` parameter for requests from the playground to the deployed LLM, then click **Validate and add**.

    > [!NOTE] Chat model ID value
    > The **Chat model ID** can be set to `datarobot-deployed-llm`, allowing the value to populate dynamically. To hard code the value, review the **Chat model ID list** table below, locate the NVIDIA NIM you're adding to the playground, and copy the value from the **Chat model ID** column.

4. After you add a custom LLM and validation is successful, back in the blueprint's **Configuration** panel, in the **LLM** dropdown, click **Deployed LLM**, and then select the **Validation ID** of the custom model you added.
5. Configure the **Vector database** and **Prompting** settings, and click **Save configuration** to add the blueprint to the playground.

**Chat model ID list**

For NIM model deployments, the Chat model ID can be set to `datarobot-deployed-llm`, allowing the value to populate dynamically. To hard code the chat model ID value, review the table below and copy the value from the Chat model ID column.
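To illustrate what the Chat model ID controls, the sketch below builds an OpenAI-style chat completion payload of the kind the playground sends to a deployed LLM. This is a hypothetical illustration, not DataRobot's actual client code; the `build_chat_request` helper and its prompt are assumptions, but the role of the `model` field matches the description above.

```python
def build_chat_request(chat_model_id: str, user_prompt: str) -> dict:
    """Build a chat completion payload; "model" carries the Chat model ID."""
    return {
        # "datarobot-deployed-llm" lets the value populate dynamically;
        # a hard-coded ID (e.g. "meta/llama-3.1-8b-instruct" from the
        # table below) pins requests to that specific NIM model.
        "model": chat_model_id,
        "messages": [{"role": "user", "content": user_prompt}],
    }

payload = build_chat_request("datarobot-deployed-llm", "Summarize our Q3 results.")
print(payload["model"])  # -> datarobot-deployed-llm
```

Hard coding the ID only changes the string passed as `chat_model_id`; the request shape stays the same.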

| Model name | Chat model ID |
| --- | --- |
| codellama-13b-instruct | codellama/codellama-13b-instruct |
| codellama-34b-instruct | codellama/codellama-34b-instruct |
| codellama-70b-instruct | codellama/codellama-70b-instruct |
| deepseek-r1-distill-llama-8b | deepseek-ai/deepseek-r1-distill-llama-8b |
| deepseek-r1-distill-qwen-7b | deepseek-ai/deepseek-r1-distill-qwen-7b |
| deepseek-r1-distill-qwen-14b | deepseek-ai/deepseek-r1-distill-qwen-14b |
| deepseek-r1-distill-qwen-32b | deepseek-ai/deepseek-r1-distill-qwen-32b |
| gemma-2-2b-instruct | google/gemma-2-2b-instruct |
| gemma-2-9b-it | google/gemma-2-9b-it |
| gpt-oss-120b | openai/gpt-oss-120b |
| gpt-oss-20b | openai/gpt-oss-20b |
| llama-2-13b-chat | meta/llama-2-13b-chat |
| llama-2-7b-chat | meta/llama-2-7b-chat |
| llama-3-sqlcoder-8b | defog/llama-3-sqlcoder-8b |
| llama-3.1-70b-instruct | meta/llama-3.1-70b-instruct |
| llama-3.1-8b-instruct | meta/llama-3.1-8b-instruct |
| llama-3.1-nemotron-nano-8b-v1 | nvidia/llama-3.1-nemotron-nano-8b-v1 |
| llama-3.1-nemotron-70b-instruct | nvidia/llama-3.1-nemotron-70b-instruct |
| llama-3.1-swallow-70b-instruct-v0.1 | tokyotech-llm/llama-3.1-swallow-70b-instruct-v0.1 |
| llama-3.2-1b-instruct | meta/llama-3.2-1b-instruct |
| llama-3.2-3b-instruct | meta/llama-3.2-3b-instruct |
| llama-3.2-11b-vision-instruct | meta/llama-3.2-11b-vision-instruct |
| llama-3.3-70b-instruct | meta/llama-3.3-70b-instruct |
| llama-3.3-nemotron-super-49b-v1 | nvidia/llama-3.3-nemotron-super-49b-v1 |
| llama-3.3-nemotron-super-49b-v1.5 | nvidia/llama-3-3-nemotron-super-49b-v1-5 |
| llama3-8b-instruct | meta/llama3-8b-instruct |
| mistral-7b-instruct-v0.3 | mistralai/mistral-7b-instruct-v0.3 |
| mistral-nemo-12b-instruct | mistral-nemo-12b-instruct |
| mistral-nemo-minitron-8b-8k-instruct | nv-mistralai/mistral-nemo-minitron-8b-8k-instruct |
| mixtral-8x7b-instruct-v01 | mistralai/mixtral-8x7b-instruct-v0.1 |
| nemotron-3-nano | nvidia/nemotron-3-nano |
| nemotron-3-super-120b-a12b | nvidia/nemotron-3-super-120b-a12b |
| phi-3-mini-4k-instruct | microsoft/phi-3-mini-4k-instruct |
| qwen-2.5-7b-instruct | qwen/qwen-2.5-7b-instruct |
| starcoder2-7b | bigcode/starcoder2-7b |
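If you script against several NIM deployments, the table above can be kept as a lookup. A minimal sketch (using a small excerpt of the table; the helper name is an assumption) that falls back to the dynamic value when a model isn't hard-coded:

```python
# Excerpt of the Chat model ID table above, keyed by model name.
CHAT_MODEL_IDS = {
    "gemma-2-9b-it": "google/gemma-2-9b-it",
    "llama-3.1-8b-instruct": "meta/llama-3.1-8b-instruct",
    "mixtral-8x7b-instruct-v01": "mistralai/mixtral-8x7b-instruct-v0.1",
}

def chat_model_id(model_name: str) -> str:
    # Fall back to the dynamic value for models not listed here.
    return CHAT_MODEL_IDS.get(model_name, "datarobot-deployed-llm")

print(chat_model_id("llama-3.1-8b-instruct"))  # -> meta/llama-3.1-8b-instruct
print(chat_model_id("some-other-model"))       # -> datarobot-deployed-llm
```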
