# Call prompts in an agent

> Call prompts in an agent - Employ the saved prompt in agent code; test the results in an agentic
> playground.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-06T18:17:09.560947+00:00` (UTC).

## Primary page

- [Call prompts in an agent](https://docs.datarobot.com/en/docs/agentic-ai/prompt-mgmt/prompt-in-agent.html): Full documentation for this topic (HTML).

## Related documentation

- [Agentic AI](https://docs.datarobot.com/en/docs/agentic-ai/index.html): Linked from this page.
- [Prompt management](https://docs.datarobot.com/en/docs/agentic-ai/prompt-mgmt/index.html): Linked from this page.
- [Customize agents](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-development.html#modify-agent-prompts): Linked from this page.

## Documentation content

Prompt templates are reusable prompts created in DataRobot, often containing placeholder variables (like `{{ topic }}`) to be defined in the agent code. The prompt template itself is stored in DataRobot and can have multiple versions, allowing for prompt iteration and fine-tuning without changing the agent code.

> [!NOTE] Token limits
> Prompt templates count towards the LLM's maximum completion token limit.

In the agent code, the framework gets the template through the DataRobot API and provides the variable values. These variables are then substituted into the template text to create the final prompt sent to the LLM. Each framework template handles initial input formatting differently, so the method for using DataRobot prompt templates varies. Before modifying the `myagent.py` file, create a prompt template in DataRobot and note the template ID. If needed, also note the version ID of the specific prompt template version to use. Then, in the `myagent.py` file, modify the appropriate method or property in the `MyAgent` class based on the framework.

The examples below show modifications to the existing framework templates in this repository. Each example assumes a prompt template exists in DataRobot containing a `{{ topic }}` variable. For example, the prompt template might be: `Write an article about {{ topic }} in 1997.` The user prompt sent to the agent is combined with this template by substituting the user input into the `{{ topic }}` variable.
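Conceptually, this substitution is Jinja-style placeholder replacement. The sketch below is a minimal stand-in for the SDK's rendering step (the regex and the `render_template` helper are illustrative, not the DataRobot API):

```python
import re

def render_template(template: str, **variables: str) -> str:
    """Substitute {{ name }} placeholders with provided values (conceptual sketch)."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing value for template variable '{name}'")
        return variables[name]
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", replace, template)

template = "Write an article about {{ topic }} in 1997."
print(render_template(template, topic="space travel"))
# Write an article about space travel in 1997.
```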

**LangGraph:**
LangGraph uses a `prompt_template` property that returns a `ChatPromptTemplate`. Add `import datarobot as dr` at the top of the file, then modify this property to use DataRobot prompt templates:

```python
# Added to imports
import datarobot as dr

# Modified in MyAgent class
@property
def prompt_template(self) -> ChatPromptTemplate:
    prompt_template = dr.genai.PromptTemplate.get("PROMPT_TEMPLATE_ID")
    prompt_template_version = prompt_template.get_latest_version()
    # To use a specific version instead: 
    # prompt_template_version = prompt_template.get_version("PROMPT_VERSION_ID")
    # Convert {{ variable }} format to {variable} format for LangGraph's ChatPromptTemplate
    # The {topic} variable is filled by the framework at runtime
    prompt_text = prompt_template_version.to_fstring()
    return ChatPromptTemplate.from_messages(
        [
            (
                "user",
                prompt_text,
            ),
        ]
    )
```

Replace the prompt template ID (`"PROMPT_TEMPLATE_ID"`) with the appropriate template ID from DataRobot. The example uses `get_latest_version()` to automatically use the latest version without redeployment.

This example uses `to_fstring()` to convert the template's `{{ topic }}` variable to `{topic}` format, which LangGraph's `ChatPromptTemplate` fills at runtime. If the variables in the prompt template change across versions (for example, if a new version uses `{{ subject }}` instead of `{{ topic }}`), update this code to handle all variables appropriately; otherwise, the code may break when fetching a new version.
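The `{{ topic }}` to `{topic}` conversion, along with a way to inspect which variables a fetched version actually contains, can be sketched as follows (hypothetical helpers approximating what `to_fstring()` is described to do; not the SDK itself):

```python
import re

PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def to_fstring_style(template: str) -> str:
    """Convert {{ name }} placeholders to {name} f-string format."""
    return PLACEHOLDER.sub(r"{\1}", template)

def template_variables(template: str) -> set[str]:
    """Collect placeholder names so agent code can validate a newly fetched version."""
    return set(PLACEHOLDER.findall(template))

text = "Write an article about {{ topic }} in 1997."
print(to_fstring_style(text))    # Write an article about {topic} in 1997.
print(template_variables(text))  # {'topic'}
```

Checking `template_variables(...)` right after fetching the latest version lets the agent fail fast when a newer version renames a variable, rather than sending a malformed prompt to the LLM.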

> [!NOTE] Multi-agent workflows
> The `prompt_template` property is used for the initial user input. In the [Agentic Starter](https://github.com/datarobot-community/datarobot-agent-application) and related templates, each graph node is built with LangChain's [create_agent](https://python.langchain.com/docs/modules/agents/) and a `system_prompt` argument (often wrapped with `make_system_prompt` from `datarobot_genai`). To ensure all agents follow the prompt template instructions, incorporate the formatted template text into each node's `system_prompt` (not the `prompt=` parameter used by some other LangGraph examples such as `langgraph.prebuilt.create_react_agent`). See [Customize agents](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-development.html#modify-agent-prompts) (Modify agent prompts, LangGraph tab) for the API used in DataRobot templates.

**LlamaIndex:**
LlamaIndex uses a `make_input_message` method that returns a string. Add `import datarobot as dr` at the top of the file, then modify this method to use DataRobot prompt templates:

```python
# Added to imports
import datarobot as dr

# Modified in MyAgent class
def make_input_message(self, completion_create_params: Any) -> str:
    user_prompt_content = extract_user_prompt_content(completion_create_params)
    prompt_template = dr.genai.PromptTemplate.get("PROMPT_TEMPLATE_ID")
    prompt_template_version = prompt_template.get_latest_version()
    # To use a specific version instead: 
    # prompt_template_version = prompt_template.get_version("PROMPT_VERSION_ID")
    # Render the prompt template with variables (assumes {{ topic }} in the template)
    prompt_text = prompt_template_version.render(topic=user_prompt_content)
    return prompt_text
```

Replace the prompt template ID (`"PROMPT_TEMPLATE_ID"`) with the appropriate template ID from DataRobot. The example uses `get_latest_version()` to automatically use the latest version without redeployment.

This example assumes the prompt template contains a `{{ topic }}` variable. If the variables in the prompt template change across versions (for example, if a new version uses `{{ subject }}` instead of `{{ topic }}`), update this code to handle all variables appropriately; otherwise, the code may break when fetching a new version.
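One way to guard against that is to validate the template's variables before rendering, so a renamed variable in a newer version fails loudly instead of producing a broken prompt. The `safe_render` helper below is an illustrative sketch (plain Python, not the SDK's `render()`):

```python
import re

PLACEHOLDER = re.compile(r"\{\{\s*(\w+)\s*\}\}")

def safe_render(template_text: str, **variables: str) -> str:
    """Render only if the template's placeholders match the values the agent supplies."""
    found = set(PLACEHOLDER.findall(template_text))
    if found != set(variables):
        raise ValueError(
            f"template expects {sorted(found)}, agent supplies {sorted(variables)}"
        )
    return PLACEHOLDER.sub(lambda m: variables[m.group(1)], template_text)

# A newer version that renamed {{ topic }} to {{ subject }} fails fast:
# safe_render("Write about {{ subject }}.", topic="AI agents")  -> ValueError
print(safe_render("Write an article about {{ topic }} in 1997.", topic="AI agents"))
# Write an article about AI agents in 1997.
```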

> [!NOTE] Multi-agent workflows
> The `make_input_message` method affects only the initial input message. Each agent has its own `system_prompt` property. To ensure all agents follow the prompt template instructions, incorporate the formatted prompt template into each agent's `system_prompt` property.

**CrewAI:**
CrewAI uses agent properties (`goal`, `backstory`) that can contain prompt templates. Add `import datarobot as dr` at the top of the file, then modify agent properties to use DataRobot prompt templates:

```python
# Added to imports
import datarobot as dr

# Modified in MyAgent class
@property
def agent_planner(self) -> Agent:
    prompt_template = dr.genai.PromptTemplate.get("PROMPT_TEMPLATE_ID")
    prompt_template_version = prompt_template.get_latest_version()
    # To use a specific version instead: 
    # prompt_template_version = prompt_template.get_version("PROMPT_VERSION_ID")
    # For properties that use {topic} (f-string format), use to_fstring()
    prompt_text = prompt_template_version.to_fstring()

    return Agent(
        role="Planner",
        goal=f"Plan engaging and factually accurate content on {{topic}}. {prompt_text}",
        backstory=f"You're working on planning a blog article about the topic: {{topic}}. {prompt_text} "
        "You collect information that helps the audience learn something and make informed decisions. "
        "Your work is the basis for the Content Writer to write an article on this topic.",
        # ... other properties
    )
```

Replace the prompt template ID (`"PROMPT_TEMPLATE_ID"`) with the appropriate template ID from DataRobot. The example uses `get_latest_version()` to automatically use the latest version without redeployment.

This example modifies `agent_planner` to use the prompt template in its `goal` and `backstory` properties. Since these properties use `{topic}` (f-string format that CrewAI will fill at runtime), the example uses `to_fstring()` to convert `{{ topic }}` to `{topic}` format so CrewAI can replace it with the user's input.

This example assumes the prompt template contains a `{{ topic }}` variable. If the variables in the prompt template change across versions (for example, if a new version uses `{{ subject }}` instead of `{{ topic }}`), update this code to handle all variables appropriately; otherwise, the code may break when fetching a new version.

> [!NOTE] Multi-agent workflows
> Apply prompt templates to each agent's `goal` or `backstory` properties where you want the instructions to be followed. For properties that use `{topic}`, use `to_fstring()`. For plain text properties, use `render()`.
