Develop agentic workflows with templates¶
This tutorial explains how to create and run an agent that interacts with the chat interface.
Premium feature
The agent workflow template is a premium feature. Contact your DataRobot representative or administrator for information on enabling the feature.
Feature flags: Enable GenAI Agentic Workflow, Enable Agentic Workflow Tracing, Enable Agentic Workflow in MLOps, and Enable LLM Gateway
Prerequisites¶
Before proceeding, install the following required tools and review the considerations below. If you experience issues while running this workflow, see the troubleshooting section.
- uv fast Python package installer.
- Task task runner and build tool.
- Pulumi CLI for defining and managing cloud infrastructure with common programming languages.
Run an agent¶
The first step when developing an agent locally is to clone the DataRobot agentic templates repository.
Copy and rename the provided sample environment file (.env.sample), and provide the necessary environment variables as part of the file.
cp .env.sample .env
- Configure DATAROBOT_DEFAULT_EXECUTION_ENVIRONMENT as follows. This defaults to the latest stable execution environment; using the name auto-selects the most up-to-date environment by resolving the latest ID with Pulumi.
  DATAROBOT_DEFAULT_EXECUTION_ENVIRONMENT="[DataRobot] Python 3.11 GenAI Agents"
- Define and provide a PULUMI_CONFIG_PASSPHRASE. The passphrase can be any string.
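For reference, a filled-in .env might look like the following (the token and passphrase values are placeholders, not real credentials; the endpoint shown matches the app.datarobot.com URL used later in this tutorial):

```
DATAROBOT_API_TOKEN=<your-api-token>
DATAROBOT_ENDPOINT=https://app.datarobot.com/api/v2
DATAROBOT_DEFAULT_EXECUTION_ENVIRONMENT="[DataRobot] Python 3.11 GenAI Agents"
PULUMI_CONFIG_PASSPHRASE=any-string-you-choose
```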
Next, run quickstart.py and select the type of agent you want to create.
uv run quickstart.py
After running quickstart, prepare the virtual environment so that you can install the additional libraries needed to run the agent.
task setup
Now you can customize the code of your agent in the {{ agent_name }}/custom_model folder or customize its environment in {{ agent_name }}/docker_context.
To test the code locally, use the following command:
task agent:cli -- execute --user_prompt 'Hi, how are you?'
You can also send a structured query as a prompt if your agent requires it.
task agent:cli -- execute --user_prompt '{"topic":"Generative AI"}'
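When passing a structured prompt, it can be easier to generate the JSON string programmatically than to hand-escape quotes inside the shell command. A minimal sketch in Python (the `topic` key is just the example shown above):

```python
import json
import shlex

# Build the structured prompt as a JSON string
prompt = json.dumps({"topic": "Generative AI"})

# Quote it safely for use as a single shell argument
command = f"task agent:cli -- execute --user_prompt {shlex.quote(prompt)}"
print(command)
```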
After testing, you can experiment with the agent in DataRobot.
task build
Pulumi lists the resources that it is going to create. When you are ready for the resources to be built, press Enter to continue.
Allow time for Pulumi to create an execution environment. Once created, Pulumi reports a custom model ID and a chat interface endpoint for it.
Agent Chat Completion Endpoint [agent_generic_base]: "https://app.datarobot.com/api/v2/genai/agents/fromCustomModel/683ed1fcd767c535b580bc9d/chat/"
Below is an example Python script that you can copy to your project and use to interact with the agent. You can also navigate to the agentic playground in the DataRobot application for UI-based experimentation.
# /// script
# dependencies = [
#     "requests",
#     "dotenv",
# ]
# ///
import os
import time

import dotenv
import requests

dotenv.load_dotenv(".env")

CUSTOM_MODEL_ID = "68408115b1d5764be0b15389"

# Make a chat completion request to /genai/agents/fromCustomModel/{id}/chat/
headers = {
    "Authorization": f"Bearer {os.environ['DATAROBOT_API_TOKEN']}",
    "Content-Type": "application/json",
}
data = {
    "messages": [
        {"role": "user", "content": "Hi, how are you"}
    ]
}
response = requests.post(
    f"{os.environ['DATAROBOT_ENDPOINT']}/genai/agents/fromCustomModel/{CUSTOM_MODEL_ID}/chat/",
    headers=headers,
    json=data,
)

# The initial request failed or did not return a polling location
if not response.ok or not response.headers.get("Location"):
    raise Exception(response.content)

# Poll the status endpoint until the agent completes
status_location = response.headers["Location"]
while response.ok:
    time.sleep(1)
    response = requests.get(status_location, headers=headers, allow_redirects=False)
    if response.status_code == 303:
        # The job finished; fetch the agent response from the redirect target
        agent_response = requests.get(response.headers["Location"], headers=headers).json()
        break
    else:
        status_response = response.json()
        if status_response["status"] in ["ERROR", "ABORTED"]:
            raise Exception(status_response)
else:
    raise Exception(response.content)

# Show the agent response
if "errorMessage" in agent_response and agent_response["errorMessage"]:
    print("Error message:")
    print(agent_response["errorMessage"])
    print("Error details:")
    print(agent_response["errorDetails"])
elif "choices" in agent_response:
    print(agent_response["choices"][0]["message"]["content"])
else:
    print(agent_response)
Run the Python script with uv.
uv run test_agent.py
Allow time for the script to run as it prepares a new execution environment.
If the script returns errors from your agent code, you can modify the code locally, run task build, and retry the query. This syncs the changes to your custom model, preparing it for a new request.
Experimentation mode does not require you to redeploy the custom model to see changes when you interact with the /chat interface; file changes are synced to the running session.
When you change the agent's execution environment, allow time for DataRobot to restart the environment for a chat request.
Deploy an agent¶
When you have finished experimenting, you can deploy the agent.
task deploy
Check that the agent works correctly by running the following command to receive an agent response.
task agent:cli -- execute-deployment --deployment_id %your_id% --user_prompt 'Hi how are you?'
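Outside the CLI, you can also call the deployed agent directly over HTTP. DataRobot deployments expose an OpenAI-compatible chat completions route; the sketch below only builds the request rather than sending it. The `build_chat_request` helper name and the exact `chat/completions` path are assumptions inferred from the endpoints shown earlier, so verify them against your deployment's API documentation before use:

```python
import json


def build_chat_request(endpoint: str, deployment_id: str, api_token: str, user_prompt: str):
    """Build the URL, headers, and JSON body for a deployment chat request.

    Note: the deployments/{id}/chat/completions route is an assumption based
    on DataRobot's OpenAI-compatible deployment API; confirm it for your account.
    """
    url = f"{endpoint}/deployments/{deployment_id}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    body = {"messages": [{"role": "user", "content": user_prompt}]}
    return url, headers, body


url, headers, body = build_chat_request(
    "https://app.datarobot.com/api/v2",
    "YOUR_DEPLOYMENT_ID",  # hypothetical placeholder
    "YOUR_API_TOKEN",      # hypothetical placeholder
    "Hi, how are you?",
)
print(url)
print(json.dumps(body))
```

You could pass the resulting `url`, `headers`, and `body` to `requests.post(url, headers=headers, json=body)` once the placeholders are replaced with real values.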
Troubleshooting¶
In some cases, the LLM gateway may fail to connect to the model, or you may be unable to connect to the gateway or to DataRobot. To fix this:
- Ensure you have access to the model. If you specified a model you don't have access to (or a retired model), you can connect to the gateway, but then the action fails with a "no model access" error.
- Ensure you have enabled the "Enable LLM Gateway" feature flag and that the ENABLE_LLM_GATEWAY_INFERENCE runtime parameter is provided in the model-metadata.yaml file and set to true.
- If you are using a non-gateway model, run task deploy once, even if it fails, to get your LLM deployment running.