# Configure LLM providers in code

> Configure LLM providers in code - Learn how to configure different LLM providers for your agentic
> workflows including DataRobot Gateway, external APIs, and custom deployments.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.224523+00:00` (UTC).

## Primary page

- [Configure LLM providers in code](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html): Full documentation for this topic (HTML).

## Sections on this page

- [DataRobot LLM gateway](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html#datarobot-llm-gateway): In-page section heading.
- [DataRobot hosted LLM deployments](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html#datarobot-hosted-llm-deployments): In-page section heading.
- [DataRobot NIM deployments](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html#datarobot-nim-deployments): In-page section heading.
- [OpenAI API configuration](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html#openai-api-configuration): In-page section heading.
- [Anthropic API configuration](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html#anthropic-api-configuration): In-page section heading.
- [Gemini API configuration](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html#gemini-api-configuration): In-page section heading.
- [Connect to other providers](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html#connect-to-other-providers): In-page section heading.
- [Review framework documentation](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html#framework-documentation): In-page section heading.
- [Use LiteLLM for universal connectivity](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers.html#use-litellm-for-universal-connectivity): In-page section heading.

## Related documentation

- [Agentic AI](https://docs.datarobot.com/en/docs/agentic-ai/index.html): Linked from this page.
- [Build](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/index.html): Linked from this page.
- [DataRobot LLM gateway](https://docs.datarobot.com/en/docs/agentic-ai/genai-code/dr-llm-gateway.html): Linked from this page.
- [Configure LLM providers with metadata](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers-metadata.html): Linked from this page.
- [Deploy an LLM from the DataRobot Playground](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/deploy-llm.html): Linked from this page.
- [Hugging Face models as LLM deployments on DataRobot](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-model-workshop/nxt-open-source-textgen-template.html): Linked from this page.
- [Predictions](https://docs.datarobot.com/en/docs/agentic-ai/genai-integrations/nvidia-ngc-nim-import.html#make-predictions-with-the-deployed-nvidia-nim): Linked from this page.
- [DataRobot NIM documentation](https://docs.datarobot.com/en/docs/agentic-ai/genai-integrations/genai-nvidia-integration.html): Linked from this page.

## Documentation content

# Configure LLM providers in code

One of the key components of an LLM agent is the underlying LLM provider. DataRobot allows users to connect to virtually any LLM backend for their agentic workflows. LLM connections can be simplified by using the DataRobot LLM gateway or a DataRobot deployment (including NIM deployments). Alternatively, you can connect to any external LLM provider that supports the OpenAI API standard.

DataRobot agent templates provide multiple methods for defining an agent LLM:

- Use the DataRobot LLM gateway as the agent LLM, allowing you to use any model available in the gateway.
- Connect to a previously deployed custom model or NIM through the DataRobot API by providing the deployment ID.
- Connect directly to an LLM provider API (such as OpenAI, Anthropic, or Gemini) by providing the necessary API credentials, enabling access to providers supporting a compatible API.

This document focuses on configuring LLM providers by manually creating LLM instances directly in your `myagent.py` file. This approach gives you fine-grained control over LLM initialization and is shown in the framework-specific examples below.

> [!NOTE] Alternative configuration method
> If you prefer to configure LLM providers using environment variables and Pulumi (infrastructure-level configuration), see [Configure LLM providers with metadata](https://docs.datarobot.com/en/docs/agentic-ai/agentic-develop/agentic-llm-providers-metadata.html).

The following sections provide example code snippets for connecting to various LLM providers using the CrewAI, LangGraph, LlamaIndex, and NAT (NVIDIA NeMo Agent Toolkit) frameworks. You can use these snippets as a starting point and modify them as needed to fit your specific use case.

## DataRobot LLM gateway

The LLM gateway provides a streamlined way to access LLMs proxied through DataRobot. The gateway is available to both cloud and on-premises users.

You can retrieve a list of available models for your account using the following methods:

**cURL:**
```
curl -X GET -H "Authorization: Bearer $DATAROBOT_API_TOKEN" "$DATAROBOT_ENDPOINT/genai/llmgw/catalog/" | jq '[.data[] | select(.isActive == true) | .model]'
```

**Python SDK:**
```
from datarobot.models.genai import LLMGatewayCatalog
print("\n".join(LLMGatewayCatalog.get_available_models()))
```
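The `jq` filter in the cURL example keeps only active models. The same filtering can be sketched in plain Python against the catalog JSON; the response shape assumed here (`data` entries with `model` and `isActive` keys) mirrors the `jq` expression above:

```python
# Filter active model names from an LLM gateway catalog payload.
# Assumes the shape used by the jq filter above: {"data": [{"model": ..., "isActive": ...}]}
def active_models(catalog: dict) -> list[str]:
    """Return the names of models marked active in the catalog payload."""
    return [entry["model"] for entry in catalog.get("data", []) if entry.get("isActive")]

# Illustrative payload, not a real catalog response
catalog = {
    "data": [
        {"model": "azure/gpt-5-mini-2025-08-07", "isActive": True},
        {"model": "azure/gpt-4o-mini", "isActive": True},
        {"model": "vertex_ai/gemini-1.0-pro", "isActive": False},
    ]
}
print(active_models(catalog))  # ['azure/gpt-5-mini-2025-08-07', 'azure/gpt-4o-mini']
```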


The following code examples demonstrate how to programmatically connect to the DataRobot LLM gateway in the CrewAI, LangGraph, and LlamaIndex frameworks. These samples show how to configure the model, API endpoint, and authentication.

> [!NOTE] Model format for LLM gateway
> When using the DataRobot LLM gateway, the model name format is `datarobot/<provider>/<model>` (e.g., `datarobot/azure/gpt-5-mini-2025-08-07`).
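As a sanity check on that format, a small hypothetical helper (not part of the templates) can assemble gateway model strings:

```python
def gateway_model(provider: str, model: str) -> str:
    """Build a DataRobot LLM gateway model string: datarobot/<provider>/<model>."""
    if not provider or not model:
        raise ValueError("provider and model are both required")
    return f"datarobot/{provider}/{model}"

print(gateway_model("azure", "gpt-5-mini-2025-08-07"))  # datarobot/azure/gpt-5-mini-2025-08-07
```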

**CrewAI:**
```
from crewai import LLM

def llm(self) -> LLM:
    """Returns a CrewAI LLM instance configured to use DataRobot's LLM gateway."""
    return LLM(
        model="datarobot/azure/gpt-5-mini-2025-08-07",  # Define the model name you want to use (format: datarobot/<provider>/<model>)
        # Note: The `/chat/completions` endpoint will be automatically appended by LiteLLM
        api_base="https://app.datarobot.com",  # DataRobot endpoint
        api_key=self.api_key,  # Your DataRobot API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LangGraph:**
```
from langchain_community.chat_models import ChatLiteLLM

def llm(self) -> ChatLiteLLM:
    """Returns a ChatLiteLLM instance configured to use DataRobot's LLM gateway."""
    return ChatLiteLLM(
        model="datarobot/azure/gpt-5-mini-2025-08-07",  # Define the model name you want to use (format: datarobot/<provider>/<model>)
        # Note: The `/chat/completions` endpoint will be automatically appended by LiteLLM
        api_base="https://app.datarobot.com",  # DataRobot endpoint
        api_key=self.api_key,  # Your DataRobot API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LlamaIndex:**
```
# DataRobotLiteLLM class is included in the `myagent.py` file

def llm(self) -> DataRobotLiteLLM:
    """Returns a DataRobotLiteLLM instance configured to use DataRobot's LLM gateway."""
    return DataRobotLiteLLM(
        model="datarobot/azure/gpt-5-mini-2025-08-07",  # Define the model name you want to use (format: datarobot/<provider>/<model>)
        # Note: The `/chat/completions` endpoint will be automatically appended by LiteLLM
        api_base="https://app.datarobot.com",  # DataRobot endpoint
        api_key=self.api_key,  # Your DataRobot API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**NAT:**
In NAT templates, LLMs are configured in the `workflow.yaml` file. To use the DataRobot LLM gateway, define an LLM in the `llms` section:

```
llms:
  datarobot_llm:
    _type: datarobot-llm-gateway
    model_name: azure/gpt-4o-mini  # Define the model name you want to use
    temperature: 0.0
```

Then, specify the LLM each agent should use by setting `llm_name` in that agent's definition in the `functions` section:

```
functions:
  planner:
    _type: chat_completion
    llm_name: datarobot_llm  # Reference the LLM defined above
    system_prompt: |
      You are a content planner...
```

If more than one LLM is defined in the `llms` section, the various `functions` can use different LLMs to suit the task.
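For example, a `workflow.yaml` fragment with two gateway-backed LLMs routed to different functions might look like this (the model names and function names are illustrative):

```yaml
llms:
  fast_llm:
    _type: datarobot-llm-gateway
    model_name: azure/gpt-4o-mini  # Cheaper model for planning
    temperature: 0.0
  strong_llm:
    _type: datarobot-llm-gateway
    model_name: azure/gpt-5-mini-2025-08-07  # Stronger model for writing
    temperature: 0.2

functions:
  planner:
    _type: chat_completion
    llm_name: fast_llm  # Planning uses the cheaper model
    system_prompt: |
      You are a content planner...
  writer:
    _type: chat_completion
    llm_name: strong_llm  # Writing uses the stronger model
    system_prompt: |
      You are a content writer...
```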

> [!TIP] NAT-provided LLM interfaces
> Alternatively, you can use any of the [NAT-provided LLM interfaces](https://docs.nvidia.com/nemo/agent-toolkit/latest/workflows/llms/index.html) instead of the LLM gateway. To use a NAT LLM interface, add the required configuration parameters such as `api_key`, `url`, and other provider-specific settings directly into the `workflow.yaml` file.


## DataRobot hosted LLM deployments

You can easily connect to DataRobot-hosted LLM deployments as an LLM provider for your agents. To do this, [Deploy an LLM from the DataRobot Playground](https://docs.datarobot.com/en/docs/agentic-ai/playground-tools/deploy-llm.html) or host [Hugging Face models as LLM deployments on DataRobot](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-model-workshop/nxt-open-source-textgen-template.html). DataRobot-hosted LLMs can also provide access to moderations and guardrails for managing and governing models.

To use a deployed custom model, set the deployment URL as the `api_base` parameter directly in your agent code, following the examples below.

> [!NOTE] Deployment ID
> In the examples below, `DEPLOYMENT_ID` should be replaced with your actual DataRobot deployment ID, which you can obtain from the DataRobot platform.

> [!TIP] Model name string construction
> DataRobot deployments use an [OpenAI-compatible chat completion endpoint](https://docs.litellm.ai/docs/providers/openai_compatible). Therefore, the `model` name string should start with `openai/` to indicate the use of the OpenAI client. After `openai/`, the model name string should be the name of the model in the deployment.
> 
> For LLMs deployed from the playground, the `model` string should include the provider name and the model name. In the example below, the full model name is `azure/gpt-4o-mini`, provider included, not just `gpt-4o-mini`. This results in a final value of `model="openai/azure/gpt-4o-mini"`.
>
> For NIM models, the `model` string can be found on the NIM deployment's **Predictions** tab or in the NIM documentation. While NIM deployments may work with either `openai` or `meta_llama` interfaces, it's recommended to use `openai` for consistency.

**CrewAI:**
```
from crewai import LLM

def llm(self) -> LLM:
    """Returns a CrewAI LLM instance configured to use a DataRobot Deployment."""
    return LLM(
        # Note: For DataRobot deployments, use the openai provider format
        model="openai/azure/gpt-4o-mini",  # Format: openai/<model-name>
        # Note: The `/chat/completions` endpoint will be automatically appended by LiteLLM
        api_base=f"https://app.datarobot.com/api/v2/deployments/{DEPLOYMENT_ID}/",  # Deployment URL
        api_key=self.api_key,  # Your DataRobot API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LangGraph:**
```
from langchain_community.chat_models import ChatLiteLLM

def llm(self) -> ChatLiteLLM:
    """Returns a ChatLiteLLM instance configured to use a DataRobot Deployment."""
    return ChatLiteLLM(
        # Note: For DataRobot deployments, use the openai provider format
        model="openai/azure/gpt-4o-mini",  # Format: openai/<model-name>
        # Note: The `/chat/completions` endpoint will be automatically appended by LiteLLM
        api_base=f"https://app.datarobot.com/api/v2/deployments/{DEPLOYMENT_ID}/",  # Deployment URL
        api_key=self.api_key,  # Your DataRobot API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LlamaIndex:**
```
# DataRobotLiteLLM class is included in the `myagent.py` file

def llm(self) -> DataRobotLiteLLM:
    """Returns a DataRobotLiteLLM instance configured to use a DataRobot Deployment."""
    return DataRobotLiteLLM(
        # Note: For DataRobot deployments, use the openai provider format
        model="openai/azure/gpt-4o-mini",  # Format: openai/<model-name>
        # Note: The `/chat/completions` endpoint will be automatically appended by LiteLLM
        api_base=f"https://app.datarobot.com/api/v2/deployments/{DEPLOYMENT_ID}/",  # Deployment URL
        api_key=self.api_key,  # Your DataRobot API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**NAT:**
In NAT templates, LLMs are configured in the `workflow.yaml` file. To use a DataRobot-hosted LLM deployment, define an LLM in the `llms` section:

```
llms:
  datarobot_deployment:
    _type: datarobot-llm-deployment
    model_name: datarobot-deployed-llm  # Optional: Define the model name to pass through to the deployment
    temperature: 0.0
```

The deployment ID is automatically retrieved from the `LLM_DEPLOYMENT_ID` environment variable or runtime parameter.

When you use the DataRobot Deployed LLM option, `USE_DATAROBOT_LLM_GATEWAY` is automatically set to `0` so inference uses your deployment rather than the LLM gateway.
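The routing behavior described above can be sketched roughly as follows. The environment variable names come from this page, but the helper itself is a hypothetical illustration, not the template's actual code:

```python
import os

def resolve_llm_target() -> str:
    """Illustrate the routing described above: a deployment ID sends
    inference to that deployment; otherwise the gateway flag (on by
    default, disabled when set to "0") is honored."""
    deployment_id = os.environ.get("LLM_DEPLOYMENT_ID")
    use_gateway = os.environ.get("USE_DATAROBOT_LLM_GATEWAY", "1") != "0"
    if deployment_id:
        return f"deployment:{deployment_id}"
    if use_gateway:
        return "gateway"
    raise RuntimeError("No LLM target configured")

os.environ["LLM_DEPLOYMENT_ID"] = "abc123"
print(resolve_llm_target())  # deployment:abc123
```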

To use this deployment, specify it through `llm_name` in the agent's definition in the `functions` section:

```
functions:
  planner:
    _type: chat_completion
    llm_name: datarobot_deployment  # Reference the LLM defined above
    system_prompt: |
      You are a content planner...
```

If more than one LLM is defined in the `llms` section, the various `functions` can use different LLMs to suit the task.

> [!TIP] NAT-provided LLM interfaces
> Alternatively, you can use any of the [NAT-provided LLM interfaces](https://docs.nvidia.com/nemo/agent-toolkit/latest/workflows/llms/index.html) instead of the DataRobot deployment type. To use a NAT LLM interface, add the required configuration parameters such as `api_key`, `url`, and other provider-specific settings directly in the `workflow.yaml` file.


## DataRobot NIM deployments

The template supports using any NIM deployment hosted on DataRobot as an LLM provider for your agent. When using LiteLLM with NIM deployments, use the `openai` provider interface. The model name depends on your specific deployment and can be found on the **Predictions** tab of your deployment in DataRobot. For example, if the deployment uses a model named `meta/llama-3.2-1b-instruct`, use `openai/meta/llama-3.2-1b-instruct` for the model string. This tells LiteLLM to use the `openai` API adapter and the model name `meta/llama-3.2-1b-instruct`.

To create a new NIM deployment, you can follow the instructions in the [DataRobot NIM documentation](https://docs.datarobot.com/en/docs/agentic-ai/genai-integrations/genai-nvidia-integration.html).

> [!NOTE] Deployment ID
> In the examples below, `DEPLOYMENT_ID` should be replaced with your actual DataRobot deployment ID, which you can obtain from the DataRobot platform.

> [!TIP] Model name string construction
> DataRobot deployments use an [OpenAI-compatible chat completion endpoint](https://docs.litellm.ai/docs/providers/openai_compatible). Therefore, the `model` name string should start with `openai/` to indicate the use of the OpenAI client. After `openai/`, the model name string should be the name of the model in the deployment.
> 
> For LLMs deployed from the playground, the `model` string should include the provider name and the model name. In the example below, the full model name is `azure/gpt-4o-mini`, provider included, not just `gpt-4o-mini`. This results in a final value of `model="openai/azure/gpt-4o-mini"`.
>
> For NIM models, the `model` string can be found on the NIM deployment's **Predictions** tab or in the NIM documentation. While NIM deployments may work with either `openai` or `meta_llama` interfaces, it's recommended to use `openai` for consistency.

**CrewAI:**
```
from crewai import LLM

def llm(self) -> LLM:
    """Returns a CrewAI LLM instance configured to use a NIM deployed on DataRobot."""
    return LLM(
        # Use the openai provider with the model name from your deployment's Predictions tab
        model="openai/meta/llama-3.2-1b-instruct",  # Format: openai/<model-name-from-deployment>
        api_base=f"https://app.datarobot.com/api/v2/deployments/{DEPLOYMENT_ID}",  # NIM Deployment URL
        api_key=self.api_key,  # Your DataRobot API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LangGraph:**
```
from langchain_openai import ChatOpenAI

def llm(self) -> ChatOpenAI:
    """Returns a ChatOpenAI instance configured to use a NIM deployed on DataRobot."""
    return ChatOpenAI(
        # Use the model name from your deployment's Predictions tab
        model="meta/llama-3.2-1b-instruct",  # Model name from deployment's Predictions tab
        base_url=f"https://app.datarobot.com/api/v2/deployments/{DEPLOYMENT_ID}",  # NIM deployment URL
        api_key=self.api_key,  # Your DataRobot API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LlamaIndex:**
```
from llama_index.llms.openai_like import OpenAILike

def llm(self) -> OpenAILike:
    """Returns an OpenAILike instance configured to use a NIM deployed on DataRobot."""
    return OpenAILike(
        # Use the model name from your deployment's Predictions tab
        model="meta/llama-3.2-1b-instruct",  # Model name from the deployment's Predictions tab
        api_base=f"https://app.datarobot.com/api/v2/deployments/{DEPLOYMENT_ID}/v1",  # NIM deployment URL with /v1 endpoint
        api_key=self.api_key,  # Your DataRobot API key
        timeout=self.timeout,  # Optional timeout for requests
        is_chat_model=True,  # Enable chat model mode for NIM endpoints
    )
```

**NAT:**
In NAT templates, LLMs are configured in the `workflow.yaml` file. To use a DataRobot NIM deployment, define an LLM in the `llms` section:

```
llms:
  datarobot_nim:
    _type: datarobot-nim
    model_name: meta/llama-3.2-1b-instruct  # Optional: Define the model name to pass through to the deployment
    temperature: 0.0
```

The deployment ID is automatically retrieved from the `NIM_DEPLOYMENT_ID` environment variable or runtime parameter.

To use this deployment, specify it through `llm_name` in the agent's definition in the `functions` section:

```
functions:
  planner:
    _type: chat_completion
    llm_name: datarobot_nim  # Reference the LLM defined above
    system_prompt: |
      You are a content planner...
```

If more than one LLM is defined in the `llms` section, the various `functions` can use different LLMs to suit the task.

> [!TIP] NAT-provided LLM interfaces
> Alternatively, you can use any of the [NAT-provided LLM interfaces](https://docs.nvidia.com/nemo/agent-toolkit/latest/workflows/llms/index.html) instead of the DataRobot NIM type. To use a NAT LLM interface, add the required configuration parameters such as `api_key`, `url`, and other provider-specific settings directly in the `workflow.yaml` file.


## OpenAI API configuration

There are cases where you may want to use an external LLM provider that supports the OpenAI API standard, such as OpenAI itself. The template supports connecting to any OpenAI-compatible LLM provider. Here are examples for connecting directly to OpenAI using the CrewAI, LangGraph, and LlamaIndex frameworks.

**CrewAI:**
```
from crewai import LLM

def llm(self) -> LLM:
    """Returns a CrewAI LLM instance configured to use OpenAI."""
    return LLM(
        model="gpt-4o-mini", # Define the OpenAI model name
        api_key="YOUR_OPENAI_API_KEY", # Your OpenAI API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LangGraph:**
```
from langchain_openai import ChatOpenAI

def llm(self) -> ChatOpenAI:
    """Returns a ChatOpenAI instance configured to use OpenAI."""
    return ChatOpenAI(
        model="gpt-4o-mini", # Define the OpenAI model name
        api_key="YOUR_OPENAI_API_KEY", # Your OpenAI API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LlamaIndex:**
```
from llama_index.llms.openai import OpenAI

def llm(self) -> OpenAI:
    """Returns an OpenAI instance configured to use OpenAI."""
    return OpenAI(
        model="gpt-4o-mini", # Define the OpenAI model name
        api_key="YOUR_OPENAI_API_KEY", # Your OpenAI API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```
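The hardcoded `YOUR_OPENAI_API_KEY` placeholders above are for brevity only. In practice, read the key from an environment variable so it never lands in source control; the helper below is a hypothetical sketch (the variable name `OPENAI_API_KEY` is conventional, but any name works):

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Fetch an API key from the environment, failing fast if it is unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable before starting the agent")
    return key
```

Then pass `api_key=require_api_key()` to the LLM constructor instead of a literal string. The same pattern applies to the Anthropic and Gemini examples below.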


## Anthropic API configuration

You can connect to Anthropic's Claude models using the Anthropic API. The template supports connecting to Anthropic models through the CrewAI, LangGraph, and LlamaIndex frameworks. You'll need an Anthropic API key to use these models.

**CrewAI:**
```
from crewai import LLM

def llm(self) -> LLM:
    """Returns a CrewAI LLM instance configured to use Anthropic."""
    return LLM(
        model="claude-3-5-sonnet-20241022", # Define the Anthropic model name
        api_key="YOUR_ANTHROPIC_API_KEY", # Your Anthropic API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LangGraph:**
```
from langchain_anthropic import ChatAnthropic

def llm(self) -> ChatAnthropic:
    """Returns a ChatAnthropic instance configured to use Anthropic."""
    return ChatAnthropic(
        model="claude-3-5-sonnet-20241022", # Define the Anthropic model name
        anthropic_api_key="YOUR_ANTHROPIC_API_KEY", # Your Anthropic API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LlamaIndex:**
```
from llama_index.llms.anthropic import Anthropic

def llm(self) -> Anthropic:
    """Returns an Anthropic instance configured to use Anthropic."""
    return Anthropic(
        model="claude-3-5-sonnet-20241022", # Define the Anthropic model name
        api_key="YOUR_ANTHROPIC_API_KEY", # Your Anthropic API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```


## Gemini API configuration

You can also connect to Google's Gemini models using the Gemini API. The template supports connecting to Gemini models through the CrewAI, LangGraph, and LlamaIndex frameworks. You'll need a Google AI API key to use these models.

**CrewAI:**
```
from crewai import LLM

def llm(self) -> LLM:
    """Returns a CrewAI LLM instance configured to use Gemini."""
    return LLM(
        model="gemini/gemini-1.5-flash", # Define the Gemini model name
        api_key="YOUR_GEMINI_API_KEY", # Your Google AI API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LangGraph:**
```
from langchain_google_genai import ChatGoogleGenerativeAI

def llm(self) -> ChatGoogleGenerativeAI:
    """Returns a ChatGoogleGenerativeAI instance configured to use Gemini."""
    return ChatGoogleGenerativeAI(
        model="gemini-1.5-flash", # Define the Gemini model name
        google_api_key="YOUR_GEMINI_API_KEY", # Your Google AI API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```

**LlamaIndex:**
```
from llama_index.llms.gemini import Gemini

def llm(self) -> Gemini:
    """Returns a Gemini instance configured to use Google's Gemini."""
    return Gemini(
        model="gemini-1.5-flash", # Define the Gemini model name
        api_key="YOUR_GEMINI_API_KEY", # Your Google AI API key
        timeout=self.timeout,  # Optional timeout for requests
    )
```


## Connect to other providers

You can connect to any other LLM provider that supports the OpenAI API standard by following the patterns shown in the examples above. For providers that don't natively support the OpenAI API format, you have several options to help bridge the connection:

### Review framework documentation

Each framework provides comprehensive documentation for connecting to various LLM providers:

- CrewAI: Visit the CrewAI LLM documentation for detailed examples of connecting to different providers.
- LangGraph: Check the LangChain LLM integrations for extensive provider support.
- LlamaIndex: Refer to the LlamaIndex LLM modules for various LLM integrations.
- NAT: Refer to the NVIDIA NeMo Agent Toolkit documentation for LLM configuration in `workflow.yaml`.

### Use LiteLLM for universal connectivity

[LiteLLM](https://docs.litellm.ai/) is a library that provides a unified interface for connecting to 100+ LLM providers. It translates requests to match each provider's specific API format, making it easier to connect to providers like:

- Azure OpenAI
- AWS Bedrock
- Google Vertex AI
- Cohere
- Hugging Face
- Ollama
- And more

When using LiteLLM, the model string uses a compound format: `provider/model-name`

- Provider: The API adapter/provider to use (e.g., `openai`, `azure`).
- Model name: The model name to pass to that provider.

For example, if the deployment uses a model named `meta/llama-3.2-1b-instruct`, use `openai/meta/llama-3.2-1b-instruct` for the model string. This tells LiteLLM to use the [`openai` API adapter](https://docs.litellm.ai/docs/providers/openai_compatible) and the model name `meta/llama-3.2-1b-instruct`.

This format allows LiteLLM to route requests to the appropriate provider API while using the correct model identifier for that provider.
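In other words, only the first `/` separates the provider from the model name, so slashes inside the model name survive. A one-line illustration of that split (not LiteLLM's actual implementation):

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Split a LiteLLM compound model string into (provider, model_name).

    Only the first slash separates provider from model, so model names
    that themselves contain slashes (e.g., NIM models) are preserved.
    """
    provider, _, model_name = model.partition("/")
    return provider, model_name

print(split_model_string("openai/meta/llama-3.2-1b-instruct"))
# ('openai', 'meta/llama-3.2-1b-instruct')
```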

For the most up-to-date list of supported providers and configuration examples, visit the [LiteLLM documentation](https://docs.litellm.ai/docs/providers).
