
Version 11.2

This page contains the new features, enhancements, and fixed issues for DataRobot's 11.2 release. This is not a long-term support (LTS) release. Release 11.1 is the most recent long-term support release.

Release 11.2: October 2025

Version 11.2 was released October 30, 2025. It includes the following new features and fixed issues.

Agentic AI

MCP server template and integration

An MCP server template has been added to the DataRobot Community, allowing users to deploy an MCP server locally or to their DataRobot deployments for agents to access. Getting the server set up is as easy as cloning the repo and running a few basic scripts, allowing you to test the server on your local machine or run it as a fully deployed custom tool on your DataRobot cluster. The template also provides instructions for connecting several popular MCP clients to the server, as well as frameworks for integrating custom and dynamic tools to further customize your MCP setup.

For full instructions, refer to the MCP server template README.
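
As a rough illustration of what the template scaffolds, the following minimal sketch uses the open-source Python MCP SDK (FastMCP) to define a server with a single tool. The server name and tool below are illustrative assumptions; the DataRobot template defines its own tools and deployment scripts.

```python
# Minimal MCP server sketch using the open-source Python MCP SDK (FastMCP).
# The tool below is illustrative only; the DataRobot template ships its own
# tools and scripts for local testing and cluster deployment.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Serves the MCP server over stdio, suitable for local testing with an MCP client.
    mcp.run()
```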

Open-source Milvus now supported as vector database provider

You can now create a direct connection to Milvus, in addition to Pinecone and Elasticsearch, when using an external data source for vector database creation. Milvus, a leading open-source vector database project, is distributed under the Apache 2.0 license. Additionally, you can now select distance (similarity) metrics for connected providers. These metrics measure how similar vectors are; selecting the appropriate metric can substantially boost the effectiveness of classification and clustering tasks.
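
For reference, declaring a similarity metric on the Milvus side looks roughly like the pymilvus sketch below. The URI, collection name, and dimension are placeholders; in DataRobot, the connection details and metric selection come from the external data source configuration rather than from this code.

```python
# Sketch of creating a Milvus collection with an explicit similarity metric (pymilvus).
# All values below are placeholders used only to show where the metric is declared.
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")  # a Milvus or Zilliz endpoint

client.create_collection(
    collection_name="documents",
    dimension=1536,        # embedding dimension of your embedding model
    metric_type="COSINE",  # other common choices: "L2" (Euclidean), "IP" (inner product)
)
```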

New LLMs added

With this release, the ever-growing LLM library is extended again, adding Cerebras, TogetherAI, and first-party Anthropic models. As always, see the full list of LLMs, available to all subscribed enterprise users and trial users. Use the LLM gateway to access any supported LLM, with DataRobot's credentials used under the hood, for experimentation (playground) and production (custom model deployment), or bring your own LLM credentials for access in production.
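
Because the gateway exposes a chat completion interface, a call from Python can be sketched roughly as below with an OpenAI-compatible client. The base URL path and model identifier shown are assumptions for illustration; check the LLM gateway documentation for the exact endpoint and the model names available in your environment.

```python
# Hedged sketch of calling the LLM gateway with an OpenAI-compatible client.
# The base_url path and model identifier are assumptions, not confirmed values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://app.datarobot.com/api/v2/genai/llmgw",  # assumed gateway path
    api_key=os.environ["DATAROBOT_API_TOKEN"],
)

response = client.chat.completions.create(
    model="anthropic/claude-3-7-sonnet",  # assumed model identifier format
    messages=[{"role": "user", "content": "Summarize this release in one sentence."}],
)
print(response.choices[0].message.content)
```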

LLM gateway rate limiting

The LLM gateway now enforces rate limits on chat completion calls to ensure fair and efficient use of LLM resources. Organizations may be subject to a maximum number of LLM calls per 24-hour period, with error messages indicating when the limit is reached and when it will reset. To adjust or remove these limits, administrators can contact DataRobot support.
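When the 24-hour limit is reached, clients receive a rate-limit error whose message indicates when the limit resets, so it is worth catching and surfacing that message rather than silently retrying. The sketch below assumes an OpenAI-compatible client is used; gateway_chat is a hypothetical stand-in for whatever call your code makes to the gateway.

```python
# Hedged sketch of handling the gateway's rate-limit error.
# `gateway_chat` is a hypothetical callable wrapping your LLM gateway call;
# the exception type assumes the OpenAI-compatible Python client.
import logging
import openai

log = logging.getLogger(__name__)

def chat_or_report_limit(gateway_chat):
    try:
        return gateway_chat()
    except openai.RateLimitError as exc:
        # The gateway's error message states when the 24-hour limit will reset.
        log.warning("LLM gateway limit reached: %s", exc)
        return None
```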

NVIDIA AI Enterprise integration and new NIM Gallery containers

NVIDIA AI Enterprise and DataRobot provide a pre-built AI stack solution, designed to integrate with your organization's existing DataRobot infrastructure, giving you access to robust evaluation, governance, and monitoring features. This integration includes a comprehensive array of tools for end-to-end AI orchestration, accelerating your organization's data science pipelines to rapidly deploy production-grade AI applications on NVIDIA GPUs in DataRobot Serverless Compute.

In DataRobot, create custom AI applications tailored to your organization's needs by selecting NVIDIA Inference Microservices (NVIDIA NIM) from a gallery of AI applications and agents. NVIDIA NIM provides pre-built and pre-configured microservices within NVIDIA AI Enterprise, designed to accelerate the deployment of generative AI across enterprises.

With the release of version 11.2 in October 2025, DataRobot added new GPU-optimized containers to the NIM Gallery, including:

  • gpt-oss-20b
  • gpt-oss-120b
  • llama-3.1-nemotron-nano-vl-8b-v1
  • llama-3.3-nemotron-super-49b-v1.5
  • llama-4-scout-17b-16e-instruct
  • nvidia-nemotron-nano-9b-v2
  • qwen3-next-80b-a3b-thinking
  • qwen3-32b

DataRobot agent templates improvements

This release introduces significant updates to the datarobot-agent-templates repository, including new features and performance enhancements. Recent highlights include the introduction of streaming support for agentic workflows and agent:cli tools (11.2.4), a major optimization to agent startup time (11.2.3), the migration of all build and infrastructure logic to pyproject.toml and uv (11.2.2), and initial support for NVIDIA-NAT environments. Additionally, in 11.2.4, the run method in the agent class was renamed to invoke and refactored to return a unified response. For a comprehensive list of all changes and fixes, review the full changelog.
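
To make the 11.2.4 rename concrete, a stripped-down agent class now looks roughly like the sketch below. The class name, constructor parameters, and response shape are illustrative assumptions rather than the template's exact code; see the repository for the real implementation.

```python
# Illustrative sketch of the renamed entry point in datarobot-agent-templates.
# Before 11.2.4 the method was `run`; it is now `invoke` and returns one
# unified, chat-completion-style response. Names and shapes here are assumptions.
from typing import Any, Optional

class MyAgent:
    def __init__(self, api_key: Optional[str] = None, api_base: Optional[str] = None):
        self.api_key = api_key
        self.api_base = api_base

    def invoke(self, completion_create_params: dict) -> dict:
        # Extract the latest user message and run the agentic workflow (stubbed here).
        prompt: Any = completion_create_params["messages"][-1]["content"]
        answer = f"Processed: {prompt}"
        # Return a single, unified response structure.
        return {
            "choices": [{"message": {"role": "assistant", "content": answer}}],
            "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
        }
```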

Apps

Talk to my Docs application template

Use the Talk to my Docs application template to ask questions about your documents using agentic workflows. The application provides a chat interface where you can upload or connect to documents across different providers—Google Drive, Box, and your local computer—ask questions, and visualize answers with insights.

Decision-makers depend on data-driven insights but are often frustrated by the time and effort it takes to get them. They dislike waiting for answers to simple questions and are willing to invest significantly in solutions that eliminate this frustration. This application directly addresses the challenge by providing a plain-language chat interface to your documents. It searches and catalogs your documents to surface actionable insights through intuitive conversation. With the power of AI, teams get faster analysis, helping them make informed decisions in less time.

Data

Support for Google Drive and SharePoint added to NextGen

Support for Google Drive and SharePoint connectors has been added to NextGen DataRobot. To connect to either Google Drive or SharePoint, go to Account Settings > Data connections or create a new vector database. To configure the connection, you can use OAuth, a service account (Google Drive), or a service principal (SharePoint) as the authentication method. Note that these connectors only support unstructured data, meaning you can only use them as data sources for vector databases.

Updated redirect URI for BigQuery OAuth users

The redirect URI for BigQuery OAuth has been updated to https://my.datarobot.com/account/google/google_authz_return. The old redirect URI, https://my.datarobot.com/account/google/bigquery_authz_return, will be deprecated in an upcoming release. Ask your admin to update the redirect URI in the Google OAuth configuration. If you need to continue using the old redirect URI, an admin can set the EngConfig BIGQUERY_OAUTH_USE_OLD_REDIRECT_URI to True. For more information, see the BigQuery & Google Drive section in the DataRobot Installation guide.

Core AI

OpenTelemetry logs for deployments

The DataRobot OpenTelemetry service now collects OpenTelemetry-compliant logs, allowing for deeper analysis and troubleshooting of deployments. The new Logs tab in the Activity log section lets users view and analyze logs reported for a deployment in the OpenTelemetry standard format. Logs are available for all deployment and target types, with access restricted to users with "Owner" and "User" roles. The system supports four logging levels (INFO, DEBUG, WARN, ERROR) and offers flexible time filtering options, including Last 15 min, Last hour, Last day, or a Custom range. Logs are retained for 30 days before automatic deletion.

Additionally, the OTel logs API enables programmatic export of logs, supporting integration with third-party observability tools. The standardized OpenTelemetry format ensures compatibility across different monitoring platforms.
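
For programmatic export, a request against the OTel logs API can be sketched as below. The route, query parameters, and response field names are assumptions for illustration only; refer to the OTel logs API reference for the exact contract.

```python
# Hedged sketch of pulling OpenTelemetry-format deployment logs over the REST API.
# The route, query parameters, and response field names are assumptions.
import os
import requests

endpoint = os.environ.get("DATAROBOT_ENDPOINT", "https://app.datarobot.com/api/v2")
token = os.environ["DATAROBOT_API_TOKEN"]
deployment_id = "YOUR_DEPLOYMENT_ID"

resp = requests.get(
    f"{endpoint}/deployments/{deployment_id}/otelLogs/",   # assumed route
    headers={"Authorization": f"Bearer {token}"},
    params={"severity": "ERROR", "start": "-1h"},           # assumed filter parameters
)
resp.raise_for_status()
for record in resp.json().get("data", []):
    print(record.get("timestamp"), record.get("severityText"), record.get("body"))
```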

Quota management for deployments

Comprehensive quota management capabilities help deployment owners control resource usage and ensure fair access across teams and applications. Quota management is available during deployment creation and in the Settings > Quota tab for existing deployments. Configure default quota limits for all agents or set individual entity rate limits for specific users, groups, or deployments. This system supports three metrics: Requests (prediction request volume), Tokens (token processing limits), and Input sequence length (prompt/query token count), with flexible time resolutions of Minute, Hour, or Day.

In addition, Agent API keys are automatically generated for Agentic workflow deployments, appearing in the API keys and tools section under the Agent API keys tab. These keys differentiate between various applications and agents using a deployment, enabling better quota tracking and management.

These enhancements prevent single agents from monopolizing resources, ensure fair access across teams, and provide cost control through usage limits. Quota policy changes take up to 5 minutes to apply due to gateway cache updates.

Platform

Sharing notification improvements

With this release, email notifications have been streamlined when sharing a Use Case with other members of your team. Previously, an individual email was sent for each asset within the shared Use Case. Now, all email notifications for Use Case sharing have been consolidated into a single email.

Use Case admin role

DataRobot's RBAC functionality has been updated with a new Use Case Admin role. Users assigned as Use Case Admins can view all Use Cases in their organization, rather than being restricted to those they have created or that have been shared with them. This view can be toggled on the Use Cases table.

For more information, see the Use Case Admin section in the Use Case overview and the RBAC details.

Set re-authentication time limit for external applications

With this release, administrators can define the time limit for external application user re-authentication. Note that the time limit resets each time a user logs in, granting them a fresh limit (for example, if an org admin sets the limit to 30 days, the user has 30 days from their last login). To define this limit, open Admin Settings > Organizations. Then, on the Profile tab, populate the Custom app external user session TTL field.

When external users first access the application, they need to log in. After that, each time they access the app link they land directly on the app, unless they have not logged in within the designated time frame, in which case they must re-authenticate.

Code-first

Python client v3.9

Version 3.9 of DataRobot's Python client is now generally available. For a complete list of changes introduced in v3.9, view the Python client changelog.

New PySDK release

This release of the Python API client adds a new package extra, core, which simplifies configuration of application templates, codespaces, custom models, and custom applications.

Issues fixed in Release 11.2

Data fixes

  • DM-18690: Fixes an issue where defaultPositiveClass could be incorrectly set to 0 due to inconsistent ordering of feature.plot values. Now, numeric class labels are sorted to ensure correct class ordering. This fix applies only to binary classification projects.

  • DM-18661: Now, RBAC Data Admins can delete AI Catalog/Data Registry datasets that they do not otherwise have permadelete, RBAC, or Owner permissions for.

Core AI fixes

  • MODEL-20479: Fixes an issue with payload parsing, where errors could interfere with data drift. This may happen, for example, in cases where the datetime format isn't correct or if a tracked feature is not present in the request. Now, MLOPS events will be sent if prediction payloads fail to parse.

  • MODEL-20152: Fixes an issue where data could not be exported using the export function from an insight; the system returned a 500 error due to invalid target values. Those rows are now ignored.

  • MODEL-20720: Restrictions on RuleFit blueprints were removed; they are now always added to the repository.

  • MMM-20313: Payload parsing errors can interfere with data drift, for instance in cases where the datetime format isn't correct or if a tracked feature is not present in the request. MLOPS events will now be sent if prediction payloads fail to parse.

  • PRED-11915: Increases the maximum payload size to 50 MB for the following endpoints:

    • /api/v2/deployments/:id/predictions
    • /api/v2/deployments/:id/predictionsUnstructured
    • /api/v2/deployments/:id/chat/completions
    • /api/v2/deployments/:id/directAccesss
  • PRED-11893: Fixes an issue with deployments that have batch monitoring enabled. Previously, real-time scoring requests often failed with an error indicating the X-DataRobot-Model-Monitoring-Batch-Id header was missing, even when the header was included in the request.

  • PRED-11884: The Max Concurrent Jobs value is no longer reset to 1 during prediction environment updates when it is not explicitly provided.

  • PRED-11515: Fixed an issue where a deployment created as inactive and later activated could not serve predictions.

  • RAPTOR-14216: Larger resource bundles are now available for custom jobs and are correctly displayed.

Code fixes

Codespaces integration with private Git repositories improved

In previous versions, there were several issues impacting Codespace integrations with private Git repositories that required periodic token refreshes and frequent reconnections.

This release implements several fixes to automatically refresh authentication tokens, making the integrations persistent so they don't require frequent user intervention to keep active.

Platform fixes

  • RAPTOR-13880: Introduces new environment variables to control probes for custom model containers. As part of this change, the LRS_CUSTOM_MODEL_STARTUP_TIMEOUT_SECONDS variable was removed. Refer to the latest installation guide for more details.

All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them.