Python API client changelog

Reference the changes introduced to new versions of DataRobot's Python API client.

Python client v3.4

New features

  • Added the following classes for generative AI. Importing these from datarobot._experimental.models.genai is deprecated and will be removed by the release of DataRobot 10.1 and client v3.5.

    • Playground to manage generative AI playgrounds.
    • LLMDefinition to get information about supported LLMs.
    • LLMBlueprint to manage LLM blueprints.
    • Chat to manage chats for LLM blueprints.
    • ChatPrompt to submit prompts within a chat.
    • ComparisonChat to manage comparison chats across multiple LLM blueprints within a playground.
    • ComparisonPrompt to submit a prompt to multiple LLM blueprints within a comparison chat.
    • VectorDatabase to create vector databases from datasets in the AI Catalog for retrieval augmented generation with an LLM blueprint.
    • CustomModelVectorDatabaseValidation to validate a deployment for use as a vector database.
    • CustomModelLLMValidation to validate a deployment for use as an LLM.
    • UserLimits to get counts of vector databases and LLM requests for a user.
  • Extended the advanced options available when setting a target to include a new parameter, incrementalLearningEarlyStoppingRounds (part of the AdvancedOptions object). This parameter allows you to specify when to stop early during incremental learning automation.

  • Added experimental support for the Chunking Service:

    • DatasetChunkDefinition for defining how chunks are created from a data source.
      • DatasetChunkDefinition.create to create a new dataset chunk definition.
      • DatasetChunkDefinition.get to get a specific dataset chunk definition.
      • DatasetChunkDefinition.list to list all dataset chunk definitions.
      • DatasetChunkDefinition.get_datasource_definition to retrieve the data source definition.
      • DatasetChunkDefinition.get_chunk to get specific chunk metadata belonging to a dataset chunk definition.
      • DatasetChunkDefinition.list_chunks to list all chunk metadata belonging to a dataset chunk definition.
      • DatasetChunkDefinition.create_chunk to submit a job to retrieve the data from the origin data source.
      • DatasetChunkDefinition.create_chunk_by_index to submit a job to retrieve data from the origin data source by index.
    • OriginStorageType
    • Chunk
    • ChunkStorageType
    • ChunkStorage
    • DatasourceDefinition
    • DatasourceAICatalogInfo to define the AI Catalog data source information used to create a new dataset chunk definition.
    • DatasourceDataWarehouseInfo to define the data warehouse data source information (Snowflake, BigQuery, etc.) used to create a new dataset chunk definition.
    • RuntimeParameter for retrieving runtime parameters assigned to CustomModelVersion.
    • RuntimeParameterValue to define a runtime parameter override value to be assigned to a CustomModelVersion.
    • Added a new attribute, is_descending_order, to:
      • DatasourceDefinition
      • DatasourceAICatalogInfo
      • DatasourceDataWarehouseInfo
  • Added Snowflake Key Pair authentication for uploading datasets from Snowflake or creating a project from Snowflake data.

  • Added Project.get_model_records to retrieve models. The method Project.get_models is deprecated and will be removed soon in favor of Project.get_model_records.
  • Extended the advanced options available when setting a target to include a new parameter: chunkDefinitionId (part of the AdvancedOptions object). This parameter allows you to specify the chunking definition needed for incremental learning automation.
  • Extended the advanced options available when setting a target to include new Autopilot parameters: incrementalLearningOnlyMode and incrementalLearningOnBestModel (part of the AdvancedOptions object). These parameters allow you to specify how Autopilot is performed with the chunking service.
  • Added a new method, DatetimeModel.request_lift_chart, to calculate Lift charts for datetime partitioned projects, with support for Sliced Insights.
  • Added a new method, DatetimeModel.get_lift_chart, to retrieve Lift charts for datetime partitioned projects, with support for Sliced Insights.
  • Added a new method, DatetimeModel.request_roc_curve, to calculate ROC curves for datetime partitioned projects, with support for Sliced Insights.
  • Added a new method, DatetimeModel.get_roc_curve, to retrieve ROC curves for datetime partitioned projects, with support for Sliced Insights.
  • Updated the method DatetimeModel.request_feature_impact to support the use of Sliced Insights.
  • Updated the method DatetimeModel.get_feature_impact to support the use of Sliced Insights.
  • Updated the method DatetimeModel.get_or_request_feature_impact to support the use of Sliced Insights.
  • Updated the method DatetimeModel.request_feature_effect to support the use of Sliced Insights.
  • Updated the method DatetimeModel.get_feature_effect to support the use of Sliced Insights.
  • Updated the method DatetimeModel.get_or_request_feature_effect to support the use of Sliced Insights.
  • Added a new method, FeatureAssociationMatrix.create, to support the creation of Feature Association Matrices for featurelists.
  • Introduced a new method Deployment.perform_model_replace as a replacement for Deployment.replace_model.
  • Introduced a new property, model_package, on datarobot.models.Deployment that provides an overview of the currently deployed model package.
  • Updated the client configuration flow to enhance flexibility and user control. Client configuration can now be overridden via parameters passed to datarobot.client.Client().
  • Added new parameter prediction_threshold to BatchPredictionJob.score_with_leaderboard_model and BatchPredictionJob.score that automatically assigns the positive class label to any prediction exceeding the threshold.
  • Added two new enum values, "BETWEEN" and "NOT_BETWEEN", to datarobot.models.data_slice.DataSlicesOperators for use when defining data slices.
  • Added a new class, Challenger, for interacting with DataRobot challengers, supporting the following methods:
    • Challenger.get to retrieve a challenger object by ID.
    • Challenger.list to list all challengers.
    • Challenger.create to create a new challenger.
    • Challenger.update to update a challenger.
    • Challenger.delete to delete a challenger.
  • Added a new method Deployment.get_challenger_replay_settings to retrieve the challenger replay settings of a deployment.
  • Added a new method Deployment.list_challengers to retrieve the challengers of a deployment.
  • Added a new method Deployment.get_champion_model_package to retrieve the champion model package from a deployment.
  • Added a new method Deployment.list_prediction_data_exports to retrieve deployment prediction data exports.
  • Added a new method Deployment.list_actuals_data_exports to retrieve deployment actuals data exports.
  • Added a new method Deployment.list_training_data_exports to retrieve deployment training data exports.
  • Manage deployment health settings with the following methods:
    • Get health settings Deployment.get_health_settings
    • Update health settings Deployment.update_health_settings
    • Get default health settings Deployment.get_default_health_settings
  • Added a new enum value to datarobot.enums._SHARED_TARGET_TYPE to support the text generation use case.
  • Added a new enum value, datarobotServerless, to datarobot.enums.PredictionEnvironmentPlatform to support DataRobot Serverless prediction environments.
  • Added a new enum value, notApplicable, to datarobot.enums.PredictionEnvironmentHealthType to support a new health status from the DataRobot API.
  • Added a new enum value to datarobot.enums.TARGET_TYPE and datarobot.enums.CUSTOM_MODEL_TARGET_TYPE to support text generation custom inference models.
  • Updated datarobot.CustomModel to support the creation of text generation custom models.
  • Added a new class, CustomMetric, for interacting with DataRobot custom metrics, supporting the following methods:
    • CustomMetric.get to retrieve a custom metric object by ID from a given deployment.
    • CustomMetric.list to list all custom metrics from a given deployment.
    • CustomMetric.create to create a new custom metric for a given deployment.
    • CustomMetric.update to update a custom metric for a given deployment.
    • CustomMetric.delete to delete a custom metric for a given deployment.
    • CustomMetric.unset_baseline to remove the baseline for a given custom metric.
    • CustomMetric.submit_values to submit aggregated custom metric values from code. The provided data should be in the form of a dict or a pandas DataFrame.
    • CustomMetric.submit_single_value to submit a single custom metric value.
    • CustomMetric.submit_values_from_catalog to submit aggregated custom metric values from a dataset via the AI Catalog.
    • CustomMetric.get_values_over_time to retrieve values of a custom metric over a time period.
    • CustomMetric.get_summary to retrieve the summary of a custom metric over a time period.
    • CustomMetric.get_values_over_batch to retrieve values of a custom metric over batches.
    • CustomMetric.get_batch_summary to retrieve the summary of a custom metric over batches.
  • Added CustomMetricValuesOverTime to retrieve custom metric over time information.
  • Added CustomMetricSummary to retrieve custom metric over time summary.
  • Added CustomMetricValuesOverBatch to retrieve custom metric over batch information.
  • Added CustomMetricBatchSummary to retrieve custom metric batch summary.
  • Added Job and JobRun to create, read, update, run, and delete jobs in the Registry.
  • Added KeyValue to create, read, update, and delete key values.
  • Added a new class, PredictionDataExport, for interacting with DataRobot deployment data exports, supporting the following methods:
    • PredictionDataExport.get to retrieve a prediction data export object by ID from a given deployment.
    • PredictionDataExport.list to list all prediction data exports from a given deployment.
    • PredictionDataExport.create to create a new prediction data export for a given deployment.
    • PredictionDataExport.fetch_data to retrieve prediction export data as a DataRobot dataset.
  • Added a new class, ActualsDataExport, for interacting with DataRobot deployment data exports, supporting the following methods:
    • ActualsDataExport.get to retrieve an actuals data export object by ID from a given deployment.
    • ActualsDataExport.list to list all actuals data exports from a given deployment.
    • ActualsDataExport.create to create a new actuals data export for a given deployment.
    • ActualsDataExport.fetch_data to retrieve actuals export data as a DataRobot dataset.
  • Added a new class, TrainingDataExport, for interacting with DataRobot deployment data exports, supporting the following methods:
    • TrainingDataExport.get to retrieve a training data export object by ID from a given deployment.
    • TrainingDataExport.list to list all training data exports from a given deployment.
    • TrainingDataExport.create to create a new training data export for a given deployment.
    • TrainingDataExport.fetch_data to retrieve training export data as a DataRobot dataset.
  • Added a new parameter base_environment_version_id to CustomModelVersion.create_clean for overriding the default environment version selection behavior.
  • Added a new parameter base_environment_version_id to CustomModelVersion.create_from_previous for overriding the default environment version selection behavior.
  • Added a new class, PromptTrace, for interacting with DataRobot prompt traces, supporting the following methods:
    • PromptTrace.list to list all prompt traces from a given playground.
    • PromptTrace.export_to_ai_catalog to export a playground's prompt traces to the AI Catalog.
  • Added a new class, InsightsConfiguration, for describing available and configured insights for a playground.
    • InsightsConfiguration.list to list the insights that are available to be configured.
  • Added a new class, Insights, for configuring insights for a playground.
    • Insights.get to get the current insights configuration for a playground.
    • Insights.create to create or update the insights configuration for a playground.
  • Added a new class, CostMetricConfiguration, for describing available and configured cost metrics for a Use Case.
    • CostMetricConfiguration.get to get the cost metric configuration.
    • CostMetricConfiguration.create to create a cost metric configuration.
    • CostMetricConfiguration.update to update the cost metric configuration.
    • CostMetricConfiguration.delete to delete the cost metric configuration.
  • Added a new class, LLMCostConfiguration, for the cost configuration of a specific LLM within a Use Case.
  • Added a new class, EvaluationDatasetConfiguration, for configuring evaluation datasets.
    • EvaluationDatasetConfiguration.get to get an evaluation dataset configuration.
    • EvaluationDatasetConfiguration.list to list the evaluation dataset configurations for a Use Case.
    • EvaluationDatasetConfiguration.create to create an evaluation dataset configuration.
    • EvaluationDatasetConfiguration.update to update an evaluation dataset configuration.
    • EvaluationDatasetConfiguration.delete to delete an evaluation dataset configuration.
  • Added a new class, EvaluationDatasetMetricAggregation, for metric aggregation results.
    • EvaluationDatasetMetricAggregation.list to get the metric aggregation results.
    • EvaluationDatasetMetricAggregation.create to create a metric aggregation job.
    • EvaluationDatasetMetricAggregation.delete to delete metric aggregation results.
  • Added a new class, SyntheticEvaluationDataset, for synthetic dataset generation. Use SyntheticEvaluationDataset.create to create a synthetic evaluation dataset.
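Among the changes above, the new prediction_threshold parameter on BatchPredictionJob.score and BatchPredictionJob.score_with_leaderboard_model automatically assigns the positive class label to any prediction exceeding the threshold. A minimal local sketch of that thresholding semantics (apply_threshold is a hypothetical illustration, not a client API):

```python
# Illustrative sketch of the semantics behind the new prediction_threshold
# parameter: any positive-class probability strictly above the threshold
# receives the positive class label.
# apply_threshold is a hypothetical helper, not part of the DataRobot client.

def apply_threshold(probabilities, threshold, positive_label="True", negative_label="False"):
    """Map positive-class probabilities to class labels using a threshold."""
    return [
        positive_label if p > threshold else negative_label
        for p in probabilities
    ]

labels = apply_threshold([0.2, 0.55, 0.9], threshold=0.5)
print(labels)  # ['False', 'True', 'True']
```

In the client itself, you only pass prediction_threshold to the scoring call; the label assignment happens server-side.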

Key changes

  1. Parameter overrides: Users can now override most of the previously set configuration values directly through parameters when initializing the Client.
  2. Exceptions: The endpoint and token values must both come from one source (client params, environment, or config file) and cannot be overridden individually, for security and consistency reasons.

Configuration priority:

  1. Client Params
  2. Client config_path param
  3. Environment Variables
  4. Default to reading YAML config file from ~/.config/datarobot/drconfig.yaml

  3. DATAROBOT_API_CONSUMER_TRACKING_ENABLED now always defaults to True.

  4. Added Databricks personal access token and service principal authentication (including shared credentials via secure config) for uploading datasets from Databricks or creating a project from Databricks data.
  5. Added secure config support for AWS long-term credentials.
  6. Implemented support for dr-database-v1 in DataStore, DataSource, and DataDriver. Added enum classes to support the changes.
  7. You can retrieve the canonical URI for a Use Case using UseCase.get_uri.
  8. You can open a Use Case in a browser using UseCase.open_in_browser.
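The configuration priority above can be illustrated with a small standalone sketch. The resolve_setting helper below is hypothetical and only demonstrates the precedence order; the client's actual resolution logic lives inside datarobot.client.Client().

```python
# Standalone sketch of the configuration precedence described above:
# explicit client params win, then the file named by config_path, then
# environment variables, then the default YAML config location.
# resolve_setting is a hypothetical illustration, not a client API.

def resolve_setting(name, params, config_file, env, defaults):
    """Return the first value found, searching sources in priority order."""
    for source in (params, config_file, env, defaults):
        if source.get(name) is not None:
            return source[name]
    return None

value = resolve_setting(
    "connect_timeout",
    params={"connect_timeout": 30},        # 1. client params
    config_file={"connect_timeout": 10},   # 2. config_path param
    env={},                                # 3. environment variables
    defaults={"connect_timeout": 6},       # 4. ~/.config/datarobot/drconfig.yaml
)
print(value)  # 30 -- the explicit client param wins
```

Note that, per the exception above, endpoint and token are resolved together from a single source rather than independently.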

Enhancements

  • Added a new parameter to Dataset.create_from_url to support fast dataset registration:
    • sample_size
  • Added a new parameter to Dataset.create_from_data_source to support fast dataset registration:
    • sample_size
  • Job.get_result_when_complete returns datarobot.models.DatetimeModel instead of datarobot.models.Model if a datetime model was trained.
  • Dataset.get_as_dataframe can handle downloading Parquet files as well as CSV files.
  • Implemented support for dr-database-v1 in DataStore.
  • Added two new parameters to BatchPredictionJobDefinition.list for paginating long job definitions lists:
    • offset
    • limit
  • Added two new parameters to BatchPredictionJobDefinition.list for filtering the job definitions:
    • deployment_id
    • search_name
  • Added new parameter to Deployment.validate_replacement_model to support replacement validation based on model package ID:
    • new_registered_model_version_id
  • Added support for Native Connectors to Connector for everything other than Connector.create and Connector.update.
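The new offset and limit parameters on BatchPredictionJobDefinition.list follow standard pagination semantics, with limit=0 returning everything (the behavior the deprecation note below recommends preserving explicitly). A local sketch of those semantics, using a hypothetical paginate helper rather than the client itself:

```python
# Local sketch of offset/limit pagination semantics like those added to
# BatchPredictionJobDefinition.list, where limit=0 means "no limit".
# paginate is a hypothetical helper, not part of the DataRobot client.

def paginate(items, offset=0, limit=0):
    """Slice a list of results by offset and limit (0 = no limit)."""
    window = items[offset:]
    return window if limit == 0 else window[:limit]

definitions = ["def-a", "def-b", "def-c", "def-d"]
print(paginate(definitions, offset=1, limit=2))  # ['def-b', 'def-c']
print(paginate(definitions, limit=0))            # all four definitions
```

In the client, you would pass the same offset/limit keyword arguments to BatchPredictionJobDefinition.list and the server performs the slicing.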

Deprecation summary

  • Removed Model.get_leaderboard_ui_permalink and Model.open_model_browser.
  • Deprecated Project.get_models in favor of Project.get_model_records.
  • BatchPredictionJobDefinition.list will no longer return all job definitions after version 3.6 is released. To preserve the current behavior, pass limit=0.
  • The new_model_id parameter in Deployment.validate_replacement_model will be removed after version 3.6 is released.
  • Deployment.replace_model will be removed after version 3.6 is released. Use Deployment.perform_model_replace instead.
  • CustomInferenceModel.assign_training_data was marked as deprecated in v3.2. The deprecation period has been extended, and the feature will now be removed in v3.5. Use CustomModelVersion.create_clean and CustomModelVersion.create_from_previous instead.
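When migrating off the deprecated methods above (for example Project.get_models or Deployment.replace_model), a generic Python technique is to promote DeprecationWarning to an error in your test suite so remaining call sites fail loudly. The legacy_call function below is a stand-in for any deprecated method, not a DataRobot API:

```python
# Generic migration aid: promote DeprecationWarning to an error so any
# remaining calls to deprecated methods surface as failures in tests.
# legacy_call is a stand-in for a deprecated client method.
import warnings

def legacy_call():
    """Stand-in for a deprecated client method that emits a warning."""
    warnings.warn("use the replacement method instead", DeprecationWarning)

with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        legacy_call()
    except DeprecationWarning as exc:
        print(f"found deprecated call: {exc}")
```

The same filter can be enabled project-wide in pytest via filterwarnings = error::DeprecationWarning.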

Documentation changes

  • Updated genai_example.rst to use the latest generative AI features and methods introduced in the API client.

Experimental changes

  • Added a new attribute, prediction_timeout, to CustomModelValidation.
  • Added new attributes, feedback_result, metrics, and final_prompt, to ResultMetadata.
  • Added use_case_id to CustomModelValidation.
  • Added llm_blueprints_count and user_name to Playground.
  • Added custom_model_embedding_validations to SupportedEmbeddings.
  • Added embedding_validation_id and is_separator_regex to VectorDatabase.

  • Added optional parameters, use_case, name, and model to CustomModelValidation.create.

  • Added a method CustomModelValidation.list, to list custom model validations available to a user with several optional parameters to filter the results.
  • Added a method CustomModelValidation.update, to update a custom model validation.

  • Added an optional parameter, use_case, to LLMDefinition.list to also include, in the returned LLMs, the external LLMs available for the specified use_case.

  • Added an optional parameter, playground, to VectorDatabase.list to list vector databases by playground.

  • Added optional parameter, comparison_chat, to ComparisonPrompt.list, to list comparison prompts by comparison chat.

  • Added optional parameter, comparison_chat, to ComparisonPrompt.create, to specify the comparison chat to create the comparison prompt in.
  • Added optional parameter, feedback_result, to ComparisonPrompt.update, to update a comparison prompt with feedback.

  • Added an optional parameter, is_starred, to LLMBlueprint.update to update the LLM blueprint's starred status.

  • Added an optional parameter, is_starred, to LLMBlueprint.list to filter the returned LLM blueprints to those matching is_starred.

  • Added a new enum, PromptType, to identify the LLM blueprint's prompting type.

  • Added an optional parameter, prompt_type, to LLMBlueprint.create to specify the LLM blueprint's prompting type. This can be set with PromptType.
  • Added an optional parameter, prompt_type, to LLMBlueprint.update to specify the updated LLM blueprint's prompting type. This can be set with PromptType.

  • Added a new class, ComparisonChat, for interacting with DataRobot generative AI comparison chats.

  • ComparisonChat.get retrieves a comparison chat object by ID.
  • ComparisonChat.list lists all comparison chats available to the user.
  • ComparisonChat.create creates a new comparison chat.
  • ComparisonChat.update updates the name of a comparison chat.
  • ComparisonChat.delete deletes a single comparison chat.

  • Added optional parameters, playground and chat, to ChatPrompt.list to list chat prompts by playground and chat.

  • Added an optional parameter, chat, to ChatPrompt.create to specify the chat to create the chat prompt in.
  • Added a new method, ChatPrompt.update, to update a chat prompt with custom metrics and feedback.

  • Added a new class, Chat, for interacting with DataRobot generative AI chats.

  • Chat.get retrieves a chat object by ID.
  • Chat.list lists all chats available to the user.
  • Chat.create creates a new chat.
  • Chat.update updates the name of a chat.
  • Chat.delete deletes a single chat.

  • Removed the model_package module. Use RegisteredModelVersion instead.

  • Added a new class, UserLimits, with the following methods:
    • UserLimits.get_llm_requests_count to get the count of a user's LLM API requests.
    • UserLimits.get_vector_database_count to get the count of a user's vector databases.
  • Added new methods, Notebook.run and Notebook.download_revision, to the Notebook class. See the documentation for example usage.
  • Added new class NotebookScheduledJob.
  • Added new class NotebookScheduledRun.
  • Added a new method Model.get_incremental_learning_metadata that retrieves incremental learning metadata for a model.
  • Added a new method Model.start_incremental_learning that starts incremental learning for a model.
  • Updated the API endpoint prefix for all GenerativeAI routes to align with the publicly documented routes.

Bugfixes

  • Fixed how the async URL is built in Model.get_or_request_feature_impact.
  • Fixed setting ssl_verify via environment variables.
  • Resolved a problem related to tilde-based paths in the Client's config_path attribute.
  • Changed the force_size default of ImageOptions so that, by default, the same transformations are applied as when image archive datasets are uploaded to DataRobot.

Updated April 26, 2024