Changelog

3.6.0b0

New features

API changes

  • Updated ServerError’s exc_message to include the request ID to help with debugging.
  • Added the method Deployment.get_capabilities to retrieve a list of Capability (datarobot.models.deployment.Capability) objects containing capability details.
  • Advanced options parameters: ‘modelGroupId’, ‘modelRegimeId’, and ‘modelBaselines’ were renamed to ‘seriesId’, ‘forecastDistance’, and ‘forecastOffsets’.
  • Added the parameter use_sample_from_dataset to Project.create_from_dataset. When set, this parameter uses the EDA sample of the dataset to start the project.
  • Added the parameter quick_compute to ShapImpact functions.
  • Added the parameter copy_insights to Playground.create to copy the insights from existing Playground to the new one.
  • Added the parameter llm_test_configuration_ids to LLMBlueprint.register_custom_model to run LLM compliance tests when a blueprint is sent to the custom model workshop.

Enhancements

  • Added standard pagination parameters (e.g. limit, offset) to Deployment.list, allowing you to get deployment data in smaller chunks.
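As a sketch of how the new pagination parameters compose, the loop below pages through any limit/offset endpoint. The stub `fake_list` stands in for a configured `datarobot.Deployment.list`, whose real use requires a client with credentials; the helper itself is illustrative, not part of the SDK.

```python
# Generic pagination sketch for endpoints that accept limit/offset,
# such as the updated Deployment.list. `list_fn` is any callable taking
# limit/offset keyword arguments; with the DataRobot client you would
# pass datarobot.Deployment.list (assuming a configured client).

def iter_pages(list_fn, page_size=20):
    """Yield items from a limit/offset-paginated list function."""
    offset = 0
    while True:
        page = list_fn(limit=page_size, offset=offset)
        if not page:
            break
        yield from page
        offset += page_size

# Stub standing in for Deployment.list, for illustration only:
_items = [f"deployment-{i}" for i in range(45)]

def fake_list(limit, offset):
    return _items[offset:offset + limit]

all_items = list(iter_pages(fake_list, page_size=20))
print(len(all_items))  # 45
```

With a real client, `list(iter_pages(dr.Deployment.list))` would retrieve all deployments in chunks rather than in one request.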

Bugfixes

  • Fixed the field in CustomTaskVersion that controls network policies, renaming it from outgoing_network_policy to outbound_network_policy. Previously, a GET request always resolved this field to None, and a POST or PATCH request with the incorrect field resulted in a 422. Also renamed datarobot.enums.CustomTaskOutgoingNetworkPolicy to datarobot.enums.CustomTaskOutboundNetworkPolicy to reflect the proper field name.
  • Fixed schema for DataSliceSizeInfo, so it now allows an empty list for the messages field.

Deprecation summary

Configuration changes

  • Removed upper bound pin on urllib3 package to allow versions 2.0.2 and above.
  • Upgraded the Pillow library to version 10.3.0. This library is required when installing DataRobot with the “images” extra (pip install datarobot[images]).

Documentation changes

  • The API Reference page has been split into multiple sections for better usability.
  • Fixed docs for Project.refresh to clarify that it does not return a value.
  • Fixed code example for ExternalScores.
  • Added copy button to code examples in ReadTheDocs documentation, for convenience.
  • Removed the outdated ‘examples’ section from the documentation. Please refer to DataRobot’s API Documentation Home for more examples.
  • Removed the duplicate ‘getting started’ section from the documentation.
  • Updated to Sphinx RTD Theme v3.

Experimental changes

3.5.0b0

New features

  • Added support for BYO LLMs using serverless predictions in CustomModelLLMValidation.
  • Added attribute creation_user_name to LLMBlueprint.
  • Added a new class HostedCustomMetricTemplate for hosted custom metric templates: HostedCustomMetricTemplate.get to retrieve a hosted custom metric template, and HostedCustomMetricTemplate.list to list hosted custom metric templates.
  • Added Job.create_from_custom_metric_gallery_template to create a job from a custom metric gallery template.
  • Added a new class HostedCustomMetric for hosted custom metrics: HostedCustomMetric.list to list hosted custom metrics, HostedCustomMetric.update to update a hosted custom metric, HostedCustomMetric.delete to delete a hosted custom metric, HostedCustomMetric.create_from_custom_job to create a hosted custom metric from an existing custom job, and HostedCustomMetric.create_from_template to create a hosted custom metric from a template.
  • Added a new class datarobot.models.deployment.custom_metrics.HostedCustomMetricBlueprint for hosted custom metric blueprints: HostedCustomMetricBlueprint.get to get a hosted custom metric blueprint, HostedCustomMetricBlueprint.create to create a hosted custom metric blueprint, and HostedCustomMetricBlueprint.update to update a hosted custom metric blueprint.
  • Added Job.list_schedules to list job schedules.
  • Added a new class JobSchedule for registry job schedules: JobSchedule.create to create a job schedule, JobSchedule.update to update a job schedule, and JobSchedule.delete to delete a job schedule.
  • Added attribute credential_type to RuntimeParameter.
  • Added a new class EvaluationDatasetConfiguration (datarobot._experimental.models.genai.evaluation_dataset_configuration) for configuring evaluation datasets: EvaluationDatasetConfiguration.get to get an evaluation dataset configuration, EvaluationDatasetConfiguration.list to list the evaluation dataset configurations for a Use Case, EvaluationDatasetConfiguration.create to create an evaluation dataset configuration, EvaluationDatasetConfiguration.update to update an evaluation dataset configuration, and EvaluationDatasetConfiguration.delete to delete an evaluation dataset configuration.
  • Added a new class EvaluationDatasetMetricAggregation (datarobot._experimental.models.genai.evaluation_dataset_metric_aggregation) for metric aggregation results: EvaluationDatasetMetricAggregation.list to get the metric aggregation results, EvaluationDatasetMetricAggregation.create to create the metric aggregation job, and EvaluationDatasetMetricAggregation.delete to delete metric aggregation results.
  • Added a new class SyntheticEvaluationDataset (datarobot._experimental.models.genai.synthetic_evaluation_dataset_generation) for synthetic dataset generation. Use SyntheticEvaluationDataset.create to create a synthetic evaluation dataset.
  • Added a new class SidecarModelMetricValidation (datarobot._experimental.models.genai.sidecar_model_metric) for sidecar model metric validations: SidecarModelMetricValidation.create to create a sidecar model metric validation, SidecarModelMetricValidation.list to list sidecar model metric validations, SidecarModelMetricValidation.get to get a sidecar model metric validation, SidecarModelMetricValidation.revalidate to rerun a sidecar model metric validation, SidecarModelMetricValidation.update to update a sidecar model metric validation, and SidecarModelMetricValidation.delete to delete a sidecar model metric validation.
  • Added experimental support for Chunking Service:
  • Added a new attribute, is_descending_order to:
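A minimal sketch of the hosted custom metric template listing described above, assuming a configured client; the import path follows the HostedCustomMetricBlueprint entry above, and the attribute names printed are assumptions:

```python
# Sketch: list available hosted custom metric templates from the
# gallery. Requires a configured DataRobot client (endpoint + token);
# attribute names on the returned objects are assumptions.
import datarobot as dr
from datarobot.models.deployment.custom_metrics import HostedCustomMetricTemplate

dr.Client()  # reads credentials from the environment or a config file

for template in HostedCustomMetricTemplate.list():
    print(template.id, template.name)  # `name` is an assumption
```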

Bugfixes

Configuration changes

  • Updated black version to 23.1.0.
  • Removed the dependency on the mock package, since unittest.mock is part of the standard library.

Documentation changes

  • Removed incorrect can_share parameters in the Use Case sharing example.
  • Added usage of external_llm_context_size in llm_settings in genai_example.rst.
  • Updated doc string for llm_settings to include attribute external_llm_context_size for external LLMs.
  • Updated genai_example.rst to link to DataRobot doc pages for external vector database and external LLM deployment creation.

API changes

  • Removed the ImportedModel object, since it was the API for the standalone scoring engine (SSE), which is no longer part of DataRobot.
  • Added number_of_clusters parameter to Project.get_model_records to filter models by number of clusters in unsupervised clustering projects.
  • Removed the unsupported NETWORK_EGRESS_POLICY.DR_API_ACCESS value for custom models. This value was used by a feature that never reached general availability and is not supported in the current API.
  • Implemented support for dr-connector-v1 in DataStore (datarobot.models.DataStore) and DataSource (datarobot.models.DataSource).
  • Added a new parameter name to DataStore.list for searching data stores by name.
  • Added a new parameter entity_type to the compute and create methods of the classes ShapMatrix, ShapImpact, and ShapPreview. Insights can be computed for custom models if entity_type="customModel" is passed. See also the SHAP insights overview in the User Guide.
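The new `name` filter on DataStore.list enables a simple lookup; a hedged sketch assuming a configured client, with a placeholder search string:

```python
# Sketch: search data stores by (partial) name using the new `name`
# parameter on DataStore.list. Requires a configured DataRobot client;
# the search string "warehouse" is a placeholder.
import datarobot as dr

dr.Client()  # credentials from the environment or a config file

for store in dr.DataStore.list(name="warehouse"):
    print(store.id, store.canonical_name)
```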

Experimental changes

3.4.1

New features

Enhancements

Bugfixes

  • Updated the validation logic of RelationshipsConfiguration to work with native database connections.

API changes

Deprecation summary

Configuration changes

Documentation changes

Experimental changes

3.4.0

New features

  • Added the following classes for generative AI. Importing these from datarobot._experimental.models.genai is deprecated and will be removed by the release of DataRobot 10.1 and SDK 3.5.
  • Playground to manage generative AI playgrounds.
  • LLMDefinition to get information about supported LLMs.
  • LLMBlueprint to manage LLM blueprints.
  • Chat to manage chats for LLM blueprints.
  • ChatPrompt to submit prompts within a chat.
  • ComparisonChat to manage comparison chats across multiple LLM blueprints within a playground.
  • ComparisonPrompt to submit a prompt to multiple LLM blueprints within a comparison chat.
  • VectorDatabase to create vector databases from datasets in the AI Catalog for retrieval augmented generation with an LLM blueprint.
  • CustomModelVectorDatabaseValidation to validate a deployment for use as a vector database.
  • CustomModelLLMValidation to validate a deployment for use as an LLM.
  • UserLimits to get counts of vector databases and LLM requests for a user.
  • Extended the advanced options available when setting a target to include a new parameter, ‘incrementalLearningEarlyStoppingRounds’ (part of the AdvancedOptions object). This parameter allows you to specify when to stop during incremental learning automation.
  • Added experimental support for Chunking Service:
  • DatasetChunkDefinition for defining how chunks are created from a data source.
  • OriginStorageType
  • Chunk
  • ChunkStorageType
  • ChunkStorage
  • DatasourceDefinition
  • DatasourceAICatalogInfo to define the datasource AI catalog information to create a new dataset chunk definition.
  • DatasourceDataWarehouseInfo to define the datasource data warehouse (Snowflake, BigQuery, etc.) information to create a new dataset chunk definition.
  • RuntimeParameter for retrieving runtime parameters assigned to CustomModelVersion.
  • RuntimeParameterValue to define runtime parameter override value, to be assigned to CustomModelVersion.
  • Added Snowflake key-pair authentication for uploading datasets from Snowflake or creating a project from Snowflake data.
  • Added Project.get_model_records to retrieve models. The method Project.get_models is deprecated and will soon be removed in favor of Project.get_model_records.
  • Extended the advanced options available when setting a target to include a new parameter, ‘chunkDefinitionId’ (part of the AdvancedOptions object). This parameter allows you to specify the chunking definition needed for incremental learning automation.
  • Extended the advanced options available when setting a target to include new Autopilot parameters: ‘incrementalLearningOnlyMode’ and ‘incrementalLearningOnBestModel’ (part of the AdvancedOptions object). These parameters allow you to specify how Autopilot is performed with the chunking service.
  • Added a new method DatetimeModel.request_lift_chart to support Lift Chart calculations for datetime partitioned projects with support of Sliced Insights.
  • Added a new method DatetimeModel.get_lift_chart to support Lift chart retrieval for datetime partitioned projects with support of Sliced Insights.
  • Added a new method DatetimeModel.request_roc_curve to support ROC curve calculation for datetime partitioned projects with support of Sliced Insights.
  • Added a new method DatetimeModel.get_roc_curve to support ROC curve retrieval for datetime partitioned projects with support of Sliced Insights.
  • Updated the method DatetimeModel.request_feature_impact to support use of Sliced Insights.
  • Updated the method DatetimeModel.get_feature_impact to support use of Sliced Insights.
  • Updated the method DatetimeModel.get_or_request_feature_impact to support use of Sliced Insights.
  • Updated the method DatetimeModel.request_feature_effect to support use of Sliced Insights.
  • Updated the method DatetimeModel.get_feature_effect to support use of Sliced Insights.
  • Updated the method DatetimeModel.get_or_request_feature_effect to support use of Sliced Insights.
  • Added a new method FeatureAssociationMatrix.create to support the creation of feature association matrices for featurelists.
  • Introduced a new method Deployment.perform_model_replace as a replacement for Deployment.replace_model.
  • Introduced a new property, model_package, which provides an overview of the currently used model package in datarobot.models.Deployment.
  • Added new parameter prediction_threshold to BatchPredictionJob.score_with_leaderboard_model and BatchPredictionJob.score that automatically assigns the positive class label to any prediction exceeding the threshold.
  • Added two new enum values to datarobot.models.data_slice.DataSlicesOperators, “BETWEEN” and “NOT_BETWEEN”, which are used to allow slicing.
  • Added a new class Challenger for interacting with DataRobot challengers to support the following methods: Challenger.get to retrieve challenger objects by ID. Challenger.list to list all challengers. Challenger.create to create a new challenger. Challenger.update to update a challenger. Challenger.delete to delete a challenger.
  • Added a new method Deployment.get_challenger_replay_settings to retrieve the challenger replay settings of a deployment.
  • Added a new method Deployment.list_challengers to retrieve the challengers of a deployment.
  • Added a new method Deployment.get_champion_model_package to retrieve the champion model package from a deployment.
  • Added a new method Deployment.list_prediction_data_exports to retrieve deployment prediction data exports.
  • Added a new method Deployment.list_actuals_data_exports to retrieve deployment actuals data exports.
  • Added a new method Deployment.list_training_data_exports to retrieve deployment training data exports.
  • Manage deployment health settings with the following methods:
  • Get health settings Deployment.get_health_settings
  • Update health settings Deployment.update_health_settings
  • Get default health settings Deployment.get_default_health_settings
  • Added a new enum value to datarobot.enums._SHARED_TARGET_TYPE to support the Text Generation use case.
  • Added new enum value datarobotServerless to datarobot.enums.PredictionEnvironmentPlatform to support DataRobot Serverless prediction environments.
  • Added new enum value notApplicable to datarobot.enums.PredictionEnvironmentHealthType to support new health status from DataRobot API.
  • Added new enum value to datarobot.enums.TARGET_TYPE and datarobot.enums.CUSTOM_MODEL_TARGET_TYPE to support text generation custom inference models.
  • Updated datarobot.CustomModel to support the creation of text generation custom models.
  • Added a new class CustomMetric for interacting with DataRobot custom metrics to support the following methods: CustomMetric.get to retrieve a custom metric object by ID from a given deployment. CustomMetric.list to list all custom metrics from a given deployment. CustomMetric.create to create a new custom metric for a given deployment. CustomMetric.update to update a custom metric for a given deployment. CustomMetric.delete to delete a custom metric for a given deployment. CustomMetric.unset_baseline to remove baseline for a given custom metric. CustomMetric.submit_values to submit aggregated custom metrics values from code. The provided data should be in the form of a dict or a Pandas DataFrame. CustomMetric.submit_single_value to submit a single custom metric value. CustomMetric.submit_values_from_catalog to submit aggregated custom metrics values from a dataset via the AI Catalog. CustomMetric.get_values_over_time to retrieve values of a custom metric over a time period. CustomMetric.get_summary to retrieve the summary of a custom metric over a time period. CustomMetric.get_values_over_batch to retrieve values of a custom metric over batches. CustomMetric.get_batch_summary to retrieve the summary of a custom metric over batches.
  • Added CustomMetricValuesOverTime to retrieve custom metric over time information.
  • Added CustomMetricSummary to retrieve custom metric over time summary.
  • Added CustomMetricValuesOverBatch to retrieve custom metric over batch information.
  • Added CustomMetricBatchSummary to retrieve custom metric batch summary.
  • Added Job and JobRun to create, read, update, run, and delete jobs in the Registry.
  • Added KeyValue to create, read, update, and delete key values.
  • Added a new class PredictionDataExport for interacting with DataRobot deployment data exports: PredictionDataExport.get to retrieve a prediction data export object by ID from a given deployment, PredictionDataExport.list to list all prediction data exports from a given deployment, PredictionDataExport.create to create a new prediction data export for a given deployment, and PredictionDataExport.fetch_data to retrieve prediction export data as a DataRobot dataset.
  • Added a new class ActualsDataExport for interacting with DataRobot deployment data exports: ActualsDataExport.get to retrieve an actuals data export object by ID from a given deployment, ActualsDataExport.list to list all actuals data exports from a given deployment, ActualsDataExport.create to create a new actuals data export for a given deployment, and ActualsDataExport.fetch_data to retrieve actuals export data as a DataRobot dataset.
  • Added a new class TrainingDataExport for interacting with DataRobot deployment data exports: TrainingDataExport.get to retrieve a training data export object by ID from a given deployment, TrainingDataExport.list to list all training data exports from a given deployment, TrainingDataExport.create to create a new training data export for a given deployment, and TrainingDataExport.fetch_data to retrieve training export data as a DataRobot dataset.
  • Added a new parameter base_environment_version_id to CustomModelVersion.create_clean for overriding the default environment version selection behavior.
  • Added a new parameter base_environment_version_id to CustomModelVersion.create_from_previous for overriding the default environment version selection behavior.
  • Added a new class PromptTrace (datarobot._experimental.models.genai.prompt_trace) for interacting with DataRobot prompt traces: PromptTrace.list to list all prompt traces for a given playground, and PromptTrace.export_to_ai_catalog to export the playground’s prompt traces to the AI Catalog.
  • Added a new class InsightsConfiguration (datarobot._experimental.models.genai.insights_configuration) for describing available and configured insights for a playground: InsightsConfiguration.list to list the insights that are available to be configured.
  • Added a new class Insights (datarobot._experimental.models.genai.insights_configuration) for configuring insights for a playground: Insights.get to get the current insights configuration for a playground, and Insights.create to create or update the insights configuration for a playground.
  • Added a new class CostMetricConfiguration (datarobot._experimental.models.genai.cost_metric_configurations) for describing available and configured cost metrics for a Use Case: CostMetricConfiguration.get to get the cost metric configuration, CostMetricConfiguration.create to create a cost metric configuration, CostMetricConfiguration.update to update the cost metric configuration, and CostMetricConfiguration.delete to delete the cost metric configuration.
  • Added a new class LLMCostConfiguration (datarobot._experimental.models.genai.cost_metric_configurations) for the cost configuration of a specific LLM within a Use Case.
  • Added new classes ShapMatrix, ShapImpact, and ShapPreview to interact with SHAP-based insights. See also the SHAP insights overview in the User Guide.
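As a hedged sketch of the new custom metrics API, the snippet below lists the custom metrics of a deployment. The `deployment_id` keyword and the import path are assumptions inferred from the method descriptions above, and the ID is a placeholder:

```python
# Sketch: list custom metrics attached to a deployment. Requires a
# configured DataRobot client; the deployment ID is a placeholder, and
# the `deployment_id` keyword and import path are assumptions.
import datarobot as dr
from datarobot.models.deployment import CustomMetric  # path is an assumption

dr.Client()  # credentials from the environment or a config file

for metric in CustomMetric.list(deployment_id="5c939e08962d741e34f609f0"):
    print(metric.id, metric.name)
```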

API changes

  • Parameter Overrides: Users can now override most of the previously set configuration values directly through parameters when initializing the Client. Exceptions: The endpoint and token values must be initialized from one source (client params, environment, or config file) and cannot be overridden individually, for security and consistency reasons. The new configuration priority is as follows:
  • Client Params
  • Client config_path param
  • Environment Variables
  • The default YAML config file at ~/.config/datarobot/drconfig.yaml
  • DATAROBOT_API_CONSUMER_TRACKING_ENABLED now always defaults to True.
  • Added Databricks personal access token and service principal (also shared credentials via secure config) authentication for uploading datasets from Databricks or creating a project from Databricks data.
  • Added secure config support for AWS long term credentials.
  • Implemented support for dr-database-v1 in DataStore (datarobot.models.DataStore), DataSource (datarobot.models.DataSource), and DataDriver (datarobot.models.DataDriver). Added enum classes to support the changes.
  • You can retrieve the canonical URI for a Use Case using UseCase.get_uri.
  • You can open a Use Case in a browser using UseCase.open_in_browser.
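The configuration priority above can be sketched as follows. The endpoint and token values are placeholders, and, as noted, the endpoint/token pair must come from a single source:

```python
# Sketch of the Client configuration priority: explicit parameters win,
# then an explicit config file, then environment variables, then the
# default config file. All values shown are placeholders.
import datarobot as dr

# 1. Explicit client parameters (highest priority); endpoint and token
#    must both come from this one source:
client = dr.Client(
    endpoint="https://app.datarobot.com/api/v2",
    token="<your-api-token>",
)

# 2. An explicit config file path:
# client = dr.Client(config_path="~/.config/datarobot/drconfig.yaml")

# 3. Environment variables (DATAROBOT_ENDPOINT, DATAROBOT_API_TOKEN),
# 4. falling back to the default YAML config file at
#    ~/.config/datarobot/drconfig.yaml.
```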

Enhancements

Deprecation summary

Documentation changes

  • Updated genai_example.rst to use the GenAI features and methods most recently introduced in the API client.

Experimental changes

  • Added a new attribute, prediction_timeout, to CustomModelValidation.
  • Added new attributes, feedback_result, metrics, and final_prompt, to ResultMetadata (datarobot._experimental.models.genai.chat_prompt).
  • Added use_case_id to CustomModelValidation.
  • Added llm_blueprints_count and user_name to Playground.
  • Added custom_model_embedding_validations to SupportedEmbeddings (datarobot._experimental.models.genai.vector_database).
  • Added embedding_validation_id and is_separator_regex to VectorDatabase (datarobot._experimental.models.genai.vector_database).
  • Added optional parameters, use_case, name, and model to CustomModelValidation.create.
  • Added a method CustomModelValidation.list, to list custom model validations available to a user with several optional parameters to filter the results.
  • Added a method CustomModelValidation.update, to update a custom model validation.
  • Added an optional parameter, use_case, to LLMDefinition.list, to also include in the returned LLMs the external LLMs available for the specified use_case.
  • Added an optional parameter, playground, to VectorDatabase.list, to list vector databases by playground.
  • Added an optional parameter, comparison_chat, to ComparisonPrompt.list, to list comparison prompts by comparison chat.
  • Added an optional parameter, comparison_chat, to ComparisonPrompt.create, to specify the comparison chat to create the comparison prompt in.
  • Added an optional parameter, feedback_result, to ComparisonPrompt.update, to update a comparison prompt with feedback.
  • Added an optional parameter, is_starred, to LLMBlueprint.update to update the LLM blueprint’s starred status.
  • Added an optional parameter, is_starred, to LLMBlueprint.list to filter the returned LLM blueprints to those matching is_starred.
  • Added a new enum, PromptType, to identify the LLMBlueprint’s prompting type.
  • Added an optional parameter, prompt_type, to LLMBlueprint.create to specify the LLM blueprint’s prompting type. This can be set with PromptType.
  • Added an optional parameter, prompt_type, to LLMBlueprint.update to specify the updated LLM blueprint’s prompting type. This can be set with PromptType.
  • Added a new class, ComparisonChat, for interacting with DataRobot generative AI comparison chats. ComparisonChat.get retrieves a comparison chat object by ID. ComparisonChat.list lists all comparison chats available to the user. ComparisonChat.create creates a new comparison chat. ComparisonChat.update updates the name of a comparison chat. ComparisonChat.delete deletes a single comparison chat.
  • Added optional parameters, playground and chat to ChatPrompt.list, to list chat prompts by playground and chat.
  • Added optional parameter, chat to ChatPrompt.create, to specify the chat to create the chat prompt in.
  • Added a new method, ChatPrompt.update, to update a chat prompt with custom metrics and feedback.
  • Added a new class, Chat, for interacting with DataRobot generative AI chats. Chat.get retrieves a chat object by ID. Chat.list lists all chats available to the user. Chat.create creates a new chat. Chat.update updates the name of a chat. Chat.delete deletes a single chat.
  • Removed the model_package module. Use RegisteredModelVersion instead.
  • Added a new class, UserLimits: UserLimits.get_llm_requests_count to get the count of a user’s LLM API requests, and UserLimits.get_vector_database_count to get the count of a user’s vector databases.
  • Added new methods to the class Notebook which includes Notebook.run and Notebook.download_revision. See the documentation for example usage.
  • Added new class NotebookScheduledJob.
  • Added new class NotebookScheduledRun.
  • Added a new method Model.get_incremental_learning_metadata that retrieves incremental learning metadata for a model.
  • Added a new method Model.start_incremental_learning that starts incremental learning for a model.
  • Updated the API endpoint prefix for all GenerativeAI routes to align with the publicly documented routes.

Bugfixes

  • Fixed how the async URL is built in Model.get_or_request_feature_impact.
  • Fixed setting ssl_verify via environment variables.
  • Resolved a problem related to tilde-based paths in the Client’s ‘config_path’ attribute.
  • Changed the force_size default of ImageOptions so that, by default, the same transformations are applied as when image archive datasets are uploaded to DataRobot.

3.3.0

New features

Bugfixes

  • The payload property subset was renamed to source in Model.request_feature_effect.
  • Fixed an issue where Context.trace_context was not being set from environment variables or DR config files.
  • Project.refresh no longer sets Project.advanced_options to a dictionary.
  • Fixed Dataset.modify to clarify behavior of when to preserve or clear categories.
  • Fixed an issue with enums in f-strings resulting in the enum class and property being printed instead of the enum property’s value in Python 3.11 environments.

Deprecation summary

  • Project.refresh will no longer set Project.advanced_options to a dictionary after version 3.5 is released. All interactions with Project.advanced_options should go through the AdvancedOptions class.

Experimental changes

3.2.0

New features

Currently, the default value for enable_api_consumer_tracking is True.

  • Added the method Deployment.get_predictions_over_time to retrieve deployment predictions over time data.
  • Added a new class FairnessScoresOverTime to retrieve fairness over time information.
  • Added a new method Deployment.get_fairness_scores_over_time to retrieve fairness scores over time for a deployment.
  • Added a new use_gpu parameter to the method Project.analyze_and_model to set whether the project should allow usage of GPUs.
  • Added a new use_gpu parameter to the class Project with information on whether the project allows usage of GPUs.
  • Added a new class TrainingData for retrieving TrainingData assigned to a CustomModelVersion.
  • Added a new class HoldoutData for retrieving HoldoutData assigned to a CustomModelVersion.
  • Added the ability to retrieve the model and blueprint JSON using the following methods: Model.get_model_blueprint_json and Blueprint.get_json.
  • Added Credential.update, which allows you to update existing credential resources.
  • Added a new optional parameter trace_context to datarobot.Client to provide additional information on the DataRobot code being run. This parameter defaults to None.
  • Updated methods in Model to support use of Sliced Insights: Model.get_feature_effect, Model.request_feature_effect, Model.get_or_request_feature_effect, Model.get_lift_chart, Model.get_all_lift_charts, Model.get_residuals_chart, Model.get_all_residuals_charts, Model.request_lift_chart, Model.request_residuals_chart, Model.get_roc_curve, Model.get_feature_impact, Model.request_feature_impact, and Model.get_or_request_feature_impact.
  • Added support for SharingRole to DataStore.share.
  • Added a new method for retrieving SharingRole information: DataStore.get_shared_roles.
  • Added a new method for calculating a sliced ROC curve: Model.request_roc_curve.
  • Added a new class DataSlice to support the following methods: DataSlice.list to retrieve all data slices in a project, DataSlice.create to create a new data slice, DataSlice.delete to delete the data slice calling this method, DataSlice.request_size to submit a request to calculate a data slice’s size on a source, DataSlice.get_size_info to get the data slice’s info when applied to a source, and DataSlice.get to retrieve a specific data slice.
  • Added a new class DataSliceSizeInfo to define the result of a data slice applied to a source.
  • Added a new method for retrieving all available feature impacts for a model: Model.get_all_feature_impacts.
  • Added a new method for StatusCheckJob to wait for and return the completed object once it is generated: datarobot.models.StatusCheckJob.get_result_when_complete().
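A minimal sketch of the DataSlice methods mentioned above, assuming a configured client; the argument form passed to DataSlice.list is an assumption, and the project ID is a placeholder:

```python
# Sketch: list the data slices defined in a project. Requires a
# configured DataRobot client; the `project` argument form is an
# assumption and the project ID is a placeholder.
import datarobot as dr
from datarobot.models.data_slice import DataSlice

dr.Client()  # credentials from the environment or a config file
project = dr.Project.get("5c939e08962d741e34f609f0")  # placeholder ID

for data_slice in DataSlice.list(project):
    print(data_slice.id, data_slice.name)
```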

Enhancements

The parameters added to both APIs are:
  • training_dataset_id
  • partition_column
  • holdout_dataset_id
  • keep_training_holdout_data
  • max_wait

  • Extended CustomInferenceModel.create and CustomInferenceModel.update with the parameter is_training_data_for_versions_permanently_enabled.
  • Added the value DR_API_ACCESS to the NETWORK_EGRESS_POLICY enum.
  • Added a new parameter low_memory to Dataset.get_as_dataframe to allow a low-memory mode for larger datasets.
  • Added two new parameters to Project.list for paginating long project lists: offset and limit.
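Offset/limit pagination like the new Project.list parameters can be drained with a simple loop. A sketch of that pattern under the assumption of any callable accepting offset and limit (fetch_page below is a local stub, not the real client):

```python
def iterate_all(fetch_page, page_size=100):
    """Yield every item from an offset/limit-paginated listing."""
    offset = 0
    while True:
        page = fetch_page(offset=offset, limit=page_size)
        yield from page
        if len(page) < page_size:  # a short page means we reached the end
            return
        offset += page_size

# Stub standing in for a paginated listing call such as Project.list(offset=..., limit=...).
_items = [f"project-{i}" for i in range(250)]
def fetch_page(offset, limit):
    return _items[offset:offset + limit]

all_items = list(iterate_all(fetch_page, page_size=100))
print(len(all_items))  # 250
```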

Bugfixes

  • Fixed incompatibilities with Pandas 2.0 in DatetimePartitioning.to_dataframe.
  • Fixed a crash when using non-“latin-1” characters in Panda’s DataFrame used as prediction data in BatchPredictionJob.score.
  • Fixed an issue where failed authentication when invoking datarobot.client.Client() raised a misleading error about client-server compatibility.
  • Fixed incompatibilities with Pandas 2.0 in AccuracyOverTime.get_as_dataframe. The method will now throw a ValueError if an empty list is passed to the parameter metrics.

API changes

  • Added parameter unsupervised_type to the class DatetimePartitioning.
  • The sliced insight API endpoint GET: api/v2/insights/<insight_name>/ returns a paginated response. This means that it returns an empty response if no insights data is found, unlike GET: api/v2/projects/<pid>/models/<lid>/<insight_name>/, which returns 404 NOT FOUND in this case. To maintain backwards compatibility, all methods that retrieve insights data raise a 404 NOT FOUND error if the insights API returns an empty response.
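The backwards-compatibility behavior described above can be pictured as a thin shim: when the paginated endpoint returns no data, the client raises the same 404 the older per-model endpoint would have. A hedged sketch of that idea (ClientError here is an illustrative stand-in, not the client's real exception class):

```python
class ClientError(Exception):
    """Illustrative stand-in for an HTTP error raised by an API client."""
    def __init__(self, message, status_code):
        super().__init__(message)
        self.status_code = status_code

def get_insight_or_404(paginated_response):
    """Mimic the legacy endpoint: an empty paginated 'data' list becomes a 404."""
    data = paginated_response.get("data", [])
    if not data:
        raise ClientError("404 NOT FOUND: no insight data", status_code=404)
    return data

try:
    get_insight_or_404({"data": [], "count": 0})
except ClientError as exc:
    print(exc.status_code)  # 404
```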

Configuration changes

  • Pinned the dependency on package urllib3 to versions below 2.0.0.

Deprecation summary

  • Deprecated the parameter user_agent_suffix in datarobot.Client; it will be removed in v3.4. Please use trace_context instead.

Documentation changes

  • Fixed in-line documentation of DataRobotClientConfig.
  • Fixed documentation around client configuration from environment variables or config file.

Experimental changes

3.1.1

Configuration changes

  • Removed the dependency on package contextlib2, as it is no longer needed on Python 3.7+.
  • Updated typing-extensions to allow versions from 4.3.0 up to, but not including, 5.0.0.

3.1.0

Enhancements

Bugfixes

  • Added a format key to Batch Prediction intake and output settings for S3, GCP, and Azure.

API changes

Deprecation summary

  • Deprecated method Project.create_from_hdfs.
  • Deprecated method DatetimePartitioning.generate.
  • Deprecated parameter in_use from ImageAugmentationList.create as DataRobot will take care of it automatically.
  • Deprecated property Deployment.capabilities from Deployment.
  • ImageAugmentationSample.compute was removed in v3.1. You can get the same information with the method ImageAugmentationList.compute_samples.
  • The sample_id parameter was removed from ImageAugmentationSample.list. Please use auglist_id instead.

Documentation changes

  • Updated the documentation to clarify that setting use_backtest_start_end_format of DatetimePartitioning.to_specification to True mirrors the behavior of the Web UI.
  • Updated the documentation to clarify that setting use_start_end_format of Backtest.to_specification to True mirrors the behavior of the Web UI.

3.0.3

Bugfixes

  • Fixed an issue affecting backwards compatibility in datarobot.models.DatetimeModel, where an unexpected keyword from the DataRobot API would break class deserialization.

3.0.2

Bugfixes

  • Restored Model.get_leaderboard_ui_permalink and Model.open_model_browser. These methods were accidentally removed instead of deprecated.
  • Fixed an issue with ipykernel < 6.0.0, which does not persist contextvars across cells.

Deprecation summary

  • Deprecated method Model.get_leaderboard_ui_permalink. Please use Model.get_uri instead.
  • Deprecated method Model.open_model_browser. Please use Model.open_in_browser instead.

3.0.1

Bugfixes

  • Added typing-extensions as a required dependency for the DataRobot Python SDK.

3.0.0

New features

Enhancements

Bugfixes

  • Dataset.list no longer throws errors when listing datasets with no owner.
  • Fixed an issue with the creation of BatchPredictionJobDefinitions containing a schedule.
  • Fixed error handling in datarobot.helpers.partitioning_methods.get_class.
  • Fixed issue with portions of the payload not using camelCasing in Project.upload_dataset_from_catalog.
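The camelCasing fix above comes down to converting snake_case payload keys consistently before they are sent to the API. A minimal illustrative converter (camelize_payload is our name for it, not a client function):

```python
def to_camel_case(snake: str) -> str:
    """Convert a snake_case key to camelCase."""
    head, *rest = snake.split("_")
    return head + "".join(word.capitalize() for word in rest)

def camelize_payload(payload: dict) -> dict:
    """Recursively rewrite every dict key to camelCase."""
    return {
        to_camel_case(key): camelize_payload(value) if isinstance(value, dict) else value
        for key, value in payload.items()
    }

print(camelize_payload({"credential_id": "abc", "jdbc_settings": {"fetch_size": 100}}))
# {'credentialId': 'abc', 'jdbcSettings': {'fetchSize': 100}}
```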

API changes

  • The Python client now outputs a DataRobotProjectDeprecationWarning when you attempt to access certain resources (projects, models, deployments, etc.) that are deprecated or disabled as a result of the DataRobot platform’s migration to Python 3.
  • The Python client now raises a TypeError when you try to retrieve a labelwise ROC on a binary model or a binary ROC on a multilabel model.
  • The method Dataset.create_from_data_source now raises InvalidUsageError if username and password are not passed as a pair together.

Deprecation summary

Configuration changes

  • Added a context manager client_configuration that can be used to change the connection configuration temporarily, for use in asynchronous or multithreaded code.
  • Upgraded the Pillow library to version 9.2.0. Users installing DataRobot with the “images” extra (pip install datarobot[images]) should note that this is a required library.

Experimental changes

2.29.0b0

New features

Enhancements

See the endpoint_url parameter.
  • Added a guide on :ref:working with binary data <binary_data>.
  • Added multithreading support to the binary data helper functions.
  • Aligned the binary data helpers' image defaults with the application's image preprocessing.
  • Added the following accuracy metrics, retrievable for a deployment: TPR, PPV, F1, and MCC. See :ref:Deployment monitoring <deployment_monitoring>.

Bugfixes

  • The holdout start date, end date, and duration are no longer included in the datetime partitioning payload when holdout is disabled.
  • Removed ICE Plot capabilities from Feature Fit.
  • Handle an undefined calendar_name in CalendarFile.create_calendar_from_dataset.
  • Raise a ValueError for submitted calendar names that are not strings.

API changes

  • The version field was removed from the ImportedModel object.

Deprecation summary

  • Reason Codes objects, deprecated in version 2.13, have been removed. Please use Prediction Explanations instead.

Configuration changes

  • The upper version constraint on pandas has been removed.

Documentation changes

  • Fixed a minor typo in the example for Dataset.create_from_data_source.
  • Update the documentation to suggest that feature_derivation_window_end of datarobot.DatetimePartitioningSpecification class should be a negative or zero.

2.28.0

New features

  • Added a new parameter upload_read_timeout to BatchPredictionJob.score and BatchPredictionJob.score_to_file to indicate how many seconds to wait for the intake dataset to upload to the server. The default value is 600 seconds.
  • Added the ability to turn off supervised feature reduction for Time Series projects. Option use_supervised_feature_reduction can be set in AdvancedOptions.
  • Allowed maximum_memory to be set for custom task versions. It sets the limit to which a custom task's prediction container memory can grow.
  • Added method datarobot.models.Project.get_multiseries_names() to the project service, which returns all distinct entries in the multiseries column.
  • Added a new segmentation_task_id parameter to datarobot.models.Project.set_target() that allows starting a project as a Segmented Modeling project.
  • Added a new property is_segmented to datarobot.models.Project that indicates whether the project is a regular or a Segmented Modeling project.
  • Added method datarobot.models.Project.restart_segment() to the project service, which allows restarting a single segment that hasn't reached the modeling phase.
  • Added the ability to interact with Combined Models in Segmented Modeling projects. Available with new class: datarobot.CombinedModel.

Functionality:
  • datarobot.CombinedModel.get()
  • datarobot.CombinedModel.get_segments_info()
  • datarobot.CombinedModel.get_segments_as_dataframe()
  • datarobot.CombinedModel.get_segments_as_csv()
  • datarobot.CombinedModel.set_segment_champion()

  • Added the ability to create and retrieve segmentation tasks used in Segmented Modeling projects. Available with the new class datarobot.SegmentationTask.

Functionality:
  • datarobot.SegmentationTask.create()
  • datarobot.SegmentationTask.list()
  • datarobot.SegmentationTask.get()

  • Added a new class datarobot.SegmentInfo that provides information on all segments of a Segmented Modeling project, i.e. the segment project ID, model counts, and Autopilot status.

Functionality:
  • datarobot.SegmentInfo.list()

  • Added new methods to the base APIObject to assist with dictionary and JSON serialization of child objects.

Functionality:
  • APIObject.to_dict
  • APIObject.to_json

  • Added new methods to ImageAugmentationList for interacting with image augmentation samples.

Functionality:
  • ImageAugmentationList.compute_samples
  • ImageAugmentationList.retrieve_samples

  • Added the ability to set a prediction threshold when creating a deployment from a learning model.
  • Added support for the governance, owners, predictionEnvironment, and fairnessHealth fields when querying for a Deployment object.
  • Added helper methods for working with files, images, and documents. The methods support conversion of file contents into base64 string representations; the image methods also provide image resize and transformation support.

Functionality:
  • get_encoded_file_contents_from_urls
  • get_encoded_file_contents_from_paths
  • get_encoded_image_contents_from_paths
  • get_encoded_image_contents_from_urls
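At their core, the file-encoding helpers listed above wrap base64 encoding of file bytes. A stdlib-only sketch of the idea (the function name mirrors the changelog entry, but this local implementation is illustrative, not the client's):

```python
import base64
import os
import tempfile
from pathlib import Path

def get_encoded_file_contents_from_paths(paths):
    """Return the base64 string representation of each file's bytes."""
    return [base64.b64encode(Path(p).read_bytes()).decode("ascii") for p in paths]

# Demonstrate with a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as fh:
    fh.write(b"hello")
    tmp_path = fh.name
encoded = get_encoded_file_contents_from_paths([tmp_path])
os.unlink(tmp_path)
print(encoded[0])  # aGVsbG8=
```

The real image helpers additionally resize and transform images before encoding, which this sketch omits.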

Enhancements

Bugfixes

API changes

  • Users can include ICE plot data in the response when requesting Feature Effects or Feature Fit. The extended methods are:
    • Model.get_feature_effect
    • Model.get_feature_fit
    • DatetimeModel.get_feature_effect
    • DatetimeModel.get_feature_fit

Deprecation summary

  • The attrs library was removed from the library dependencies.
  • ImageAugmentationSample.compute was marked as deprecated and will be removed in v2.30. You can get the same information with the newly introduced method ImageAugmentationList.compute_samples.
  • Deprecated using ImageAugmentationSample.list with sample_id.
  • Deprecated the scaleout parameters for projects and models, including scaleout_modeling_mode, scaleout_max_train_pct, and scaleout_max_train_rows.

Configuration changes

  • The pandas upper version constraint was updated to include version 1.3.5.

Documentation changes

  • Fixed “from datarobot.enums” import in Unsupervised Clustering example provided in docs.

2.27.0

New features

Functionality:
  • datarobot.DataEngineQueryGenerator.create()
  • datarobot.DataEngineQueryGenerator.get()
  • datarobot.DataEngineQueryGenerator.create_dataset()

See the :ref:time series data prep documentation <time_series_data_prep> for more information.
  • Added the ability to upload a prediction dataset into a project from the AI Catalog: Project.upload_dataset_from_catalog.
  • Added the ability to specify the number of training rows to use in SHAP-based Feature Impact computation. Extended method:

Functionality:
  • datarobot.models.restore_discarded_features.DiscardedFeaturesInfo.retrieve()
  • datarobot.models.restore_discarded_features.DiscardedFeaturesInfo.restore()

  • Added the ability to control class mapping aggregation in multiclass projects via ClassMappingAggregationSettings, passed as a parameter to Project.set_target.
  • Added support for :ref:unsupervised clustering projects <unsupervised_clustering>.
  • Added the ability to compute and retrieve Feature Effects for a multiclass model using the datarobot.models.Model.request_feature_effects_multiclass(), datarobot.models.Model.get_feature_effects_multiclass(), or datarobot.models.Model.get_or_request_feature_effects_multiclass() methods. For datetime models, use datarobot.models.DatetimeModel.request_feature_effects_multiclass(), datarobot.models.DatetimeModel.get_feature_effects_multiclass(), or datarobot.models.DatetimeModel.get_or_request_feature_effects_multiclass() with backtest_index specified.
  • Added the ability to get and update challenger model settings for the deployment class datarobot.models.Deployment.

Functionality:
  • datarobot.models.Deployment.get_challenger_models_settings()
  • datarobot.models.Deployment.update_challenger_models_settings()

  • Added the ability to get and update segment analysis settings for the deployment class datarobot.models.Deployment.

Functionality:
  • datarobot.models.Deployment.get_segment_analysis_settings()
  • datarobot.models.Deployment.update_segment_analysis_settings()

  • Added the ability to get and update predictions-by-forecast-date settings for the deployment class datarobot.models.Deployment.

Functionality:
  • datarobot.models.Deployment.get_predictions_by_forecast_date_settings()
  • datarobot.models.Deployment.update_predictions_by_forecast_date_settings()

  • Added the ability to specify multiple feature derivation windows when creating a Relationships Configuration using RelationshipsConfiguration.create.
  • Added the ability to manipulate a legacy conversion for a custom inference model using the class CustomModelVersionConversion.

Functionality:
  • CustomModelVersionConversion.run_conversion
  • CustomModelVersionConversion.stop_conversion
  • CustomModelVersionConversion.get
  • CustomModelVersionConversion.get_latest
  • CustomModelVersionConversion.list

Enhancements

Bugfixes

API changes

Deprecation summary

  • Model.get_all_labelwise_roc_curves has been removed. You can get the same information with multiple calls of Model.get_labelwise_roc_curves, one per data source.
  • Model.get_all_multilabel_lift_charts has been removed. You can get the same information with multiple calls of Model.get_multilabel_lift_charts, one per data source.

Documentation changes

  • This release introduces a new documentation organization. The organization has been modified to better reflect the end-to-end modeling workflow. The new “Tutorials” section has 5 major topics that outline the major components of modeling: Data, Modeling, Predictions, MLOps, and Administration.
  • The Getting Started workflow is now hosted at DataRobot’s API Documentation Home.
  • Added an example of how to set up optimized datetime partitioning for time series projects.

2.26.0

New features

  • Added the ability to use external baseline predictions for time series projects. The external dataset can be validated using datarobot.models.Project.validate_external_time_series_baseline(). An option can be set in AdvancedOptions to scale DataRobot models' accuracy performance using the external dataset's accuracy performance. See the :ref:external baseline predictions documentation <external_baseline_predictions> for more information.
  • Added the ability to generate exponentially weighted moving average features for time series projects. The option can be set in AdvancedOptions and controls the alpha parameter used in the exponentially weighted moving average operation.
  • Added the ability to request a specific model be prepared for deployment using Project.start_prepare_model_for_deployment.
  • Added a new class: datarobot.CustomTask. This class is a custom task that you can use as part (or all) of your blue print for training models. It needs datarobot.CustomTaskVersion before it can properly be used.

Functionality:
  • Create, copy, update, or delete:
    • datarobot.CustomTask.create()
    • datarobot.CustomTask.copy()
    • datarobot.CustomTask.update()
    • datarobot.CustomTask.delete()
  • List, get, and refresh current tasks:
    • datarobot.CustomTask.get()
    • datarobot.CustomTask.list()
    • datarobot.CustomTask.refresh()
  • Download the latest datarobot.CustomTaskVersion of the datarobot.CustomTask:
    • datarobot.CustomTask.download_latest_version()

  • Added a new class: datarobot.CustomTaskVersion. This class is for management of specific versions of a custom task.

Functionality:
  • Create new custom task versions:
    • datarobot.CustomTaskVersion.create_clean()
    • datarobot.CustomTaskVersion.create_from_previous()
  • List, get, and refresh currently available versions:
    • datarobot.CustomTaskVersion.list()
    • datarobot.CustomTaskVersion.get()
    • datarobot.CustomTaskVersion.refresh()
  • datarobot.CustomTaskVersion.download() downloads a tarball of the files used to create the custom task.
  • datarobot.CustomTaskVersion.update() updates the metadata for a custom task.

  • Added the ability to compute batch predictions for an in-memory DataFrame using BatchPredictionJob.score.
  • Added the ability to specify feature discovery settings when creating a Relationships Configuration using RelationshipsConfiguration.create.

Enhancements

Bugfixes

API changes

  • Updated Project.start to use AUTOPILOT_MODE.QUICK when the autopilot_on parameter is set to True. This brings it in line with Project.set_target.
  • Updated Project.start_autopilot to accept the following new GA parameters that are already in the public API: consider_blenders_in_recommendation and run_leakage_removed_feature_list.

Deprecation summary

Configuration changes

  • Now requires dependency on package scikit-learn rather than sklearn. Note: This dependency is only used in example code. See this scikit-learn issue for more information.
  • Now permits dependency on package attrs to be less than version 21. This fixes compatibility with apache-airflow.
  • Allowed setting an Authorization: <type> <token> header for OAuth2 Bearer tokens.

Documentation changes

  • Update the documentation with respect to the permission that controls AI Catalog dataset snapshot behavior.

2.25.0

New features

Functionality:

Plots can be calculated using a common method:

Metadata for plots can be retrieved using the following methods:

Plots can be retrieved using the following methods:

Preview plots can be retrieved using the following methods:

Enhancements

Bugfixes

  • Removed deprecation warnings when used with the latest versions of urllib3.
  • FeatureAssociationMatrix.get now uses the correct query parameter name when featurelist_id is specified.
  • Handle scalar values in shapBaseValue when converting a predictions response to a data frame.
  • Ensure that if a configured endpoint ends in a trailing slash, the resulting full URL does not contain double slashes in the path.
  • Model.request_frozen_datetime_model now implements correct validation of the input parameter training_start_date.
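The trailing-slash fix above is essentially careful URL joining. A small sketch of the normalization involved (build_url is our name for it, not a client function):

```python
def build_url(endpoint: str, path: str) -> str:
    """Join an API endpoint and a resource path without producing double slashes."""
    return endpoint.rstrip("/") + "/" + path.lstrip("/")

# Works whether or not the configured endpoint (or path) carries its own slashes.
print(build_url("https://app.datarobot.com/api/v2/", "/projects/"))
# https://app.datarobot.com/api/v2/projects/
```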

API changes

2.24.0

New features

Enhancements

Bugfixes

API changes

Deprecation summary

  • class ComplianceDocumentation will be deprecated in v2.24 and will be removed entirely in v2.27. Use AutomatedDocument instead. To start off, see the :ref:Automated Documentation overview<automated_documentation_overview> for details.

Documentation changes

2.23.0

New features

  • Calendars for time series projects can now be automatically generated by providing a country code to the method CalendarFile.create_calendar_from_country_code. A list of allowed country codes can be retrieved using CalendarFile.get_allowed_country_codes. For more information, see the :ref:calendar documentation <preloaded_calendar_files>.
  • Added a calculate_all_series parameter to DatetimeModel.compute_series_accuracy. This option allows users to compute series accuracy for all available series at once; by default, it is computed for the first 1000 series only.
  • Added the ability to specify the sampling method when setting the target of an OTV project. The option can be set in AdvancedOptions and changes the way training data is defined in Autopilot steps.
  • Added support for custom inference model k8s resource management. This feature enables users to control k8s resource allocation for their executed model in the k8s cluster. It adds the following new parameters: network_egress_policy, desired_memory, maximum_memory, and replicas to the following classes: datarobot.CustomInferenceModel, datarobot.CustomModelVersion, and datarobot.CustomModelTest.
  • Added support for multiclass custom inference and training models. This enables users to create classification custom models with more than two class labels. The datarobot.CustomInferenceModel class can now use datarobot.TARGET_TYPE.MULTICLASS for its target_type parameter. Class labels for inference models can be set or updated using either a file or a list of labels.
  • Support for listing all the secondary dataset configurations for a given project: SecondaryDatasetConfigurations.list.
  • Added support for unstructured custom inference models. The datarobot.CustomInferenceModel class can now use datarobot.TARGET_TYPE.UNSTRUCTURED for its target_type parameter. The target_name parameter is optional for the UNSTRUCTURED target type.
  • All per-class lift chart data is now available for multiclass models using Model.get_multiclass_lift_chart.
  • AUTOPILOT_MODE.COMPREHENSIVE, a new mode, has been added to Project.set_target.
  • Add support for anomaly detection custom inference models. The datarobot.CustomInferenceModel class can now use datarobot.TARGET_TYPE.ANOMALY for its target_type parameter. target_name parameter is optional for ANOMALY target type.
  • Support for updating and retrieving the secondary dataset configuration for a Feature Discovery deployment: Deployment.update_secondary_dataset_config and Deployment.get_secondary_dataset_config.
  • Added support for starting and retrieving Feature Impact information for datarobot.CustomModelVersion.
  • The search for interaction features and supervised feature reduction for Feature Discovery projects can now be specified in AdvancedOptions.
  • Feature Discovery projects can now be created using the Project.start method by providing relationships_configuration_id.
  • Actions applied to input data during automated feature discovery can now be retrieved using FeatureLineage.get. The corresponding feature lineage ID is available as a new datarobot.models.Feature field, feature_lineage_id.
  • Lift charts and ROC curves are now calculated for backtests 2+ in time series and OTV models. The data can be retrieved for individual backtests using Model.get_lift_chart and Model.get_roc_curve.
  • The following methods now accept a new argument, credential_data, the credentials used to authenticate with the database, which can be used instead of user/password or a credential ID:
    • Dataset.create_from_data_source
    • Dataset.create_project
    • Project.create_from_dataset
  • Added support for DataRobot Connectors; datarobot.Connector provides a simple implementation to interface with connectors.

Enhancements

  • Running Autopilot on Leakage Removed feature list can now be specified in AdvancedOptions. By default, Autopilot will always run on Informative Features - Leakage Removed feature list if it exists. If the parameter run_leakage_removed_feature_list is set to False, then Autopilot will run on Informative Features or available custom feature list.
  • The methods Project.upload_dataset and Project.upload_dataset_from_data_source support a new optional parameter, secondary_datasets_config_id, for Feature Discovery projects.

Bugfixes

API changes

Deprecation summary

Documentation changes

2.22.1

New features

Enhancements

Bugfixes

  • Handle null values in predictionExplanationMetadata["shapRemainingTotal"] while converting a predictions response to a data frame.
  • Handle null values in customModel["latestVersion"].
  • Removed an extra status column from BatchPredictionJob, as it caused issues with newer versions of Trafaret validation.
  • Make predicted_vs_actual optional in Feature Effects data because a feature may have insufficient qualified samples.
  • Make jdbc_url optional in Data Store data because some data stores will not have it.
  • The method Project.get_datetime_models now correctly returns all DatetimeModel objects for the project, instead of just the first 100.
  • Fixed a documentation error related to snake_case vs camelCase in the JDBC settings payload.
  • Make trafaret validator for datasets use a syntax that works properly with a wider range of trafaret versions.
  • Handle extra keys in CustomModelTests and CustomModelVersions.
  • ImageEmbedding and ImageActivationMap now support regression projects.

API changes

  • The default value for the mode parameter in Project.set_target has been changed from AUTOPILOT_MODE.FULL_AUTO to AUTOPILOT_MODE.QUICK.

Documentation changes

  • Added links to classes with duration parameters such as validation_duration and holdout_duration to provide duration string examples to users.
  • The :ref:models documentation <models> has been revised to include section on how to train a new model and how to run cross-validation or backtesting for a model.
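Duration parameters such as validation_duration and holdout_duration take ISO 8601-style duration strings, for example P120D for 120 days. The client provides its own helper for building these; the local sketch below simply illustrates the format:

```python
def duration_string(years=0, months=0, days=0, hours=0, minutes=0, seconds=0):
    """Build an ISO 8601 duration string such as 'P120D' or 'P1Y6M'."""
    date_part = "".join(
        f"{value}{unit}" for value, unit in ((years, "Y"), (months, "M"), (days, "D")) if value
    )
    time_part = "".join(
        f"{value}{unit}" for value, unit in ((hours, "H"), (minutes, "M"), (seconds, "S")) if value
    )
    if not date_part and not time_part:
        return "P0D"  # canonical zero duration
    return "P" + date_part + ("T" + time_part if time_part else "")

print(duration_string(days=120))           # P120D
print(duration_string(years=1, months=6))  # P1Y6M
```

Note that the "T" separator is required before any time components, which is why hour/minute/second values get their own section.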

2.21.0

New features

Enhancements

Bugfixes

  • Fixed an issue with input validation in the Batch Prediction module.
  • Fixed an issue where parent_model_id was not visible for all frozen models.
  • Fixed Batch Prediction jobs that used output types other than local_file failing when using .wait_for_completion().
  • Fixed a race condition in the Batch Prediction file scoring logic.

API changes

  • Three new fields were added to the Dataset object. This reflects the updated fields in the public API routes at api/v2/datasets/. The added fields are:
    • processing_state: The current ingestion process state of the dataset.
    • row_count: The number of rows in the dataset.
    • size: The size of the dataset as a CSV in bytes.

Deprecation summary

  • datarobot.enums.VARIABLE_TYPE_TRANSFORM.CATEGORICAL is deprecated for the following methods and will be removed in v2.22:
    • Project.batch_features_type_transform
    • Project.create_type_transform_feature

2.20.0

New features

Functionality:

  • Create a Dataset by uploading from a file, URL or in-memory datasource.
  • Get Datasets or elements of Dataset with:
  • Modify, delete or un-delete a Dataset:
    • Dataset.modify Changes the name and categories of the Dataset
    • Dataset.delete soft deletes a Dataset.
    • Dataset.un_delete un-deletes the Dataset. You cannot retrieve the IDs of deleted Datasets, so if you want to un-delete a Dataset, you need to store its ID before deletion.
  • You can also create a Project using a Dataset with:
  • It is now possible to create an alternative configuration for the secondary dataset, which can be used during prediction.
  • SecondaryDatasetConfigurations.create allows creating a secondary dataset configuration.
  • You can now filter the deployments returned by the Deployment.list command. You can do this by passing an instance of the DeploymentListFilters class to the filters keyword argument. The currently supported filters are:
  • role
  • service_health
  • model_health
  • accuracy_health
  • execution_environment_type
  • materiality
  • A new workflow is available for making predictions in time series projects. To that end, PredictionDataset objects now contain the following new fields:
  • forecast_point_range: The start and end date of the range of dates available for use as the forecast point, detected based on the uploaded prediction dataset
  • data_start_date: A datestring representing the minimum primary date of the prediction dataset
  • data_end_date: A datestring representing the maximum primary date of the prediction dataset
  • max_forecast_date: A datestring representing the maximum forecast date of this prediction dataset

Additionally, users no longer need to specify a forecast_point or predictions_start_date and predictions_end_date when uploading datasets for predictions in time series projects. More information can be found in the :ref:time series predictions <new_pred_ux> documentation.
  • Per-class lift chart data is now available for multiclass models using Model.get_multiclass_lift_chart.
  • Unsupervised projects can now be created using the Project.start and Project.set_target methods by providing unsupervised_mode=True, provided that the user has access to unsupervised machine learning functionality. Contact support for more information.
  • A new boolean attribute unsupervised_mode was added to datarobot.DatetimePartitioningSpecification. When it is set to True, datetime partitioning for unsupervised time series projects will be constructed for nowcasting: forecast_window_start=forecast_window_end=0.
  • Users can now configure the start and end of the training partition, as well as the end of the validation partition, for backtests in a datetime-partitioned project. More information and example usage can be found in the :ref:backtesting documentation <backtest_configuration>.
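The new PredictionDataset date fields are all derived from the prediction dataset's primary date column. A rough sketch of the relationship, assuming a simplified rule in which every observed date is a candidate forecast point (DataRobot's actual detection logic is more involved):

```python
from datetime import date

def summarize_prediction_dates(primary_dates):
    """Compute data_start_date / data_end_date and a naive forecast-point range."""
    start, end = min(primary_dates), max(primary_dates)
    return {
        "data_start_date": start.isoformat(),
        "data_end_date": end.isoformat(),
        # Illustrative simplification: treat the observed span as the usable range.
        "forecast_point_range": (start.isoformat(), end.isoformat()),
    }

info = summarize_prediction_dates([date(2020, 1, 1), date(2020, 3, 15), date(2020, 2, 7)])
print(info["data_end_date"])  # 2020-03-15
```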

Enhancements

  • Updated the user agent header to show which Python version is in use.
  • Model.get_frozen_child_models can be used to retrieve models that are frozen from a given model.
  • Added datarobot.enums.TS_BLENDER_METHOD to make it clearer which blender methods are allowed for use in time series projects.

Bugfixes

  • Fixed an issue where uploaded CSVs would lose quotes during serialization, causing problems when columns containing line terminators were loaded into a dataframe.
  • Project.get_association_featurelists now uses the correct endpoint name; the old one will continue to work.
  • PredictionServer now supports the on-premise format of the API response.

2.19.0

New features

  • Projects can be cloned using Project.clone_project
  • Calendars used in time series projects now support having series-specific events, for instance if a holiday only affects some stores. This can be controlled using a new argument of the CalendarFile.create method. If multiseries ID columns are not provided, the calendar is considered single-series and all events are applied to all series.
  • We have expanded prediction intervals availability to the following use-cases:

More details on prediction intervals can be found in the :ref:prediction intervals documentation <prediction_intervals>.
  • Allowed pairwise interaction groups can now be specified in AdvancedOptions. They will be used in GAM models during training.
  • New deployment features:

  • Update the label and description of a deployment using Deployment.update.
  • :ref:Association ID setting<deployment_association_id> can be retrieved and updated.
  • Regression deployments now support :ref:prediction warnings<deployment_prediction_warning>.
  • For multiclass models, it is now possible to get feature impact for each individual target class using Model.get_multiclass_feature_impact.
  • Added support for new :ref:Batch Prediction API <batch_predictions>.
  • It is now possible to create and retrieve basic, oauth and s3 credentials with Credential.
  • It’s now possible to get feature association statuses for featurelists using Project.get_association_featurelists
  • You can also pass a specific featurelist_id into Project.get_associations

Enhancements

  • Added documentation to Project.get_metrics to detail the new ascending field that indicates how a metric should be sorted.
  • Retraining of a model is processed asynchronously and returns a ModelJob immediately.
  • Blender models can be retrained on a different set of data or a different feature list.
  • Word cloud ngrams now have a variable field representing the source of the ngram.
  • The method WordCloud.ngrams_per_class can be used to split ngrams for better usability in multiclass projects.
  • The method Project.set_target supports new optional parameters featureEngineeringGraphs and credentials.
  • The methods Project.upload_dataset and Project.upload_dataset_from_data_source support a new optional parameter, credentials.
  • Series accuracy retrieval methods (DatetimeModel.get_series_accuracy_as_dataframe and DatetimeModel.download_series_accuracy_as_csv) for multiseries time series projects now support additional parameters for specifying what data to retrieve, including:
    • metric: Which metric to retrieve scores for
    • multiseries_value: Only returns series with a matching multiseries ID
    • order_by: An attribute by which to sort the results
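
Because retraining now returns a ModelJob immediately rather than blocking, callers poll the job until it finishes. The loop below is a minimal sketch of that pattern, where check_status is a hypothetical stand-in for a real job-status call (the client itself provides helpers such as ModelJob.get_result_when_complete):

```python
import time

def wait_for_job(check_status, interval=0.01, timeout=1.0):
    """Poll an asynchronous job until it reports completion.

    `check_status` is a hypothetical callable returning a simplified job
    status string -- a stand-in for polling a ModelJob via the API.
    """
    waited = 0.0
    while waited < timeout:
        status = check_status()
        if status == "COMPLETED":
            return status
        time.sleep(interval)
        waited += interval
    raise TimeoutError("job did not finish within the timeout")

# Simulate a job that completes on the third poll.
polls = iter(["queue", "inprogress", "COMPLETED"])
result = wait_for_job(lambda: next(polls))
```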

Bugfixes

API changes

  • The datarobot package is no longer a namespace package.
  • datarobot.enums.BLENDER_METHOD.FORECAST_DISTANCE is removed (deprecated in 2.18.0).

Documentation changes

  • Updated :ref:Residuals charts <residuals_chart> documentation to reflect that the data rows include row numbers from the source dataset for projects created in DataRobot 5.3 and newer.

2.18.0

New features

  • :ref:Residuals charts <residuals_chart> can now be retrieved for non-time-aware regression models.
  • :ref:Deployment monitoring <deployment_monitoring> can now be used to retrieve service stats, service health, accuracy info, permissions, and feature lists for deployments.
  • :ref:Time series <time_series> projects now support the Average by Forecast Distance blender, configured with more than one Forecast Distance. The blender blends the selected models, selecting the best three models based on the backtesting score for each Forecast Distance and averaging their predictions. The new blender method FORECAST_DISTANCE_AVG has been added to datarobot.enums.BLENDER_METHOD.
  • Deployment.submit_actuals can now be used to submit data about actual results from a deployed model, which can be used to calculate accuracy metrics.
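
The selection-and-averaging rule of the new FORECAST_DISTANCE_AVG blender (pick the best three models per Forecast Distance by backtesting score, then average their predictions) can be sketched in plain Python. The data shapes below are illustrative only; the real blending runs server-side:

```python
def forecast_distance_avg(scores_by_distance, preds_by_model):
    """Average predictions of the three best-scoring models per distance.

    scores_by_distance: {distance: {model_id: backtest_score}} (lower is better)
    preds_by_model:     {model_id: {distance: prediction}}
    Both structures are illustrative stand-ins for server-side data.
    """
    blended = {}
    for distance, scores in scores_by_distance.items():
        top3 = sorted(scores, key=scores.get)[:3]
        preds = [preds_by_model[m][distance] for m in top3]
        blended[distance] = sum(preds) / len(preds)
    return blended

scores = {1: {"a": 0.10, "b": 0.20, "c": 0.30, "d": 0.90}}
preds = {m: {1: p} for m, p in [("a", 1.0), ("b", 2.0), ("c", 3.0), ("d", 9.0)]}
blended = forecast_distance_avg(scores, preds)  # model "d" is excluded
```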

Enhancements

  • Monotonic constraints are now supported for OTV projects. To that end, the parameters monotonic_increasing_featurelist_id and monotonic_decreasing_featurelist_id can be specified in calls to Model.train_datetime or Project.train_datetime.
  • When retrieving information about features, information about summarized categorical variables is now available in a new keySummary.
  • For Word Clouds in multiclass projects, values of the target class for corresponding word or ngram can now be passed using the new class parameter.
  • Listing deployments using Deployment.list now supports sorting and searching the results using the new order_by and search parameters.
  • You can now get the model associated with a model job via the model attribute on the model job object.
  • The Blueprint class can now retrieve the recommended_featurelist_id, which indicates which feature list is recommended for this blueprint. If the field is not present, then there is no recommended feature list for this blueprint.
  • The Model class can now be used to retrieve the model_number.
  • The method Model.get_supported_capabilities now has an extra field supportsCodeGeneration to explain whether the model supports code generation.
  • Calls to Project.start and Project.upload_dataset now support uploading data via S3 URI and pathlib.Path objects.
  • Errors upon connecting to DataRobot are now clearer when an incorrect API Token is used.
  • The datarobot package is now a namespace package.

Deprecation summary

  • datarobot.enums.BLENDER_METHOD.FORECAST_DISTANCE is deprecated and will be removed in 2.19. Use FORECAST_DISTANCE_ENET instead.

Documentation changes

  • Various typo and wording issues have been addressed.
  • A new notebook showing regression-specific features has been added to the examples_index.
  • Documentation for :ref:Access lists <sharing> has been added.

2.17.0

New features

Enhancements

  • number_of_do_not_derive_features has been added to the datarobot.DatetimePartitioning class to specify the number of features that are marked as excluded from derivation.
  • Users with PyYAML>=5.1 will no longer receive a warning when using the datarobot package.
  • It is now possible to use files with unicode names for creating projects and prediction jobs.
  • Users can now embed DataRobot-generated content in a ComplianceDocTemplate using keyword tags. :ref:See here <automated_documentation_overview> for more details.
  • The field calendar_name has been added to datarobot.DatetimePartitioning to display the name of the calendar used for a project.
  • :ref:Prediction intervals <prediction_intervals> are now supported for start-end retrained models in a time series project.
  • Previously, all backtests had to be run before :ref:prediction intervals <prediction_intervals> for a time series project could be requested with predictions. Now, backtests will be computed automatically if needed when prediction intervals are requested.

Bugfixes

  • An issue affecting time series project creation for irregularly spaced dates has been fixed.
  • ComplianceDocTemplate now supports empty text blocks in user sections.
  • An issue when using Predictions.get to retrieve predictions metadata has been fixed.

Documentation changes

  • An overview on working with class ComplianceDocumentation and ComplianceDocTemplate has been created. :ref:See here <automated_documentation_overview> for more details.

2.16.0

New features

Enhancements

  • Information on the effective feature derivation window is now available for :ref:time series projects <time_series> to specify the full span of historical data required at prediction time. It may be longer than the feature derivation window of the project depending on the differencing settings used. Additionally, more of the project partitioning settings are now available on the DatetimeModel class. The new attributes are:
    • effective_feature_derivation_window_start
    • effective_feature_derivation_window_end
    • forecast_window_start
    • forecast_window_end
    • windows_basis_unit
  • Prediction metadata is now included in the return of Predictions.get.

Documentation changes

  • Various typo and wording issues have been addressed.
  • The example data that was meant to accompany the Time Series examples has been added to the zip file of the download in the examples_index.

2.15.1

Enhancements

  • CalendarFile.get_access_list has been added to the CalendarFile class to return a list of users with access to a calendar file.
  • A role attribute has been added to the CalendarFile class to indicate the access level a current user has to a calendar file. For more information on the specific access levels, see the :ref:sharing <sharing> documentation.

Bugfixes

  • Previously, attempting to retrieve the calendar_id of a project without a set target would result in an error. This has been fixed to return None instead.

2.15.0

New features

Enhancements

  • The dataframe returned from datarobot.PredictionExplanations.get_all_as_dataframe() will now have each class label column class_X refer to the same class from row to row.
  • The client is now more robust to networking issues by default. It will retry on more errors and respects Retry-After headers in HTTP 413, 429, and 503 responses.
  • Added the Forecast Distance blender for time series projects configured with more than one Forecast Distance. It blends the selected models, creating a separate linear model for each Forecast Distance.
  • Project can now be :ref:shared <sharing> with other users.
  • Project.upload_dataset and Project.upload_dataset_from_data_source will return a PredictionDataset with data_quality_warnings if potential problems exist around the uploaded dataset.
  • relax_known_in_advance_features_check has been added to Project.upload_dataset and Project.upload_dataset_from_data_source to allow missing values from the known in advance features in the forecast window at prediction time.
  • cross_series_group_by_columns has been added to datarobot.DatetimePartitioning to allow users to indicate how to further split series into related groups.
  • Information retrieval for the ROC Curve has been extended to include fraction_predicted_as_positive, fraction_predicted_as_negative, lift_positive, and lift_negative.
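
The improved retry behavior can be approximated as follows. This is a simplified, illustrative stand-in for what the client actually delegates to urllib3's Retry machinery; the function and parameter names here are not part of the client's API:

```python
def next_retry_delay(status_code, headers, attempt, backoff=0.5):
    """Compute how long to wait before retrying a request.

    Honors a Retry-After header on 413, 429, and 503 responses, and
    falls back to exponential backoff otherwise.
    """
    if status_code in (413, 429, 503) and "Retry-After" in headers:
        return float(headers["Retry-After"])
    return backoff * (2 ** attempt)

d1 = next_retry_delay(429, {"Retry-After": "3"}, attempt=0)  # server-directed wait
d2 = next_retry_delay(500, {}, attempt=2)                    # exponential backoff
```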

Bugfixes

  • Fixed an issue where the client would not be usable if it could not verify that it was compatible with the configured server.

API changes

Configuration changes

  • The dependency on the requests package now requires version 2.21 or later.
  • The dependency on the urllib3 package now requires version 1.24 or later.

Documentation changes

  • The advanced model insights notebook has been extended to include information on visualizing cumulative gains and lift charts.

2.14.2

Bugfixes

  • Fixed an issue where searches of the HTML documentation would sometimes hang indefinitely.

Documentation changes

  • Python 3 is now the primary interpreter used to build the docs (this does not affect the ability to use the package with Python 2).

2.14.1

Documentation changes

  • Documentation for the Model Deployment interface has been removed after the corresponding interface was removed in 2.13.0.

2.14.0

New features

  • The new method Model.get_supported_capabilities retrieves a summary of the capabilities supported by a particular model, such as whether it is eligible for Prime and whether it has word cloud data available.
  • New class for working with the model compliance documentation feature of DataRobot: ComplianceDocumentation.
  • New class for working with compliance documentation templates: ComplianceDocTemplate.
  • New class FeatureHistogram has been added to retrieve feature histograms for a requested maximum bin count.
  • Time series projects now support binary classification targets.
  • Cross series features can now be created within time series multiseries projects using the use_cross_series_features and aggregation_type attributes of the datarobot.DatetimePartitioningSpecification. See the :ref:Time Series <time_series> documentation for more info.

Enhancements

  • Client instantiation now checks the endpoint configuration and provides more informative error messages. It also automatically corrects HTTP to HTTPS if the server responds with a redirect to HTTPS.
  • Project.upload_dataset and Project.create now accept an optional parameter of dataset_filename to specify a file name for the dataset. This is ignored for url and file path sources.
  • New optional parameter fallback_to_parent_insights has been added to Model.get_lift_chart, Model.get_all_lift_charts, Model.get_confusion_chart, Model.get_all_confusion_charts, Model.get_roc_curve, and Model.get_all_roc_curves. When True, a frozen model with missing insights will attempt to retrieve the missing insight data from its parent model.
  • The new number_of_known_in_advance_features attribute has been added to the datarobot.DatetimePartitioning class. The attribute specifies the number of features that are marked as known in advance.
  • Project.set_worker_count can now update the worker count on a project to the maximum number available to the user.
  • The :ref:Recommended Models API <recommended_models> can now be used to retrieve model recommendations for datetime partitioned projects.
  • Time series projects can now accept feature derivation and forecast window intervals in terms of a number of rows rather than a fixed time unit. DatetimePartitioningSpecification and Project.set_target support the new optional parameter windowsBasisUnit, either 'ROW' or the detected time unit.
  • Time series projects can now accept feature derivation intervals, forecast windows, forecast points, and prediction start/end dates in milliseconds.
  • DataSources and DataStores can now be :ref:shared <sharing> with other users.
  • Training predictions for datetime partitioned projects now support the new data subset dr.enums.DATA_SUBSET.ALL_BACKTESTS for requesting the predictions for all backtest validation folds.

API changes

  • The model recommendation type “Recommended” (deprecated in version 2.13.0) has been removed.

Documentation changes

  • Example notebooks have been updated:
    • Notebooks now work in Python 2 and Python 3.
    • A notebook illustrating time series capability has been added.
    • The financial data example has been replaced with an updated introductory example.
  • To supplement the embedded Python notebooks in both the PDF and HTML docs bundles, the notebook files and supporting data can now be downloaded from the HTML docs bundle.
  • Fixed a minor typo in the code sample for get_or_request_feature_impact.

2.13.0

New features

Enhancements

  • Python 3.7 is now supported.
  • Feature impact now returns not only the impact score for the features but also whether they were detected to be redundant with other high-impact features.
  • A new is_blocked attribute has been added to the Job class, specifying whether a job is blocked from execution because one or more dependencies are not yet met.
  • The Featurelist object now has new attributes reporting its creation time, whether it was created by a user or by DataRobot, and the number of models using the featurelist, as well as a new description field.
  • Featurelists can now be renamed and have their descriptions updated with Featurelist.update and ModelingFeaturelist.update.
  • Featurelists can now be deleted with Featurelist.delete and ModelingFeaturelist.delete.
  • ModelRecommendation.get now accepts an optional parameter of type datarobot.enums.RECOMMENDED_MODEL_TYPE which can be used to get a specific kind of recommendation.
  • Previously computed predictions can now be listed and retrieved with the Predictions class, without requiring a reference to the original PredictJob.

Bugfixes

  • The Model Deployment interface which was previously visible in the client has been removed to allow the interface to mature, although the raw API is available as a “beta” API without full backwards compatibility support.

API changes

  • Added support for retrieving the Pareto Front of a Eureqa model. See ParetoFront.
  • A new recommendation type “Recommended for Deployment” has been added to ModelRecommendation, which is now returned as the default recommended model when available. See :ref:model_recommendation.

Deprecation summary

  • The feature previously referred to as “Reason Codes” has been renamed to “Prediction Explanations”, to provide increased clarity and accessibility. The old ReasonCodes interface has been deprecated and replaced with PredictionExplanations.
  • The recommendation type “Recommended” is deprecated and will no longer be returned in v2.14 of the API.

Documentation changes

  • Added a new documentation section :ref:model_recommendation.
  • Time series projects support multiseries as well as single series data. They are now documented in the :ref:Time Series Projects <time_series> documentation.

2.12.0

New features

  • Some models now have Missing Value reports allowing users with access to uncensored blueprints to retrieve a detailed breakdown of how numeric imputation and categorical converter tasks handled missing values. See the :ref:documentation <missing_values_report> for more information on the report.

2.11.0

New features

  • The new ModelRecommendation class can be used to retrieve the recommended models for a project.
  • A new helper method cross_validate was added to the Model class. This method can be used to request a model's cross-validation score.
  • Training a model with monotonic constraints is now supported. Training with monotonic constraints allows users to force models to learn monotonic relationships with respect to some features and the target. This helps users create accurate models that comply with regulations (e.g. insurance, banking). Currently, only certain blueprints (e.g. xgboost) support this feature, and it is only supported for regression and binary classification projects.
  • DataRobot now supports “Database Connectivity”, allowing databases to be used as the source of data for projects and prediction datasets. The feature works on top of the JDBC standard, so a variety of databases conforming to that standard are available; a list of databases with tested support for DataRobot is available in the user guide in the web application. See :ref:Database Connectivity <database_connectivity_overview> for details.
  • Added a new feature to retrieve feature logs for time series projects. Check datarobot.DatetimePartitioning.feature_log_list() and datarobot.DatetimePartitioning.feature_log_retrieve() for details.

API changes

  • New attributes supporting monotonic constraints have been added to the AdvancedOptions, Project, Model, and Blueprint classes. See :ref:monotonic constraints<monotonic_constraints> for more information on how to configure monotonic constraints.
  • New parameters predictions_start_date and predictions_end_date added to Project.upload_dataset to support bulk predictions upload for time series projects.

Deprecation summary

Configuration changes

  • Retry settings compatible with those offered by urllib3’s Retry interface can now be configured. By default, we will now retry connection errors that prevented requests from arriving at the server.

Documentation changes

  • “Advanced Model Insights” example has been updated to properly handle bin weights when rebinning.

2.9.0

New features

  • New ModelDeployment class can be used to track status and health of models deployed for predictions.

Enhancements

  • The DataRobot API now supports creating three new blender types: Random Forest, TensorFlow, and LightGBM.
  • Multiclass projects now support blender creation for these three new blender types, as well as Average and ENET blenders.
  • Models can be trained by requesting a particular row count using the new training_row_count argument with Project.train, Model.train and Model.request_frozen_model in non-datetime partitioned projects, as an alternative to the previous option of specifying a desired percentage of the project dataset. Specifying model size by row count is recommended when the float precision of sample_pct could be problematic, e.g. when training on a small percentage of the dataset or when training up to partition boundaries.
  • New attributes max_train_rows, scaleout_max_train_pct, and scaleout_max_train_rows have been added to Project. max_train_rows specifies the row-count equivalent of the existing max_train_pct. The scaleout fields can be used to see how far scaleout models can be trained on projects, which for projects taking advantage of scalable ingest may exceed the limits on the data available to non-scaleout blueprints.
  • Individual features can now be marked as a priori or not a priori using the new feature_settings attribute when setting the target or specifying datetime partitioning settings on time series projects. Any features not specified in the feature_settings parameter will be assigned according to the default_to_a_priori value.
  • Three new options have been made available in the datarobot.DatetimePartitioningSpecification class to fine-tune how time-series projects derive modeling features. treat_as_exponential can control whether data is analyzed as an exponential trend and transformations like log-transform are applied. differencing_method can control which differencing method to use for stationary data. periodicities can be used to specify periodicities occurring within the data. All are optional and defaults will be chosen automatically if they are unspecified.
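
To see why specifying training_row_count can be preferable to sample_pct near partition boundaries, consider the round-trip through a float percentage (the numbers below are illustrative, not from DataRobot):

```python
def sample_pct_to_rows(sample_pct, total_rows):
    """Convert a sample percentage back to a row count.

    Illustrates the float-precision pitfall: a percentage with limited
    precision can land one row off the intended boundary.
    """
    return int(total_rows * sample_pct / 100.0)

total_rows = 123_457
target_rows = 98_766                                   # e.g. exactly up to a partition boundary
sample_pct = round(target_rows / total_rows * 100, 2)  # precision lost in rounding
recovered = sample_pct_to_rows(sample_pct, total_rows)
```

Requesting training_row_count=98_766 directly sidesteps this conversion entirely.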

API changes

  • training_row_count is now available on non-datetime models as well as “rowCount”-based datetime models. It reports the number of rows used to train the model (equivalent to sample_pct).
  • Features retrieved from Feature.get now include target_leakage.

2.8.1

Bugfixes

  • The documented default connect_timeout will now be correctly set for all configuration mechanisms, so that requests that fail to reach the DataRobot server in a reasonable amount of time will now error instead of hanging indefinitely. If you observe that you have started seeing ConnectTimeout errors, please configure your connect_timeout to a larger value.
  • The version of the trafaret library this package depends on is now pinned to trafaret>=0.7,<1.1, since versions outside that range are known to be incompatible.

2.8.0

New features

  • The DataRobot API supports the creation, training, and predicting of multiclass classification projects. By default, DataRobot handles a dataset with a numeric target column as regression. If your numeric target has fewer than 11 distinct values, you can override this behavior to instead create a multiclass classification project from the data. To do so, use the set_target function with target_type='Multiclass'. If DataRobot recognizes your data as categorical and it has fewer than 11 classes, using multiclass will create a project that classifies which label the data belongs to.
  • The DataRobot API now includes Rating Tables. A rating table is an exportable CSV representation of a model. Users can influence predictions by modifying the table and creating a new model from the modified version. See the :ref:documentation<rating_table> for more information on how to use rating tables.
  • scaleout_modeling_mode has been added to the AdvancedOptions class used when setting a project target. It can be used to control whether scaleout models appear in the autopilot and/or available blueprints. Scaleout models are only supported in the Hadoop environment with the corresponding user permission set.
  • A new premium add-on product, Time Series, is now available. New projects can be created as time series projects which automatically derive features from past data and forecast the future. See the :ref:time series documentation<time_series> for more information.
  • The Feature object now returns the EDA summary statistics (i.e., mean, median, minimum, maximum, and standard deviation) for features where these are available (e.g., numeric, date, time, currency, and length features). These summary statistics are formatted in the same format as the data they summarize.
  • The DataRobot API now supports the Training Predictions workflow. Training predictions are made by a model for a subset of data from the original dataset. Users can start a job that makes those predictions and then retrieve them. See the :ref:documentation<predictions> for more information on how to use training predictions.
  • DataRobot now supports retrieving a :ref:model blueprint chart<model_blueprint_chart> and a :ref:model blueprint docs<model_blueprint_doc>.
  • With the introduction of Multiclass Classification projects, DataRobot needed a better way to explain the performance of a multiclass model so we created a new Confusion Chart. The API now supports retrieving and interacting with confusion charts.
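
The multiclass override rule described above can be sketched as a small helper. suggested_target_type is a hypothetical name for illustration; the real decision is made server-side when target_type='Multiclass' is passed to set_target:

```python
def suggested_target_type(values, max_classes=11):
    """Suggest a target type for a numeric target column.

    Mirrors the rule described above: numeric targets default to
    regression, but with fewer than 11 distinct values the project can
    be overridden to multiclass. Illustrative only.
    """
    if len(set(values)) < max_classes:
        return "Multiclass"
    return "Regression"

kind = suggested_target_type([0, 1, 2, 2, 1, 0])  # three distinct values
```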

Enhancements

  • DatetimePartitioningSpecification now includes the optional disable_holdout flag that can be used to disable the holdout fold when creating a project with datetime partitioning.
  • When retrieving reason codes on a project using an exposure column, predictions that are adjusted for exposure can be retrieved.
  • File URIs can now be used as sourcedata when creating a project or uploading a prediction dataset. The file URI must refer to an allowed location on the server, which is configured as described in the user guide documentation.
  • The advanced options available when setting the target have been extended to include the new parameter ‘events_count’ as a part of the AdvancedOptions object to allow specifying the events count column. See the user guide documentation in the webapp for more information on events count.
  • PredictJob.get_predictions now returns the predicted probability for each class in the dataframe.
  • PredictJob.get_predictions now accepts a prefix parameter to prefix the class names returned in the predictions dataframe.

API changes

  • Added the target_type parameter to set_target() and start(); it can be used to override the project default.

2.7.2

Documentation changes

  • Updated link to the publicly hosted documentation.

2.7.1

Documentation changes

  • Online documentation hosting has migrated from PythonHosted to Read The Docs. Minor code changes have been made to support this.

2.7.0

New features

  • Lift chart data for models can be retrieved using the Model.get_lift_chart and Model.get_all_lift_charts methods.
  • ROC curve data for models in classification projects can be retrieved using the Model.get_roc_curve and Model.get_all_roc_curves methods.
  • Semi-automatic autopilot mode is removed.
  • Word cloud data for text processing models can be retrieved using Model.get_word_cloud method.
  • A scoring code JAR file can be downloaded for models supporting code generation.

Enhancements

  • A __repr__ method has been added to the PredictionDataset class to improve readability when using the client interactively.
  • Model.get_parameters now includes an additional key in the derived features it includes, showing the coefficients for individual stages of multistage models (e.g. Frequency-Severity models).
  • When training a DatetimeModel on a window of data, a time_window_sample_pct can be specified to take a uniform random sample of the training data instead of using all data within the window.
  • Installing the DataRobot package now offers an “Extra Requirements” option that installs all of the dependencies needed to run the example notebooks.
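
The effect of time_window_sample_pct can be illustrated with a uniform random sample over the rows in a window; the sampling itself is actually performed server-side by DataRobot, so this helper is purely illustrative:

```python
import random

def sample_window(rows, sample_pct, seed=0):
    """Keep a uniform random fraction of training rows within a window.

    Illustrates time_window_sample_pct: rather than training on all
    rows in the window, only sample_pct percent of them are used.
    """
    k = int(len(rows) * sample_pct / 100)
    return sorted(random.Random(seed).sample(rows, k))

window_rows = list(range(100))   # stand-in for rows inside the training window
sampled = sample_window(window_rows, sample_pct=20)
```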

Documentation changes

  • A new example notebook describing how to visualize some of the newly available model insights including lift charts, ROC curves, and word clouds has been added to the examples section.
  • A new section for Common Issues has been added to Getting Started to help debug issues related to client installation and usage.

2.6.1

Bugfixes

  • Fixed a bug with Model.get_parameters raising an exception on some valid parameter values.

Documentation changes

  • Fixed sorting order in Feature Impact example code snippet.

2.6.0

New features

  • A new partitioning method (datetime partitioning) has been added. The recommended workflow is to preview the partitioning by creating a DatetimePartitioningSpecification and passing it into DatetimePartitioning.generate, inspect the results, adjust the specification and regenerate as needed for the specific project dataset, and then set the target by passing the final DatetimePartitioningSpecification object to the partitioning_method parameter of Project.set_target.
  • When interacting with datetime partitioned projects, DatetimeModel can be used to access more information specific to models in datetime partitioned projects. See :ref:the documentation<datetime_modeling_workflow> for more information on differences in the modeling workflow for datetime partitioned projects.
  • The advanced options available when setting the target have been extended to include the new parameters ‘offset’ and ‘exposure’ (part of the AdvancedOptions object) to allow specifying offset and exposure columns to apply to predictions generated by models within the project. See the user guide documentation in the webapp for more information on offset and exposure columns.
  • Blueprints can now be retrieved directly by project_id and blueprint_id via Blueprint.get.
  • Blueprint charts can now be retrieved directly by project_id and blueprint_id via BlueprintChart.get. If you already have an instance of Blueprint you can retrieve its chart using Blueprint.get_chart.
  • Model parameters can now be retrieved using ModelParameters.get. If you already have an instance of Model you can retrieve its parameters using Model.get_parameters.
  • Blueprint documentation can now be retrieved using Blueprint.get_documents. It will contain information about the task, its parameters and (when available) links and references to additional sources.
  • The DataRobot API now includes Reason Codes. You can now compute reason codes for prediction datasets. You are able to specify thresholds on which rows to compute reason codes for to speed up computation by skipping rows based on the predictions they generate. See the reason codes :ref:documentation<reason_codes> for more information.
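
The thresholding idea behind reason-code computation (skip rows whose predictions are unremarkable) can be sketched as follows; the function and parameter names here are illustrative, not the client's actual keyword arguments:

```python
def rows_needing_reason_codes(predictions, threshold_low=0.2, threshold_high=0.8):
    """Select indices of rows whose predictions fall outside the thresholds.

    Illustrates how threshold settings let reason-code computation skip
    rows based on the predictions they generate, speeding up the job.
    """
    return [i for i, p in enumerate(predictions)
            if p <= threshold_low or p >= threshold_high]

# Only the confident predictions (rows 0 and 2) get reason codes.
selected = rows_needing_reason_codes([0.05, 0.5, 0.95, 0.3])
```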

Enhancements

  • A new parameter has been added to the AdvancedOptions used with Project.set_target. By specifying accuracyOptimizedMb=True when creating AdvancedOptions, longer-running models that may have a high accuracy will be included in the autopilot and made available to run manually.
  • A new option for Project.create_type_transform_feature has been added which explicitly truncates data when casting numerical data as categorical data.
  • Added 2 new blenders for projects that use MAD or Weighted MAD as a metric. The MAE blender uses BFGS optimization to find linear weights for the blender that minimize mean absolute error (compared to the GLM blender, which finds linear weights that minimize RMSE), and the MAEL1 blender uses BFGS optimization to find linear weights that minimize MAE + a L1 penalty on the coefficients (compared to the ENET blender, which minimizes RMSE + a combination of the L1 and L2 penalty on the coefficients).

Bugfixes

  • Fixed a bug (affecting Python 2 only) with printing any model (including frozen and prime models) whose model_type is not ASCII.
  • FrozenModels were unable to correctly use methods inherited from Model. This has been fixed.
  • When calling get_result for a Job, ModelJob, or PredictJob that has errored, AsyncProcessUnsuccessfulError will now be raised instead of JobNotFinished, consistently with the behavior of get_result_when_complete.

Deprecation summary

  • Support for the experimental Recommender Problems projects has been removed. Any code relying on RecommenderSettings or the recommender_settings argument of Project.set_target and Project.start will error.
  • Project.update, deprecated in v2.2.32, has been removed in favor of specific updates: rename, unlock_holdout, set_worker_count.

Documentation changes

  • The link to Configuration from the Quickstart page has been fixed.

2.5.1

Bugfixes

  • Fixed a bug (affecting Python 2 only) with printing blueprints whose names are not ASCII.
  • Fixed an issue where the weights column (for weighted projects) did not appear in the advanced_options of a Project.

2.5.0

New features

  • Methods to work with blender models have been added. Use Project.blend method to create new blenders, Project.get_blenders to get the list of existing blenders and BlenderModel.get to retrieve a model with blender-specific information.
  • Projects created via the API can now use smart downsampling when setting the target by passing smart_downsampled and majority_downsampling_rate into the AdvancedOptions object used with Project.set_target. The smart sampling options used with an existing project will be available as part of Project.advanced_options.
  • Support for frozen models, which use tuning parameters from a parent model for more efficient training, has been added. Use Model.request_frozen_model to create a new frozen model, Project.get_frozen_models to get the list of existing frozen models and FrozenModel.get to retrieve a particular frozen model.

Enhancements

  • The inferred date format (e.g. “%Y-%m-%d %H:%M:%S”) is now included in the Feature object. For non-date features, it will be None.
  • When specifying the API endpoint in the configuration, the client will now behave correctly for endpoints with and without trailing slashes.
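
The inferred date format reported on the Feature object can be thought of as the first candidate format that parses every sampled value. The sketch below (with an illustrative candidate list; the real inference happens server-side) mimics that idea, returning None for non-date features as the client does:

```python
from datetime import datetime

# Candidate formats are illustrative, not DataRobot's actual list.
CANDIDATE_FORMATS = ["%Y-%m-%d %H:%M:%S", "%Y-%m-%d", "%m/%d/%Y"]

def infer_date_format(sample_values):
    """Return the first candidate format that parses every sample value,
    or None when no format matches (i.e. a non-date feature)."""
    for fmt in CANDIDATE_FORMATS:
        try:
            for value in sample_values:
                datetime.strptime(value, fmt)
            return fmt
        except ValueError:
            continue
    return None

fmt = infer_date_format(["2016-01-31", "2016-02-29"])
```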

2.4.0

New features

  • The premium add-on product DataRobot Prime has been added. You can now approximate a model on the leaderboard and download executable code for it. See documentation for further details, or talk to your account representative if the feature is not available on your account.
  • (Only relevant for on-premise users with a Standalone Scoring cluster.) Methods (request_transferable_export and download_export) have been added to the Model class for exporting models (which will only work if model export is turned on). There is a new class ImportedModel for managing imported models on a Standalone Scoring cluster.
  • It is now possible to create projects from a WebHDFS, PostgreSQL, Oracle or MySQL data source. For more information see the documentation for the relevant Project classmethods: create_from_hdfs, create_from_postgresql, create_from_oracle and create_from_mysql.
  • Job.wait_for_completion, which waits for a job to complete without returning anything, has been added.
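For Standalone Scoring users, the new export methods and Job.wait_for_completion combine into a short workflow. A sketch under stated assumptions: whether request_transferable_export returns a Job object may differ by client version, so check your version's signature before relying on this shape.

```python
def export_model(model, filepath):
    # Export a model for transfer to a Standalone Scoring cluster (only
    # works if model export is enabled on the server).  `model` is a
    # datarobot.Model; this sketch assumes the export request returns a
    # job that the new Job.wait_for_completion can block on.
    job = model.request_transferable_export()
    job.wait_for_completion()        # returns nothing; raises on failure/timeout
    model.download_export(filepath)  # writes the transferable model to disk
    return filepath
```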

Enhancements

  • The client will now check the API version offered by the server specified in configuration, and give a warning if the client version is newer than the server version. The DataRobot server is always backwards compatible with old clients, but new clients may have functionality that is not implemented on older server versions. This issue mainly affects users with on-premise deployments of DataRobot.

Bugfixes

  • Fixed an issue where Model.request_predictions might raise an error when predictions finished very quickly instead of returning the job.

API changes

  • To set the target with quickrun autopilot, call Project.set_target with mode=AUTOPILOT_MODE.QUICK instead of specifying quickrun=True.
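The migration is mechanical. A minimal sketch, with the AUTOPILOT_MODE enum passed in as a parameter so the example stays import-free (in real code it is importable from the datarobot package):

```python
def start_quick_autopilot(project, autopilot_mode, target):
    # The post-2.4.0 way to start quickrun autopilot: pass
    # mode=AUTOPILOT_MODE.QUICK rather than the deprecated quickrun=True.
    project.set_target(target=target, mode=autopilot_mode.QUICK)
    return project
```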

Deprecation summary

  • Semi-automatic mode for autopilot has been deprecated and will be removed in 3.0. Use manual or fully automatic instead.
  • Use of the quickrun argument in Project.set_target has been deprecated and will be removed in 3.0. Use mode=AUTOPILOT_MODE.QUICK instead.

Configuration changes

  • It is now possible to control the SSL certificate verification by setting the parameter ssl_verify in the config file.

Documentation changes

  • The “Modeling Airline Delay” example notebook has been updated to work with the new 2.3 enhancements.
  • Documentation for the generic Job class has been added.
  • Class attributes are now documented in the API Reference section of the documentation.
  • The changelog now appears in the documentation.
  • There is a new section dedicated to configuration, which lists all of the configuration options and their meanings.

2.3.0

New features

  • The DataRobot API now includes Feature Impact, an approach to measuring the relevance of each feature that can be applied to any model. The Model class now includes methods request_feature_impact (which creates and returns a feature impact job) and get_feature_impact (which can retrieve completed feature impact results).
  • A new improved workflow for predictions now supports first uploading a dataset via Project.upload_dataset, then requesting predictions via Model.request_predictions. This allows us to better support predictions on larger datasets and non-ascii files.
  • Datasets previously uploaded for predictions (represented by the PredictionDataset class) can be listed via Project.get_datasets and retrieved and deleted via PredictionDataset.get and PredictionDataset.delete.
  • You can now create a new feature by re-interpreting the type of an existing feature in a project by using the Project.create_type_transform_feature method.
  • The Job class now includes a get method for retrieving a job and a cancel method for canceling a job.
  • All of the jobs classes (Job, ModelJob, PredictJob) now include the following new methods: refresh (for refreshing the data in the job object), get_result (for getting the completed resource resulting from the job), and get_result_when_complete (which waits until the job is complete and returns the results, or times out).
  • A new method Project.refresh can be used to update Project objects with the latest state from the server.
  • A new function datarobot.async.wait_for_async_resolution can be used to poll for the resolution of any generic asynchronous operation on the server.
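The upload-then-predict workflow introduced above can be sketched end to end. The Project and Model objects are passed in (obtained via Project.get / Model.get in real code); method names follow this changelog:

```python
def score_dataset(project, model, dataset_path):
    # The 2.3.0 prediction workflow: upload the dataset once, then request
    # predictions against it by ID.
    dataset = project.upload_dataset(dataset_path)       # -> PredictionDataset
    predict_job = model.request_predictions(dataset.id)  # -> PredictJob
    # Waits until the job is complete and returns the predictions,
    # or raises if the wait times out:
    return predict_job.get_result_when_complete()
```

Because the dataset is uploaded once and referenced by ID, several models can score it without re-uploading.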

Enhancements

  • The JOB_TYPE enum now includes FEATURE_IMPACT.
  • The QUEUE_STATUS enum now includes ABORTED and COMPLETED.
  • The Project.create method now has a read_timeout parameter which can be used to keep open the connection to DataRobot while an uploaded file is being processed. For very large files this time can be substantial. Appropriately raising this value can help avoid timeouts when uploading large files.
  • The method Project.wait_for_autopilot has been enhanced to error if the project enters a state where autopilot may not finish. This avoids a situation that existed previously where users could wait indefinitely on a project that was not going to finish. However, users are still responsible for making sure a project has more than zero workers, and that the queue is not paused.
  • Feature.get now supports retrieving features by feature name. (For backwards compatibility, feature IDs are still supported until 3.0.)
  • File paths that have unicode directory names can now be used for creating projects and PredictJobs. The filename itself must still be ascii, but containing directory names can have other encodings.
  • A more specific JobAlreadyRequested exception is now raised when a model fitting request is refused as a duplicate. Users can explicitly catch this exception if they want it to be ignored.
  • A file_name attribute has been added to the Project class, identifying the file name associated with the original project dataset. Note that if the project was created from a data frame, the file name may not be helpful.
  • The connect timeout for establishing a connection to the server can now be set directly. This can be done in the yaml configuration of the client, or directly in the code. The default timeout has been lowered from 60 seconds to 6 seconds, which will make detecting a bad connection happen much quicker.
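The new JobAlreadyRequested exception makes idempotent training scripts straightforward. A minimal sketch; the exception class is passed in as a parameter here to keep the example self-contained (in real code it lives in the datarobot package's errors):

```python
def train_ignoring_duplicates(model, sample_pct, duplicate_exc):
    # Request a model fit, treating a duplicate-request refusal as a no-op.
    try:
        return model.train(sample_pct=sample_pct)  # -> model job id
    except duplicate_exc:
        return None  # an identical fit is already queued; nothing to do
```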

Bugfixes

  • Fixed a bug (affecting Python 2 only) with printing features and featurelists whose names are not ascii.

API changes

  • Job class hierarchy is rearranged to better express the relationship between these objects. See documentation for datarobot.models.job for details.
  • Featurelist objects now have a project_id attribute to indicate which project they belong to. Directly accessing the project attribute of a Featurelist object is now deprecated
  • Support for INI-style configuration, which was deprecated in v2.1, has been removed. YAML is the only supported configuration format.
  • The Project.get_jobs method, which was deprecated in v2.1, has been removed. Users should use the Project.get_model_jobs method instead to get the list of model jobs.

Deprecation summary

  • PredictJob.create has been deprecated in favor of the alternate workflow using Model.request_predictions.
  • Feature.converter (used internally for object construction) has been made private.
  • Model.fetch_resource_data has been deprecated and will be removed in 3.0. To fetch a model from its ID, use Model.get.
  • The ability to use Feature.get with feature IDs (rather than names) is deprecated and will be removed in 3.0.
  • Instantiating a Project, Model, Blueprint, Featurelist, or Feature instance from a dict of data is now deprecated. Please use the from_data classmethod of these classes instead. Additionally, instantiating a Model from a tuple or by using the keyword argument data is also deprecated.
  • Use of the attribute Featurelist.project is now deprecated. You can use the project_id attribute of a Featurelist to instantiate a Project instance using Project.get.
  • Use of the attributes Model.project, Model.blueprint, and Model.featurelist are all deprecated now to avoid use of partially instantiated objects. Please use the ids of these objects instead.
  • Using a Project instance as an argument in Featurelist.get is now deprecated. Please use a project_id instead. Similarly, using a Project instance in Model.get is also deprecated, and a project_id should be used in its place.

Configuration changes

  • Previously it was possible (though unintended) that the client configuration could be mixed through environment variables, configuration files, and arguments to datarobot.Client. This logic is now simpler - please see the Getting Started section of the documentation for more information.

2.2.33

Bugfixes

  • Fixed a bug with non-ascii project names using the package with Python 2.
  • Fixed an error that occurred when printing projects that had been constructed from an ID only, or when printing models that had been constructed from a tuple (which impacted printing PredictJobs).
  • Fixed a bug with project creation from non-ascii file names. Project creation from non-ascii file names is not supported, so this now raises a more informative exception. The project name is no longer used as the file name in cases where we do not have a file name, which prevents non-ascii project names from causing problems in those circumstances.
  • Fixed a bug (affecting Python 2 only) with printing projects, features, and featurelists whose names are not ascii.

2.2.32

New features

  • Project.get_features and Feature.get methods have been added for feature retrieval.
  • A generic Job entity has been added for use in retrieving the entire queue at once. Calling Project.get_all_jobs will retrieve all (appropriately filtered) jobs from the queue. Those can be cancelled directly as generic jobs, or transformed into instances of the specific job class using ModelJob.from_job and PredictJob.from_job, which allow all functionality previously available via the ModelJob and PredictJob interfaces.
  • Model.train now supports featurelist_id and scoring_type parameters, similar to Project.train.
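The generic Job entity makes whole-queue operations simple. A minimal sketch, assuming a datarobot.Project object; generic jobs can be cancelled directly, and ModelJob.from_job / PredictJob.from_job are only needed when type-specific behavior is required:

```python
def cancel_queue(project):
    # Drain the queue using the new generic Job entity.
    jobs = project.get_all_jobs()  # all (appropriately filtered) queued jobs
    for job in jobs:
        job.cancel()
    return len(jobs)
```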

Enhancements

  • Deprecation warning filters have been updated. By default, a filter will be added ensuring that usage of deprecated features will display a warning once per new usage location. In order to hide deprecation warnings, a filter like warnings.filterwarnings('ignore', category=DataRobotDeprecationWarning) can be added to a script so no such warnings are shown. Watching for deprecation warnings to avoid reliance on deprecated features is recommended.
  • If your client is misconfigured and does not specify an endpoint, the cloud production server is no longer used as the default as in many cases this is not the correct default.
  • This changelog is now included in the distributable of the client.

Bugfixes

  • Fixed an issue where updating the global client would not affect existing objects with cached clients. Now the global client is used for every API call.
  • An issue caused by mistyping a filepath for use in a file upload has been resolved. An error will now be raised if it looks like the raw string content for modeling or predictions is just one single line.

API changes

  • Use of username and password to authenticate is no longer supported - use an API token instead.
  • The start_time and finish_time parameters of Project.get_models are no longer supported for either filtering or ordering of models.
  • The default value of the sample_pct parameter of the Model.train method is now None instead of 100. If the default value is used, models will be trained with all of the available training data based on project configuration, rather than with the entire dataset (including holdout), as was the case with the previous default value of 100.
  • order_by parameter of Project.list which was deprecated in v2.0 has been removed.
  • recommendation_settings parameter of Project.start which was deprecated in v0.2 has been removed.
  • Project.status method which was deprecated in v0.2 has been removed.
  • Project.wait_for_aim_stage method which was deprecated in v0.2 has been removed.
  • The Delay, ConstantDelay, NoDelay, ExponentialBackoffDelay, and RetryManager classes from the retry module, which were deprecated in v2.1, have been removed.
  • Package renamed to datarobot.

Deprecation summary

  • Project.update deprecated in favor of specific updates: rename, unlock_holdout, set_worker_count.

Documentation changes

  • A new use case involving financial data has been added to the examples directory.
  • Added documentation for the partition methods.

2.1.31

Bugfixes

  • In Python 2, using a unicode token to instantiate the client will now work correctly.

2.1.30

Bugfixes

  • The minimum required version of trafaret has been upgraded to 0.7.1 to get around an incompatibility between it and setuptools.

2.1.29

Enhancements

  • The minimum required version of the requests_toolbelt package has been raised from 0.4 to 0.6.

2.1.28

New features

  • Default to reading YAML config file from ~/.config/datarobot/drconfig.yaml
  • Allow config_path argument to client
  • wait_for_autopilot method added to Project. This method can be used to block execution until autopilot has finished running on the project.
  • Support for specifying which featurelist to use with initial autopilot in Project.set_target
  • Project.get_predict_jobs method has been added, which looks up all prediction jobs for a project
  • Project.start_autopilot method has been added, which starts autopilot on specified featurelist
  • The schema for PredictJob in DataRobot API v2.1 now includes a message. This attribute has been added to the PredictJob class.
  • PredictJob.cancel now exists to cancel prediction jobs, mirroring ModelJob.cancel
  • Project.from_async is a new classmethod that can be used to wait for an async resolution in project creation. Most users will not need to know about it, as it is used behind the scenes in Project.create and Project.set_target, but power users who run into periodic connection errors can catch the new ProjectAsyncFailureError and decide whether to resume waiting for the async process to resolve.
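The resume-on-failure pattern that Project.from_async enables can be sketched as follows. This is a sketch under stated assumptions: the Project class and the ProjectAsyncFailureError stand-in are passed as parameters, and the `async_location` attribute name is an assumption; check the exception your client version actually raises for the polled location.

```python
def create_project_resilient(project_cls, sourcedata, failure_exc, max_retries=2):
    # Create a project, resuming the server-side async wait if polling is
    # interrupted by a transient HTTP error.
    try:
        return project_cls.create(sourcedata)
    except failure_exc as exc:
        location = exc.async_location  # URL that was being polled (name assumed)
    for attempt in range(max_retries):
        try:
            return project_cls.from_async(location)  # resume waiting
        except failure_exc:
            if attempt == max_retries - 1:
                raise
```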

Enhancements

  • AUTOPILOT_MODE enum now uses string names for autopilot modes instead of numbers

Deprecation summary

  • ConstantDelay, NoDelay, ExponentialBackoffDelay, and RetryManager utils are now deprecated
  • INI-style config files are now deprecated (in favor of YAML config files)
  • Several functions in the utils submodule are now deprecated (they are being moved elsewhere and are not considered part of the public interface)
  • Project.get_jobs has been renamed Project.get_model_jobs for clarity and deprecated
  • Support for the experimental date partitioning has been removed in DataRobot API, so it is being removed from the client immediately.

API changes

  • In several places where AppPlatformError was previously raised, TypeError, ValueError or InputNotUnderstoodError is now used instead. With this change, one can now safely assume that when catching an AppPlatformError it is because of an unexpected response from the server.
  • AppPlatformError has gained two new attributes: status_code, which is the HTTP status code of the unexpected response from the server, and error_code, which is a DataRobot-defined error code. error_code is not used by any routes in DataRobot API 2.1, but will be in the future. In cases where it is not provided, the instance of AppPlatformError will have the attribute error_code set to None.
  • Two new subclasses of AppPlatformError have been introduced, ClientError (for 400-level response status codes) and ServerError (for 500-level response status codes). These will make it easier to build automated tooling that can recover from periodic connection issues while polling.
  • If a ClientError or ServerError occurs during a call to Project.from_async, then a ProjectAsyncFailureError (a subclass of AsyncFailureError) will be raised. That exception will have the status_code of the unexpected response from the server, and the location that was being polled to wait for the asynchronous process to resolve.
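The ClientError/ServerError split is exactly what automated polling tooling needs. A minimal sketch of the pattern; the exception classes are passed in as parameters to keep the example self-contained (they live alongside AppPlatformError in the real client):

```python
import time

def poll_with_backoff(fetch, server_error_exc, client_error_exc,
                      attempts=5, base_delay=1.0):
    # Retry 5xx ServerErrors with exponential backoff, but fail fast on
    # 4xx ClientErrors, where re-sending the same bad request cannot help.
    for attempt in range(attempts):
        try:
            return fetch()
        except client_error_exc:
            raise  # the request itself is wrong; do not retry
        except server_error_exc:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```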

2.0.27

New features

  • PredictJob class was added to work with prediction jobs
  • wait_for_async_predictions function added to predict_job module

Deprecation summary

  • The order_by parameter of the Project.list is now deprecated.

0.2.26

Enhancements

  • Project.set_target will re-fetch the project data after it succeeds, keeping the client side in sync with the state of the project on the server
  • Project.create_featurelist now throws a DuplicateFeaturesError exception if the passed list of features contains duplicates
  • Project.get_models now supports snake_case arguments to its order_by keyword

Deprecation summary

  • Project.wait_for_aim_stage is now deprecated, as the REST Async flow is a more reliable method of determining that project creation has completed successfully
  • Project.status is deprecated in favor of Project.get_status
  • recommendation_settings parameter of Project.start is deprecated in favor of recommender_settings

Bugfixes

  • Project.wait_for_aim_stage changed to support Python 3
  • Fixed incorrect value of SCORING_TYPE.cross_validation
  • Models returned by Project.get_models will now be correctly ordered when the order_by keyword is used

0.2.25

  • Pinned versions of required libraries

0.2.24

Official release of v0.2

0.1.24

  • Updated documentation
  • Renamed parameter name of Project.create and Project.start to project_name
  • Removed Model.predict method
  • wait_for_async_model_creation function added to modeljob module
  • wait_for_async_status_service of Project class renamed to _wait_for_async_status_service
  • Can now use auth_token in config file to configure SDK

0.1.23

  • Fixes a method that pointed to a removed route

0.1.22

  • Added featurelist_id attribute to ModelJob class

0.1.21

  • Removes model attribute from ModelJob class

0.1.20

  • Project creation raises AsyncProjectCreationError if it was unsuccessful
  • Removed Model.list_prime_rulesets and Model.get_prime_ruleset methods
  • Removed Model.predict_batch method
  • Removed Project.create_prime_model method
  • Removed PrimeRuleSet model
  • Adds backwards compatibility bridge for ModelJob async
  • Adds ModelJob.get and ModelJob.get_model

0.1.19

  • Minor bugfixes in wait_for_async_status_service

0.1.18

  • Removes submit_model from Project until server-side implementation is improved
  • Switches training URLs to the new resource-based route at /projects//models/
  • Job renamed to ModelJob, and using modelJobs route
  • Fixes an inconsistency in argument order for train methods

0.1.17

  • wait_for_async_status_service timeout increased from 60s to 600s

0.1.16

  • Project.create will now handle both async/sync project creation

0.1.15

  • All routes pluralized to sync with changes in API
  • Project.get_jobs will request all jobs when no param specified
  • dataframes from predict method will have pythonic names
  • Project.get_status created, Project.status now deprecated
  • Project.unlock_holdout created.
  • Added quickrun parameter to Project.set_target
  • Added modelCategory to Model schema
  • Add permalinks feature to Project and Model objects.
  • Project.create_prime_model created

0.1.14

  • Project.set_worker_count fix for compatibility with API change in project update.

0.1.13

  • Add positive class to set_target.
  • Changed attribute names of Project, Model, Job and Blueprint: features in Model, Job and Blueprint are now processes.
  • dataset_id and dataset_name have been migrated to featurelist_id and featurelist_name.
  • samplepct -> sample_pct
  • Model now has blueprint, project, and featurelist attributes.
  • Minor bugfixes.

0.1.12

  • Minor fixes regarding renamed Job attributes: the features attribute is now named processes, and samplepct is now sample_pct.

0.1.11

(May 27, 2015)

  • Minor fixes regarding migrating API from under_score names to camelCase.

0.1.10

(May 20, 2015)

  • Removed the Project.upload_file, Project.upload_file_from_url and Project.attach_file methods. All file-upload logic has been moved into the Project.create method.

0.1.9

(May 15, 2015)

  • Fixed file uploads causing excessive memory usage. Minor bugfixes.

Updated November 14, 2024