# Projects

> Projects - Reference documentation for the DataRobot Python SDK's `Project` model and its
> helpers: advanced options, partitioning methods, status check jobs, and segmented modeling.
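One method catalogued on this page, `Project.set_options`, accepts either an `AdvancedOptions` object or individual keyword arguments, and updates the project in place. A minimal sketch of both call styles, assuming the `datarobot` SDK is installed and a client is configured (`seed` is used as an illustrative option name):

```python
def apply_options(project):
    """Sketch: set advanced options two equivalent ways.

    `project` is assumed to be a datarobot.models.Project instance;
    both calls update the project in place.
    """
    import datarobot as dr  # deferred: requires the DataRobot SDK

    # Style 1: pass a whole AdvancedOptions object.
    opts = dr.AdvancedOptions(seed=42)
    project.set_options(options=opts)

    # Style 2: pass individual keyword arguments directly.
    project.set_options(seed=42)
```

This is a sketch, not a definitive implementation; consult the linked `set_options` section for the authoritative behavior.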

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists section links for tools and LLM context.

Companion generated at `2026-05-06T18:17:09.839718+00:00` (UTC).

## Primary page

- [Projects](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html): Full documentation for this topic (HTML).

## Sections on this page

- [Project](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#project): In-page section heading.
- [class datarobot.models.Project](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project): In-page section heading.
- [set_options(options=None, **kwargs)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.set_options): In-page section heading.
- [get_options()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_options): In-page section heading.
- [classmethod get(project_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get): In-page section heading.
- [classmethod create(cls, sourcedata, project_name='Untitled Project', max_wait=600, read_timeout=600, dataset_filename=None, *, use_case=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create): In-page section heading.
- [classmethod encrypted_string(plaintext)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.encrypted_string): In-page section heading.
- [classmethod create_from_hdfs(cls, url, port=None, project_name=None, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create_from_hdfs): In-page section heading.
- [classmethod create_from_data_source(cls, data_source_id, username=None, password=None, credential_id=None, use_kerberos=None, credential_data=None, project_name=None, max_wait=600, *, use_case=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create_from_data_source): In-page section heading.
- [classmethod create_from_dataset(cls, dataset_id, dataset_version_id=None, project_name=None, user=None, password=None, credential_id=None, use_kerberos=None, use_sample_from_dataset=None, credential_data=None, max_wait=600, *, use_case=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create_from_dataset): In-page section heading.
- [classmethod create_from_recipe(cls, recipe_id, *, use_case=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create_from_recipe): In-page section heading.
- [classmethod create_segmented_project_from_clustering_model(cls, clustering_project_id, clustering_model_id, target, max_wait=600, *, use_case=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create_segmented_project_from_clustering_model): In-page section heading.
- [classmethod from_async(async_location, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.from_async): In-page section heading.
- [classmethod start(cls, sourcedata, target=None, project_name='Untitled Project', worker_count=None, metric=None, autopilot_on=True, blueprint_threshold=None, response_cap=None, partitioning_method=None, positive_class=None, target_type=None, unsupervised_mode=False, blend_best_models=None, prepare_model_for_deployment=None, consider_blenders_in_recommendation=None, scoring_code_only=None, min_secondary_validation_model_count=None, shap_only_mode=None, relationships_configuration_id=None, autopilot_with_feature_discovery=None, feature_discovery_supervised_feature_reduction=None, unsupervised_type=None, autopilot_cluster_list=None, bias_mitigation_feature_name=None, bias_mitigation_technique=None, include_bias_mitigation_feature_as_predictor_variable=None, incremental_learning_only_mode=None, incremental_learning_on_best_model=None, number_of_incremental_learning_iterations_before_best_model_selection=None, *, use_case=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.start): In-page section heading.
- [classmethod list(search_params=None, use_cases=None, offset=None, limit=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.list): In-page section heading.
- [refresh()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.refresh): In-page section heading.
- [delete()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.delete): In-page section heading.
- [analyze_and_model(target=None, mode='quick', metric=None, worker_count=None, positive_class=None, partitioning_method=None, featurelist_id=None, advanced_options=None, max_wait=600, target_type=None, credentials=None, feature_engineering_prediction_point=None, unsupervised_mode=False, relationships_configuration_id=None, class_mapping_aggregation_settings=None, segmentation_task_id=None, unsupervised_type=None, autopilot_cluster_list=None, use_gpu=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.analyze_and_model): In-page section heading.
- [SEE ALSO](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#see-also): In-page section heading.
- [set_target(target=None, mode='quick', metric=None, worker_count=None, positive_class=None, partitioning_method=None, featurelist_id=None, advanced_options=None, max_wait=600, target_type=None, credentials=None, feature_engineering_prediction_point=None, unsupervised_mode=False, relationships_configuration_id=None, class_mapping_aggregation_settings=None, segmentation_task_id=None, unsupervised_type=None, autopilot_cluster_list=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.set_target): In-page section heading.
- [SEE ALSO](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#see-also_1): In-page section heading.
- [get_model_records(sort_by_partition='validation', sort_by_metric=None, with_metric=None, search_term=None, featurelists=None, families=None, blueprints=None, labels=None, characteristics=None, training_filters=None, number_of_clusters=None, limit=100, offset=0)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_model_records): In-page section heading.
- [get_models(order_by=None, search_params=None, with_metric=None, use_new_models_retrieval=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_models): In-page section heading.
- [recommended_model()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.recommended_model): In-page section heading.
- [get_top_model(metric=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_top_model): In-page section heading.
- [get_datetime_models()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_datetime_models): In-page section heading.
- [get_prime_models()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_prime_models): In-page section heading.
- [get_prime_files(parent_model_id=None, model_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_prime_files): In-page section heading.
- [get_dataset()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_dataset): In-page section heading.
- [get_datasets()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_datasets): In-page section heading.
- [upload_dataset(sourcedata, max_wait=600, read_timeout=600, forecast_point=None, predictions_start_date=None, predictions_end_date=None, dataset_filename=None, relax_known_in_advance_features_check=None, credentials=None, actual_value_column=None, secondary_datasets_config_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.upload_dataset): In-page section heading.
- [upload_dataset_from_data_source(data_source_id, username, password, max_wait=600, forecast_point=None, relax_known_in_advance_features_check=None, credentials=None, predictions_start_date=None, predictions_end_date=None, actual_value_column=None, secondary_datasets_config_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.upload_dataset_from_data_source): In-page section heading.
- [upload_dataset_from_catalog(dataset_id, credential_id=None, credential_data=None, dataset_version_id=None, max_wait=600, forecast_point=None, relax_known_in_advance_features_check=None, credentials=None, predictions_start_date=None, predictions_end_date=None, actual_value_column=None, secondary_datasets_config_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.upload_dataset_from_catalog): In-page section heading.
- [get_blueprints()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_blueprints): In-page section heading.
- [get_features()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_features): In-page section heading.
- [get_modeling_features(batch_size=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_modeling_features): In-page section heading.
- [get_featurelists()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_featurelists): In-page section heading.
- [get_associations(assoc_type, metric, featurelist_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_associations): In-page section heading.
- [get_association_featurelists()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_association_featurelists): In-page section heading.
- [get_association_matrix_details(feature1, feature2)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_association_matrix_details): In-page section heading.
- [get_modeling_featurelists(batch_size=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_modeling_featurelists): In-page section heading.
- [get_discarded_features()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_discarded_features): In-page section heading.
- [restore_discarded_features(features, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.restore_discarded_features): In-page section heading.
- [create_type_transform_feature(name, parent_name, variable_type, replacement=None, date_extraction=None, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create_type_transform_feature): In-page section heading.
- [get_featurelist_by_name(name)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_featurelist_by_name): In-page section heading.
- [create_featurelist(name=None, features=None, starting_featurelist=None, starting_featurelist_id=None, starting_featurelist_name=None, features_to_include=None, features_to_exclude=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create_featurelist): In-page section heading.
- [create_modeling_featurelist(name, features, skip_datetime_partition_column=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create_modeling_featurelist): In-page section heading.
- [get_metrics(feature_name)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_metrics): In-page section heading.
- [get_status()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_status): In-page section heading.
- [pause_autopilot()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.pause_autopilot): In-page section heading.
- [unpause_autopilot()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.unpause_autopilot): In-page section heading.
- [start_autopilot(featurelist_id, mode='quick', blend_best_models=False, scoring_code_only=False, prepare_model_for_deployment=True, consider_blenders_in_recommendation=False, run_leakage_removed_feature_list=True, autopilot_cluster_list=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.start_autopilot): In-page section heading.
- [train(trainable, sample_pct=None, featurelist_id=None, source_project_id=None, scoring_type=None, training_row_count=None, monotonic_increasing_featurelist_id=\<object object\>, monotonic_decreasing_featurelist_id=\<object object\>, n_clusters=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.train): In-page section heading.
- [train_datetime(blueprint_id, featurelist_id=None, training_row_count=None, training_duration=None, source_project_id=None, monotonic_increasing_featurelist_id=\<object object\>, monotonic_decreasing_featurelist_id=\<object object\>, use_project_settings=False, sampling_method=None, n_clusters=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.train_datetime): In-page section heading.
- [blend(model_ids, blender_method)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.blend): In-page section heading.
- [SEE ALSO](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#see-also_2): In-page section heading.
- [check_blendable(model_ids, blender_method)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.check_blendable): In-page section heading.
- [start_prepare_model_for_deployment(model_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.start_prepare_model_for_deployment): In-page section heading.
- [get_all_jobs(status=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_all_jobs): In-page section heading.
- [get_blenders()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_blenders): In-page section heading.
- [get_frozen_models()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_frozen_models): In-page section heading.
- [get_combined_models()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_combined_models): In-page section heading.
- [get_active_combined_model()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_active_combined_model): In-page section heading.
- [get_segments_models(combined_model_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_segments_models): In-page section heading.
- [get_model_jobs(status=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_model_jobs): In-page section heading.
- [get_predict_jobs(status=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_predict_jobs): In-page section heading.
- [wait_for_autopilot(check_interval=20.0, timeout=86400, verbosity=1)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.wait_for_autopilot): In-page section heading.
- [rename(project_name)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.rename): In-page section heading.
- [set_project_description(project_description)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.set_project_description): In-page section heading.
- [unlock_holdout()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.unlock_holdout): In-page section heading.
- [set_worker_count(worker_count)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.set_worker_count): In-page section heading.
- [set_advanced_options(advanced_options=None, **kwargs)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.set_advanced_options): In-page section heading.
- [list_advanced_options()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.list_advanced_options): In-page section heading.
- [set_partitioning_method(cv_method=None, validation_type=None, seed=0, reps=None, user_partition_col=None, training_level=None, validation_level=None, holdout_level=None, cv_holdout_level=None, validation_pct=None, holdout_pct=None, partition_key_cols=None, partitioning_method=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.set_partitioning_method): In-page section heading.
- [get_uri()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_uri): In-page section heading.
- [get_rating_table_models()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_rating_table_models): In-page section heading.
- [get_rating_tables()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_rating_tables): In-page section heading.
- [get_access_list()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_access_list): In-page section heading.
- [share(access_list, send_notification=None, include_feature_discovery_entities=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.share): In-page section heading.
- [batch_features_type_transform(parent_names, variable_type, prefix=None, suffix=None, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.batch_features_type_transform): In-page section heading.
- [clone_project(new_project_name=None, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.clone_project): In-page section heading.
- [create_interaction_feature(name, features, separator, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.create_interaction_feature): In-page section heading.
- [get_relationships_configuration()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_relationships_configuration): In-page section heading.
- [download_feature_discovery_dataset(file_name, pred_dataset_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.download_feature_discovery_dataset): In-page section heading.
- [download_feature_discovery_recipe_sqls(file_name, model_id=None, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.download_feature_discovery_recipe_sqls): In-page section heading.
- [validate_external_time_series_baseline(catalog_version_id, target, datetime_partitioning, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.validate_external_time_series_baseline): In-page section heading.
- [download_multicategorical_data_format_errors(file_name)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.download_multicategorical_data_format_errors): In-page section heading.
- [get_multiseries_names()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_multiseries_names): In-page section heading.
- [restart_segment(segment)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.restart_segment): In-page section heading.
- [get_bias_mitigated_models(parent_model_id=None, offset=0, limit=100)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_bias_mitigated_models): In-page section heading.
- [apply_bias_mitigation(bias_mitigation_parent_leaderboard_id, bias_mitigation_feature_name, bias_mitigation_technique, include_bias_mitigation_feature_as_predictor_variable)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.apply_bias_mitigation): In-page section heading.
- [request_bias_mitigation_feature_info(bias_mitigation_feature_name)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.request_bias_mitigation_feature_info): In-page section heading.
- [get_bias_mitigation_feature_info(bias_mitigation_feature_name)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.get_bias_mitigation_feature_info): In-page section heading.
- [classmethod from_data(data)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.from_data): In-page section heading.
- [classmethod from_server_data(data, keep_attrs=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.from_server_data): In-page section heading.
- [open_in_browser()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.open_in_browser): In-page section heading.
- [set_datetime_partitioning(datetime_partition_spec=None, **kwargs)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.set_datetime_partitioning): In-page section heading.
- [list_datetime_partition_spec()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.list_datetime_partition_spec): In-page section heading.
- [class datarobot.helpers.eligibility_result.EligibilityResult](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.eligibility_result.EligibilityResult): In-page section heading.
- [Advanced options](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#advanced-options): In-page section heading.
- [class datarobot.helpers.AdvancedOptions](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.AdvancedOptions): In-page section heading.
- [get(_AdvancedOptions__key, _AdvancedOptions__default=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.AdvancedOptions.get): In-page section heading.
- [pop(_AdvancedOptions__key)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.AdvancedOptions.pop): In-page section heading.
- [update_individual_options(**kwargs)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.AdvancedOptions.update_individual_options): In-page section heading.
- [collect_payload()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.AdvancedOptions.collect_payload): In-page section heading.
- [Partitioning](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#partitioning): In-page section heading.
- [class datarobot.RandomCV](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.RandomCV): In-page section heading.
- [class datarobot.StratifiedCV](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.StratifiedCV): In-page section heading.
- [class datarobot.GroupCV](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.GroupCV): In-page section heading.
- [class datarobot.UserCV](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.UserCV): In-page section heading.
- [class datarobot.RandomTVH](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.RandomTVH): In-page section heading.
- [class datarobot.UserTVH](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.UserTVH): In-page section heading.
- [class datarobot.StratifiedTVH](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.StratifiedTVH): In-page section heading.
- [class datarobot.GroupTVH](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.GroupTVH): In-page section heading.
- [class datarobot.DatetimePartitioningSpecification](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioningSpecification): In-page section heading.
- [collect_payload()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioningSpecification.collect_payload): In-page section heading.
- [prep_payload(project_id, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioningSpecification.prep_payload): In-page section heading.
- [update(**kwargs)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioningSpecification.update): In-page section heading.
- [class datarobot.BacktestSpecification](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.BacktestSpecification): In-page section heading.
- [class datarobot.FeatureSettings](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.FeatureSettings): In-page section heading.
- [collect_payload(use_a_priori=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.FeatureSettings.collect_payload): In-page section heading.
- [class datarobot.Periodicity](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.Periodicity): In-page section heading.
- [class datarobot.DatetimePartitioning](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning): In-page section heading.
- [classmethod generate(cls, project_id, spec, max_wait=600, target=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.generate): In-page section heading.
- [classmethod get(project_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.get): In-page section heading.
- [classmethod generate_optimized(project_id, spec, target, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.generate_optimized): In-page section heading.
- [classmethod get_optimized(project_id, datetime_partitioning_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.get_optimized): In-page section heading.
- [classmethod feature_log_list(project_id, offset=None, limit=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.feature_log_list): In-page section heading.
- [classmethod feature_log_retrieve(project_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.feature_log_retrieve): In-page section heading.
- [to_specification(use_holdout_start_end_format=False, use_backtest_start_end_format=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.to_specification): In-page section heading.
- [to_dataframe()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.to_dataframe): In-page section heading.
- [classmethod datetime_partitioning_log_retrieve(project_id, datetime_partitioning_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.datetime_partitioning_log_retrieve): In-page section heading.
- [classmethod datetime_partitioning_log_list(project_id, datetime_partitioning_id, offset=None, limit=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.datetime_partitioning_log_list): In-page section heading.
- [classmethod get_input_data(project_id, datetime_partitioning_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.get_input_data): In-page section heading.
- [class datarobot.helpers.partitioning_methods.DatetimePartitioningId](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.DatetimePartitioningId): In-page section heading.
- [collect_payload()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.DatetimePartitioningId.collect_payload): In-page section heading.
- [prep_payload(project_id, max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.DatetimePartitioningId.prep_payload): In-page section heading.
- [update(**kwargs)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.DatetimePartitioningId.update): In-page section heading.
- [class datarobot.helpers.partitioning_methods.Backtest](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.Backtest): In-page section heading.
- [to_specification(use_start_end_format=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.Backtest.to_specification): In-page section heading.
- [to_dataframe()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.Backtest.to_dataframe): In-page section heading.
- [class datarobot.helpers.partitioning_methods.FeatureSettingsPayload](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.FeatureSettingsPayload): In-page section heading.
- [datarobot.helpers.partitioning_methods.construct_duration_string(years=0, months=0, days=0, hours=0, minutes=0, seconds=0)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.construct_duration_string): In-page section heading.
- [Status check job](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#status-check-job): In-page section heading.
- [class datarobot.models.StatusCheckJob](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.StatusCheckJob): In-page section heading.
- [wait_for_completion(max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.StatusCheckJob.wait_for_completion): In-page section heading.
- [get_status()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.StatusCheckJob.get_status): In-page section heading.
- [get_result_when_complete(max_wait=600)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.StatusCheckJob.get_result_when_complete): In-page section heading.
- [class datarobot.models.JobStatusResult](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.JobStatusResult): In-page section heading.
- [status: Optional\[str\]](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.JobStatusResult.status): In-page section heading.
- [status_id: Optional\[str\]](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.JobStatusResult.status_id): In-page section heading.
- [completed_resource_url: Optional\[str\]](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.JobStatusResult.completed_resource_url): In-page section heading.
- [message: Optional\[str\]](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.JobStatusResult.message): In-page section heading.
- [Segmented modeling](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#segmented-modeling): In-page section heading.
- [class datarobot.CombinedModel](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel): In-page section heading.
- [classmethod get(project_id, combined_model_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.get): In-page section heading.
- [classmethod set_segment_champion(project_id, model_id, clone=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.set_segment_champion): In-page section heading.
- [get_segments_info()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.get_segments_info): In-page section heading.
- [get_segments_as_dataframe(encoding='utf-8')](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.get_segments_as_dataframe): In-page section heading.
- [get_segments_as_csv(filename, encoding='utf-8')](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.get_segments_as_csv): In-page section heading.
- [train(sample_pct=None, featurelist_id=None, scoring_type=None, training_row_count=None, monotonic_increasing_featurelist_id=, monotonic_decreasing_featurelist_id=)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.train): In-page section heading.
- [train_datetime(featurelist_id=None, training_row_count=None, training_duration=None, time_window_sample_pct=None, monotonic_increasing_featurelist_id=, monotonic_decreasing_featurelist_id=, use_project_settings=False, sampling_method=None, n_clusters=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.train_datetime): In-page section heading.
- [retrain(sample_pct=None, featurelist_id=None, training_row_count=None, n_clusters=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.retrain): In-page section heading.
- [request_frozen_model(sample_pct=None, training_row_count=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.request_frozen_model): In-page section heading.
- [request_frozen_datetime_model(training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, time_window_sample_pct=None, sampling_method=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.request_frozen_datetime_model): In-page section heading.
- [cross_validate()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CombinedModel.cross_validate): In-page section heading.
- [class datarobot.SegmentationTask](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.SegmentationTask): In-page section heading.
- [classmethod from_data(data)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.SegmentationTask.from_data): In-page section heading.
- [collect_payload()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.SegmentationTask.collect_payload): In-page section heading.
- [classmethod create(project_id, target, use_time_series=False, datetime_partition_column=None, multiseries_id_columns=None, user_defined_segment_id_columns=None, max_wait=600, model_package_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.SegmentationTask.create): In-page section heading.
- [classmethod list(project_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.SegmentationTask.list): In-page section heading.
- [classmethod get(project_id, segmentation_task_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.SegmentationTask.get): In-page section heading.
- [class datarobot.SegmentInfo](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.SegmentInfo): In-page section heading.
- [classmethod list(project_id, model_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.SegmentInfo.list): In-page section heading.
- [class datarobot.models.segmentation.SegmentationTask](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.segmentation.SegmentationTask): In-page section heading.
- [classmethod from_data(data)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.segmentation.SegmentationTask.from_data): In-page section heading.
- [collect_payload()](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.segmentation.SegmentationTask.collect_payload): In-page section heading.
- [classmethod create(project_id, target, use_time_series=False, datetime_partition_column=None, multiseries_id_columns=None, user_defined_segment_id_columns=None, max_wait=600, model_package_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.segmentation.SegmentationTask.create): In-page section heading.
- [classmethod list(project_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.segmentation.SegmentationTask.list): In-page section heading.
- [classmethod get(project_id, segmentation_task_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.segmentation.SegmentationTask.get): In-page section heading.
- [class datarobot.models.segmentation.SegmentationTaskCreatedResponse](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.segmentation.SegmentationTaskCreatedResponse): In-page section heading.
- [External baseline validation](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#external-baseline-validation): In-page section heading.
- [class datarobot.models.external_baseline_validation.ExternalBaselineValidationInfo](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.external_baseline_validation.ExternalBaselineValidationInfo): In-page section heading.
- [classmethod get(project_id, validation_job_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.external_baseline_validation.ExternalBaselineValidationInfo.get): In-page section heading.
- [Calendar file](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#calendar-file): In-page section heading.
- [class datarobot.CalendarFile](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile): In-page section heading.
- [classmethod create(file_path, calendar_name=None, multiseries_id_columns=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.create): In-page section heading.
- [classmethod create_calendar_from_dataset(dataset_id, dataset_version_id=None, calendar_name=None, multiseries_id_columns=None, delete_on_error=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.create_calendar_from_dataset): In-page section heading.
- [classmethod create_calendar_from_country_code(country_code, start_date, end_date)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.create_calendar_from_country_code): In-page section heading.
- [classmethod get_allowed_country_codes(offset=None, limit=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.get_allowed_country_codes): In-page section heading.
- [classmethod get(calendar_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.get): In-page section heading.
- [classmethod list(project_id=None, batch_size=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.list): In-page section heading.
- [classmethod delete(calendar_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.delete): In-page section heading.
- [classmethod update_name(calendar_id, new_calendar_name)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.update_name): In-page section heading.
- [classmethod share(calendar_id, access_list)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.share): In-page section heading.
- [classmethod get_access_list(calendar_id, batch_size=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.get_access_list): In-page section heading.
- [class datarobot.models.calendar_file.CountryCode](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.calendar_file.CountryCode): In-page section heading.

## Related documentation

- [Developer documentation](https://docs.datarobot.com/en/docs/api/index.html): Linked from this page.
- [API reference](https://docs.datarobot.com/en/docs/api/reference/index.html): Linked from this page.
- [Python API client](https://docs.datarobot.com/en/docs/api/reference/sdk/index.html): Linked from this page.
- [Modeling](https://docs.datarobot.com/en/docs/api/reference/sdk/tag-ml.html): Linked from this page.
- [InputNotUnderstoodError](https://docs.datarobot.com/en/docs/api/reference/sdk/errors.html#datarobot.errors.InputNotUnderstoodError): Linked from this page.
- [datarobot.models.Dataset](https://docs.datarobot.com/en/docs/api/reference/sdk/data-registry.html#datarobot.models.Dataset): Linked from this page.
- [GenericModel](https://docs.datarobot.com/en/docs/api/reference/sdk/datarobot-models.html#datarobot.models.GenericModel): Linked from this page.
- [https://docs.datarobot.com/en/docs/modeling/reference/model-detail/leaderboard-ref.html#na-scores](https://docs.datarobot.com/en/docs/modeling/reference/model-detail/leaderboard-ref.html#na-scores): Linked from this page.
- [Time Series documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/time_series.html#time-series-predict): Linked from this page.
- [PredictionDataset](https://docs.datarobot.com/en/docs/api/reference/sdk/batch-predictions.html#datarobot.models.PredictionDataset): Linked from this page.
- [Feature](https://docs.datarobot.com/en/docs/api/reference/sdk/features.html#datarobot.models.Feature): Linked from this page.
- [Blueprint](https://docs.datarobot.com/en/docs/api/reference/sdk/blueprints.html#datarobot.models.Blueprint): Linked from this page.
- [datetime partitioned project documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/datetime_partition.html#date-dur-spec): Linked from this page.
- [RatingTable](https://docs.datarobot.com/en/docs/api/reference/sdk/insights.html#datarobot.models.RatingTable): Linked from this page.
- [User Guide](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/segmented_modeling.html#segmented-modeling): Linked from this page.
- [sharing](https://docs.datarobot.com/en/docs/api/dev-learning/python/admin/sharing.html#sharing): Linked from this page.

## Documentation content

## Project

### class datarobot.models.Project

A project built from a particular training dataset

- Variables:

#### set_options(options=None, **kwargs)

Update the advanced options of this project.

Accepts either an AdvancedOptions object or individual keyword arguments.
This is an in-place update.

- Raises: ValueError – Raised if the object passed to `options` is not an AdvancedOptions instance,
      if a keyword argument is not a valid AdvancedOptions field, or if both an AdvancedOptions instance AND keyword arguments are passed.
- Return type: None

#### get_options()

Return the stored advanced options for this project.

- Return type: AdvancedOptions
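
The calling convention above (an options object *or* keyword arguments, never both) can be sketched without the SDK. `ToyProject`, `Options`, and the field names below are illustrative stand-ins, not real DataRobot classes:

```python
class Options:
    """Stand-in for AdvancedOptions: a plain bag of option fields."""
    def __init__(self, **fields):
        self.__dict__.update(fields)


class ToyProject:
    """Illustrates the set_options/get_options calling convention."""
    _option_fields = {"seed", "smart_downsampled"}  # assumed field names

    def __init__(self):
        self._options = Options()

    def set_options(self, options=None, **kwargs):
        # Mirror the documented ValueError cases: reject a combination of
        # an options object and keyword arguments, and unknown keywords.
        if options is not None and kwargs:
            raise ValueError("pass an options object or keyword arguments, not both")
        if options is not None:
            self._options = options
        else:
            unknown = set(kwargs) - self._option_fields
            if unknown:
                raise ValueError(f"unknown option(s): {sorted(unknown)}")
            self._options.__dict__.update(kwargs)  # in-place update

    def get_options(self):
        return self._options
```

The real `set_options` persists the options server-side; this sketch only captures the argument-validation contract described above.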

#### classmethod get(project_id)

Gets information about a project.

- Parameters: project_id ( str ) – The identifier of the project you want to load.
- Returns: project – The queried project
- Return type: Project

> [!NOTE] Examples
> ```
> import datarobot as dr
> p = dr.Project.get(project_id='54e639a18bd88f08078ca831')
> p.id
> >>> '54e639a18bd88f08078ca831'
> p.project_name
> >>> 'Some project name'
> ```

#### classmethod create(cls, sourcedata, project_name='Untitled Project', max_wait=600, read_timeout=600, dataset_filename=None, *, use_case=None)

Creates a project with provided data.

Project creation is an asynchronous process: after the initial request, the
client keeps polling the status of the async process responsible for project
creation until it finishes. For SDK users, this means the method may raise
exceptions related to its asynchronous nature.

- Parameters:
- Returns: project – Instance with initialized data.
- Return type: Project
- Raises:

> [!NOTE] Examples
> ```
> p = Project.create('/home/datasets/somedataset.csv',
>                    project_name="New API project")
> p.id
> >>> '5921731dkqshda8yd28h'
> p.project_name
> >>> 'New API project'
> ```

#### classmethod encrypted_string(plaintext)

Sends a string to DataRobot to be encrypted

This is used for passwords that DataRobot uses to access external data sources

- Parameters: plaintext ( str ) – The string to encrypt
- Returns: ciphertext – The encrypted string
- Return type: str

#### classmethod create_from_hdfs(cls, url, port=None, project_name=None, max_wait=600)

Create a project from a datasource on a WebHDFS server.

- Parameters:
- Return type: Project

> [!NOTE] Examples
> ```
> p = Project.create_from_hdfs('hdfs:///tmp/somedataset.csv',
>                              project_name="New API project")
> p.id
> >>> '5921731dkqshda8yd28h'
> p.project_name
> >>> 'New API project'
> ```

#### classmethod create_from_data_source(cls, data_source_id, username=None, password=None, credential_id=None, use_kerberos=None, credential_data=None, project_name=None, max_wait=600, *, use_case=None)

Create a project from a data source. Either data_source or data_source_id
should be specified.

- Parameters:
- Raises: InvalidUsageError – Raised if either username or password is passed without the other.
- Return type: Project

#### classmethod create_from_dataset(cls, dataset_id, dataset_version_id=None, project_name=None, user=None, password=None, credential_id=None, use_kerberos=None, use_sample_from_dataset=None, credential_data=None, max_wait=600, *, use_case=None)

Create a Project from a [datarobot.models.Dataset](https://docs.datarobot.com/en/docs/api/reference/sdk/data-registry.html#datarobot.models.Dataset)

- Parameters:
- Return type: Project

#### classmethod create_from_recipe(cls, recipe_id, *, use_case=None)

Create a project from a recipe

- Parameters: recipe_id ( string ) – The ID of the recipe entry to use to create the project’s dataset.
- Return type: Project

#### classmethod create_segmented_project_from_clustering_model(cls, clustering_project_id, clustering_model_id, target, max_wait=600, *, use_case=None)

Create a new segmented project from a clustering model

- Parameters:
- Returns: project – The created project
- Return type: Project

#### classmethod from_async(async_location, max_wait=600)

Given a temporary async status location, poll for no more than max_wait seconds
until the async process (project creation or setting the target, for example)
finishes successfully, then return the ready project.

- Parameters:
- Returns: project – The project, now ready
- Return type: Project
- Raises:
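
The polling loop that `from_async` (and the other async helpers) describes can be sketched with the standard library alone. `poll_until_complete` and the status strings are illustrative, not the SDK's internals:

```python
import time


def poll_until_complete(get_status, max_wait=600, interval=0.1):
    """Poll get_status() until it reports completion or max_wait elapses.

    get_status is any callable returning a status string; 'COMPLETED' and
    'ERROR' are assumed status values for this sketch.
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        status = get_status()
        if status == "COMPLETED":
            return status
        if status == "ERROR":
            raise RuntimeError("async process failed")
        time.sleep(interval)  # back off between status checks
    raise TimeoutError(f"not finished after {max_wait} seconds")
```

In the real client the status check is an HTTP GET against the async status location, and a timeout surfaces as an SDK exception rather than this sketch's `TimeoutError`.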

#### classmethod start(cls, sourcedata, target=None, project_name='Untitled Project', worker_count=None, metric=None, autopilot_on=True, blueprint_threshold=None, response_cap=None, partitioning_method=None, positive_class=None, target_type=None, unsupervised_mode=False, blend_best_models=None, prepare_model_for_deployment=None, consider_blenders_in_recommendation=None, scoring_code_only=None, min_secondary_validation_model_count=None, shap_only_mode=None, relationships_configuration_id=None, autopilot_with_feature_discovery=None, feature_discovery_supervised_feature_reduction=None, unsupervised_type=None, autopilot_cluster_list=None, bias_mitigation_feature_name=None, bias_mitigation_technique=None, include_bias_mitigation_feature_as_predictor_variable=None, incremental_learning_only_mode=None, incremental_learning_on_best_model=None, number_of_incremental_learning_iterations_before_best_model_selection=None, *, use_case=None)

Chain together project creation, file upload, and target selection.

> [!NOTE] Notes
> While this function provides a simple means to get started, it does not expose
> all possible parameters. For advanced usage, using `create`, `set_advanced_options` and `analyze_and_model` directly is recommended.

- Parameters:
- Returns: project – The newly created and initialized project.
- Return type: Project
- Raises:

> [!NOTE] Examples
> ```
> Project.start("./tests/fixtures/file.csv",
>               "a_target",
>               project_name="test_name",
>               worker_count=4,
>               metric="a_metric")
> ```
> 
> This is an example of using a URL to specify the datasource:
> 
> ```
> Project.start("https://example.com/data/file.csv",
>               "a_target",
>               project_name="test_name",
>               worker_count=4,
>               metric="a_metric")
> ```

#### classmethod list(search_params=None, use_cases=None, offset=None, limit=None)

Returns the projects associated with this account.

- Parameters:

> [!NOTE] Examples
> List all projects
> 
> ```
> p_list = Project.list()
> p_list
> >>> [Project('Project One'), Project('Two')]
> ```
> 
> Search for projects by name
> 
> ```
> Project.list(search_params={'project_name': 'red'})
> >>> [Project('Prediction Time'), Project('Fred Project')]
> ```
> 
> List 2nd and 3rd projects
> 
> ```
> Project.list(offset=1, limit=2)
> >>> [Project('Project 2'), Project('Project 3')]
> ```
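
The `offset`/`limit` behaviour shown in the last example works like a slice over the full project list. A stdlib sketch of that reading (the function name and data are illustrative):

```python
def page(items, offset=None, limit=None):
    """Select a page the way offset/limit pagination is described above:
    skip `offset` items, then return at most `limit` of the rest."""
    start = offset or 0
    end = None if limit is None else start + limit
    return items[start:end]


projects = ["Project 1", "Project 2", "Project 3", "Project 4"]
# Mirrors the "List 2nd and 3rd projects" example: offset=1, limit=2.
second_and_third = page(projects, offset=1, limit=2)
```

In the real method the server applies the offset and limit before responding; this sketch only shows which items you should expect back.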

#### refresh()

Fetches the latest state of the project and updates this object
with that information. This is an in-place update, not a new object.

- Return type: None

#### delete()

Removes this project from your account.

- Return type: None

#### analyze_and_model(target=None, mode='quick', metric=None, worker_count=None, positive_class=None, partitioning_method=None, featurelist_id=None, advanced_options=None, max_wait=600, target_type=None, credentials=None, feature_engineering_prediction_point=None, unsupervised_mode=False, relationships_configuration_id=None, class_mapping_aggregation_settings=None, segmentation_task_id=None, unsupervised_type=None, autopilot_cluster_list=None, use_gpu=None)

Set the target variable of an existing project and begin the Autopilot process,
or, if manual mode is specified, only send data to DataRobot for feature analysis.

Any options saved using `set_options` will be used if nothing is passed to `advanced_options`.
However, saved options will be ignored if `advanced_options` are passed.

Target setting is an asynchronous process: after the initial request, the
client keeps polling the status of the async process responsible for target
setting until it finishes. For SDK users, this means the method may raise
exceptions related to its asynchronous nature.

When execution returns to the caller, the autopilot process will already have commenced
(again, unless manual mode is specified).

- Parameters:
- Returns: project – The instance with updated attributes.
- Return type: Project
- Raises:
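
The precedence rule above ("saved options will be ignored if `advanced_options` are passed") amounts to a wholesale replacement, not a field-by-field merge. A one-line sketch, with illustrative names:

```python
def effective_options(saved_options, advanced_options=None):
    """Pick the options used for modeling: an explicitly passed object wins
    outright; otherwise fall back to whatever was saved with set_options().
    Note the saved options are ignored entirely, not merged field by field."""
    return advanced_options if advanced_options is not None else saved_options
```

So to tweak one field while keeping the rest, fetch the saved options first, modify them, and pass the whole object back.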

#### SEE ALSO

[datarobot.models.Project.start](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.start): Combines project creation, file upload, and target selection. Provides fewer options, but is useful for getting started quickly.

#### set_target(target=None, mode='quick', metric=None, worker_count=None, positive_class=None, partitioning_method=None, featurelist_id=None, advanced_options=None, max_wait=600, target_type=None, credentials=None, feature_engineering_prediction_point=None, unsupervised_mode=False, relationships_configuration_id=None, class_mapping_aggregation_settings=None, segmentation_task_id=None, unsupervised_type=None, autopilot_cluster_list=None)

Set target variable of an existing project and begin the Autopilot process (unless manual
mode is specified).

Target setting is an asynchronous process: after the initial request, DataRobot
keeps polling the status of the async process responsible for target setting
until it finishes. For SDK users, this method might raise exceptions related to
its async nature.

When execution returns to the caller, the Autopilot process will already have commenced
(again, unless manual mode is specified).

- Parameters:

#### SEE ALSO

[datarobot.models.Project.start](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.start): Combines project creation, file upload, and target selection. Provides fewer options, but is useful for getting started quickly.

[datarobot.models.Project.analyze_and_model](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.analyze_and_model): The method replacing `set_target` after it is removed.

#### get_model_records(sort_by_partition='validation', sort_by_metric=None, with_metric=None, search_term=None, featurelists=None, families=None, blueprints=None, labels=None, characteristics=None, training_filters=None, number_of_clusters=None, limit=100, offset=0)

Retrieve paginated model records, sorted by scores, with optional filtering.

- Parameters:

#### get_models(order_by=None, search_params=None, with_metric=None, use_new_models_retrieval=False)

List all completed, successful models in the leaderboard for the given project.

- Parameters:

> [!NOTE] Examples
> ```
> Project.get('pid').get_models(order_by=['-sample_pct',
>                               'metric'])
> 
> # Getting models that contain "Ridge" in name
> Project.get('pid').get_models(
>     search_params={
>         'name': "Ridge"
>     })
> 
> # Filtering models based on 'starred' flag:
> Project.get('pid').get_models(search_params={'is_starred': True})
> ```
> 
> ```
> # retrieve additional attributes for the model
> model_records = project.get_models(use_new_models_retrieval=True)
> model_record = model_records[0]
> blueprint_id = model_record.blueprint_id
> blueprint = dr.Blueprint.get(project.id, blueprint_id)
> model_record.number_of_clusters
> blueprint.supports_composable_ml
> blueprint.supports_monotonic_constraints
> blueprint.monotonic_decreasing_featurelist_id
> blueprint.monotonic_increasing_featurelist_id
> model = dr.Model.get(project.id, model_record.id)
> model.prediction_threshold
> model.prediction_threshold_read_only
> model.has_empty_clusters
> model.is_n_clusters_dynamically_determined
> ```

#### recommended_model()

Returns the default recommended model, or None if there is no default recommended model.

- Returns: recommended_model – The default recommended model.
- Return type: Model or None

#### get_top_model(metric=None)

Obtain the top ranked model for a given metric.
If no metric is passed in, it uses the project’s default metric.
Models that display score of N/A in the UI are not included in the ranking (see [https://docs.datarobot.com/en/docs/modeling/reference/model-detail/leaderboard-ref.html#na-scores](https://docs.datarobot.com/en/docs/modeling/reference/model-detail/leaderboard-ref.html#na-scores)).

- Parameters: metric ( Optional[str] ) – Metric to sort models
- Returns: model – The top model
- Return type: Model
- Raises: ValueError – Raised if the project is unsupervised.
      Raised if the project has no target set.
      Raised if no metric was passed or the project has no metric.
      Raised if the metric passed is not used by the models on the leaderboard.

> [!NOTE] Examples
> ```
> from datarobot.models.project import Project
> 
> project = Project.get("<MY_PROJECT_ID>")
> top_model = project.get_top_model()
> ```

#### get_datetime_models()

List all models in the project as DatetimeModels

Requires the project to be datetime partitioned.  If it is not, a ClientError will occur.

- Returns: models – the datetime models
- Return type: list of DatetimeModel

#### get_prime_models()

List all DataRobot Prime models for the project
Prime models were created to approximate a parent model, and have downloadable code.

- Returns: models
- Return type: list of PrimeModel

#### get_prime_files(parent_model_id=None, model_id=None)

List all downloadable code files from DataRobot Prime for the project

- Parameters:
- Returns: files
- Return type: list of PrimeFile

#### get_dataset()

Retrieve the dataset used to create a project.

- Returns: Dataset used for creation of the project, or None if no catalog_id is present.
- Return type: Dataset

> [!NOTE] Examples
> ```
> from datarobot.models.project import Project
> 
> project = Project.get("<MY_PROJECT_ID>")
> dataset = project.get_dataset()
> ```

#### get_datasets()

List all the datasets that have been uploaded for predictions

- Returns: datasets
- Return type: list of PredictionDataset instances

#### upload_dataset(sourcedata, max_wait=600, read_timeout=600, forecast_point=None, predictions_start_date=None, predictions_end_date=None, dataset_filename=None, relax_known_in_advance_features_check=None, credentials=None, actual_value_column=None, secondary_datasets_config_id=None)

Upload a new dataset to make predictions against

- Parameters:
- Returns: dataset – The newly uploaded dataset.
- Return type: PredictionDataset
- Raises:

#### upload_dataset_from_data_source(data_source_id, username, password, max_wait=600, forecast_point=None, relax_known_in_advance_features_check=None, credentials=None, predictions_start_date=None, predictions_end_date=None, actual_value_column=None, secondary_datasets_config_id=None)

Upload a new dataset from a data source to make predictions against

- Parameters:
- Returns: dataset – the newly uploaded dataset
- Return type: PredictionDataset

#### upload_dataset_from_catalog(dataset_id, credential_id=None, credential_data=None, dataset_version_id=None, max_wait=600, forecast_point=None, relax_known_in_advance_features_check=None, credentials=None, predictions_start_date=None, predictions_end_date=None, actual_value_column=None, secondary_datasets_config_id=None)

Upload a new dataset from a catalog dataset to make predictions against

- Parameters:

#### get_blueprints()

List all blueprints recommended for a project.

- Returns: menu – All blueprints in a project’s repository.
- Return type: list of Blueprint instances

#### get_features()

List all features for this project

- Returns: all features for this project
- Return type: list of Feature

#### get_modeling_features(batch_size=None)

List all modeling features for this project

Only available once the target and partitioning settings have been set.  For more
information on the distinction between input and modeling features, see the [time series documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/time_series.html#input-vs-modeling).

- Parameters: batch_size ( Optional[int] ) – The number of features to retrieve in a single API call.  If specified, the client may
  make multiple calls to retrieve the full list of features.  If not specified, an
  appropriate default will be chosen by the server.
- Returns: All modeling features in this project
- Return type: list of ModelingFeature
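
The `batch_size` behaviour described above ("the client may make multiple calls to retrieve the full list") is the usual batched-retrieval loop. A sketch in which `fetch_batch` stands in for the underlying API call:

```python
def retrieve_all(fetch_batch, batch_size):
    """Accumulate items by requesting batch_size at a time until a short
    (or empty) batch signals the end of the collection."""
    items, offset = [], 0
    while True:
        batch = fetch_batch(offset=offset, limit=batch_size)
        items.extend(batch)
        if len(batch) < batch_size:  # short batch: nothing left to fetch
            return items
        offset += batch_size
```

The trade-off `batch_size` controls: smaller batches mean more round trips but smaller responses; leaving it unset lets the server pick a default.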

#### get_featurelists()

List all featurelists created for this project

- Returns: All featurelists created for this project
- Return type: list of Featurelist

#### get_associations(assoc_type, metric, featurelist_id=None)

Get the association statistics and metadata for a project’s
informative features

Added in version v2.17.

- Parameters:
- Returns: association_data – Pairwise metric strength data, feature clustering data,
  and ordering data for Feature Association Matrix visualization
- Return type: dict

#### get_association_featurelists()

List featurelists and get feature association status for each

Added in version v2.19.

- Returns: feature_lists – Dict with ‘featurelists’ as the key and a list of featurelists as the value
- Return type: dict

#### get_association_matrix_details(feature1, feature2)

Get a sample of the actual values used to measure the association
between a pair of features

Added in version v2.17.

- Parameters:
- Returns:

#### get_modeling_featurelists(batch_size=None)

List all modeling featurelists created for this project

Modeling featurelists can only be created after the target and partitioning options have
been set for a project.  In time series projects, these are the featurelists that can be
used for modeling; in other projects, they behave the same as regular featurelists.

See the [time series documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/time_series.html#input-vs-modeling) for more information.

- Parameters: batch_size ( Optional[int] ) – The number of featurelists to retrieve in a single API call.  If specified, the client
  may make multiple calls to retrieve the full list of features.  If not specified, an
  appropriate default will be chosen by the server.
- Returns: all modeling featurelists in this project
- Return type: list of ModelingFeaturelist

#### get_discarded_features()

Retrieve features that were discarded during feature generation. Applicable to
time series projects. Can be called at the modeling stage.

- Returns: discarded_features_info
- Return type: DiscardedFeaturesInfo

#### restore_discarded_features(features, max_wait=600)

Restore features that were discarded during feature generation. Applicable to
time series projects. Can be called at the modeling stage.

- Returns: status – information about features requested to be restored.
- Return type: FeatureRestorationStatus

#### create_type_transform_feature(name, parent_name, variable_type, replacement=None, date_extraction=None, max_wait=600)

Create a new feature by transforming the type of an existing feature in the project

Note that only the following transformations are supported:

1. Text to categorical or numeric
2. Categorical to text or numeric
3. Numeric to categorical
4. Date to categorical or numeric

> [!NOTE] Notes
> Special considerations when casting numeric to categorical
> 
> There are two parameters which can be used for `variableType` to convert numeric
> data to categorical levels. These differ in the assumptions they make about the input
> data, and are very important when considering the data that will be used to make
> predictions. The assumptions that each makes are:
> 
> categorical
> : The data in the column is all integral, and there are no missing
>   values. If either of these conditions do not hold in the training set, the
>   transformation will be rejected. During predictions, if any of the values in the
>   parent column are missing, the predictions will error.
> 
> categoricalInt
> : New in v2.6. All of the data in the column should be considered categorical in
>   its string form when cast to an int by truncation. For example, the value `3`
>   will be cast as the string `3`, and the value `3.14` will also be cast as the
>   string `3`. Further, the value `-3.6` will become the string `-3`.
>   Missing values will still be recognized as missing.
> 
> For convenience these are represented in the enum `VARIABLE_TYPE_TRANSFORM` with the
> names `CATEGORICAL` and `CATEGORICAL_INT`.

- Parameters:
- Returns: The data of the new Feature
- Return type: Feature
- Raises:

#### get_featurelist_by_name(name)

Retrieve a featurelist by name.

- Parameters: name ( Optional[str] ) – The name of the Project’s featurelist to get.
- Returns: the featurelist found by name, if one exists
- Return type: Featurelist

> [!NOTE] Examples
> ```
> project = Project.get('5223deadbeefdeadbeef0101')
> featurelist = project.get_featurelist_by_name("Raw Features")
> ```

#### create_featurelist(name=None, features=None, starting_featurelist=None, starting_featurelist_id=None, starting_featurelist_name=None, features_to_include=None, features_to_exclude=None)

Creates a new featurelist

- Parameters:
- Returns: newly created featurelist
- Return type: Featurelist
- Raises:

> [!NOTE] Examples
> ```
> project = Project.get('5223deadbeefdeadbeef0101')
> flists = project.get_featurelists()
> 
> # Create a new featurelist using a subset of features from an
> # existing featurelist
> flist = flists[0]
> features = flist.features[::2]  # Half of the features
> 
> new_flist = project.create_featurelist(
>     name='Feature Subset',
>     features=features,
> )
> ```
> 
> ```
> project = Project.get('5223deadbeefdeadbeef0101')
> 
> # Create a new featurelist using a subset of features from an
> # existing featurelist by using features_to_exclude param
> 
> new_flist = project.create_featurelist(
>     name='Feature Subset of Existing Featurelist',
>     starting_featurelist_name="Informative Features",
>     features_to_exclude=["metformin", "weight", "age"],
> )
> ```

#### create_modeling_featurelist(name, features, skip_datetime_partition_column=False)

Create a new modeling featurelist

Modeling featurelists can only be created after the target and partitioning options have
been set for a project.  In time series projects, these are the featurelists that can be
used for modeling; in other projects, they behave the same as regular featurelists.

See the [time series documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/time_series.html#input-vs-modeling) for more information.

- Parameters:
- Returns: featurelist – the newly created featurelist
- Return type: ModelingFeaturelist

> [!NOTE] Examples
> ```
> project = Project.get('1234deadbeeffeeddead4321')
> modeling_features = project.get_modeling_features()
> selected_features = [feat.name for feat in modeling_features][:5]  # select first five
> new_flist = project.create_modeling_featurelist('Model This', selected_features)
> ```

#### get_metrics(feature_name)

Get the metrics recommended for modeling on the given feature.

- Parameters: feature_name ( str ) – The name of the feature to query regarding which metrics are
  recommended for modeling.
- Returns:

#### get_status()

Query the server for project status.

- Returns: status – Contains:
- Return type: dict

> [!NOTE] Examples
> ```
> {"autopilot_done": False,
>  "stage": "modeling",
>  "stage_description": "Ready for modeling"}
> ```

#### pause_autopilot()

Pause autopilot, which stops processing the next jobs in the queue.

- Returns: paused – Whether the command was acknowledged
- Return type: boolean

#### unpause_autopilot()

Unpause autopilot, which restarts processing the next jobs in the queue.

- Returns: unpaused – Whether the command was acknowledged.
- Return type: boolean

#### start_autopilot(featurelist_id, mode='quick', blend_best_models=False, scoring_code_only=False, prepare_model_for_deployment=True, consider_blenders_in_recommendation=False, run_leakage_removed_feature_list=True, autopilot_cluster_list=None)

Start Autopilot on provided featurelist with the specified Autopilot settings,
halting the current Autopilot run.

Only one autopilot can run at a time, so any autopilot already running on a
different featurelist will be halted. Modeling jobs already in the queue are
not affected, but the halted autopilot will not add new jobs to the queue.

- Parameters:

#### train(trainable, sample_pct=None, featurelist_id=None, source_project_id=None, scoring_type=None, training_row_count=None, monotonic_increasing_featurelist_id=, monotonic_decreasing_featurelist_id=, n_clusters=None)

Submit a job to the queue to train a model.

Either sample_pct or training_row_count can be used to specify the amount of data to
use, but not both. If neither is specified, the default is the maximum amount of data that
can safely be used to train any blueprint without going into the validation data.

In smart-sampled projects, sample_pct and training_row_count are assumed to be in terms
of rows of the minority class.
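To illustrate the smart-sampling convention, here is a plain-Python sketch (not SDK code) of counting rows of the minority class, which is the unit sample_pct and training_row_count are measured in for such projects:

```python
from collections import Counter

# Count the rows belonging to the least frequent target class; in a
# smart-sampled project, training_row_count is interpreted in these units.
def minority_row_count(target_values):
    return min(Counter(target_values).values())
```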

> [!NOTE] Notes
> If the project uses datetime partitioning, use [Project.train_datetime](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.train_datetime) instead.

- Parameters:

> [!NOTE] Examples
> Use a `Blueprint` instance:
> 
> ```
> blueprint = project.get_blueprints()[0]
> model_job_id = project.train(blueprint, training_row_count=project.max_train_rows)
> ```
> 
> Use a `blueprint_id`, which is a string. In the first call below, it is
> assumed that the blueprint was created by this project. If you are
> using a blueprint from another project, you will need to pass the
> id of that other project as well.
> 
> ```
> blueprint_id = 'e1c7fc29ba2e612a72272324b8a842af'
> project.train(blueprint_id, training_row_count=project.max_train_rows)
> 
> another_project.train(blueprint_id, source_project_id=project.id)
> ```
> 
> You can also easily use this interface to train a new model using the data from
> an existing model:
> 
> ```
> model = project.get_models()[0]
> model_job_id = project.train(model.blueprint.id,
>                              sample_pct=100)
> ```

#### train_datetime(blueprint_id, featurelist_id=None, training_row_count=None, training_duration=None, source_project_id=None, monotonic_increasing_featurelist_id=, monotonic_decreasing_featurelist_id=, use_project_settings=False, sampling_method=None, n_clusters=None)

Create a new model in a datetime partitioned project

If the project is not datetime partitioned, an error will occur.

All durations should be specified with a duration string such as those returned
by the [partitioning_methods.construct_duration_string](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.construct_duration_string) helper method.
Please see [datetime partitioned project documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/datetime_partition.html#date-dur-spec) for more information on duration strings.
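As a rough illustration, such duration strings follow an ISO-8601-like period format; the sketch below mimics the shape of the output under that assumption — prefer the real `construct_duration_string` helper in practice, since the exact format is defined by the SDK:

```python
# Hedged sketch of an ISO-8601-style period string of the kind the
# construct_duration_string helper returns (assumed format, not SDK code).
def duration_string(years=0, months=0, days=0, hours=0, minutes=0, seconds=0):
    return f"P{years}Y{months}M{days}DT{hours}H{minutes}M{seconds}S"
```

For example, a one-year training duration would read `P1Y0M0DT0H0M0S`.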

- Parameters:
- Returns: job – the created job to build the model
- Return type: ModelJob

#### blend(model_ids, blender_method)

Submit a job for creating a blender model. Upon success, the new job will
be added to the end of the queue.

- Parameters:
- Returns: model_job – New ModelJob instance for the blender creation job in queue.
- Return type: ModelJob

#### SEE ALSO

[datarobot.models.Project.check_blendable](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.check_blendable): to confirm if models can be blended

#### check_blendable(model_ids, blender_method)

Check if the specified models can be successfully blended

- Parameters:
- Return type: EligibilityResult

#### start_prepare_model_for_deployment(model_id)

Prepare a specific model for deployment.

The requested model will be trained on the maximum autopilot size then go through the
recommendation stages. For datetime partitioned projects, this includes the feature impact
stage, retraining on a reduced feature list, and retraining the best of the reduced
feature list model and the max autopilot original model on recent data. For non-datetime
partitioned projects, this includes the feature impact stage, retraining on a reduced
feature list, retraining the best of the reduced feature list model and the max autopilot
original model up to the holdout size, then retraining the up-to-the holdout model on the
full dataset.

- Parameters: model_id ( str ) – The model to prepare for deployment.
- Return type: None

#### get_all_jobs(status=None)

Get a list of jobs

This will give Jobs representing any type of job, including modeling or predict jobs.

- Parameters: status ( QUEUE_STATUS enum, optional ) – If called with QUEUE_STATUS.INPROGRESS, will return the jobs
  that are currently running. If called with QUEUE_STATUS.QUEUE, will return the jobs that
  are waiting to be run. If called with QUEUE_STATUS.ERROR, will return the jobs that
  have errored. If no value is provided, will return all jobs currently running
  or waiting to be run.
- Returns: jobs – Each is an instance of Job
- Return type: list
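The filter semantics can be sketched with plain Python over job dicts (illustrative, not SDK internals): with no status, running and queued jobs are returned; errored jobs appear only when requested explicitly.

```python
# Sketch of the status-filter behaviour described above.
def filter_jobs(jobs, status=None):
    if status is None:  # default: running plus queued
        return [j for j in jobs if j["status"] in ("inprogress", "queue")]
    return [j for j in jobs if j["status"] == status]

jobs = [
    {"id": 1, "status": "inprogress"},
    {"id": 2, "status": "queue"},
    {"id": 3, "status": "error"},
]
```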

#### get_blenders()

Get a list of blender models.

- Returns: list of all blender models in project.
- Return type: list of BlenderModel

#### get_frozen_models()

Get a list of frozen models

- Returns: list of all frozen models in project.
- Return type: list of FrozenModel

#### get_combined_models()

Get a list of models in segmented project.

- Returns: list of all combined models in segmented project.
- Return type: list of CombinedModel

#### get_active_combined_model()

Retrieve currently active combined model in segmented project.

- Returns: currently active combined model in segmented project.
- Return type: CombinedModel

#### get_segments_models(combined_model_id=None)

Retrieve a list of all models belonging to the segments/child projects
of the segmented project.

- Parameters: combined_model_id ( Optional[str] ) – Id of the combined model to get segments for. If there is only a single
  combined model it can be retrieved automatically, but this must be
  specified when there are > 1 combined models.
- Returns: segments_models – A list of dictionaries containing all of the segments/child projects,
  each with a list of their models ordered by metric from best to worst.
- Return type: list(dict)

#### get_model_jobs(status=None)

Get a list of modeling jobs

- Parameters: status ( QUEUE_STATUS enum, optional ) – If called with QUEUE_STATUS.INPROGRESS, will return the modeling jobs
  that are currently running. If called with QUEUE_STATUS.QUEUE, will return the modeling jobs that
  are waiting to be run. If called with QUEUE_STATUS.ERROR, will return the modeling jobs that
  have errored. If no value is provided, will return all modeling jobs currently running
  or waiting to be run.
- Returns: jobs – Each is an instance of ModelJob
- Return type: list

#### get_predict_jobs(status=None)

Get a list of prediction jobs

- Parameters: status ( QUEUE_STATUS enum, optional ) – If called with QUEUE_STATUS.INPROGRESS, will return the prediction jobs
  that are currently running. If called with QUEUE_STATUS.QUEUE, will return the prediction jobs that
  are waiting to be run. If called with QUEUE_STATUS.ERROR, will return the prediction jobs that
  have errored. If called without a status, will return all prediction jobs currently running
  or waiting to be run.
- Returns: jobs – Each is an instance of PredictJob
- Return type: list

#### wait_for_autopilot(check_interval=20.0, timeout=86400, verbosity=1)

Blocks until autopilot is finished. This will raise an exception if the autopilot
mode is changed from AUTOPILOT_MODE.FULL_AUTO.

It makes API calls to sync the project state with the server and to look at
which jobs are enqueued.

- Parameters:
- Raises:
- Return type: None
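The blocking behaviour amounts to a poll-sleep loop; a minimal sketch, with `get_status` standing in for the real API call, might look like:

```python
import time

# Poll until the status reports completion, sleeping check_interval
# seconds between calls and raising once timeout seconds have elapsed.
def wait_until_done(get_status, check_interval=20.0, timeout=86400):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status()["autopilot_done"]:
            return
        time.sleep(check_interval)
    raise TimeoutError("autopilot did not finish within the timeout")
```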

#### rename(project_name)

Update the name of the project.

- Parameters: project_name ( str ) – The new name
- Return type: None

#### set_project_description(project_description)

Set or update the project description.

- Parameters: project_description ( str ) – The new description for this project.
- Return type: None

#### unlock_holdout()

Unlock the holdout for this project.

This will cause subsequent queries of the models of this project to
contain the metric values for the holdout set, if it exists.

Take care, as this cannot be undone. Remember that best practice is to
select a model before analyzing its performance on the holdout set.

- Return type: None

#### set_worker_count(worker_count)

Sets the number of workers allocated to this project.

Note that this value is limited to the number allowed by your account.
Lowering the number will not stop currently running jobs, but will
cause the queue to wait for the appropriate number of jobs to finish
before attempting to run more jobs.

- Parameters: worker_count ( int ) – The number of concurrent workers to request from the pool of workers.
  (New in version v2.14) Setting this to -1 will update the number of workers to the
  maximum available to your account.
- Return type: None
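The -1 convention and the account cap described above can be summarized in a tiny sketch (illustrative only; the enforcement happens server-side):

```python
# Resolve a requested worker count against an account maximum:
# -1 means "use the maximum"; anything larger is capped.
def resolve_worker_count(requested, account_max):
    return account_max if requested == -1 else min(requested, account_max)
```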

#### set_advanced_options(advanced_options=None, **kwargs)

Update the advanced options of this project.

> [!NOTE] Notes
> Project options will not be stored at the database level, so the options
> set via this method will only be attached to a project instance for the lifetime of a
> client session (if you quit your session and reopen a new one before running autopilot,
> the advanced options will be lost).
> 
> Either accepts an AdvancedOptions object to replace all advanced options or individual keyword
> arguments. This is an inplace update, not a new object. The options set will only remain for the
> life of this project instance within a given session.

- Parameters:
- Return type: None

#### list_advanced_options()

View the advanced options that have been set on a project instance.
Includes those that haven’t been set (with a value of None).

- Return type: dict of advanced options and their values

#### set_partitioning_method(cv_method=None, validation_type=None, seed=0, reps=None, user_partition_col=None, training_level=None, validation_level=None, holdout_level=None, cv_holdout_level=None, validation_pct=None, holdout_pct=None, partition_key_cols=None, partitioning_method=None)

Configures the partitioning method for this project.

If this project does not already have a partitioning method set, creates
a new configuration based on provided args.

If the partitioning_method arg is set, that configuration will instead be used.

> [!NOTE] Notes
> This is an inplace update, not a new object. The options set will only remain for the
> life of this project instance within a given session. You must still call `set_target` to make this change permanent for the project. Calling `refresh` without first calling `set_target` will invalidate this configuration. Similarly, calling `get` to retrieve a
> second copy of the project will not include this configuration.
> 
> Added in version v3.0.

- Parameters:
- Raises:
- Returns: project – The instance with updated attributes.
- Return type: Project

#### get_uri()

- Returns: url – Permanent static hyperlink to a project leaderboard.
- Return type: str

#### get_rating_table_models()

Get a list of models with a rating table

- Returns: list of all models with a rating table in project.
- Return type: list of RatingTableModel

#### get_rating_tables()

Get a list of rating tables

- Returns: list of rating tables in project.
- Return type: list of RatingTable

#### get_access_list()

Retrieve users who have access to this project and their access levels

Added in version v2.15.

- Return type: list of SharingAccess

#### share(access_list, send_notification=None, include_feature_discovery_entities=None)

Modify the ability of users to access this project

Added in version v2.15.

- Parameters:
- Return type: None
- Raises: datarobot.ClientError : – if you do not have permission to share this project, if the user you’re sharing with
      doesn’t exist, if the same user appears multiple times in the access_list, or if these
      changes would leave the project without an owner

> [!NOTE] Examples
> Transfer access to the project from [old_user@datarobot.com](mailto:old_user@datarobot.com) to [new_user@datarobot.com](mailto:new_user@datarobot.com)
> 
> ```
> import datarobot as dr
> 
> new_access = dr.SharingAccess("new_user@datarobot.com",
>                               dr.enums.SHARING_ROLE.OWNER, can_share=True)
> access_list = [dr.SharingAccess("old_user@datarobot.com", None), new_access]
> 
> dr.Project.get('my-project-id').share(access_list)
> ```

#### batch_features_type_transform(parent_names, variable_type, prefix=None, suffix=None, max_wait=600)

Create new features by transforming the type of existing ones.

Added in version v2.17.

> [!NOTE] Notes
> The following transformations are only supported in batch mode:
> 
> 1. Text to categorical or numeric
> 2. Categorical to text or numeric
> 3. Numeric to categorical
> 
> See the notes on `create_type_transform_feature` above for special considerations
> when casting numeric to categorical.
> Date to categorical or numeric transformations are not currently supported for batch
> mode, but can be performed individually using `create_type_transform_feature`.

- Parameters:

#### clone_project(new_project_name=None, max_wait=600)

Create a fresh (post-EDA1) copy of this project that is ready for setting
targets and modeling options.

- Parameters:
- Return type: datarobot.models.Project

#### create_interaction_feature(name, features, separator, max_wait=600)

Create a new interaction feature by combining two categorical ones.

Added in version v2.21.

- Parameters:
- Returns: The data of the new Interaction feature
- Return type: datarobot.models.InteractionFeature
- Raises:

#### get_relationships_configuration()

Get the relationships configuration for a given project

Added in version v2.21.

- Returns: relationships_configuration – relationships configuration applied to project
- Return type: RelationshipsConfiguration

#### download_feature_discovery_dataset(file_name, pred_dataset_id=None)

Download Feature discovery training or prediction dataset

- Parameters:
- Return type: None

#### download_feature_discovery_recipe_sqls(file_name, model_id=None, max_wait=600)

Export and download Feature discovery recipe SQL statements.

Added in version v2.25.

- Parameters:
- Raises:
- Return type: None

#### validate_external_time_series_baseline(catalog_version_id, target, datetime_partitioning, max_wait=600)

Validate external baseline prediction catalog.

The forecast windows settings, validation and holdout duration specified in the
datetime specification must be consistent with project settings as these parameters
are used to check whether the specified catalog version id has been validated or not.
See [external baseline predictions documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/time_series.html#external-baseline-predictions) for example usage.

- Parameters:

#### download_multicategorical_data_format_errors(file_name)

Download multicategorical data format errors to a CSV file. If any format errors
were detected in potentially multicategorical features, the resulting file will contain
at most 10 entries. The CSV content lists the feature name, the dataset index at which the
error was detected, the row value, and the type of error detected. If there were no
errors, or none of the features were potentially multicategorical, the CSV file will be
empty, containing only the header.

- Parameters: file_name ( str ) – File path where CSV file will be saved.
- Return type: None

#### get_multiseries_names()

For a multiseries time series project, this returns all distinct entries in the
multiseries column. For a non-time series project, it returns an empty list.

- Returns: multiseries_names – List of all distinct entries in the multiseries column
- Return type: List[str]
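"Distinct entries" behaves like deduplication over the multiseries column; a plain-Python sketch (the real API's ordering is not guaranteed by this sketch):

```python
# Deduplicate a column's values, keeping first-seen order.
def distinct_entries(column):
    return list(dict.fromkeys(column))
```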

#### restart_segment(segment)

Restart single segment in a segmented project.

Added in version v2.28.

Segment restart is allowed only for segments that haven’t reached the modeling phase.
Restarting will permanently remove the previous project and trigger set up of a new one
for that particular segment.

- Parameters: segment ( str ) – Segment to restart

#### get_bias_mitigated_models(parent_model_id=None, offset=0, limit=100)

List the child models with bias mitigation applied

Added in version v2.29.

- Parameters:
- Returns: models
- Return type: list of dict

#### apply_bias_mitigation(bias_mitigation_parent_leaderboard_id, bias_mitigation_feature_name, bias_mitigation_technique, include_bias_mitigation_feature_as_predictor_variable)

Apply bias mitigation to an existing model by training a version of that model but with
bias mitigation applied.
An error will be returned if the model does not support bias mitigation with the technique
requested.

Added in version v2.29.

- Parameters:
- Returns: the job of the model with bias mitigation applied that was just submitted for training
- Return type: ModelJob

#### request_bias_mitigation_feature_info(bias_mitigation_feature_name)

Request a compute job for bias mitigation feature info for a given feature, which will
include:

- whether there are any rare classes
- whether there are any combinations of the target values and the feature values that
  never occur in the same row
- whether the feature has a high number of missing values

Note that this feature check is dependent on the current target selected for the project.

Added in version v2.29.

- Parameters: bias_mitigation_feature_name ( str ) – The feature name of the protected features that will be used in a bias mitigation task to
  attempt to mitigate bias
- Returns: Bias mitigation feature info model for the requested feature
- Return type: BiasMitigationFeatureInfo

#### get_bias_mitigation_feature_info(bias_mitigation_feature_name)

Get the computed bias mitigation feature info for a given feature, which will include:

- whether there are any rare classes
- whether there are any combinations of the target values and the feature values that
  never occur in the same row
- whether the feature has a high number of missing values

Note that this feature check is dependent on the current target selected for the project.
If this info has not already been computed, this will raise a 404 error.

Added in version v2.29.

- Parameters: bias_mitigation_feature_name ( str ) – The feature name of the protected features that will be used in a bias mitigation task to
  attempt to mitigate bias
- Returns: Bias mitigation feature info model for the requested feature
- Return type: BiasMitigationFeatureInfo

#### classmethod from_data(data)

Instantiate an object of this class using a dict.

- Parameters: data ( dict ) – Correctly snake_cased keys and their values.
- Return type: TypeVar ( T , bound= APIObject)

#### classmethod from_server_data(data, keep_attrs=None)

Instantiate an object of this class using the data directly from the server,
meaning that the keys may have the wrong camel casing

- Parameters:
- Return type: TypeVar ( T , bound= APIObject)

#### open_in_browser()

Opens the class’ relevant web browser location.
If a default browser is not available, the URL is logged.

Note: if a text-mode browser is used, the calling process will block
until the user exits the browser.

- Return type: None

#### set_datetime_partitioning(datetime_partition_spec=None, **kwargs)

Set the datetime partitioning method for a time series project by either passing in
a DatetimePartitioningSpecification instance or any individual attributes of that class.
Updates `self.partitioning_method` if already set previously (does not replace it).

This is an alternative to passing a specification to [Project.analyze_and_model](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.analyze_and_model) via the `partitioning_method` parameter. To see the
full partitioning based on the project dataset, use [DatetimePartitioning.generate](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.generate).

Added in version v3.0.

- Parameters: datetime_partition_spec ( Optional[DatetimePartitioningSpecification] ) – The customizable
  aspects of datetime partitioning for a time series project. An alternative to passing
  individual settings (attributes of the DatetimePartitioningSpecification class).
- Returns: Full partitioning including user-specified attributes as well as those determined by DR
  based on the dataset.
- Return type: DatetimePartitioning

#### list_datetime_partition_spec()

List datetime partitioning settings.

This method makes an API call to retrieve settings from the DB if the project is in the
modeling stage, i.e., if analyze_and_model (autopilot) has already been called.

If analyze_and_model has not yet been called, this method will instead simply print
settings from project.partitioning_method.

Added in version v3.0.

- Return type: DatetimePartitioningSpecification or None

### class datarobot.helpers.eligibility_result.EligibilityResult

Represents whether a particular operation is supported

For instance, a function to check whether a set of models can be blended can return an
EligibilityResult specifying whether or not blending is supported and why it may not be
supported.

- Variables:

## Advanced options

### class datarobot.helpers.AdvancedOptions

Used when setting the target of a project to set advanced options of modeling process.

- Parameters:

> [!NOTE] Examples
> ```
> import datarobot as dr
> advanced_options = dr.AdvancedOptions(
>     weights='weights_column',
>     offset=['offset_column'],
>     exposure='exposure_column',
>     response_cap=0.7,
>     blueprint_threshold=2,
>     smart_downsampled=True, majority_downsampling_rate=75.0)
> ```

#### get(_AdvancedOptions__key, _AdvancedOptions__default=None)

Return the value for key if key is in the dictionary, else default.

- Return type: Optional [ Any ]

#### pop(_AdvancedOptions__key)

If the key is not found, return the default if given; otherwise,
raise a KeyError.

- Return type: Optional [ Any ]

#### update_individual_options(**kwargs)

Update individual attributes of an instance of [AdvancedOptions](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.AdvancedOptions).

- Return type: None

#### collect_payload()

A helper to collect data for the payload.

- Return type: Dict [ str , Union [ Any , Dict [ str , str ]]]

## Partitioning

### class datarobot.RandomCV

A partition in which observations are randomly assigned to cross-validation groups
and the holdout set.

- Parameters:

### class datarobot.StratifiedCV

A partition in which observations are randomly assigned to cross-validation groups
and the holdout set, preserving in each group the same ratio of positive to negative cases as in
the original data.

- Parameters:

### class datarobot.GroupCV

A partition in which one column is specified, and rows sharing a common value
for that column are guaranteed to stay together in the partitioning into cross-validation
groups and the holdout set.

- Parameters:

### class datarobot.UserCV

A partition where the cross-validation folds and the holdout set are specified by
the user.

- Parameters:

### class datarobot.RandomTVH

Specifies a partitioning method in which rows are randomly assigned to training, validation,
and holdout.

- Parameters:

### class datarobot.UserTVH

Specifies a partitioning method in which rows are assigned by the user to training,
validation, and holdout sets.

- Parameters:

### class datarobot.StratifiedTVH

A partition in which observations are randomly assigned to train, validation, and
holdout sets, preserving in each group the same ratio of positive to negative cases as in the
original data.

- Parameters:

### class datarobot.GroupTVH

A partition in which one column is specified, and rows sharing a common value
for that column are guaranteed to stay together in the partitioning into the training,
validation, and holdout sets.

- Parameters:

### class datarobot.DatetimePartitioningSpecification

Uniquely defines a DatetimePartitioning for some project

Includes only the attributes of DatetimePartitioning that are directly controllable by users,
not those determined by the DataRobot application based on the project dataset and the
user-controlled settings.

This is the specification that should be passed to [Project.analyze_and_model](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.analyze_and_model) via the `partitioning_method` parameter. To see the
full partitioning based on the project dataset, use [DatetimePartitioning.generate](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.generate).

All durations should be specified with a duration string such as those returned
by the [partitioning_methods.construct_duration_string](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.construct_duration_string) helper method.
Please see [datetime partitioned project documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/datetime_partition.html#date-dur-spec) for more information on duration strings.

Note that either ( `holdout_start_date`, `holdout_duration`) or ( `holdout_start_date`, `holdout_end_date`) can be used to specify holdout partitioning settings.

- Variables:

#### collect_payload()

Set up the dict that should be sent to the server when setting the target

- Returns: partitioning_spec
- Return type: dict

#### prep_payload(project_id, max_wait=600)

Run any necessary validation and prep of the payload, including async operations

Mainly used for the datetime partitioning spec but implemented in general for consistency

- Return type: None

#### update(**kwargs)

Update this instance, matching attributes to kwargs

Mainly used for the datetime partitioning spec but implemented in general for consistency

- Return type: None

### class datarobot.BacktestSpecification

Uniquely defines a Backtest used in a DatetimePartitioning

Includes only the attributes of a backtest directly controllable by users.  The other attributes
are assigned by the DataRobot application based on the project dataset and the user-controlled
settings.

There are two ways to specify an individual backtest:

Option 1: Use `index`, `gap_duration`, `validation_start_date`, and `validation_duration`. All durations should be specified with a duration string such as those
returned by the [partitioning_methods.construct_duration_string](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.construct_duration_string) helper method.

```
from datetime import datetime

import datarobot as dr

partitioning_spec = dr.DatetimePartitioningSpecification(
    backtests=[
        # modify the first backtest using option 1
        dr.BacktestSpecification(
            index=0,
            gap_duration=dr.partitioning_methods.construct_duration_string(),
            validation_start_date=datetime(year=2010, month=1, day=1),
            validation_duration=dr.partitioning_methods.construct_duration_string(years=1),
        )
    ],
    # other partitioning settings...
)
```

Option 2 (New in version v2.20): Use `index`, `primary_training_start_date`, `primary_training_end_date`, `validation_start_date`, and `validation_end_date`. In this
case, note that setting `primary_training_end_date` and `validation_start_date` to the same
timestamp will result with no gap being created.

```
from datetime import datetime

import datarobot as dr

partitioning_spec = dr.DatetimePartitioningSpecification(
    backtests=[
        # modify the first backtest using option 2
        dr.BacktestSpecification(
            index=0,
            primary_training_start_date=datetime(year=2005, month=1, day=1),
            primary_training_end_date=datetime(year=2010, month=1, day=1),
            validation_start_date=datetime(year=2010, month=1, day=1),
            validation_end_date=datetime(year=2011, month=1, day=1),
        )
    ],
    # other partitioning settings...
)
```

All durations should be specified with a duration string such as those returned
by the [partitioning_methods.construct_duration_string](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.construct_duration_string) helper method.
Please see [datetime partitioned project documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/datetime_partition.html#date-dur-spec) for more information on duration strings.

- Variables:

### class datarobot.FeatureSettings

Per feature settings

- Variables:

#### collect_payload(use_a_priori=False)

- Parameters: use_a_priori ( bool ) – Switch to using the older a_priori key name instead of known_in_advance.
  Default False
- Return type: BacktestSpecification dictionary representation

### class datarobot.Periodicity

Periodicity configuration

- Parameters:

> [!NOTE] Examples
> ```
> import datarobot as dr
> periodicities = [
>     dr.Periodicity(time_steps=10, time_unit=dr.enums.TIME_UNITS.HOUR),
>     dr.Periodicity(time_steps=600, time_unit=dr.enums.TIME_UNITS.MINUTE)]
> spec = dr.DatetimePartitioningSpecification(
>     # ...
>     periodicities=periodicities
> )
> ```

### class datarobot.DatetimePartitioning

Full partitioning of a project for datetime partitioning.

To instantiate, use [DatetimePartitioning.get(project_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.get).

Includes both the attributes specified by the user, as well as those determined by the DataRobot
application based on the project dataset.  In order to use a partitioning to set the target,
call [to_specification](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.to_specification) and pass the
resulting [DatetimePartitioningSpecification](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioningSpecification) to [Project.analyze_and_model](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.analyze_and_model) via the `partitioning_method` parameter.

The available training data corresponds to all the data available for training, while the
primary training data corresponds to the data that can be used to train while ensuring that all
backtests are available.  If a model is trained with more data than is available in the primary
training data, then all backtests may not have scores available.

All durations are specified with a duration string such as those returned
by the [partitioning_methods.construct_duration_string](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.construct_duration_string) helper method.
Please see [datetime partitioned project documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/datetime_partition.html#date-dur-spec) for more information on duration strings.

- Variables:

#### classmethod generate(cls, project_id, spec, max_wait=600, target=None)

Preview the full partitioning determined by a DatetimePartitioningSpecification

Based on the project dataset and the partitioning specification, inspect the full
partitioning that would be used if the same specification were passed into [Project.analyze_and_model](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.analyze_and_model).

- Parameters:
- Returns: the full generated partitioning
- Return type: DatetimePartitioning

#### classmethod get(project_id)

Retrieve the DatetimePartitioning from a project

Only available if the project has already set the target as a datetime project.

- Parameters: project_id ( str ) – the ID of the project to retrieve partitioning for
- Returns: the full partitioning for the project
- Return type: DatetimePartitioning

#### classmethod generate_optimized(project_id, spec, target, max_wait=600)

Preview the full partitioning determined by a DatetimePartitioningSpecification

Based on the project dataset and the partitioning specification, inspect the full
partitioning that would be used if the same specification were passed into
Project.analyze_and_model.

- Parameters:
- Returns: the full generated partitioning
- Return type: DatetimePartitioning

#### classmethod get_optimized(project_id, datetime_partitioning_id)

Retrieve an Optimized DatetimePartitioning from a project for the specified
datetime_partitioning_id. A datetime_partitioning_id is created by using the [generate_optimized](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.generate_optimized) function.

- Parameters:
- Returns: the full partitioning for the project
- Return type: DatetimePartitioning

#### classmethod feature_log_list(project_id, offset=None, limit=None)

Retrieve the feature derivation log content and log length for a time series project.

The Time Series Feature Log provides details about the feature generation process for a
time series project. It includes information about which features are generated and their
priority, as well as the detected properties of the time series data, such as whether the
series is stationary and which periodicities were detected.

This route is only supported for time series projects that have finished partitioning.

The feature derivation log will include information about:

- Detected stationarity of the series: e.g., ‘Series detected as non-stationary’
- Detected presence of multiplicative trend in the series: e.g., ‘Multiplicative trend detected’
- Detected periodicities in the series: e.g., ‘Detected periodicities: 7 day’
- Maximum number of features to be generated: e.g., ‘Maximum number of feature to be generated is 1440’
- Window sizes used in rolling statistics / lag extractors e.g., ‘The window sizes chosen to be: 2 months (because the time step is 1 month and Feature Derivation Window is 2 months)’
- Features that are specified as known-in-advance e.g., ‘Variables treated as apriori: holiday’
- Details about why certain variables are transformed in the input data e.g., ‘Generating variable “y (log)” from “y” because multiplicative trend is detected’
- Details about features generated as timeseries features, and their priority e.g., ‘Generating feature “date (actual)” from “date” (priority: 1)’

- Parameters:
- Return type: Any

#### classmethod feature_log_retrieve(project_id)

Retrieve the feature derivation log content and log length for a time series project.

The Time Series Feature Log provides details about the feature generation process for a
time series project. It includes information about which features are generated and their
priority, as well as the detected properties of the time series data, such as whether the
series is stationary and which periodicities were detected.

This route is only supported for time series projects that have finished partitioning.

The feature derivation log will include information about:

- Detected stationarity of the series: e.g., ‘Series detected as non-stationary’
- Detected presence of multiplicative trend in the series: e.g., ‘Multiplicative trend detected’
- Detected periodicities in the series: e.g., ‘Detected periodicities: 7 day’
- Maximum number of features to be generated: e.g., ‘Maximum number of feature to be generated is 1440’
- Window sizes used in rolling statistics / lag extractors e.g., ‘The window sizes chosen to be: 2 months (because the time step is 1 month and Feature Derivation Window is 2 months)’
- Features that are specified as known-in-advance e.g., ‘Variables treated as apriori: holiday’
- Details about why certain variables are transformed in the input data e.g., ‘Generating variable “y (log)” from “y” because multiplicative trend is detected’
- Details about features generated as timeseries features, and their priority e.g., ‘Generating feature “date (actual)” from “date” (priority: 1)’

- Parameters: project_id ( str ) – project id to retrieve a feature derivation log for.
- Return type: str

#### to_specification(use_holdout_start_end_format=False, use_backtest_start_end_format=False)

Render the DatetimePartitioning as a [DatetimePartitioningSpecification](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioningSpecification)

The resulting specification can be used when setting the target, and contains only the
attributes directly controllable by users.

- Parameters:
- Returns: the specification for this partitioning
- Return type: DatetimePartitioningSpecification

#### to_dataframe()

Render the partitioning settings as a dataframe for convenience of display

Excludes project_id, datetime_partition_column, date_format,
autopilot_data_selection_method, validation_duration,
and number_of_backtests, as well as the row count information, if present.

Also excludes the time series specific parameters for use_time_series,
default_to_known_in_advance, default_to_do_not_derive, and defining the feature
derivation and forecast windows.

- Return type: DataFrame

#### classmethod datetime_partitioning_log_retrieve(project_id, datetime_partitioning_id)

Retrieve the datetime partitioning log content for an optimized datetime partitioning.

The datetime partitioning log provides details about the partitioning process for an OTV
or time series project.

- Parameters:
- Return type: Any

#### classmethod datetime_partitioning_log_list(project_id, datetime_partitioning_id, offset=None, limit=None)

Retrieve the datetime partitioning log content and log length for an optimized
datetime partitioning.

The Datetime Partitioning Log provides details about the partitioning process for an OTV
or Time Series project.

- Parameters:
- Return type: Any

#### classmethod get_input_data(project_id, datetime_partitioning_id)

Retrieve the input used to create an optimized DatetimePartitioning from a project for
the specified datetime_partitioning_id. A datetime_partitioning_id is created by using the [generate_optimized](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.generate_optimized) function.

- Parameters:
- Returns: DatetimePartitioningInput
- Return type: The input to optimized datetime partitioning.

### class datarobot.helpers.partitioning_methods.DatetimePartitioningId

Defines a DatetimePartitioningId used for datetime partitioning.

This class only includes the datetime_partitioning_id that identifies a previously
optimized datetime partitioning and the project_id for the associated project.

This is the specification that should be passed to [Project.analyze_and_model](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.models.Project.analyze_and_model) via the `partitioning_method` parameter. To see
the full partitioning use [DatetimePartitioning.get_optimized](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioning.get_optimized).

- Variables:

#### collect_payload()

Set up the dict that should be sent to the server when setting the target

- Returns: partitioning_spec
- Return type: dict

#### prep_payload(project_id, max_wait=600)

Run any necessary validation and prep of the payload, including async operations

Mainly used for the datetime partitioning spec but implemented in general for consistency

- Return type: None

#### update(**kwargs)

Update this instance, matching attributes to kwargs

Mainly used for the datetime partitioning spec but implemented in general for consistency

- Return type: NoReturn

### class datarobot.helpers.partitioning_methods.Backtest

A backtest used to evaluate models trained in a datetime partitioned project

When setting up a datetime partitioning project, backtests are specified by a [BacktestSpecification](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.BacktestSpecification).

The available training data corresponds to all the data available for training, while the
primary training data corresponds to the data that can be used to train while ensuring that all
backtests are available.  If a model is trained with more data than is available in the primary
training data, then all backtests may not have scores available.

All durations are specified with a duration string such as those returned
by the [partitioning_methods.construct_duration_string](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.helpers.partitioning_methods.construct_duration_string) helper method.
Please see [datetime partitioned project documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/datetime_partition.html#date-dur-spec) for more information on duration strings.

- Variables:

#### to_specification(use_start_end_format=False)

Render this backtest as a [BacktestSpecification](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.BacktestSpecification).

The resulting specification includes only the attributes users can directly control, not
those indirectly determined by the project dataset.

- Parameters: use_start_end_format ( bool ) – Default False . If False , will use a duration-based approach for specifying
  backtests ( gap_duration , validation_start_date , and validation_duration ).
  If True , will use a start/end date approach for specifying
  backtests ( primary_training_start_date , primary_training_end_date , validation_start_date , validation_end_date ).
  Note that projects created in the Web UI use the start/end date approach for specifying
  backtests; set this parameter to True to mirror the Web UI behavior.
- Returns: the specification for this backtest
- Return type: BacktestSpecification

#### to_dataframe()

Render this backtest as a dataframe for convenience of display

- Returns: backtest_partitioning – the backtest attributes, formatted into a dataframe
- Return type: pandas.DataFrame

### class datarobot.helpers.partitioning_methods.FeatureSettingsPayload

### datarobot.helpers.partitioning_methods.construct_duration_string(years=0, months=0, days=0, hours=0, minutes=0, seconds=0)

Construct a valid string representing a duration in accordance with ISO8601

A duration of 6 months, 3 days, and 12 hours could be represented as P6M3DT12H.

- Parameters:
- Returns: duration_string – The duration string, specified compatibly with ISO8601
- Return type: str
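As a rough illustration of the ISO 8601 shape such a helper produces, here is a minimal stand-in (a sketch, not the SDK's actual implementation; zero components may be rendered explicitly rather than omitted):

```
def construct_duration_string_sketch(years=0, months=0, days=0,
                                     hours=0, minutes=0, seconds=0):
    """Build an ISO 8601 duration string; 'T' separates the date and time parts."""
    return "P{}Y{}M{}DT{}H{}M{}S".format(years, months, days, hours, minutes, seconds)

print(construct_duration_string_sketch(months=6, days=3, hours=12))
# P0Y6M3DT12H0M0S
```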

## Status check job

### class datarobot.models.StatusCheckJob

Tracks asynchronous task status

- Variables: job_id ( str ) – The ID of the status the job belongs to.

#### wait_for_completion(max_wait=600)

Waits for job to complete.

- Parameters: max_wait ( Optional[int] ) – How long to wait for the job to finish. If the time expires, DataRobot returns the current status.
- Returns: status – Returns the current status of the job.
- Return type: JobStatusResult
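The documented behavior — poll until the job finishes, and return the current status if `max_wait` expires — follows a standard polling pattern. A self-contained sketch (the function name and `poll_status` callback are hypothetical, standing in for the SDK's server round trip):

```
import time

def wait_for_completion_sketch(poll_status, max_wait=600, interval=1.0):
    """Poll `poll_status` until a terminal state or until `max_wait` elapses.

    On timeout, the latest observed status is returned, mirroring the
    documented behavior of returning the current status when time expires.
    """
    deadline = time.monotonic() + max_wait
    status = poll_status()
    while status not in ("COMPLETED", "ERROR") and time.monotonic() < deadline:
        time.sleep(interval)
        status = poll_status()
    return status

# A fake job that completes on the third poll.
states = iter(["RUNNING", "RUNNING", "COMPLETED"])
print(wait_for_completion_sketch(lambda: next(states), max_wait=5, interval=0.01))
# COMPLETED
```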

#### get_status()

Retrieve JobStatusResult object with the latest job status data from the server.

- Return type: JobStatusResult

#### get_result_when_complete(max_wait=600)

Wait for the job to complete, then attempt to convert the resulting JSON into an object of type
self.resource_type.

- Return type: A newly created resource of type self.resource_type

### class datarobot.models.JobStatusResult

JobStatusResult(status, status_id, completed_resource_url, message)

#### status : Optional[str]

Alias for field number 0

#### status_id : Optional[str]

Alias for field number 1

#### completed_resource_url : Optional[str]

Alias for field number 2

#### message : Optional[str]

Alias for field number 3
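The "Alias for field number N" entries indicate a namedtuple, whose fields can be read by name or by position. A minimal equivalent sketch with the documented field order (the SDK's actual class may differ in details):

```
from collections import namedtuple

# Stand-in matching the documented fields: status, status_id,
# completed_resource_url, message.
JobStatusResult = namedtuple(
    "JobStatusResult", ["status", "status_id", "completed_resource_url", "message"]
)

result = JobStatusResult("COMPLETED", "abc123", "https://example.com/resource", None)
print(result.status, result[0])  # positional index 0 aliases `status`
# COMPLETED COMPLETED
```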

## Segmented modeling

API Reference for entities used in Segmented Modeling. See dedicated [User Guide](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/segmented_modeling.html#segmented-modeling) for examples.

### class datarobot.CombinedModel

A model from a segmented project: a combination of the ordinary models in the child segment projects.

- Variables:

#### classmethod get(project_id, combined_model_id)

Retrieve combined model

- Parameters:
- Returns: The queried combined model.
- Return type: CombinedModel

#### classmethod set_segment_champion(project_id, model_id, clone=False)

Update a segment champion in a combined model by setting the model_id
that belongs to the child project_id as the champion.

- Parameters:
- Returns: combined_model_id – Id of the combined model that was updated
- Return type: str

#### get_segments_info()

Retrieve Combined Model segments info

- Returns: List of segments
- Return type: list[SegmentInfo]

#### get_segments_as_dataframe(encoding='utf-8')

Retrieve Combined Model segments as a DataFrame.

- Parameters: encoding ( Optional[str] ) – A string representing the encoding to use in the output csv file.
  Defaults to ‘utf-8’.
- Returns: Combined model segments
- Return type: DataFrame

#### get_segments_as_csv(filename, encoding='utf-8')

Save the Combined Model segments to a CSV.

- Parameters:
- Return type: None

#### train(sample_pct=None, featurelist_id=None, scoring_type=None, training_row_count=None, monotonic_increasing_featurelist_id=, monotonic_decreasing_featurelist_id=)

Inherited from Model - CombinedModels cannot be retrained directly

- Return type: NoReturn

#### train_datetime(featurelist_id=None, training_row_count=None, training_duration=None, time_window_sample_pct=None, monotonic_increasing_featurelist_id=, monotonic_decreasing_featurelist_id=, use_project_settings=False, sampling_method=None, n_clusters=None)

Inherited from Model - CombinedModels cannot be retrained directly

- Return type: NoReturn

#### retrain(sample_pct=None, featurelist_id=None, training_row_count=None, n_clusters=None)

Inherited from Model - CombinedModels cannot be retrained directly

- Return type: NoReturn

#### request_frozen_model(sample_pct=None, training_row_count=None)

Inherited from Model - CombinedModels cannot be retrained as frozen

- Return type: NoReturn

#### request_frozen_datetime_model(training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, time_window_sample_pct=None, sampling_method=None)

Inherited from Model - CombinedModels cannot be retrained as frozen

- Return type: NoReturn

#### cross_validate()

Inherited from Model - CombinedModels cannot request cross validation

- Return type: NoReturn

### class datarobot.SegmentationTask

A Segmentation Task is used for segmenting an existing project into multiple child
projects. Each child project (or segment) will be a separate autopilot run. Currently
only user defined segmentation is supported.

Example for creating a new SegmentationTask for Time Series segmentation with a
user defined id column:

```
from datarobot import SegmentationTask

# Create the SegmentationTask
segmentation_task_results = SegmentationTask.create(
    project_id=project.id,
    target=target,
    use_time_series=True,
    datetime_partition_column=datetime_partition_column,
    multiseries_id_columns=[multiseries_id_column],
    user_defined_segment_id_columns=[user_defined_segment_id_column]
)

# Retrieve the completed SegmentationTask object from the job results
segmentation_task = segmentation_task_results['completedJobs'][0]
```

- Variables:

#### classmethod from_data(data)

Instantiate an object of this class using a dict.

- Parameters: data ( dict ) – Correctly snake_cased keys and their values.
- Return type: SegmentationTask

#### collect_payload()

Convert the record to a dictionary

- Return type: Dict [ str , str ]

#### classmethod create(project_id, target, use_time_series=False, datetime_partition_column=None, multiseries_id_columns=None, user_defined_segment_id_columns=None, max_wait=600, model_package_id=None)

Creates segmentation tasks for the project based on the defined parameters.

- Parameters:
- Returns: segmentation_tasks – Dictionary containing the numberOfJobs, completedJobs, and failedJobs. completedJobs
  is a list of SegmentationTask objects, while failed jobs is a list of dictionaries
  indicating problems with submitted tasks.
- Return type: dict

#### classmethod list(project_id)

List all of the segmentation tasks that have been created for a specific project_id.

- Parameters: project_id ( str ) – The id of the parent project
- Returns: segmentation_tasks – List of instances with initialized data.
- Return type: list of SegmentationTask

#### classmethod get(project_id, segmentation_task_id)

Retrieve information for a single segmentation task associated with a project_id.

- Parameters:
- Returns: segmentation_task – Instance with initialized data.
- Return type: SegmentationTask

### class datarobot.SegmentInfo

A SegmentInfo is an object containing information about the combined model segments

- Variables:

#### classmethod list(project_id, model_id)

List all of the segments that have been created for a specific project_id.

- Parameters: project_id ( str ) – The id of the parent project
- Returns: segments – List of instances with initialized data.
- Return type: list of datarobot.models.segmentation.SegmentInfo

### class datarobot.models.segmentation.SegmentationTask

A Segmentation Task is used for segmenting an existing project into multiple child
projects. Each child project (or segment) will be a separate autopilot run. Currently
only user defined segmentation is supported.

Example for creating a new SegmentationTask for Time Series segmentation with a
user defined id column:

```
from datarobot import SegmentationTask

# Create the SegmentationTask
segmentation_task_results = SegmentationTask.create(
    project_id=project.id,
    target=target,
    use_time_series=True,
    datetime_partition_column=datetime_partition_column,
    multiseries_id_columns=[multiseries_id_column],
    user_defined_segment_id_columns=[user_defined_segment_id_column]
)

# Retrieve the completed SegmentationTask object from the job results
segmentation_task = segmentation_task_results['completedJobs'][0]
```

- Variables:

#### classmethod from_data(data)

Instantiate an object of this class using a dict.

- Parameters: data ( dict ) – Correctly snake_cased keys and their values.
- Return type: SegmentationTask

#### collect_payload()

Convert the record to a dictionary

- Return type: Dict [ str , str ]

#### classmethod create(project_id, target, use_time_series=False, datetime_partition_column=None, multiseries_id_columns=None, user_defined_segment_id_columns=None, max_wait=600, model_package_id=None)

Creates segmentation tasks for the project based on the defined parameters.

- Parameters:
- Returns: segmentation_tasks – Dictionary containing the numberOfJobs, completedJobs, and failedJobs. completedJobs
  is a list of SegmentationTask objects, while failed jobs is a list of dictionaries
  indicating problems with submitted tasks.
- Return type: dict

#### classmethod list(project_id)

List all of the segmentation tasks that have been created for a specific project_id.

- Parameters: project_id ( str ) – The id of the parent project
- Returns: segmentation_tasks – List of instances with initialized data.
- Return type: list of SegmentationTask

#### classmethod get(project_id, segmentation_task_id)

Retrieve information for a single segmentation task associated with a project_id.

- Parameters:
- Returns: segmentation_task – Instance with initialized data.
- Return type: SegmentationTask

### class datarobot.models.segmentation.SegmentationTaskCreatedResponse

## External baseline validation

### class datarobot.models.external_baseline_validation.ExternalBaselineValidationInfo

An object containing information about external time series baseline predictions
validation results.

- Variables:

#### classmethod get(project_id, validation_job_id)

Get information about external baseline validation job

- Parameters:
- Returns: info – information about external baseline validation job
- Return type: ExternalBaselineValidationInfo

## Calendar file

### class datarobot.CalendarFile

Represents the data for a calendar file.

For more information about calendar files, see the [calendar documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/time_series.html#calendar-files).

- Variables:

#### classmethod create(file_path, calendar_name=None, multiseries_id_columns=None)

Creates a calendar using the given file. For information about calendar files, see the [calendar documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/time_series.html#calendar-files)

The provided file must be a CSV in the format:

```
Date,   Event,          Series ID,    Event Duration
<date>, <event_type>,   <series id>,  <event duration>
<date>, <event_type>,              ,  <event duration>
```

A header row is required, and the “Series ID” and “Event Duration” columns are optional.
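This format can be read with the standard `csv` module; the sketch below uses hypothetical in-memory data to show how rows with a blank "Series ID" (events that apply to all series) look when parsed:

```
import csv
import io

# Hypothetical calendar in the documented format; "Series ID" is blank on the
# second row, meaning the event applies to every series.
calendar_csv = """\
Date,Event,Series ID,Event Duration
2024-12-25,Christmas,store_1,1
2024-01-01,New Year,,1
"""

rows = list(csv.DictReader(io.StringIO(calendar_csv)))
for row in rows:
    series = row["Series ID"] or "<all series>"
    print(row["Date"], row["Event"], series)
# 2024-12-25 Christmas store_1
# 2024-01-01 New Year <all series>
```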

Once the CalendarFile has been created, pass its ID with
the [DatetimePartitioningSpecification](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioningSpecification) when setting the target for a time series project in order to use it.

- Parameters:
- Returns: calendar_file – Instance with initialized data.
- Return type: CalendarFile
- Raises: AsyncProcessUnsuccessfulError – Raised if there was an error processing the provided calendar file.

> [!NOTE] Examples
> ```
> # Creating a calendar with a specified name
> cal = dr.CalendarFile.create('/home/calendars/somecalendar.csv',
>                                          calendar_name='Some Calendar Name')
> cal.id
> >>> 5c1d4904211c0a061bc93013
> cal.name
> >>> Some Calendar Name
> 
> # Creating a calendar without specifying a name
> cal = dr.CalendarFile.create('/home/calendars/somecalendar.csv')
> cal.id
> >>> 5c1d4904211c0a061bc93012
> cal.name
> >>> somecalendar.csv
> 
> # Creating a calendar with multiseries id columns
> cal = dr.CalendarFile.create('/home/calendars/somemultiseriescalendar.csv',
>                              calendar_name='Some Multiseries Calendar Name',
>                              multiseries_id_columns=['series_id'])
> cal.id
> >>> 5da9bb21962d746f97e4daee
> cal.name
> >>> Some Multiseries Calendar Name
> cal.multiseries_id_columns
> >>> ['series_id']
> ```

#### classmethod create_calendar_from_dataset(dataset_id, dataset_version_id=None, calendar_name=None, multiseries_id_columns=None, delete_on_error=False)

Creates a calendar using the given dataset. For information about calendar files, see the [calendar documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/modeling/spec/time_series.html#calendar-files)

The provided dataset must have the following format:

```
Date,   Event,          Series ID,    Event Duration
<date>, <event_type>,   <series id>,  <event duration>
<date>, <event_type>,              ,  <event duration>
```

The “Series ID” and “Event Duration” columns are optional.

Once the CalendarFile has been created, pass its ID with
the [DatetimePartitioningSpecification](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.DatetimePartitioningSpecification) when setting the target for a time series project in order to use it.

- Parameters:
- Returns: calendar_file – Instance with initialized data.
- Return type: CalendarFile
- Raises: AsyncProcessUnsuccessfulError – Raised if there was an error processing the provided calendar file.

> [!NOTE] Examples
> ```
> # Creating a calendar from a dataset
> dataset = dr.Dataset.create_from_file('/home/calendars/somecalendar.csv')
> cal = dr.CalendarFile.create_calendar_from_dataset(
>     dataset.id, calendar_name='Some Calendar Name'
> )
> cal.id
> >>> 5c1d4904211c0a061bc93013
> cal.name
> >>> Some Calendar Name
> 
> # Creating a calendar from a new dataset version
> new_dataset_version = dr.Dataset.create_version_from_file(
>     dataset.id, '/home/calendars/anothercalendar.csv'
> )
> cal = dr.CalendarFile.create_calendar_from_dataset(
>     new_dataset_version.id, dataset_version_id=new_dataset_version.version_id
> )
> cal.id
> >>> 5c1d4904211c0a061bc93012
> cal.name
> >>> anothercalendar.csv
> ```

#### classmethod create_calendar_from_country_code(country_code, start_date, end_date)

Generates a calendar based on the provided country code and dataset start date and end
dates. The provided country code should be uppercase and 2-3 characters long. See [CalendarFile.get_allowed_country_codes](https://docs.datarobot.com/en/docs/api/reference/sdk/projects.html#datarobot.CalendarFile.get_allowed_country_codes) for a list of allowed country codes.

- Parameters:
- Returns: calendar_file – Instance with initialized data.
- Return type: CalendarFile
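The documented constraint on the country code — uppercase, 2-3 characters — can be pre-checked locally before making the API call. A hypothetical validation helper (not part of the SDK; the server-side list of allowed codes is still authoritative):

```
import re

def looks_like_country_code(code):
    """Check the documented shape constraint: uppercase, 2-3 characters."""
    return bool(re.fullmatch(r"[A-Z]{2,3}", code))

print(looks_like_country_code("US"), looks_like_country_code("us"))
# True False
```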

#### classmethod get_allowed_country_codes(offset=None, limit=None)

Retrieves the list of allowed country codes that can be used for generating the preloaded
calendars.

- Parameters:
- Returns: A list of dicts, each of which represents an allowed country code. Each item has the
  following structure:
- Return type: list

#### classmethod get(calendar_id)

Gets the details of a calendar, given the id.

- Parameters: calendar_id ( str ) – The identifier of the calendar.
- Returns: calendar_file – The requested calendar.
- Return type: CalendarFile
- Raises: DataError – Raised if the calendar_id is invalid, i.e., the specified CalendarFile does not exist.

> [!NOTE] Examples
> ```
> cal = dr.CalendarFile.get(some_calendar_id)
> cal.id
> >>> some_calendar_id
> ```

#### classmethod list(project_id=None, batch_size=None)

Gets the details of all calendars this user has view access for.

- Parameters:
- Returns: calendar_list – A list of CalendarFile objects.
- Return type: list of CalendarFile

> [!NOTE] Examples
> ```
> calendars = dr.CalendarFile.list()
> len(calendars)
> >>> 10
> ```

#### classmethod delete(calendar_id)

Deletes the calendar specified by calendar_id.

- Parameters: calendar_id ( str ) – The id of the calendar to delete.
  The requester must have OWNER access for this calendar.
- Raises: ClientError – Raised if an invalid calendar_id is provided.
- Return type: None

> [!NOTE] Examples
> ```
> # Deleting with a valid calendar_id
> status_code = dr.CalendarFile.delete(some_calendar_id)
> status_code
> >>> 204
> dr.CalendarFile.get(some_calendar_id)
> >>> ClientError: Item not found
> ```

#### classmethod update_name(calendar_id, new_calendar_name)

Changes the name of the specified calendar to the specified name.
The requester must have at least READ_WRITE permissions on the calendar.

- Parameters:
- Returns: status_code – 200 for success
- Return type: int
- Raises: ClientError – Raised if an invalid calendar_id is provided.

> [!NOTE] Examples
> ```
> response = dr.CalendarFile.update_name(some_calendar_id, some_new_name)
> response
> >>> 200
> cal = dr.CalendarFile.get(some_calendar_id)
> cal.name
> >>> some_new_name
> ```

#### classmethod share(calendar_id, access_list)

Shares the calendar with the specified users, assigning the specified roles.

- Parameters:
- Returns: status_code – 200 for success
- Return type: int
- Raises:

> [!NOTE] Examples
> ```
> # assuming some_user is a valid user, share this calendar with some_user
> sharing_list = [dr.SharingAccess(some_user_username,
>                                  dr.enums.SHARING_ROLE.READ_WRITE)]
> response = dr.CalendarFile.share(some_calendar_id, sharing_list)
> response.status_code
> >>> 200
> 
> # delete some_user from this calendar, assuming they have access of some kind already
> delete_sharing_list = [dr.SharingAccess(some_user_username,
>                                         None)]
> response = dr.CalendarFile.share(some_calendar_id, delete_sharing_list)
> response.status_code
> >>> 200
> 
> # Attempt to add an invalid user to a calendar
> invalid_sharing_list = [dr.SharingAccess(invalid_username,
>                                          dr.enums.SHARING_ROLE.READ_WRITE)]
> dr.CalendarFile.share(some_calendar_id, invalid_sharing_list)
> >>> ClientError: Unable to update access for this calendar
> ```

#### classmethod get_access_list(calendar_id, batch_size=None)

Retrieve a list of users that have access to this calendar.

- Parameters:
- Returns: access_control_list – A list of SharingAccess objects.
- Return type: list of SharingAccess
- Raises: ClientError – Raised if user does not have access to calendar or calendar does not exist.

### class datarobot.models.calendar_file.CountryCode
