# LLM compliance tests

> LLM compliance tests - API reference for the DataRobot Python SDK classes used in LLM
> compliance (generative AI) testing: insight and cost metric configurations, evaluation
> datasets, LLM test configurations and results, and related utilities.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.520672+00:00` (UTC).

## Primary page

- [LLM compliance tests](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html): Full documentation for this topic (HTML).

## Sections on this page

- [class datarobot.models.genai.insights_configuration.InsightsConfiguration](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.insights_configuration.InsightsConfiguration): In-page section heading.
- [classmethod from_data(data)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.insights_configuration.InsightsConfiguration.from_data): In-page section heading.
- [class datarobot.models.genai.cost_metric_configurations.LLMCostConfiguration](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.cost_metric_configurations.LLMCostConfiguration): In-page section heading.
- [class datarobot.models.genai.cost_metric_configurations.CostMetricConfiguration](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.cost_metric_configurations.CostMetricConfiguration): In-page section heading.
- [classmethod get(cost_metric_configuration_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.cost_metric_configurations.CostMetricConfiguration.get): In-page section heading.
- [update(cost_metric_configurations, name=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.cost_metric_configurations.CostMetricConfiguration.update): In-page section heading.
- [classmethod create(use_case_id, playground_id, name, cost_metric_configurations)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.cost_metric_configurations.CostMetricConfiguration.create): In-page section heading.
- [delete()](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.cost_metric_configurations.CostMetricConfiguration.delete): In-page section heading.
- [class datarobot.models.genai.evaluation_dataset_configuration.EvaluationDatasetConfiguration](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_configuration.EvaluationDatasetConfiguration): In-page section heading.
- [classmethod get(id)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_configuration.EvaluationDatasetConfiguration.get): In-page section heading.
- [classmethod list(use_case_id, playground_id, evaluation_dataset_configuration_id=None, offset=0, limit=100, sort=None, search=None, correctness_only=False, completed_only=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_configuration.EvaluationDatasetConfiguration.list): In-page section heading.
- [classmethod create(name, use_case_id, dataset_id, prompt_column_name, playground_id, is_synthetic_dataset=False, response_column_name=None, tool_calls_column_name=None, agent_goals_column_name=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_configuration.EvaluationDatasetConfiguration.create): In-page section heading.
- [update(name=None, dataset_id=None, prompt_column_name=None, response_column_name=None, tool_calls_column_name=None, agent_goals_column_name=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_configuration.EvaluationDatasetConfiguration.update): In-page section heading.
- [delete()](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_configuration.EvaluationDatasetConfiguration.delete): In-page section heading.
- [class datarobot.models.genai.evaluation_dataset_metric_aggregation.EvaluationDatasetMetricAggregation](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_metric_aggregation.EvaluationDatasetMetricAggregation): In-page section heading.
- [classmethod create(chat_name, llm_blueprint_ids, evaluation_dataset_configuration_id, insights_configuration)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_metric_aggregation.EvaluationDatasetMetricAggregation.create): In-page section heading.
- [classmethod list(llm_blueprint_ids=None, chat_ids=None, evaluation_dataset_configuration_ids=None, metric_names=None, aggregation_types=None, current_configuration_only=False, sort=None, offset=0, limit=100, non_errored_only=True)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_metric_aggregation.EvaluationDatasetMetricAggregation.list): In-page section heading.
- [classmethod delete(llm_blueprint_ids=None, chat_ids=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_metric_aggregation.EvaluationDatasetMetricAggregation.delete): In-page section heading.
- [class datarobot.models.genai.synthetic_evaluation_dataset_generation.SyntheticEvaluationDataset](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.synthetic_evaluation_dataset_generation.SyntheticEvaluationDataset): In-page section heading.
- [classmethod create(llm_id, vector_database_id, llm_settings=None, dataset_name=None, language=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.synthetic_evaluation_dataset_generation.SyntheticEvaluationDataset.create): In-page section heading.
- [class datarobot.models.genai.sidecar_model_metric.SidecarModelMetricValidation](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.sidecar_model_metric.SidecarModelMetricValidation): In-page section heading.
- [classmethod create(deployment_id, name, prediction_timeout, model_id=None, use_case_id=None, playground_id=None, prompt_column_name=None, target_column_name=None, response_column_name=None, citation_prefix_column_name=None, expected_response_column_name=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.sidecar_model_metric.SidecarModelMetricValidation.create): In-page section heading.
- [classmethod list(use_case_ids=None, offset=None, limit=None, search=None, sort=None, completed_only=True, deployment_id=None, model_id=None, prompt_column_name=None, target_column_name=None, citation_prefix_column_name=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.sidecar_model_metric.SidecarModelMetricValidation.list): In-page section heading.
- [classmethod get(validation_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.sidecar_model_metric.SidecarModelMetricValidation.get): In-page section heading.
- [revalidate()](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.sidecar_model_metric.SidecarModelMetricValidation.revalidate): In-page section heading.
- [update(name=None, prompt_column_name=None, target_column_name=None, response_column_name=None, expected_response_column_name=None, citation_prefix_column_name=None, deployment_id=None, model_id=None, prediction_timeout=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.sidecar_model_metric.SidecarModelMetricValidation.update): In-page section heading.
- [delete()](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.sidecar_model_metric.SidecarModelMetricValidation.delete): In-page section heading.
- [class datarobot.models.genai.llm_test_configuration.LLMTestConfiguration](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.LLMTestConfiguration): In-page section heading.
- [classmethod create(name, dataset_evaluations, llm_test_grading_criteria, use_case=None, description=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.LLMTestConfiguration.create): In-page section heading.
- [classmethod get(llm_test_configuration)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.LLMTestConfiguration.get): In-page section heading.
- [classmethod list(use_case=None, test_config_type=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.LLMTestConfiguration.list): In-page section heading.
- [update(name=None, description=None, dataset_evaluations=None, llm_test_grading_criteria=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.LLMTestConfiguration.update): In-page section heading.
- [delete()](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.LLMTestConfiguration.delete): In-page section heading.
- [class datarobot.models.genai.llm_test_configuration.LLMTestConfigurationSupportedInsights](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.LLMTestConfigurationSupportedInsights): In-page section heading.
- [classmethod list(use_case=None, playground=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.LLMTestConfigurationSupportedInsights.list): In-page section heading.
- [class datarobot.models.genai.llm_test_result.LLMTestResult](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_result.LLMTestResult): In-page section heading.
- [classmethod create(llm_test_configuration, llm_blueprint)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_result.LLMTestResult.create): In-page section heading.
- [classmethod get(llm_test_result)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_result.LLMTestResult.get): In-page section heading.
- [classmethod list(llm_test_configuration=None, llm_blueprint=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_result.LLMTestResult.list): In-page section heading.
- [delete()](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_result.LLMTestResult.delete): In-page section heading.
- [class datarobot.models.genai.llm_test_configuration.DatasetEvaluation](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.DatasetEvaluation): In-page section heading.
- [class datarobot.models.genai.llm_test_result.InsightEvaluationResult](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_result.InsightEvaluationResult): In-page section heading.
- [class datarobot.models.genai.llm_test_configuration.OOTBDatasetDict](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.OOTBDatasetDict): In-page section heading.
- [class datarobot.models.genai.llm_test_configuration.DatasetEvaluationRequestDict](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.DatasetEvaluationRequestDict): In-page section heading.
- [class datarobot.models.genai.llm_test_configuration.DatasetEvaluationDict](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.DatasetEvaluationDict): In-page section heading.
- [class datarobot.models.genai.nemo_configuration.NemoConfiguration](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.nemo_configuration.NemoConfiguration): In-page section heading.
- [classmethod get(playground)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.nemo_configuration.NemoConfiguration.get): In-page section heading.
- [classmethod upsert(playground, blocked_terms_file_contents, prompt_pipeline_metric_name=None, prompt_pipeline_files=None, prompt_llm_configuration=None, prompt_moderation_configuration=None, prompt_pipeline_template_id=None, response_pipeline_metric_name=None, response_pipeline_files=None, response_llm_configuration=None, response_moderation_configuration=None, response_pipeline_template_id=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.nemo_configuration.NemoConfiguration.upsert): In-page section heading.
- [class datarobot.models.genai.llm_test_configuration.OOTBDataset](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.OOTBDataset): In-page section heading.
- [classmethod list()](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.OOTBDataset.list): In-page section heading.
- [class datarobot.models.genai.llm_test_configuration.NonOOTBDataset](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.NonOOTBDataset): In-page section heading.
- [classmethod list(use_case=None)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.llm_test_configuration.NonOOTBDataset.list): In-page section heading.
- [class datarobot.models.genai.metric_insights.MetricInsights](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.metric_insights.MetricInsights): In-page section heading.
- [classmethod list(playground, llm_blueprint_ids=None, with_aggregation_types_only=False, production_only=False, completed_only=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.metric_insights.MetricInsights.list): In-page section heading.
- [classmethod copy_to_playground(source_playground, target_playground, add_to_existing=True, with_evaluation_datasets=False)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.metric_insights.MetricInsights.copy_to_playground): In-page section heading.
- [class datarobot.models.genai.ootb_metric_configuration.PlaygroundOOTBMetricConfiguration](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.ootb_metric_configuration.PlaygroundOOTBMetricConfiguration): In-page section heading.
- [classmethod get(playground_id)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.ootb_metric_configuration.PlaygroundOOTBMetricConfiguration.get): In-page section heading.
- [classmethod create(playground_id, ootb_metric_configurations)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.ootb_metric_configuration.PlaygroundOOTBMetricConfiguration.create): In-page section heading.
- [class datarobot.models.genai.evaluation_dataset_utils.ReferenceToolCall](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_utils.ReferenceToolCall): In-page section heading.
- [json()](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_utils.ReferenceToolCall.json): In-page section heading.
- [classmethod from_json(json_str)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_utils.ReferenceToolCall.from_json): In-page section heading.
- [class datarobot.models.genai.evaluation_dataset_utils.ReferenceToolCalls](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_utils.ReferenceToolCalls): In-page section heading.
- [classmethod from_json(json_str)](https://docs.datarobot.com/en/docs/api/reference/sdk/gen-testing.html#datarobot.models.genai.evaluation_dataset_utils.ReferenceToolCalls.from_json): In-page section heading.

## Related documentation

- [Developer documentation](https://docs.datarobot.com/en/docs/api/index.html): Linked from this page.
- [API reference](https://docs.datarobot.com/en/docs/api/reference/index.html): Linked from this page.
- [Python API client](https://docs.datarobot.com/en/docs/api/reference/sdk/index.html): Linked from this page.
- [Generative AI](https://docs.datarobot.com/en/docs/api/reference/sdk/tag-genai.html): Linked from this page.

## Documentation content

# AI Robustness Tests

### class datarobot.models.genai.insights_configuration.InsightsConfiguration

Configuration information for a specific insight.

- Variables:

#### classmethod from_data(data)

Properly convert composition classes.

- Return type: InsightsConfiguration

### class datarobot.models.genai.cost_metric_configurations.LLMCostConfiguration

Cost configuration for a specific LLM model; used for cost metric calculation.
Price-per-token is price/reference token count.

- Variables:
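
The description above defines the per-token price as the listed price divided by the reference token count. A minimal arithmetic sketch of that relationship; the helper names and the separate input/output prices are illustrative assumptions, not attributes of the SDK class:

```python
def price_per_token(price, reference_token_count):
    # Price-per-token as documented: price divided by the reference token count.
    return price / reference_token_count

def prompt_cost(input_tokens, output_tokens, input_price, output_price,
                reference_token_count):
    # Hypothetical helper: total cost of one prompt/response pair, assuming
    # separate input and output prices quoted per reference token count.
    return (input_tokens * price_per_token(input_price, reference_token_count)
            + output_tokens * price_per_token(output_price, reference_token_count))

# e.g. $0.50 per million input tokens and $1.50 per million output tokens
cost = prompt_cost(1_200, 300, input_price=0.50, output_price=1.50,
                   reference_token_count=1_000_000)
```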

### class datarobot.models.genai.cost_metric_configurations.CostMetricConfiguration

Cost metric configuration for a use case.

- Variables:

#### classmethod get(cost_metric_configuration_id)

Get cost metric configuration by ID.

- Return type: CostMetricConfiguration

#### update(cost_metric_configurations, name=None)

Update the cost configurations.

- Return type: CostMetricConfiguration

#### classmethod create(use_case_id, playground_id, name, cost_metric_configurations)

Create a new cost metric configuration.

- Return type: CostMetricConfiguration

#### delete()

Delete the cost metric configuration.

- Return type: None

### class datarobot.models.genai.evaluation_dataset_configuration.EvaluationDatasetConfiguration

An evaluation dataset configuration used to evaluate the performance of LLMs.

- Variables:

#### classmethod get(id)

Get an evaluation dataset configuration by ID.

- Parameters: id ( str ) – The evaluation dataset configuration ID to fetch.
- Returns: evaluation_dataset_configuration – The evaluation dataset configuration.
- Return type: EvaluationDatasetConfiguration

#### classmethod list(use_case_id, playground_id, evaluation_dataset_configuration_id=None, offset=0, limit=100, sort=None, search=None, correctness_only=False, completed_only=False)

List all evaluation dataset configurations for a Use Case.

- Parameters:
- Returns: evaluation_dataset_configurations – A list of evaluation dataset configurations.
- Return type: List[EvaluationDatasetConfiguration]

#### classmethod create(name, use_case_id, dataset_id, prompt_column_name, playground_id, is_synthetic_dataset=False, response_column_name=None, tool_calls_column_name=None, agent_goals_column_name=None)

Create an evaluation dataset configuration for an existing dataset.

- Parameters:
- Returns: evaluation_dataset_configuration – The created evaluation dataset configuration.
- Return type: EvaluationDatasetConfiguration

#### update(name=None, dataset_id=None, prompt_column_name=None, response_column_name=None, tool_calls_column_name=None, agent_goals_column_name=None)

Update the evaluation dataset configuration.

- Parameters:
- Returns: evaluation_dataset_configuration – The updated evaluation dataset configuration.
- Return type: EvaluationDatasetConfiguration

#### delete()

Delete the evaluation dataset configuration.

- Return type: None

### class datarobot.models.genai.evaluation_dataset_metric_aggregation.EvaluationDatasetMetricAggregation

Information about the aggregated metric results for one metric and one evaluation dataset.
This class can list already computed aggregations or start the job computing the aggregations.
Jobs will prompt an LLM blueprint or agentic workflow, compute metrics and aggregate the
results across prompts.

- Variables:
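
The "aggregate the results across prompts" step can be pictured in plain Python. The scores and aggregation names below are illustrative only; the actual per-prompt metric values and supported aggregation types are computed server-side by the SDK:

```python
from statistics import mean, median

# Hypothetical per-prompt metric scores for one LLM blueprint evaluated
# against one evaluation dataset.
scores = [0.92, 0.88, 0.95, 0.40, 0.91]

# Aggregation reduces the per-prompt scores to one number per metric.
aggregated = {"average": mean(scores), "median": median(scores)}
```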

#### classmethod create(chat_name, llm_blueprint_ids, evaluation_dataset_configuration_id, insights_configuration)

Create a new evaluation dataset metric aggregation job.  The job will run the
specified metric for the specified LLM blueprint IDs using the prompt-response pairs in
the evaluation dataset.

- Parameters:
- Returns: The ID of the evaluation dataset metric aggregation job.
- Return type: str

#### classmethod list(llm_blueprint_ids=None, chat_ids=None, evaluation_dataset_configuration_ids=None, metric_names=None, aggregation_types=None, current_configuration_only=False, sort=None, offset=0, limit=100, non_errored_only=True)

List evaluation dataset metric aggregations.  The results will be filtered by the provided
LLM blueprint IDs and chat IDs.

- Parameters:
- Returns: A list of evaluation dataset metric aggregations.
- Return type: List[EvaluationDatasetMetricAggregation]

#### classmethod delete(llm_blueprint_ids=None, chat_ids=None)

Delete the associated evaluation dataset metric aggregations.  Either llm_blueprint_ids
or chat_ids must be provided.  If both are provided, only results matching both will be removed.

- Parameters:
- Return type: None

### class datarobot.models.genai.synthetic_evaluation_dataset_generation.SyntheticEvaluationDataset

A synthetically generated evaluation dataset for LLMs.

- Variables:

#### classmethod create(llm_id, vector_database_id, llm_settings=None, dataset_name=None, language=None)

Create a synthetic evaluation dataset generation job.  This will
create a synthetic dataset to be used for evaluation of a language model.

- Parameters:
- Returns: Reference to the synthetic evaluation dataset that was created.
- Return type: SyntheticEvaluationDataset

### class datarobot.models.genai.sidecar_model_metric.SidecarModelMetricValidation

A sidecar model metric validation for LLMs.

- Variables:

#### classmethod create(deployment_id, name, prediction_timeout, model_id=None, use_case_id=None, playground_id=None, prompt_column_name=None, target_column_name=None, response_column_name=None, citation_prefix_column_name=None, expected_response_column_name=None)

Create a sidecar model metric validation.

- Parameters:
- Returns: The created sidecar model metric validation.
- Return type: SidecarModelMetricValidation

#### classmethod list(use_case_ids=None, offset=None, limit=None, search=None, sort=None, completed_only=True, deployment_id=None, model_id=None, prompt_column_name=None, target_column_name=None, citation_prefix_column_name=None)

List sidecar model metric validations.

- Parameters:
- Returns: The list of sidecar model metric validations.
- Return type: List[SidecarModelMetricValidation]

#### classmethod get(validation_id)

Get a sidecar model metric validation by ID.

- Parameters: validation_id ( str ) – The ID of the validation to get.
- Returns: The sidecar model metric validation.
- Return type: SidecarModelMetricValidation

#### revalidate()

Revalidate the sidecar model metric validation.

- Returns: The sidecar model metric validation.
- Return type: SidecarModelMetricValidation

#### update(name=None, prompt_column_name=None, target_column_name=None, response_column_name=None, expected_response_column_name=None, citation_prefix_column_name=None, deployment_id=None, model_id=None, prediction_timeout=None)

Update the sidecar model metric validation.

- Parameters:
- Returns: The updated sidecar model metric validation.
- Return type: SidecarModelMetricValidation

#### delete()

Delete the sidecar model metric validation.

- Return type: None

### class datarobot.models.genai.llm_test_configuration.LLMTestConfiguration

Metadata for a DataRobot GenAI LLM test configuration.

- Variables:

#### classmethod create(name, dataset_evaluations, llm_test_grading_criteria, use_case=None, description=None)

Creates a new LLM test configuration.

- Parameters:
- Returns: llm_test_configuration – The created LLM test configuration.
- Return type: LLMTestConfiguration

#### classmethod get(llm_test_configuration)

Retrieve a single LLM Test configuration.

- Parameters: llm_test_configuration ( LLMTestConfiguration or str ) – The LLM test configuration to retrieve, either LLMTestConfiguration or LLMTestConfiguration ID.
- Returns: llm_test_configuration – The requested LLM Test configuration.
- Return type: LLMTestConfiguration

#### classmethod list(use_case=None, test_config_type=None)

List all LLM test configurations available to the user. If a Use Case is specified,
results are restricted to only those configurations associated with that Use Case.

- Parameters:
- Returns: llm_test_configurations – Returns a list of LLM test configurations.
- Return type: list[LLMTestConfiguration]

#### update(name=None, description=None, dataset_evaluations=None, llm_test_grading_criteria=None)

Update the LLM test configuration.

- Parameters:
- Returns: llm_test_configuration – The updated LLM test configuration.
- Return type: LLMTestConfiguration

#### delete()

Delete a single LLM test configuration.

- Return type: None

### class datarobot.models.genai.llm_test_configuration.LLMTestConfigurationSupportedInsights

Metadata for a DataRobot GenAI LLM test configuration supported insights.

- Variables: supported_insight_configurations ( list[InsightsConfiguration] ) – The supported insights for LLM test configurations.

#### classmethod list(use_case=None, playground=None)

List all supported insights for an LLM test configuration.

- Parameters:
- Returns: llm_test_configuration_supported_insights – Returns the supported insight configurations for the
  LLM test configuration.
- Return type: LLMTestConfigurationSupportedInsights

### class datarobot.models.genai.llm_test_result.LLMTestResult

Metadata for a DataRobot GenAI LLM test result.

- Variables:

#### classmethod create(llm_test_configuration, llm_blueprint)

Create a new LLMTestResult. This executes the LLM test configuration using the
specified LLM blueprint. To check the status of the LLM test, use the
LLMTestResult.get method with the returned ID.

- Parameters:
- Returns: llm_test_result – The created LLM test result.
- Return type: LLMTestResult

#### classmethod get(llm_test_result)

Retrieve a single LLM test result.

- Parameters: llm_test_result ( LLMTestResult or str ) – The LLM test result to retrieve, specified by either LLM test result or test ID.
- Returns: llm_test_result – The requested LLM test result.
- Return type: LLMTestResult

#### classmethod list(llm_test_configuration=None, llm_blueprint=None)

List all LLM test results available to the user. If the LLM test configuration or LLM
blueprint is specified, results are restricted to only those LLM test results associated
with the LLM test configuration or LLM blueprint.

- Parameters:
- Returns: llm_test_results – Returns a list of LLM test results.
- Return type: List[LLMTestResult]

#### delete()

Delete a single LLM test result.

- Return type: None

### class datarobot.models.genai.llm_test_configuration.DatasetEvaluation

Metadata for a DataRobot GenAI dataset evaluation.

- Variables:

### class datarobot.models.genai.llm_test_result.InsightEvaluationResult

Metadata for a DataRobot GenAI insight evaluation result.

- Variables:

### class datarobot.models.genai.llm_test_configuration.OOTBDatasetDict

### class datarobot.models.genai.llm_test_configuration.DatasetEvaluationRequestDict

### class datarobot.models.genai.llm_test_configuration.DatasetEvaluationDict

### class datarobot.models.genai.nemo_configuration.NemoConfiguration

Configuration for the Nemo Pipeline.

- Variables:

#### classmethod get(playground)

Get the Nemo configuration for a playground.

- Parameters: playground ( str or Playground ) – The playground to get the configuration for
- Returns: The Nemo configuration for the playground.
- Return type: NemoConfiguration

#### classmethod upsert(playground, blocked_terms_file_contents, prompt_pipeline_metric_name=None, prompt_pipeline_files=None, prompt_llm_configuration=None, prompt_moderation_configuration=None, prompt_pipeline_template_id=None, response_pipeline_metric_name=None, response_pipeline_files=None, response_llm_configuration=None, response_moderation_configuration=None, response_pipeline_template_id=None)

Create or update the Nemo configuration for a playground.

- Parameters:
- Returns: The Nemo configuration for the playground.
- Return type: NemoConfiguration

### class datarobot.models.genai.llm_test_configuration.OOTBDataset

Metadata for a DataRobot GenAI out-of-the-box LLM compliance test dataset.

- Variables:

#### classmethod list()

List all out-of-the-box datasets available to the user.

- Returns: ootb_datasets – Returns a list of out-of-the-box datasets.
- Return type: list[OOTBDataset]

### class datarobot.models.genai.llm_test_configuration.NonOOTBDataset

Metadata for a DataRobot GenAI non out-of-the-box (OOTB) LLM compliance test dataset.

#### classmethod list(use_case=None)

List all non out-of-the-box datasets available to the user.

- Returns: non_ootb_datasets – Returns a list of non out-of-the-box datasets.
- Return type: list[NonOOTBDataset]

### class datarobot.models.genai.metric_insights.MetricInsights

Metric insights for playground.

#### classmethod list(playground, llm_blueprint_ids=None, with_aggregation_types_only=False, production_only=False, completed_only=False)

Get metric insights for playground.

- Parameters:
- Returns: insights – Metric insights for playground.
- Return type: list[InsightsConfiguration]

#### classmethod copy_to_playground(source_playground, target_playground, add_to_existing=True, with_evaluation_datasets=False)

Copy metric insights from one playground to another.

- Parameters:
- Return type: None

### class datarobot.models.genai.ootb_metric_configuration.PlaygroundOOTBMetricConfiguration

OOTB metric configurations for a playground.

- Variables: ootb_metric_configurations ( List[OOTBMetricConfigurationResponse] ) – The list of OOTB metric configurations.

#### classmethod get(playground_id)

Get OOTB metric configurations for the playground.

- Return type: PlaygroundOOTBMetricConfiguration

#### classmethod create(playground_id, ootb_metric_configurations)

Create new OOTB metric configurations.

- Return type: PlaygroundOOTBMetricConfiguration

### class datarobot.models.genai.evaluation_dataset_utils.ReferenceToolCall

Reference tool call for an evaluation dataset. This is a convenience stand-in
for the Ragas ToolCall class.

#### json()

Convert the tool call to a JSON string.

- Return type: str

#### classmethod from_json(json_str)

Create a ReferenceToolCall object from a JSON string.

- Return type: ReferenceToolCall
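
The json()/from_json pair is a simple round trip. A hypothetical stand-in class sketching that contract; the SDK's actual JSON shape and implementation may differ:

```python
import json
from dataclasses import dataclass, field

@dataclass
class ToolCallSketch:
    # Stand-in for ReferenceToolCall: a tool name plus its arguments.
    name: str
    args: dict = field(default_factory=dict)

    def json(self):
        # Serialize the call to a JSON string.
        return json.dumps({"name": self.name, "args": self.args})

    @classmethod
    def from_json(cls, json_str):
        # Rebuild the call from the string produced by json().
        data = json.loads(json_str)
        return cls(name=data["name"], args=data["args"])

call = ToolCallSketch(name="get_weather", args={"location": "New York"})
restored = ToolCallSketch.from_json(call.json())
```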

### class datarobot.models.genai.evaluation_dataset_utils.ReferenceToolCalls

Utility for creating a list of reference tool calls for an evaluation dataset. This
class represents a list of tool calls for a single row in the evaluation dataset.

Example usage:

```python
import pandas

df = pandas.DataFrame()
tool_calls_1 = ReferenceToolCalls([
    ReferenceToolCall(name="get_weather", args={"location": "New York"}),
    ReferenceToolCall(name="get_news", args={"topic": "technology"}),
])
tool_calls_2 = ReferenceToolCalls([
    ReferenceToolCall(name="get_weather", args={"location": "Los Angeles"}),
    ReferenceToolCall(name="get_news", args={"topic": "sports"}),
])
df["prompts"] = ["what is the weather for the tech conference in NYC?",
                 "what is the weather in LA, and will it affect the game?"]
df["reference_tool_calls"] = [tool_calls_1.json(), tool_calls_2.json()]
```

#### classmethod from_json(json_str)

Create a ReferenceToolCalls object from a JSON string.

- Return type: ReferenceToolCalls
