Chat Prompts (GenAI)¶
This page outlines the operations, endpoints, parameters, and example requests and responses for Chat Prompts (GenAI).
GET /api/v2/genai/chatPrompts/¶
List chat prompts.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/genai/chatPrompts/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
playgroundId | query | string | false | Only retrieve the chat prompts associated with this playground ID. |
llmBlueprintId | query | string | false | Only retrieve the chat prompts associated with this LLM blueprint ID. If specified, retrieves the chat prompts for the oldest chat in this LLM blueprint. |
chatId | query | string | false | Only retrieve the chat prompts associated with this chat ID. |
offset | query | integer | false | Skip the specified number of values. |
limit | query | integer | false | Retrieve only the specified number of values. |
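The query parameters above can be assembled into a request URL with a small helper. This is a sketch against the endpoint documented on this page; the ID values are placeholders, and unset filters are simply omitted:

```python
# Build the GET URL for listing chat prompts, skipping unset filters.
from urllib.parse import urlencode

BASE = "https://app.datarobot.com/api/v2/genai/chatPrompts/"

def list_chat_prompts_url(chat_id=None, playground_id=None, offset=0, limit=100):
    """Return the list URL with only the filters that were provided."""
    params = {"offset": offset, "limit": limit}
    if chat_id is not None:
        params["chatId"] = chat_id
    if playground_id is not None:
        params["playgroundId"] = playground_id
    return BASE + "?" + urlencode(params)

url = list_chat_prompts_url(chat_id="abc123", limit=20)
```

The resulting URL can then be passed to any HTTP client along with the `Authorization: Bearer` header shown in the code sample above.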
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"chunkId": 0,
"metadata": {},
"page": 0,
"similarityScore": 0,
"source": "string",
"startIndex": 0,
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"metadataFilter": {},
"resultMetadata": {
"blockedResultText": "string",
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"providerLLMGuards": [
{
"name": "string",
"satisfyCriteria": true,
"stage": "prompt",
"value": "string"
}
],
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"addNeighborChunks": false,
"maxDocumentsRetrievedPerPrompt": 1,
"maxTokens": 1
}
}
],
"next": "string",
"previous": "string",
"totalCount": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Successful Response | ListChatPromptsResponse |
422 | Unprocessable Entity | Validation Error | HTTPValidationErrorResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/genai/chatPrompts/¶
Request the execution of a new prompt within a chat or an LLM blueprint.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/genai/chatPrompts/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{CreateChatPromptRequest}'
Body parameter¶
{
"chatId": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"metadataFilter": {},
"text": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"addNeighborChunks": false,
"maxDocumentsRetrievedPerPrompt": 1,
"maxTokens": 1
}
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | CreateChatPromptRequest | true | none |
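Per the CreateChatPromptRequest schema below, only `text` is required; `chatId` or `llmBlueprintId` targets where the prompt runs, and `llmSettings` is an optional override. A minimal sketch of assembling the body (the ID value is a placeholder):

```python
# Assemble a minimal CreateChatPromptRequest body as a JSON string.
import json

def build_chat_prompt_request(text, chat_id=None, llm_blueprint_id=None,
                              llm_settings=None):
    body = {"text": text}  # the only required field
    if chat_id is not None:
        body["chatId"] = chat_id
    if llm_blueprint_id is not None:
        body["llmBlueprintId"] = llm_blueprint_id
    if llm_settings is not None:
        body["llmSettings"] = llm_settings  # e.g. {"temperature": 0.2}
    return json.dumps(body)

payload = build_chat_prompt_request("What is MLOps?", chat_id="your-chat-id")
```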
Example responses¶
202 Response
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"chunkId": 0,
"metadata": {},
"page": 0,
"similarityScore": 0,
"source": "string",
"startIndex": 0,
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"metadataFilter": {},
"resultMetadata": {
"blockedResultText": "string",
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"providerLLMGuards": [
{
"name": "string",
"satisfyCriteria": true,
"stage": "prompt",
"value": "string"
}
],
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"addNeighborChunks": false,
"maxDocumentsRetrievedPerPrompt": 1,
"maxTokens": 1
}
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Successful Response | ChatPromptResponse |
422 | Unprocessable Entity | Validation Error | HTTPValidationErrorResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
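The 202 response returns the chat prompt with a non-terminal executionStatus (for example, NEW). A common client pattern is to re-fetch the prompt via the GET endpoint below until the status becomes terminal. This is a sketch: `fetch_prompt` is a hypothetical callable wrapping that GET request, and REQUIRES_USER_INPUT is treated as terminal here because polling alone cannot advance it:

```python
# Poll a chat prompt until its executionStatus reaches a terminal state.
import time

# Assumed terminal states, drawn from the ExecutionStatus enum on this page.
TERMINAL_STATUSES = {"COMPLETED", "ERROR", "SKIPPED", "REQUIRES_USER_INPUT"}

def wait_for_prompt(fetch_prompt, interval_s=1.0, max_polls=60):
    """Call fetch_prompt() until the returned prompt is done, or time out."""
    for _ in range(max_polls):
        prompt = fetch_prompt()
        if prompt["executionStatus"] in TERMINAL_STATUSES:
            return prompt
        time.sleep(interval_s)
    raise TimeoutError("chat prompt did not finish in time")
```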
DELETE /api/v2/genai/chatPrompts/{chatPromptId}/¶
Delete an existing chat prompt.
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/genai/chatPrompts/{chatPromptId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
chatPromptId | path | string | true | The ID of the chat prompt to delete. |
Example responses¶
422 Response
{
"detail": [
{
"loc": [
"string"
],
"msg": "string",
"type": "string"
}
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | Successful Response | None |
422 | Unprocessable Entity | Validation Error | HTTPValidationErrorResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/genai/chatPrompts/{chatPromptId}/¶
Retrieve an existing chat prompt.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/genai/chatPrompts/{chatPromptId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
chatPromptId | path | string | true | The ID of the chat prompt to retrieve. |
Example responses¶
200 Response
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"chunkId": 0,
"metadata": {},
"page": 0,
"similarityScore": 0,
"source": "string",
"startIndex": 0,
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"metadataFilter": {},
"resultMetadata": {
"blockedResultText": "string",
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"providerLLMGuards": [
{
"name": "string",
"satisfyCriteria": true,
"stage": "prompt",
"value": "string"
}
],
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"addNeighborChunks": false,
"maxDocumentsRetrievedPerPrompt": 1,
"maxTokens": 1
}
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Successful Response | ChatPromptResponse |
422 | Unprocessable Entity | Validation Error | HTTPValidationErrorResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
Schemas¶
ChatPromptResponse
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"chunkId": 0,
"metadata": {},
"page": 0,
"similarityScore": 0,
"source": "string",
"startIndex": 0,
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"metadataFilter": {},
"resultMetadata": {
"blockedResultText": "string",
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"providerLLMGuards": [
{
"name": "string",
"satisfyCriteria": true,
"stage": "prompt",
"value": "string"
}
],
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"addNeighborChunks": false,
"maxDocumentsRetrievedPerPrompt": 1,
"maxTokens": 1
}
}
ChatPromptResponse
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
chatContextId | string¦null | false | The ID of the chat context for this prompt. | |
chatId | string¦null | false | The ID of the chat this chat prompt belongs to. | |
chatPromptIdsIncludedInHistory | [string]¦null | false | The list of IDs of the chat prompts included in this prompt's history. | |
citations | [Citation] | true | The list of relevant vector database citations (in case of using a vector database). | |
confidenceScores | ConfidenceScores¦null | true | The confidence scores that measure the similarity between the prompt context and the prompt completion. | |
creationDate | string(date-time) | true | The creation date of the chat prompt (ISO 8601 formatted). | |
creationUserId | string | true | The ID of the user that created the chat prompt. | |
executionStatus | ExecutionStatus | true | The execution status of the chat prompt. | |
id | string | true | The ID of the chat prompt. | |
llmBlueprintId | string | true | The ID of the LLM blueprint the chat prompt belongs to. | |
llmId | LanguageModelTypeId | true | The ID of the LLM used by the chat prompt. | |
llmSettings | any | false | A key/value dictionary of LLM settings. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | CommonLLMSettings | false | The settings that are available for all non-custom LLMs. |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | CustomModelLLMSettings | false | The settings that are available for custom model LLMs. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
metadataFilter | object¦null | false | The metadata dictionary defining the filters that documents must match in order to be retrieved. | |
resultMetadata | ResultMetadata¦null | true | The additional information about the chat prompt results. | |
resultText | string¦null | true | The text of the prompt completion. | |
text | string | true | The text of the user prompt. | |
userName | string | true | The name of the user that created the chat prompt. | |
vectorDatabaseId | string¦null | false | The ID of the vector database linked to this LLM blueprint. | |
vectorDatabaseSettings | VectorDatabaseSettings¦null | false | A key/value dictionary of vector database settings. |
Citation
{
"chunkId": 0,
"metadata": {},
"page": 0,
"similarityScore": 0,
"source": "string",
"startIndex": 0,
"text": "string"
}
Citation
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
chunkId | integer¦null | false | The ID of the chunk in the vector database index. | |
metadata | object¦null | false | LangChain Document metadata information holder. | |
page | integer¦null | false | The source page number where the citation was found. | |
similarityScore | number¦null | false | The similarity score between the citation and the user prompt. | |
source | string¦null | true | The source of the citation (e.g., a filename in the original dataset). | |
startIndex | integer¦null | false | The chunk's start character index in the source document. | |
text | string | true | The text of the citation. |
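A typical client-side step is ordering citations best-first for display. This sketch assumes higher similarityScore values indicate closer matches (the schema does not state the direction) and treats a missing score as lowest priority:

```python
# Order citations by similarityScore, best match first.
def rank_citations(citations):
    def score(citation):
        s = citation.get("similarityScore")
        return s if s is not None else float("-inf")  # unscored last
    return sorted(citations, key=score, reverse=True)
```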
CommonLLMSettings
{
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
}
CommonLLMSettings
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
maxCompletionLength | integer¦null | false | The maximum number of tokens allowed in the completion. The combined count of this value and prompt tokens must be below the model's maximum context size, where the prompt token count comprises the system prompt, user prompt, recent chat history, and vector database citations. | |
systemPrompt | string¦null | false | maxLength: 500000 | The system prompt guides the style of the LLM response. It is a "universal" prompt, prepended to all individual prompts. |
temperature | number¦null | false | Temperature controls the randomness of model output, where higher values return more diverse output and lower values return more deterministic results. | |
topP | number¦null | false | Top P sets a threshold that controls the selection of words included in the response, based on a cumulative probability cutoff for token selection. For example, 0.2 considers only the top 20% probability mass. Higher numbers return more diverse options for outputs. |
ConfidenceScores
{
"bleu": 0,
"meteor": 0,
"rouge": 0
}
ConfidenceScores
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
bleu | number | true | BLEU score. | |
meteor | number | true | METEOR score. | |
rouge | number | true | ROUGE score. |
CreateChatPromptRequest
{
"chatId": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"metadataFilter": {},
"text": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"addNeighborChunks": false,
"maxDocumentsRetrievedPerPrompt": 1,
"maxTokens": 1
}
}
CreateChatPromptRequest
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
chatId | string¦null | false | The ID of the chat this prompt belongs to. If LLM and vector database settings are not specified in the request, then the prompt will use the current settings of the chat. | |
llmBlueprintId | string¦null | false | The ID of the LLM blueprint this prompt belongs to. If LLM and vector database settings are not specified in the request, then the prompt will use the current settings of the LLM blueprint. | |
llmId | LanguageModelTypeId¦null | false | If specified, uses this LLM ID for the prompt and updates the settings of the corresponding chat or LLM blueprint to use this LLM ID. | |
llmSettings | any | false | If specified, uses these LLM settings for the prompt and updates the settings of the corresponding chat or LLM blueprint to use these LLM settings. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | CommonLLMSettings | false | The settings that are available for all non-custom LLMs. |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | CustomModelLLMSettings | false | The settings that are available for custom model LLMs. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
metadataFilter | object¦null | false | The metadata fields to add to the chat prompt. | |
text | string | true | maxLength: 500000 | The text of the user prompt. |
vectorDatabaseId | string¦null | false | If specified, uses this vector database ID for the prompt and updates the settings of the corresponding chat or LLM blueprint to use this vector database ID. | |
vectorDatabaseSettings | VectorDatabaseSettings¦null | false | If specified, uses these vector database settings for the prompt and updates the settings of the corresponding chat or LLM blueprint to use these vector database settings. |
CustomModelLLMSettings
{
"externalLlmContextSize": 128,
"systemPrompt": "string",
"validationId": "string"
}
CustomModelLLMSettings
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
externalLlmContextSize | integer¦null | false | minimum: 128, maximum: 128000 | The external LLM's context size, in tokens. This value is only used for pruning documents supplied to the LLM when a vector database is associated with the LLM blueprint. It does not affect the external LLM's actual context size in any way and is not supplied to the LLM. |
systemPrompt | string¦null | false | maxLength: 500000 | The system prompt guides the style of the LLM response. It is a "universal" prompt, prepended to all individual prompts. |
validationId | string¦null | false | The validation ID of the custom model LLM. |
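The documented restrictions on CustomModelLLMSettings can be checked client-side before sending a request. A sketch covering the two bounded fields (externalLlmContextSize in [128, 128000], systemPrompt at most 500000 characters):

```python
# Validate CustomModelLLMSettings against the documented restrictions.
def validate_custom_llm_settings(settings):
    errors = []
    size = settings.get("externalLlmContextSize")
    if size is not None and not (128 <= size <= 128000):
        errors.append("externalLlmContextSize must be in [128, 128000]")
    prompt = settings.get("systemPrompt")
    if prompt is not None and len(prompt) > 500000:
        errors.append("systemPrompt exceeds 500000 characters")
    return errors
```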
ExecutionStatus
"NEW"
ExecutionStatus
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
ExecutionStatus | string | false | Job and entity execution status. |
Enumerated Values¶
Property | Value |
---|---|
ExecutionStatus | [NEW, RUNNING, COMPLETED, REQUIRES_USER_INPUT, SKIPPED, ERROR] |
FeedbackResult
{
"negativeUserIds": [],
"positiveUserIds": []
}
FeedbackResult
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
negativeUserIds | [string] | false | The list of user IDs whose feedback is negative. | |
positiveUserIds | [string] | false | The list of user IDs whose feedback is positive. |
HTTPValidationErrorResponse
{
"detail": [
{
"loc": [
"string"
],
"msg": "string",
"type": "string"
}
]
}
HTTPValidationErrorResponse
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
detail | [ValidationError] | false | none |
LanguageModelTypeId
"azure-openai-gpt-3.5-turbo"
LanguageModelTypeId
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
LanguageModelTypeId | string | false | The ID that defines the type of the LLM. |
Enumerated Values¶
Property | Value |
---|---|
LanguageModelTypeId | [azure-openai-gpt-3.5-turbo, azure-openai-gpt-3.5-turbo-16k, azure-openai-gpt-4, azure-openai-gpt-4-32k, azure-openai-gpt-4-turbo, azure-openai-gpt-4-o, amazon-titan, anthropic-claude-2, anthropic-claude-3-haiku, anthropic-claude-3-sonnet, anthropic-claude-3-opus, google-bison, google-gemini-1.5-flash, google-gemini-1.5-pro, custom-model] |
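The enumerated values above can be kept as a set for rejecting an unknown llmId before making a request. A minimal sketch using exactly the values listed:

```python
# The documented LanguageModelTypeId values, for client-side validation.
LANGUAGE_MODEL_TYPE_IDS = {
    "azure-openai-gpt-3.5-turbo", "azure-openai-gpt-3.5-turbo-16k",
    "azure-openai-gpt-4", "azure-openai-gpt-4-32k",
    "azure-openai-gpt-4-turbo", "azure-openai-gpt-4-o",
    "amazon-titan", "anthropic-claude-2", "anthropic-claude-3-haiku",
    "anthropic-claude-3-sonnet", "anthropic-claude-3-opus",
    "google-bison", "google-gemini-1.5-flash", "google-gemini-1.5-pro",
    "custom-model",
}

def is_known_llm_id(llm_id):
    return llm_id in LANGUAGE_MODEL_TYPE_IDS
```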
ListChatPromptsResponse
{
"count": 0,
"data": [
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"chunkId": 0,
"metadata": {},
"page": 0,
"similarityScore": 0,
"source": "string",
"startIndex": 0,
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"metadataFilter": {},
"resultMetadata": {
"blockedResultText": "string",
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"providerLLMGuards": [
{
"name": "string",
"satisfyCriteria": true,
"stage": "prompt",
"value": "string"
}
],
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"addNeighborChunks": false,
"maxDocumentsRetrievedPerPrompt": 1,
"maxTokens": 1
}
}
],
"next": "string",
"previous": "string",
"totalCount": 0
}
ListChatPromptsResponse
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | true | The number of records on this page. | |
data | [ChatPromptResponse] | true | The list of records. | |
next | string¦null | true | The URL to the next page, or null if there is no such page. | |
previous | string¦null | true | The URL to the previous page, or null if there is no such page. | |
totalCount | integer | true | The total number of records. |
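Since `next` is null on the last page, a client can walk the whole listing by following it until exhaustion. This is a sketch: `fetch_page` is a hypothetical callable that GETs a URL and returns the decoded ListChatPromptsResponse body:

```python
# Iterate over every chat prompt across all pages of a listing.
def iter_all_chat_prompts(fetch_page, first_url):
    url = first_url
    while url is not None:
        page = fetch_page(url)
        yield from page["data"]  # records on this page
        url = page["next"]       # null/None on the last page
```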
MetricMetadata
{
"costConfigurationId": "string",
"customModelId": "string",
"errorMessage": "string",
"evaluationDatasetConfigurationId": "string",
"executionStatus": "NEW",
"formattedName": "string",
"formattedValue": "string",
"name": "string",
"nemoMetricId": "string",
"ootbMetricId": "string",
"sidecarModelMetricValidationId": "string",
"stage": "prompt_pipeline",
"value": null
}
MetricMetadata
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
costConfigurationId | string¦null | false | The ID of the cost configuration. | |
customModelId | string¦null | false | The ID of the custom model used for the metric. | |
errorMessage | string¦null | false | The error message associated with the metric computation. | |
evaluationDatasetConfigurationId | string¦null | false | The ID of the evaluation dataset configuration. | |
executionStatus | ExecutionStatus¦null | false | The execution status of the metric computation. | |
formattedName | string¦null | false | The formatted name of the metric. | |
formattedValue | string¦null | false | The formatted value of the metric. | |
name | string | true | The name of the metric. | |
nemoMetricId | string¦null | false | The ID of the NeMo Pipeline configuration. | |
ootbMetricId | string¦null | false | The ID of the OOTB metric configuration. | |
sidecarModelMetricValidationId | string¦null | false | The validation ID of the sidecar model validation (in case of using a sidecar model deployment for the metric). | |
stage | PipelineStage¦null | false | The stage (prompt or response) that the metric applies to. | |
value | any | true | The value of the metric. |
PipelineStage
"prompt_pipeline"
PipelineStage
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
PipelineStage | string | false | Enum that describes at which stage the metric may be calculated. |
Enumerated Values¶
Property | Value |
---|---|
PipelineStage | [prompt_pipeline, response_pipeline] |
ProviderGuardStage
"prompt"
ProviderGuardStage
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
ProviderGuardStage | string | false | The data stage that the provider guard metric acts upon. |
Enumerated Values¶
Property | Value |
---|---|
ProviderGuardStage | [prompt, response] |
ProviderGuardsMetadata
{
"name": "string",
"satisfyCriteria": true,
"stage": "prompt",
"value": "string"
}
ProviderGuardsMetadata
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
name | string | true | The name of the provider guard metric. | |
satisfyCriteria | boolean | true | Whether the configured provider guard metric satisfied its hidden internal guard criteria. | |
stage | ProviderGuardStage | true | The data stage that the provider guard metric acts upon. | |
value | any | true | The value of the provider guard metric. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer | false | none |
ResultMetadata
{
"blockedResultText": "string",
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"providerLLMGuards": [
{
"name": "string",
"satisfyCriteria": true,
"stage": "prompt",
"value": "string"
}
],
"totalTokenCount": 0
}
ResultMetadata
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
blockedResultText | string¦null | false | The message that replaces the result text when non-empty, indicating a blocked response. | |
cost | number¦null | false | The estimated cost of executing the prompt. | |
errorMessage | string¦null | false | The error message for the prompt (in case of an errored prompt). | |
estimatedDocsTokenCount | integer | false | The estimated number of tokens in the documents retrieved from the vector database. | |
feedbackResult | FeedbackResult | false | The user feedback associated with the prompt. | |
finalPrompt | any | false | The final representation of the prompt that was submitted to the LLM. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | [object] | false | none | |
»» additionalProperties | any | false | none |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | [object] | false | none | |
»»»» additionalProperties | any | false | none |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»»»» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»»»» anonymous | object | false | none | |
»»»»»» additionalProperties | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | object | false | none | |
»» additionalProperties | any | false | none |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | [object] | false | none | |
»»»» additionalProperties | string | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
inputTokenCount | integer | false | The number of tokens in the LLM input. This number includes the tokens in the system prompt, the user prompt, the chat history (for history-aware chats) and the documents retrieved from the vector database (in case of using a vector database). | |
latencyMilliseconds | integer | true | The latency of the LLM response (in milliseconds). | |
metrics | [MetricMetadata] | false | The evaluation metrics for the prompt. | |
outputTokenCount | integer | false | The number of tokens in the LLM output. | |
providerLLMGuards | [ProviderGuardsMetadata]¦null | false | The provider LLM guards metadata. | |
totalTokenCount | integer | false | The combined number of tokens in the LLM input and output. |
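Per the table above, totalTokenCount is the combined input and output token count, which permits a lightweight sanity check on a returned resultMetadata object. A sketch, treating missing counts as zero:

```python
# Check that the token counts in a resultMetadata object add up.
def token_counts_consistent(result_metadata):
    total = result_metadata.get("totalTokenCount", 0)
    inp = result_metadata.get("inputTokenCount", 0)
    out = result_metadata.get("outputTokenCount", 0)
    return total == inp + out
```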
ValidationError
{
"loc": [
"string"
],
"msg": "string",
"type": "string"
}
ValidationError
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
loc | [anyOf] | true | none |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
msg | string | true | none | |
type | string | true | none |
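A 422 body can be flattened into readable messages by joining each `loc` path, which is a sequence of strings and integers locating the offending field in the request. A minimal sketch:

```python
# Turn an HTTPValidationErrorResponse body into human-readable lines.
def format_validation_errors(error_body):
    lines = []
    for err in error_body.get("detail", []):
        path = ".".join(str(part) for part in err["loc"])
        lines.append(f"{path}: {err['msg']}")
    return lines
```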
VectorDatabaseSettings
{
"addNeighborChunks": false,
"maxDocumentsRetrievedPerPrompt": 1,
"maxTokens": 1
}
VectorDatabaseSettings
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
addNeighborChunks | boolean | false | Add neighboring chunks to those that the similarity search retrieves, such that when chunk i is selected, the search also returns chunks i-1 and i+1. | |
maxDocumentsRetrievedPerPrompt | integer¦null | false | minimum: 1, maximum: 10 | The maximum number of chunks to retrieve from the vector database. |
maxTokens | integer¦null | false | minimum: 1, maximum: 51200 | The maximum number of tokens to retrieve from the vector database. |
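The documented bounds on VectorDatabaseSettings can likewise be checked before sending them in a request. A sketch covering the two bounded fields (maxDocumentsRetrievedPerPrompt in [1, 10], maxTokens in [1, 51200]):

```python
# Validate VectorDatabaseSettings against the documented restrictions.
def validate_vdb_settings(settings):
    errors = []
    docs = settings.get("maxDocumentsRetrievedPerPrompt")
    if docs is not None and not (1 <= docs <= 10):
        errors.append("maxDocumentsRetrievedPerPrompt must be in [1, 10]")
    tokens = settings.get("maxTokens")
    if tokens is not None and not (1 <= tokens <= 51200):
        errors.append("maxTokens must be in [1, 51200]")
    return errors
```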