Chat Prompts (GenAI)¶
This page outlines the operations, endpoints, parameters, and example requests and responses for Chat Prompts (GenAI).
GET /api/v2/genai/chatPrompts/¶
List chat prompts.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/genai/chatPrompts/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
playgroundId | query | string | false | Only retrieve the chat prompts associated with this playground ID. |
llmBlueprintId | query | string | false | Only retrieve the chat prompts associated with this LLM blueprint ID. If specified, will retrieve the chat prompts for the oldest chat in this LLM blueprint. |
chatId | query | string | false | Only retrieve the chat prompts associated with this chat ID. |
offset | query | integer | false | Skip the specified number of values. |
limit | query | integer | false | Retrieve only the specified number of values. |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"page": 0,
"source": "string",
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"resultMetadata": {
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"maxDocumentsRetrievedPerPrompt": 10,
"maxTokens": 0
}
}
],
"next": "string",
"previous": "string",
"totalCount": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Successful Response | ListChatPromptsResponse |
422 | Unprocessable Entity | Validation Error | HTTPValidationErrorResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/genai/chatPrompts/¶
Request the execution of a new prompt within a chat or an LLM blueprint.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/genai/chatPrompts/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"chatId": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"text": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"maxDocumentsRetrievedPerPrompt": 10,
"maxTokens": 0
}
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | CreateChatPromptRequest | true | none |
Example responses¶
202 Response
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"page": 0,
"source": "string",
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"resultMetadata": {
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"maxDocumentsRetrievedPerPrompt": 10,
"maxTokens": 0
}
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Successful Response | ChatPromptResponse |
422 | Unprocessable Entity | Validation Error | HTTPValidationErrorResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/genai/chatPrompts/{chatPromptId}/¶
Delete an existing chat prompt.
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/genai/chatPrompts/{chatPromptId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
chatPromptId | path | string | true | The ID of the chat prompt to delete. |
Example responses¶
422 Response
{
"detail": [
{
"loc": [
"string"
],
"msg": "string",
"type": "string"
}
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | Successful Response | None |
422 | Unprocessable Entity | Validation Error | HTTPValidationErrorResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/genai/chatPrompts/{chatPromptId}/¶
Retrieve an existing chat prompt.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/genai/chatPrompts/{chatPromptId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
chatPromptId | path | string | true | The ID of the chat prompt to retrieve. |
Example responses¶
200 Response
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"page": 0,
"source": "string",
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"resultMetadata": {
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"maxDocumentsRetrievedPerPrompt": 10,
"maxTokens": 0
}
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Successful Response | ChatPromptResponse |
422 | Unprocessable Entity | Validation Error | HTTPValidationErrorResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
Schemas¶
ChatPromptResponse
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"page": 0,
"source": "string",
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"resultMetadata": {
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"maxDocumentsRetrievedPerPrompt": 10,
"maxTokens": 0
}
}
ChatPromptResponse
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
chatContextId | string¦null | false | | The ID of the chat context for this prompt. |
chatId | string¦null | false | | The ID of the chat this chat prompt belongs to. |
chatPromptIdsIncludedInHistory | [string]¦null | false | | The list of IDs of the chat prompts included in this prompt's history. |
citations | [Citation] | true | | The list of relevant vector database citations (when a vector database is used). |
confidenceScores | ConfidenceScores¦null | true | | The confidence scores that measure the similarity between the prompt context and the prompt completion. |
creationDate | string(date-time) | true | | The creation date of the chat prompt (ISO 8601 formatted). |
creationUserId | string | true | | The ID of the user that created the chat prompt. |
executionStatus | ExecutionStatus | true | | The execution status of the chat prompt. |
id | string | true | | The ID of the chat prompt. |
llmBlueprintId | string | true | | The ID of the LLM blueprint the chat prompt belongs to. |
llmId | LanguageModelTypeId | true | | The ID of the LLM used by the chat prompt. |
llmSettings | any | false | | A key/value dictionary of LLM settings. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | CommonLLMSettings | false | | The settings that are available for all non-custom LLMs. |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | CustomModelLLMSettings | false | | The settings that are available for custom model LLMs. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
resultMetadata | ResultMetadata¦null | true | | The additional information about the chat prompt results. |
resultText | string¦null | true | | The text of the prompt completion. |
text | string | true | | The text of the user prompt. |
userName | string | true | | The name of the user that created the chat prompt. |
vectorDatabaseId | string¦null | false | | The ID of the vector database linked to this LLM blueprint. |
vectorDatabaseSettings | VectorDatabaseSettings¦null | false | | A key/value dictionary of vector database settings. |
Citation
{
"page": 0,
"source": "string",
"text": "string"
}
Citation
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
page | integer¦null | false | | The source page number where the citation was found. |
source | string¦null | true | | The source of the citation (e.g., a filename in the original dataset). |
text | string | true | | The text of the citation. |
CommonLLMSettings
{
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
}
CommonLLMSettings
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
maxCompletionLength | integer¦null | false | | The maximum number of tokens allowed in the completion. The combined count of this value and the prompt tokens must be below the model's maximum context size, where the prompt token count comprises the system prompt, the user prompt, recent chat history, and vector database citations. |
systemPrompt | string¦null | false | maxLength: 500000 | The system prompt guides the style of the LLM response. It is a "universal" prompt, prepended to all individual prompts. |
temperature | number¦null | false | | Temperature controls the randomness of model output: higher values return more diverse output and lower values return more deterministic results. |
topP | number¦null | false | | Top P sets a threshold that controls the selection of words included in the response, based on a cumulative probability cutoff for token selection. For example, 0.2 considers only the top 20% probability mass. Higher values produce more diverse output. |
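For illustration, a valid llmSettings value for CreateChatPromptRequest could look like the Python dictionary below; the specific numbers are arbitrary examples, not recommended defaults.
# Example llmSettings payload (values are arbitrary illustrations).
llm_settings = {
    "maxCompletionLength": 512,                      # cap on completion tokens
    "systemPrompt": "You are a concise assistant.",
    "temperature": 0.2,                              # lower values give more deterministic output
    "topP": 0.9,                                     # cumulative probability cutoff for token selection
}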
ConfidenceScores
{
"bleu": 0,
"meteor": 0,
"rouge": 0
}
ConfidenceScores
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
bleu | number | true | | BLEU score. |
meteor | number | true | | METEOR score. |
rouge | number | true | | ROUGE score. |
CreateChatPromptRequest
{
"chatId": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"text": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"maxDocumentsRetrievedPerPrompt": 10,
"maxTokens": 0
}
}
CreateChatPromptRequest
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
chatId | string¦null | false | | The ID of the chat this prompt belongs to. If LLM and vector database settings are not specified in the request, then the prompt will use the current settings of the chat. |
llmBlueprintId | string¦null | false | | The ID of the LLM blueprint this prompt belongs to. If LLM and vector database settings are not specified in the request, then the prompt will use the current settings of the LLM blueprint. |
llmId | LanguageModelTypeId¦null | false | | If specified, uses this LLM ID for the prompt and updates the settings of the corresponding chat or LLM blueprint to use this LLM ID. |
llmSettings | any | false | | If specified, uses these LLM settings for the prompt and updates the settings of the corresponding chat or LLM blueprint to use these LLM settings. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | CommonLLMSettings | false | | The settings that are available for all non-custom LLMs. |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | CustomModelLLMSettings | false | | The settings that are available for custom model LLMs. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
text | string | true | maxLength: 500000 | The text of the user prompt. |
vectorDatabaseId | string¦null | false | | If specified, uses this vector database ID for the prompt and updates the settings of the corresponding chat or LLM blueprint to use this vector database ID. |
vectorDatabaseSettings | VectorDatabaseSettings¦null | false | | If specified, uses these vector database settings for the prompt and updates the settings of the corresponding chat or LLM blueprint to use these vector database settings. |
CustomModelLLMSettings
{
"systemPrompt": "string",
"validationId": "string"
}
CustomModelLLMSettings
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
systemPrompt | string¦null | false | maxLength: 500000 | The system prompt guides the style of the LLM response. It is a "universal" prompt, prepended to all individual prompts. |
validationId | string¦null | false | | The validation ID of the custom model LLM. |
ExecutionStatus
"NEW"
ExecutionStatus
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
ExecutionStatus | string | false | | Job execution status. |
Enumerated Values¶
Property | Value |
---|---|
ExecutionStatus | NEW, RUNNING, COMPLETED, ERROR |
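Because the POST endpoint above returns 202 Accepted with an initial status such as NEW, the prompt appears to execute asynchronously; one way to wait for the result is to poll the retrieve endpoint until the status is COMPLETED or ERROR. The sketch below assumes the requests library, a DATAROBOT_API_TOKEN environment variable, and arbitrary poll interval and timeout values.
# Minimal polling sketch (assumptions: requests library, DATAROBOT_API_TOKEN
# environment variable, and arbitrary interval/timeout values).
import os
import time

import requests

API_BASE = "https://app.datarobot.com/api/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['DATAROBOT_API_TOKEN']}"}

def wait_for_chat_prompt(chat_prompt_id, timeout_s=120, interval_s=2):
    """Poll a chat prompt until it reaches a terminal execution status."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        response = requests.get(f"{API_BASE}/genai/chatPrompts/{chat_prompt_id}/", headers=HEADERS)
        response.raise_for_status()
        chat_prompt = response.json()
        if chat_prompt["executionStatus"] in ("COMPLETED", "ERROR"):
            return chat_prompt
        time.sleep(interval_s)
    raise TimeoutError(f"Chat prompt {chat_prompt_id} did not finish within {timeout_s} seconds")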
FeedbackResult
{
"negativeUserIds": [],
"positiveUserIds": []
}
FeedbackResult
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
negativeUserIds | [string] | false | | The list of user IDs whose feedback is negative. |
positiveUserIds | [string] | false | | The list of user IDs whose feedback is positive. |
HTTPValidationErrorResponse
{
"detail": [
{
"loc": [
"string"
],
"msg": "string",
"type": "string"
}
]
}
HTTPValidationErrorResponse
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
detail | [ValidationError] | false | | none |
LanguageModelTypeId
"azure-openai-gpt-3.5-turbo"
LanguageModelTypeId
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
LanguageModelTypeId | string | false | | The ID that defines the type of the LLM. |
Enumerated Values¶
Property | Value |
---|---|
LanguageModelTypeId | azure-openai-gpt-3.5-turbo, azure-openai-gpt-3.5-turbo-16k, azure-openai-gpt-4, azure-openai-gpt-4-32k, amazon-titan, anthropic-claude-2, google-bison, custom-model |
ListChatPromptsResponse
{
"count": 0,
"data": [
{
"chatContextId": "string",
"chatId": "string",
"chatPromptIdsIncludedInHistory": [
"string"
],
"citations": [
{
"page": 0,
"source": "string",
"text": "string"
}
],
"confidenceScores": {
"bleu": 0,
"meteor": 0,
"rouge": 0
},
"creationDate": "2019-08-24T14:15:22Z",
"creationUserId": "string",
"executionStatus": "NEW",
"id": "string",
"llmBlueprintId": "string",
"llmId": "azure-openai-gpt-3.5-turbo",
"llmSettings": {
"maxCompletionLength": 0,
"systemPrompt": "string",
"temperature": 0,
"topP": 0
},
"resultMetadata": {
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"totalTokenCount": 0
},
"resultText": "string",
"text": "string",
"userName": "string",
"vectorDatabaseId": "string",
"vectorDatabaseSettings": {
"maxDocumentsRetrievedPerPrompt": 10,
"maxTokens": 0
}
}
],
"next": "string",
"previous": "string",
"totalCount": 0
}
ListChatPromptsResponse
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | true | | The number of records on this page. |
data | [ChatPromptResponse] | true | | The list of records. |
next | string¦null | true | | The URL to the next page, or null if there is no such page. |
previous | string¦null | true | | The URL to the previous page, or null if there is no such page. |
totalCount | integer | true | | The total number of records. |
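The next and previous URLs make it straightforward to walk every page of results. The sketch below assumes the requests library, a DATAROBOT_API_TOKEN environment variable, a placeholder chat ID, and that next is a fully qualified URL carrying its own query string.
# Minimal pagination sketch (assumptions: requests library, DATAROBOT_API_TOKEN
# environment variable, placeholder chat ID, and a fully qualified 'next' URL).
import os

import requests

API_BASE = "https://app.datarobot.com/api/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['DATAROBOT_API_TOKEN']}"}

def iter_chat_prompts(chat_id, limit=50):
    """Yield every chat prompt in a chat by following the 'next' links."""
    url = f"{API_BASE}/genai/chatPrompts/"
    params = {"chatId": chat_id, "limit": limit}
    while url:
        response = requests.get(url, headers=HEADERS, params=params)
        response.raise_for_status()
        page = response.json()
        yield from page["data"]
        url, params = page["next"], None  # subsequent pages reuse the query string in 'next'

for chat_prompt in iter_chat_prompts("<chat-id>"):
    print(chat_prompt["id"], chat_prompt["text"])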
MetricMetadata
{
"costConfigurationId": "string",
"customModelId": "string",
"evaluationDatasetConfigurationId": "string",
"formattedName": "string",
"formattedValue": "string",
"name": "string",
"sidecarModelMetricValidationId": "string",
"value": null
}
MetricMetadata
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
costConfigurationId | string¦null | false | | The ID of the cost configuration. |
customModelId | string¦null | false | | The ID of the custom model used for the metric. |
evaluationDatasetConfigurationId | string¦null | false | | The ID of the evaluation dataset configuration. |
formattedName | string¦null | false | | The formatted name of the metric. |
formattedValue | string¦null | false | | The formatted value of the metric. |
name | string | true | | The name of the metric. |
sidecarModelMetricValidationId | string¦null | false | | The validation ID of the sidecar model (when a sidecar model deployment is used for the metric). |
value | any | true | | The value of the metric. |
ResultMetadata
{
"cost": 0,
"errorMessage": "string",
"estimatedDocsTokenCount": 0,
"feedbackResult": {
"negativeUserIds": [],
"positiveUserIds": []
},
"finalPrompt": "string",
"inputTokenCount": 0,
"latencyMilliseconds": 0,
"metrics": [],
"outputTokenCount": 0,
"totalTokenCount": 0
}
ResultMetadata
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
cost | number¦null | false | | The estimated cost of executing the prompt. |
errorMessage | string¦null | false | | The error message for the prompt (if the prompt errored). |
estimatedDocsTokenCount | integer | false | | The estimated number of tokens in the documents retrieved from the vector database. |
feedbackResult | FeedbackResult | false | | The user feedback associated with the prompt. |
finalPrompt | any | false | | The final representation of the prompt that was submitted to the LLM. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | [object] | false | | none |
»» additionalProperties | string | false | | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | object | false | | none |
»» additionalProperties | any | false | | none |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | string | false | | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | [object] | false | | none |
»»»» additionalProperties | string | false | | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
inputTokenCount | integer | false | | The number of tokens in the LLM input. This number includes the tokens in the system prompt, the user prompt, the chat history (for history-aware chats), and the documents retrieved from the vector database (when a vector database is used). |
latencyMilliseconds | integer | true | | The latency of the LLM response, in milliseconds. |
metrics | [MetricMetadata] | false | | The evaluation metrics for the prompt. |
outputTokenCount | integer | false | | The number of tokens in the LLM output. |
totalTokenCount | integer | false | | The combined number of tokens in the LLM input and output. |
ValidationError
{
"loc": [
"string"
],
"msg": "string",
"type": "string"
}
ValidationError
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
loc | [anyOf] | true | | none |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer | false | | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
msg | string | true | | none |
type | string | true | | none |
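When a request fails validation, each entry in detail identifies the offending field (loc) and the reason (msg, type). The helper below is a small sketch for printing those errors from a 422 response object returned by the requests library.
# Minimal 422 handling sketch ('response' is any requests.Response returned by
# the endpoints above).
def print_validation_errors(response):
    """Print the location, message, and type of each validation error."""
    if response.status_code != 422:
        return
    for error in response.json().get("detail", []):
        location = " -> ".join(str(part) for part in error["loc"])
        print(f"{location}: {error['msg']} ({error['type']})")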
VectorDatabaseSettings
{
"maxDocumentsRetrievedPerPrompt": 10,
"maxTokens": 0
}
VectorDatabaseSettings
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
maxDocumentsRetrievedPerPrompt | integer¦null | false | maximum: 10 | The maximum number of documents to retrieve from the vector database. |
maxTokens | integer¦null | false | | The maximum number of tokens to retrieve from the vector database. |
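For illustration, a valid vectorDatabaseSettings value for CreateChatPromptRequest could look like the Python dictionary below; the numbers are arbitrary examples that stay within the maximum of 10 documents per prompt.
# Example vectorDatabaseSettings payload (values are arbitrary illustrations;
# maxDocumentsRetrievedPerPrompt may not exceed 10).
vector_database_settings = {
    "maxDocumentsRetrievedPerPrompt": 5,
    "maxTokens": 2000,
}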