ML Ops
This page outlines the operations, endpoints, parameters, and example requests and responses for ML Ops.
GET /api/v2/batchJobs/
Get a collection of batch jobs, optionally filtered by status
Code samples
# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/batchJobs/?offset=0&limit=100" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
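The same listing request can be sketched in Python using only the standard library. Note that the `status` parameter is repeated once per value to filter on multiple statuses; `API_ROOT` and the example status values are assumptions based on this page, not a complete list.

```python
import json
import urllib.parse
import urllib.request

API_ROOT = "https://app.datarobot.com/api/v2"  # assumption: default multi-tenant endpoint

def build_batch_jobs_url(offset=0, limit=100, statuses=()):
    """Build the listing URL; repeat ``status`` once per value to filter on several statuses."""
    params = [("offset", offset), ("limit", limit)]
    params += [("status", s) for s in statuses]
    return f"{API_ROOT}/batchJobs/?{urllib.parse.urlencode(params)}"

def list_batch_jobs(token, offset=0, limit=100, statuses=()):
    """Fetch one page of batch jobs (requires a valid API token and network access)."""
    req = urllib.request.Request(
        build_batch_jobs_url(offset, limit, statuses),
        headers={"Accept": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```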
Parameters
Name
In
Type
Required
Description
offset
query
integer
true
This many results will be skipped
limit
query
integer
true
At most this many results are returned
status
query
any
false
Includes only jobs that have the status value that matches this flag. Repeat the parameter for filtering on multiple statuses.
source
query
any
false
Includes only jobs that have the source value that matches this flag. Repeat the parameter for filtering on multiple sources. Prefix values with a dash (-) to exclude those sources.
deploymentId
query
string
false
Includes only jobs for this particular deployment
modelId
query
string
false
ID of the Leaderboard model used by the job to process the predictions dataset
jobId
query
string
false
Includes only the job with this specific ID
orderBy
query
string
false
Sort order to apply to the batch prediction list. Prefix the attribute name with a dash to sort in descending order, e.g. "-created".
allJobs
query
boolean
false
[DEPRECATED - replaced with RBAC permission model] - No effect
cutoffHours
query
integer
false
Only list jobs created at most this many hours ago.
startDateTime
query
string(date-time)
false
ISO-formatted datetime of the earliest time the job was added (inclusive). For example "2008-08-24T12:00:00Z". Will ignore cutoffHours if set.
endDateTime
query
string(date-time)
false
ISO-formatted datetime of the latest time the job was added (inclusive). For example "2008-08-24T12:00:00Z".
batchPredictionJobDefinitionId
query
string
false
Includes only jobs for this particular definition
hostname
query
any
false
Includes only jobs for this particular prediction instance hostname
batchJobType
query
any
false
Includes only jobs that have the batch job type that matches this flag. Repeat the parameter for filtering on multiple types.
intakeType
query
any
false
Includes only jobs with these particular intake types
outputType
query
any
false
Includes only jobs with these particular output types
Enumerated Values
Parameter
Value
orderBy
[created, -created, status, -status]
Example responses
200 Response
{
"count" : 0 ,
"data" : [
{
"batchMonitoringJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"batchPredictionJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"elapsedTimeSec" : 0 ,
"failedRows" : 0 ,
"hidden" : "2019-08-24T14:15:22Z" ,
"id" : "string" ,
"intakeDatasetDisplayName" : "string" ,
"jobIntakeSize" : 0 ,
"jobOutputSize" : 0 ,
"jobSpec" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"links" : {
"csvUpload" : "string" ,
"download" : "string" ,
"self" : "string"
},
"logs" : [
"string"
],
"monitoringBatchId" : "string" ,
"percentageCompleted" : 100 ,
"queuePosition" : 0 ,
"queued" : true ,
"resultsDeleted" : true ,
"scoredRows" : 0 ,
"skippedRows" : 0 ,
"source" : "string" ,
"status" : "INITIALIZING" ,
"statusDetails" : "string"
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/batchJobs/fromJobDefinition/
Launches a one-time batch job from a previously created job definition, referenced by its job definition ID, and puts it on the queue.
Code samples
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/batchJobs/fromJobDefinition/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter
{
"jobDefinitionId" : "string"
}
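A sketch of this call in Python with the standard library. The body shape comes from the body parameter above; the helper that builds it is separated out for clarity.

```python
import json
import urllib.request

def fromjobdef_payload(job_definition_id):
    """Request body for POST /batchJobs/fromJobDefinition/."""
    return {"jobDefinitionId": job_definition_id}

def run_job_definition(token, job_definition_id,
                       api_root="https://app.datarobot.com/api/v2"):
    """Launch a one-time batch job from an existing job definition."""
    body = json.dumps(fromjobdef_payload(job_definition_id)).encode("utf-8")
    req = urllib.request.Request(
        f"{api_root}/batchJobs/fromJobDefinition/",
        data=body,
        headers={"Content-Type": "application/json",
                 "Accept": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # expect HTTP 202 on success
        return json.load(resp)
```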
Parameters
Example responses
202 Response
{
"batchMonitoringJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"batchPredictionJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"elapsedTimeSec" : 0 ,
"failedRows" : 0 ,
"hidden" : "2019-08-24T14:15:22Z" ,
"id" : "string" ,
"intakeDatasetDisplayName" : "string" ,
"jobIntakeSize" : 0 ,
"jobOutputSize" : 0 ,
"jobSpec" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"links" : {
"csvUpload" : "string" ,
"download" : "string" ,
"self" : "string"
},
"logs" : [
"string"
],
"monitoringBatchId" : "string" ,
"percentageCompleted" : 100 ,
"queuePosition" : 0 ,
"queued" : true ,
"resultsDeleted" : true ,
"scoredRows" : 0 ,
"skippedRows" : 0 ,
"source" : "string" ,
"status" : "INITIALIZING" ,
"statusDetails" : "string"
}
Responses
Status
Meaning
Description
Schema
202
Accepted
Job details for the created Batch Prediction job
BatchJobResponse
404
Not Found
Job was deleted, never existed or you do not have access to it
None
422
Unprocessable Entity
Could not create a Batch job. Possible reasons: {}
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/batchJobs/{batchJobId}/
If the job is running, it is aborted. The job is then removed: all underlying data is deleted and the job no longer appears in the list of jobs.
Code samples
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/batchJobs/{batchJobId}/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
batchJobId
path
string
true
ID of the Batch job
partNumber
path
integer
true
The index of the CSV part being uploaded when using multipart upload
Responses
Status
Meaning
Description
Schema
202
Accepted
Job cancelled
None
404
Not Found
Job does not exist or was not submitted to the queue.
None
409
Conflict
Job cannot be aborted, for example because it is already aborted or completed.
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/batchJobs/{batchJobId}/
Retrieve a Batch job.
Code samples
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/batchJobs/{batchJobId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
batchJobId
path
string
true
ID of the Batch job
partNumber
path
integer
true
The index of the CSV part being uploaded when using multipart upload
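A common pattern with this endpoint is polling a job until it stops progressing. The sketch below injects the fetch function so the loop is independent of any HTTP client; the terminal status names are an assumption based on the "INITIALIZING" and "ABORTED" values shown on this page plus common terminal states.

```python
import time

# Statuses after which the job will not progress further (assumption: inferred
# from status values shown on this page; verify against your deployment).
TERMINAL_STATUSES = {"COMPLETED", "ABORTED", "FAILED"}

def wait_for_batch_job(fetch_job, interval_sec=5.0, max_polls=120):
    """Poll a job until it reaches a terminal status.

    ``fetch_job`` is any callable returning the job JSON, e.g. a wrapper
    around GET /api/v2/batchJobs/{batchJobId}/.
    """
    for _ in range(max_polls):
        job = fetch_job()
        if job.get("status") in TERMINAL_STATUSES:
            return job
        time.sleep(interval_sec)
    raise TimeoutError("batch job did not finish within the polling budget")
```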
Example responses
200 Response
{
"batchMonitoringJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"batchPredictionJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"elapsedTimeSec" : 0 ,
"failedRows" : 0 ,
"hidden" : "2019-08-24T14:15:22Z" ,
"id" : "string" ,
"intakeDatasetDisplayName" : "string" ,
"jobIntakeSize" : 0 ,
"jobOutputSize" : 0 ,
"jobSpec" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"links" : {
"csvUpload" : "string" ,
"download" : "string" ,
"self" : "string"
},
"logs" : [
"string"
],
"monitoringBatchId" : "string" ,
"percentageCompleted" : 100 ,
"queuePosition" : 0 ,
"queued" : true ,
"resultsDeleted" : true ,
"scoredRows" : 0 ,
"skippedRows" : 0 ,
"source" : "string" ,
"status" : "INITIALIZING" ,
"statusDetails" : "string"
}
Responses
Status
Meaning
Description
Schema
200
OK
Job details for the requested Batch job
BatchJobResponse
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PUT /api/v2/batchJobs/{batchJobId}/csvUpload/
Stream CSV data to the job. Only available for jobs that use the localFile intake option.
Code samples
# You can also use wget
curl -X PUT https://app.datarobot.com/api/v2/batchJobs/{batchJobId}/csvUpload/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
batchJobId
path
string
true
ID of the Batch job
partNumber
path
integer
true
The index of the CSV part being uploaded when using multipart upload
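The upload itself is a plain PUT of the CSV bytes. A minimal sketch with the standard library, split into a request builder and a sender; the `text/csv` content type is an assumption (a 415 response means the MIME type was rejected).

```python
import urllib.request

def build_csv_upload_request(token, batch_job_id, csv_bytes,
                             api_root="https://app.datarobot.com/api/v2"):
    """Build the PUT request for /batchJobs/{batchJobId}/csvUpload/."""
    return urllib.request.Request(
        f"{api_root}/batchJobs/{batch_job_id}/csvUpload/",
        data=csv_bytes,
        headers={"Content-Type": "text/csv",  # assumption: CSV MIME type
                 "Authorization": f"Bearer {token}"},
        method="PUT",
    )

def upload_csv(token, batch_job_id, csv_bytes):
    """Send the CSV to a job created with the localFile intake option."""
    req = build_csv_upload_request(token, batch_job_id, csv_bytes)
    with urllib.request.urlopen(req) as resp:  # expect HTTP 202 on success
        return resp.status
```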
Responses
Status
Meaning
Description
Schema
202
Accepted
Job data was successfully submitted
None
404
Not Found
Job does not exist or does not require data
None
409
Conflict
Dataset upload has already begun
None
415
Unsupported Media Type
Not acceptable MIME type
None
422
Unprocessable Entity
Job was "ABORTED" due to too many errors in the data
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/batchJobs/{batchJobId}/download/
This is only valid for jobs scored using the "localFile" output option.
Code samples
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/batchJobs/{batchJobId}/download/ \
-H "Authorization: Bearer {access-token}"
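Downloading streams the scored CSV back; the suggested filename arrives in the Content-Disposition header (see the response headers table below). A sketch, assuming the standard library only:

```python
import shutil
import urllib.request

def filename_from_disposition(header_value, default="result.csv"):
    """Extract the suggested filename from a Content-Disposition header value."""
    for part in header_value.split(";"):
        part = part.strip()
        if part.startswith("filename="):
            return part[len("filename="):].strip('"') or default
    return default

def download_results(token, batch_job_id, dest_path,
                     api_root="https://app.datarobot.com/api/v2"):
    """Save scored output for a localFile-output job; valid once the job completes."""
    req = urllib.request.Request(
        f"{api_root}/batchJobs/{batch_job_id}/download/",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp, open(dest_path, "wb") as out:
        shutil.copyfileobj(resp, out)  # stream response body to disk
```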
Parameters
Name
In
Type
Required
Description
batchJobId
path
string
true
ID of the Batch job
partNumber
path
integer
true
The index of the CSV part being uploaded when using multipart upload
Responses
Status
Meaning
Description
Schema
200
OK
Job was downloaded correctly
None
404
Not Found
Job does not exist or is not completed
None
406
Not Acceptable
Not acceptable MIME type
None
422
Unprocessable Entity
Job was "ABORTED" due to too many errors in the data
None
Status
Header
Type
Format
Description
200
Content-Disposition
string
Contains an auto-generated filename for this download ("attachment;filename=result-.csv").
200
Content-Type
string
MIME type of the returned data
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/batchMonitoring/
Submit the configuration for the job; it will then be submitted to the queue.
Code samples
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/batchMonitoring/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter
{
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionThreshold" : 1 ,
"predictionWarningEnabled" : true ,
"secondaryDatasetsConfigId" : "string" ,
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
}
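The full body above shows every available field; in practice only a small subset is sent. The sketch below builds a deliberately minimal payload. Which fields are actually required depends on your deployment and intake type, and the column names here are hypothetical placeholders.

```python
import json

def minimal_monitoring_payload(deployment_id):
    """A minimal body for POST /api/v2/batchMonitoring/ (sketch).

    Assumption: localFile intake with data streamed via csvUpload afterwards;
    "row_id" and "pred_yes" are hypothetical column names, not defaults.
    """
    return {
        "deploymentId": deployment_id,
        "intakeSettings": {"type": "localFile"},
        "monitoringColumns": {
            "associationIdColumn": "row_id",
            "predictionsColumns": [
                {"className": "yes", "columnName": "pred_yes"},
            ],
        },
    }
```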
Parameters
Example responses
202 Response
{
"batchMonitoringJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"batchPredictionJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"elapsedTimeSec" : 0 ,
"failedRows" : 0 ,
"hidden" : "2019-08-24T14:15:22Z" ,
"id" : "string" ,
"intakeDatasetDisplayName" : "string" ,
"jobIntakeSize" : 0 ,
"jobOutputSize" : 0 ,
"jobSpec" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"links" : {
"csvUpload" : "string" ,
"download" : "string" ,
"self" : "string"
},
"logs" : [
"string"
],
"monitoringBatchId" : "string" ,
"percentageCompleted" : 100 ,
"queuePosition" : 0 ,
"queued" : true ,
"resultsDeleted" : true ,
"scoredRows" : 0 ,
"skippedRows" : 0 ,
"source" : "string" ,
"status" : "INITIALIZING" ,
"statusDetails" : "string"
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/batchMonitoringJobDefinitions/
List all available Batch Monitoring job definitions
Code samples
# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/batchMonitoringJobDefinitions/?offset=0&limit=100" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
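Listings like this one are paginated via the `next` and `previous` URLs in the response (see the example response below). A small traversal sketch; the page fetcher is injected so the walk is independent of any HTTP client:

```python
def iter_all_pages(fetch_page, first_url):
    """Walk a paginated listing by following each response's ``next`` URL.

    ``fetch_page`` maps a URL to the decoded JSON page, e.g. an authenticated
    GET returning {"data": [...], "next": ..., "previous": ...}.
    """
    url = first_url
    while url is not None:
        page = fetch_page(url)
        yield from page["data"]
        url = page.get("next")  # None on the last page
```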
Parameters
Name
In
Type
Required
Description
offset
query
integer
true
This many results will be skipped
limit
query
integer
true
At most this many results are returned
searchName
query
string
false
A human-readable name for the definition, must be unique across organisations.
deploymentId
query
string
false
Includes only definitions for this particular deployment
Example responses
200 Response
{
"count" : 0 ,
"data" : [
{
"batchMonitoringJob" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 0 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"enabled" : false ,
"id" : "string" ,
"lastFailedRunTime" : "2019-08-24T14:15:22Z" ,
"lastScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"lastStartedJobStatus" : "INITIALIZING" ,
"lastStartedJobTime" : "2019-08-24T14:15:22Z" ,
"lastSuccessfulRunTime" : "2019-08-24T14:15:22Z" ,
"name" : "string" ,
"nextScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"updated" : "2019-08-24T14:15:22Z" ,
"updatedBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
}
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/batchMonitoringJobDefinitions/
Create a Batch Monitoring job definition: a configuration for a Batch Monitoring job that can be executed manually upon request or on scheduled intervals, if enabled. The API payload is the same as for /batchJobs, along with optional enabled and schedule items.
Code samples
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/batchMonitoringJobDefinitions/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter
{
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"enabled" : true ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"name" : "string" ,
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionThreshold" : 1 ,
"predictionWarningEnabled" : true ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"secondaryDatasetsConfigId" : "string" ,
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
}
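The schedule object in the body above uses cron-style fields, where "*" in a field means "every value". A sketch of building a schedule that runs once per day at a fixed time, based only on the field names shown above:

```python
def daily_schedule(hour, minute):
    """Schedule payload for a definition that runs once per day (sketch).

    Each field mirrors a cron column; "*" matches every value of that column.
    Field names are taken from the schedule object in the request body above.
    """
    return {
        "minute": [minute],
        "hour": [hour],
        "dayOfMonth": ["*"],
        "dayOfWeek": ["*"],
        "month": ["*"],
    }
```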
Parameters
Example responses
202 Response
{
"batchMonitoringJob" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 0 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"enabled" : false ,
"id" : "string" ,
"lastFailedRunTime" : "2019-08-24T14:15:22Z" ,
"lastScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"lastStartedJobStatus" : "INITIALIZING" ,
"lastStartedJobTime" : "2019-08-24T14:15:22Z" ,
"lastSuccessfulRunTime" : "2019-08-24T14:15:22Z" ,
"name" : "string" ,
"nextScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"updated" : "2019-08-24T14:15:22Z" ,
"updatedBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
}
}
Responses
Status
Meaning
Description
Schema
202
Accepted
Job details for the created Batch Monitoring job definition
BatchMonitoringJobDefinitionsResponse
403
Forbidden
You are not authorized to create a job definition on this deployment due to your permissions role
None
422
Unprocessable Entity
The job definition could not be created because of incompatible or missing parameters.
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/batchMonitoringJobDefinitions/{jobDefinitionId}/
Delete a Batch Monitoring job definition
Code samples
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/batchMonitoringJobDefinitions/{jobDefinitionId}/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
jobDefinitionId
path
string
true
ID of the Batch Prediction job definition
Responses
Status
Meaning
Description
Schema
204
No Content
none
None
403
Forbidden
You are not authorized to delete this job definition due to your permissions role
None
404
Not Found
Job was deleted, never existed or you do not have access to it
None
409
Conflict
Job could not be deleted, as there are currently running jobs in the queue.
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
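The status codes above can be handled programmatically; in particular, a 409 means running jobs in the queue must finish (or be stopped) before the deletion can succeed. A minimal sketch of mapping the documented codes to actions (the helper name and the action labels are illustrative, not part of any DataRobot client):

```python
def interpret_delete_status(status_code: int) -> str:
    """Map the documented DELETE status codes to a suggested next action."""
    actions = {
        204: "deleted",            # No Content: definition removed
        403: "check-permissions",  # Forbidden: role lacks delete rights
        404: "gone",               # Not Found: already deleted or inaccessible
        409: "retry-later",        # Conflict: jobs still running in the queue
    }
    return actions.get(status_code, "unexpected")
```

A caller might poll and retry on `"retry-later"`, but treat `"check-permissions"` and `"gone"` as terminal.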
GET /api/v2/batchMonitoringJobDefinitions/{jobDefinitionId}/
Retrieve a Batch Monitoring job definition
Code samples
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/batchMonitoringJobDefinitions/{jobDefinitionId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
jobDefinitionId
path
string
true
ID of the Batch Monitoring job definition
Example responses
200 Response
{
"batchMonitoringJob" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 0 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"enabled" : false ,
"id" : "string" ,
"lastFailedRunTime" : "2019-08-24T14:15:22Z" ,
"lastScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"lastStartedJobStatus" : "INITIALIZING" ,
"lastStartedJobTime" : "2019-08-24T14:15:22Z" ,
"lastSuccessfulRunTime" : "2019-08-24T14:15:22Z" ,
"name" : "string" ,
"nextScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"updated" : "2019-08-24T14:15:22Z" ,
"updatedBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
}
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
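The `schedule` object in the response above uses five cron-style fields, each a list of values or `["*"]`. A sketch that flattens such an object into a conventional five-field cron string (the helper name is illustrative; field order follows cron convention):

```python
def schedule_to_cron(schedule: dict) -> str:
    """Render the five schedule fields as a standard cron expression.

    Cron field order: minute, hour, day of month, month, day of week.
    Each field in the schedule object is a list of values or ["*"].
    """
    order = ["minute", "hour", "dayOfMonth", "month", "dayOfWeek"]
    return " ".join(",".join(str(v) for v in schedule[f]) for f in order)
```

For the all-wildcard example above this yields `* * * * *`, i.e. the job is eligible to run every minute.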
PATCH /api/v2/batchMonitoringJobDefinitions/{jobDefinitionId}/
Update a Batch Monitoring job definition
Code samples
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/batchMonitoringJobDefinitions/{jobDefinitionId}/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter
{
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"enabled" : true ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"name" : "string" ,
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionThreshold" : 1 ,
"predictionWarningEnabled" : true ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"secondaryDatasetsConfigId" : "string" ,
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
}
Parameters
Example responses
200 Response
{
"batchMonitoringJob" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 0 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"enabled" : false ,
"id" : "string" ,
"lastFailedRunTime" : "2019-08-24T14:15:22Z" ,
"lastScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"lastStartedJobStatus" : "INITIALIZING" ,
"lastStartedJobTime" : "2019-08-24T14:15:22Z" ,
"lastSuccessfulRunTime" : "2019-08-24T14:15:22Z" ,
"name" : "string" ,
"nextScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"updated" : "2019-08-24T14:15:22Z" ,
"updatedBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
}
}
Responses
Status
Meaning
Description
Schema
200
OK
Job details for the updated Batch Monitoring job definition
BatchMonitoringJobDefinitionsResponse
403
Forbidden
You are not authorized to alter the contents of this monitoring job due to your role's permissions
None
404
Not Found
The job was deleted, never existed, or you do not have access to it
None
409
Conflict
A monitoring job with the chosen name already exists within your organization
None
422
Unprocessable Entity
Could not update the monitoring job. Possible reasons: {}
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/guardConfigurations/
List guard configurations.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/guardConfigurations/?entityId=string&entityType=customModel" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
offset
query
integer
false
This many results will be skipped.
limit
query
integer
false
At most this many results are returned.
entityId
query
string
true
Filter guard configurations by the given entity ID.
entityType
query
string
true
Entity type of the given entity ID.
Enumerated Values
Parameter
Value
entityType
customModel, customModelVersion, playground
Example responses
200 Response
{
"count" : 0 ,
"data" : [
{
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"deploymentId" : "string" ,
"description" : "string" ,
"entityId" : "string" ,
"entityType" : "customModel" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"stages" : [
"prompt"
],
"type" : "guardModel"
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
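List responses here are paginated with `count`, `next`, `previous`, and `totalCount`; `next` is a fully qualified URL or null. A hedged sketch of a pager that keeps following `next` until the listing is exhausted (`fetch` is any callable returning the parsed JSON for a URL, standing in for an authenticated HTTP GET):

```python
def iter_pages(first_url, fetch):
    """Yield every item across a paginated listing.

    `fetch(url)` must return a parsed response dict with "data" and
    "next" keys, as in the 200 response above. Iteration stops when
    "next" is null/None.
    """
    url = first_url
    while url:
        page = fetch(url)
        yield from page["data"]
        url = page.get("next")
```

In practice `fetch` would wrap an HTTP GET with the Bearer token; here it is kept abstract so the pagination logic stands alone.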
POST /api/v2/guardConfigurations/
Create a guard configuration.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/guardConfigurations/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter
{
"deploymentId" : "string" ,
"description" : "string" ,
"entityId" : "string" ,
"entityType" : "customModel" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"stages" : [
"prompt"
],
"templateId" : "string"
}
Parameters
Example responses
201 Response
{
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"deploymentId" : "string" ,
"description" : "string" ,
"entityId" : "string" ,
"entityType" : "customModel" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"stages" : [
"prompt"
],
"type" : "guardModel"
}
Responses
Status
Meaning
Description
Schema
201
Created
none
GuardConfigurationRetrieveResponse
404
Not Found
Either the resource does not exist or the user does not have permission to create the configuration.
None
409
Conflict
The proposed configuration name is already in use for the same entity.
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/guardConfigurations/predictionEnvironmentsInUse/
Show prediction environments in use for moderation.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/guardConfigurations/predictionEnvironmentsInUse/?customModelVersionId=string \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
offset
query
integer
false
This many results will be skipped.
limit
query
integer
false
At most this many results are returned.
customModelVersionId
query
string
true
Show prediction environment information for this custom model version.
Example responses
200 Response
{
"count" : 0 ,
"data" : [
{
"id" : "string" ,
"name" : "string" ,
"usedBy" : [
{
"configurationId" : "string" ,
"deploymentId" : "string" ,
"name" : "string"
}
]
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/guardConfigurations/toNewCustomModelVersion/
Apply moderation configuration to a new custom model version.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/guardConfigurations/toNewCustomModelVersion/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter
{
"customModelId" : "string" ,
"data" : [
{
"deploymentId" : "string" ,
"description" : "string" ,
"errorMessage" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"parameters" : [
"s"
],
"stages" : [
"prompt"
],
"type" : "guardModel"
}
],
"overallConfig" : {
"timeoutAction" : "block" ,
"timeoutSec" : 2
}
}
Parameters
Example responses
200 Response
{
"customModelVersionId" : "string"
}
Responses
Status
Meaning
Description
Schema
200
OK
none
GuardConfigurationToCustomModelResponse
404
Not Found
Either the resource does not exist or the user does not have permission to create the configuration.
None
409
Conflict
The destination custom model version is frozen. Create a new version to save the configuration.
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
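The request body above combines a list of guard configurations with an `overallConfig` timeout policy. A minimal sketch of assembling that body (the helper name and defaults are illustrative; the key names follow the body parameter shown above):

```python
def build_moderation_copy_request(custom_model_id, guard_configs,
                                  timeout_action="block", timeout_sec=2):
    """Assemble the request body for applying moderation configuration
    to a new custom model version: guard configurations plus an
    overall timeout policy."""
    return {
        "customModelId": custom_model_id,
        "data": list(guard_configs),
        "overallConfig": {
            "timeoutAction": timeout_action,
            "timeoutSec": timeout_sec,
        },
    }
```

The 200 response then carries the `customModelVersionId` of the newly created version.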
DELETE /api/v2/guardConfigurations/{configId}/
Delete a guard config.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/guardConfigurations/{configId}/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
configId
path
string
true
ID of the configuration.
Responses
Status
Meaning
Description
Schema
204
No Content
none
None
404
Not Found
Either the config does not exist or the user does not have permission to delete it.
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/guardConfigurations/{configId}/
Retrieve info about a guard configuration.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/guardConfigurations/{configId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
configId
path
string
true
ID of the configuration.
Example responses
200 Response
{
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"deploymentId" : "string" ,
"description" : "string" ,
"entityId" : "string" ,
"entityType" : "customModel" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"stages" : [
"prompt"
],
"type" : "guardModel"
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/guardConfigurations/{configId}/
Update a guard config.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/guardConfigurations/{configId}/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter
{
"deploymentId" : "string" ,
"description" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string"
}
Parameters
Name
In
Type
Required
Description
configId
path
string
true
ID of the configuration.
body
body
GuardConfigurationUpdate
false
none
Example responses
200 Response
{
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"deploymentId" : "string" ,
"description" : "string" ,
"entityId" : "string" ,
"entityType" : "customModel" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"stages" : [
"prompt"
],
"type" : "guardModel"
}
Responses
Status
Meaning
Description
Schema
200
OK
none
GuardConfigurationRetrieveResponse
404
Not Found
Either the resource does not exist or the user does not have permission to create the configuration.
None
409
Conflict
The proposed configuration name is already in use for the same entity.
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/guardTemplates/
List guard templates.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/guardTemplates/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
offset
query
integer
false
This many results will be skipped.
limit
query
integer
false
At most this many results are returned.
Example responses
200 Response
{
"count" : 0 ,
"data" : [
{
"allowedStages" : [
"prompt"
],
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"description" : "string" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"modifyMessage" : "string" ,
"sendNotification" : true
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiDeploymentId" : "string" ,
"orgId" : "string" ,
"productionOnly" : true ,
"type" : "guardModel"
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/guardTemplates/{templateId}/
Retrieve info about a guard template.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/guardTemplates/{templateId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
templateId
path
string
true
ID of the template.
Example responses
200 Response
{
"allowedStages" : [
"prompt"
],
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"description" : "string" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"modifyMessage" : "string" ,
"sendNotification" : true
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiDeploymentId" : "string" ,
"orgId" : "string" ,
"productionOnly" : true ,
"type" : "guardModel"
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/mlops/portablePredictionServerImage/
Fetches the latest Portable Prediction Server (PPS) Docker image. The resulting image can be loaded with docker load. Since the image can be quite large (14 GB+), consider querying its metadata in advance to check the image size, and use the content hash to verify the downloaded image afterwards. In some environments, this endpoint may issue an HTTP redirect to another service (such as S3 or GCP) using a pre-signed URL.
Code samples
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/mlops/portablePredictionServerImage/ \
-H "Authorization: Bearer {access-token}"
Responses
Status
Meaning
Description
Schema
200
OK
Download the latest available Portable Prediction Server (PPS) Docker image
None
302
Found
Redirect to another service for more efficient content download using pre-signed URL
None
404
Not Found
No PPS images found in the system
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/mlops/portablePredictionServerImage/metadata/
Fetches metadata for the currently active PPS Docker image
Code samples
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/mlops/portablePredictionServerImage/metadata/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Example responses
200 Response
{
"baseImageId" : "string" ,
"created" : "2019-08-24T14:15:22Z" ,
"datarobotRuntimeImageTag" : "string" ,
"dockerImageId" : "string" ,
"filename" : "string" ,
"hash" : "string" ,
"hashAlgorithm" : "SHA256" ,
"imageSize" : 0 ,
"shortDockerImageId" : "string"
}
Responses
Status
Meaning
Description
Schema
200
OK
Available Portable Prediction Server (PPS) image metadata
PPSImageMetadataResponse
404
Not Found
No PPS images found in the system
None
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
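The metadata's `hash` and `hashAlgorithm` fields (SHA256 in the example) let you verify a downloaded image file, as the image endpoint's description recommends. A minimal sketch using Python's hashlib (the function name is illustrative; only SHA256 is handled here):

```python
import hashlib


def verify_image(content: bytes, metadata: dict) -> bool:
    """Compare the content digest with the hash reported in the
    PPS image metadata. Only SHA256 is handled in this sketch."""
    algorithm = metadata.get("hashAlgorithm", "SHA256")
    if algorithm.upper() != "SHA256":
        raise ValueError(f"unsupported hash algorithm: {algorithm}")
    return hashlib.sha256(content).hexdigest() == metadata["hash"].lower()
```

For a 14 GB+ image you would feed the file to the hash incrementally in chunks rather than holding it in memory; the comparison logic is the same.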
GET /api/v2/overallModerationConfiguration/
Get overall moderation configuration for an entity.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows.
It depends on the following features, which are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/overallModerationConfiguration/?entityId=string&entityType=customModel" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name
In
Type
Required
Description
entityId
query
string
true
Retrieve overall moderation configuration for the given entity ID.
entityType
query
string
true
Entity type of the given entity ID.
Enumerated Values
Parameter
Value
entityType
customModel, customModelVersion, playground
Example responses
200 Response
{
"entityId" : "string" ,
"entityType" : "customModel" ,
"timeoutAction" : "block" ,
"timeoutSec" : 2 ,
"updatedAt" : "2019-08-24T14:15:22Z" ,
"updaterId" : "string"
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
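As a sketch of calling this endpoint from Python using only the standard library (the DATAROBOT_API_TOKEN environment variable and the helper names are assumptions of this example), the required query parameters can be assembled and validated client-side before the request is sent:

```python
import json
import os
import urllib.parse
import urllib.request

# Allowed entityType values, per the Enumerated Values table above.
VALID_ENTITY_TYPES = {"customModel", "customModelVersion", "playground"}

def moderation_config_url(entity_id: str, entity_type: str) -> str:
    """Build the request URL, validating the entityType enum client-side."""
    if entity_type not in VALID_ENTITY_TYPES:
        raise ValueError(f"entityType must be one of {sorted(VALID_ENTITY_TYPES)}")
    query = urllib.parse.urlencode({"entityId": entity_id, "entityType": entity_type})
    return f"https://app.datarobot.com/api/v2/overallModerationConfiguration/?{query}"

def get_moderation_config(entity_id: str, entity_type: str) -> dict:
    """Perform the authenticated GET and decode the JSON response."""
    req = urllib.request.Request(
        moderation_config_url(entity_id, entity_type),
        headers={
            "Accept": "application/json",
            # Assumes the bearer token is exported as DATAROBOT_API_TOKEN.
            "Authorization": f"Bearer {os.environ['DATAROBOT_API_TOKEN']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```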
PATCH /api/v2/overallModerationConfiguration/
Update overall moderation configuration for an entity.
Notice: This endpoint is currently in PUBLIC_PREVIEW. To reduce risk, do not use it in production workflows. See details:
This endpoint depends on the following features that are subject to change.
Feature Flag
Maturity
Enabled by default
Description
MODERATION_GUARDRAILS
PUBLIC_PREVIEW
false
Configure and use guardrails for LLM intervention and moderation.
Code samples
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/overallModerationConfiguration/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter
{
"entityId" : "string" ,
"entityType" : "customModel" ,
"timeoutAction" : "block" ,
"timeoutSec" : 0
}
Parameters
Name
In
Type
Required
Description
entityId
path
string
true
The entity ID whose overall moderation configuration should be updated.
entityType
path
string
true
Entity type of the given entity ID.
body
body
OverallModerationConfigurationUpdate
false
none
Enumerated Values
Parameter
Value
entityType
[customModel
, customModelVersion
, playground
]
Example responses
200 Response
{
"entityId" : "string" ,
"entityType" : "customModel" ,
"timeoutAction" : "block" ,
"timeoutSec" : 2 ,
"updatedAt" : "2019-08-24T14:15:22Z" ,
"updaterId" : "string"
}
Responses
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
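A minimal Python sketch of assembling the PATCH body shown in the Body parameter example above; the helper name is illustrative, and the timeoutSec check only enforces that the value is a non-negative integer as in the examples:

```python
import json

def moderation_patch_payload(entity_id, entity_type, timeout_action="block", timeout_sec=0):
    """Assemble the PATCH body for overallModerationConfiguration."""
    if not isinstance(timeout_sec, int) or isinstance(timeout_sec, bool) or timeout_sec < 0:
        raise ValueError("timeoutSec must be a non-negative integer")
    return {
        "entityId": entity_id,
        "entityType": entity_type,
        "timeoutAction": timeout_action,
        "timeoutSec": timeout_sec,
    }

# Serialized body to send with Content-Type: application/json:
body = json.dumps(moderation_patch_payload("my-entity-id", "customModel", "block", 2))
```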
Schemas
AzureDataStreamer
{
"credentialId" : "string" ,
"format" : "csv" ,
"type" : "azure" ,
"url" : "string"
}
Stream CSV data chunks from Azure
Properties
Name
Type
Required
Restrictions
Description
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
Use the specified credential to access the url
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
format
string
false
Type of input file format
type
string
true
Type name for this intake type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
anonymous
[redacted]
format
[csv
, parquet
]
type
azure
AzureIntake
{
"credentialId" : "string" ,
"format" : "csv" ,
"type" : "azure" ,
"url" : "string"
}
Stream CSV data chunks from Azure
Properties
Name
Type
Required
Restrictions
Description
credentialId
string¦null
false
Use the specified credential to access the url
format
string
false
Type of input file format
type
string
true
Type name for this intake type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
format
[csv
, parquet
]
type
azure
AzureOutput
{
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
}
Save CSV data chunks to Azure Blob Storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
string¦null
false
Use the specified credential to access the url
format
string
false
Type of output file format
partitionColumns
[string]
false
maxItems: 100
For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (that is, if the output URL ends with a slash, "/").
type
string
true
Type name for this output type
url
string(url)
true
URL for the file or directory
Enumerated Values
Property
Value
format
[csv
, parquet
]
type
azure
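A sketch that builds an AzureOutput settings dict and enforces the partitionColumns rules stated above (the helper and argument names are illustrative, not part of any client library):

```python
def azure_output_settings(url, credential_id=None, fmt="csv", partition_columns=None):
    """Build an AzureOutput settings dict, enforcing two documented rules:
    partitionColumns is for Parquet directory-scoring only, and scoring to a
    directory (URL ending in "/") requires at least one partition column."""
    if fmt not in {"csv", "parquet"}:
        raise ValueError("format must be csv or parquet")
    if url.endswith("/") and not partition_columns:
        raise ValueError("directory output requires at least one partition column")
    if partition_columns and fmt != "parquet":
        raise ValueError("partitionColumns applies to Parquet directory-scoring only")
    settings = {"type": "azure", "url": url, "format": fmt}
    if credential_id is not None:
        settings["credentialId"] = credential_id
    if partition_columns:
        settings["partitionColumns"] = list(partition_columns)
    return settings
```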
AzureOutputAdaptor
{
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
}
Save CSV data chunks to Azure Blob Storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
Use the specified credential to access the url
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
format
string
false
Type of output file format
partitionColumns
[string]
false
maxItems: 100
For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (that is, if the output URL ends with a slash, "/").
type
string
true
Type name for this output type
url
string(url)
true
URL for the file or directory
Enumerated Values
Property
Value
anonymous
[redacted]
format
[csv
, parquet
]
type
azure
BatchJobCSVSettings
{
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
}
The CSV settings used for this job
Properties
Name
Type
Required
Restrictions
Description
delimiter
any
true
CSV fields are delimited by this character. Use the string "tab" to denote TSV (TAB separated values).
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
maxLength: 1 minLength: 1
none
continued
Name
Type
Required
Restrictions
Description
encoding
string
true
The encoding to be used for intake and output. For example (but not limited to): "shift_jis", "latin_1" or "mskanji".
quotechar
string
true
maxLength: 1 minLength: 1
Fields containing the delimiter or newlines must be quoted using this character.
Enumerated Values
Property
Value
anonymous
tab
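The documented restrictions above (single-character delimiter and quotechar, with "tab" as a named alias for TSV) can be checked client-side before submitting a job. A minimal sketch, with an illustrative helper name:

```python
def validate_csv_settings(settings: dict) -> dict:
    """Validate a BatchJobCSVSettings payload against the documented
    restrictions and fill in the defaults shown in the example."""
    delimiter = settings.get("delimiter", ",")
    # "tab" is the named alias for TSV; otherwise the delimiter is one character.
    if delimiter != "tab" and len(delimiter) != 1:
        raise ValueError('delimiter must be a single character, or "tab" for TSV')
    quotechar = settings.get("quotechar", '"')
    if len(quotechar) != 1:
        raise ValueError("quotechar must be exactly one character")
    encoding = settings.get("encoding", "utf-8")
    return {"delimiter": delimiter, "encoding": encoding, "quotechar": quotechar}
```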
BatchJobCreatedBy
{
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
}
Who created this job
Properties
Name
Type
Required
Restrictions
Description
fullName
string¦null
true
The full name of the user who created this job (if defined by the user)
userId
string
true
The User ID of the user who created this job
username
string
true
The username (e-mail address) of the user who created this job
BatchJobDefinitionResponse
{
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
}
The Batch Prediction Job Definition linking to this job, if any.
Properties
Name
Type
Required
Restrictions
Description
createdBy
string
true
The ID of the creator of this job definition
id
string
true
The ID of the Batch Prediction job definition
name
string
true
A human-readable name for the definition, must be unique across organisations
BatchJobDefinitionsSpecResponse
{
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 0 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
}
The Batch Monitoring Job specification to be put on the queue at intervals
Properties
Name
Type
Required
Restrictions
Description
abortOnError
boolean
true
Should this job abort if too many errors are encountered
batchJobType
string
false
Batch job type.
chunkSize
any
false
Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
integer
false
maximum: 41943040 minimum: 20
none
continued
Name
Type
Required
Restrictions
Description
columnNamesRemapping
any
false
Remap (rename or remove columns from) the output from this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
object
false
Provide a dictionary with key/value pairs to remap (deprecated)
xor
Name
Type
Required
Restrictions
Description
» anonymous
[BatchJobRemapping ]
false
maxItems: 1000
Provide a list of items to remap
continued
Name
Type
Required
Restrictions
Description
csvSettings
BatchJobCSVSettings
true
The CSV settings used for this job
deploymentId
string
false
ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling
boolean
true
Skip row by row error handling
explanationAlgorithm
string
false
Which algorithm will be used to calculate prediction explanations
explanationClassNames
[string]
false
maxItems: 10 minItems: 1
List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses
integer
false
maximum: 10 minimum: 1
Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus
boolean
true
Include prediction status column in the output
includeProbabilities
boolean
true
Include probabilities for all classes
includeProbabilitiesClasses
[string]
true
maxItems: 100
Include only probabilities for these specific class names.
intakeSettings
any
true
The intake option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureDataStreamer
false
Stream CSV data chunks from Azure
xor
Name
Type
Required
Restrictions
Description
» anonymous
DataStageDataStreamer
false
Stream CSV data chunks from data stage storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
CatalogDataStreamer
false
Stream CSV data chunks from AI catalog dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPDataStreamer
false
Stream CSV data chunks from Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryDataStreamer
false
Stream CSV data chunks from Big Query using GCS
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3DataStreamer
false
Stream CSV data chunks from Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeDataStreamer
false
Stream CSV data chunks from Snowflake
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseDataStreamer
false
Stream CSV data chunks from Azure Synapse
xor
Name
Type
Required
Restrictions
Description
» anonymous
DSSDataStreamer
false
Stream CSV data chunks from DSS dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
HTTPDataStreamer
false
Stream CSV data chunks from HTTP
xor
Name
Type
Required
Restrictions
Description
» anonymous
JDBCDataStreamer
false
Stream CSV data chunks from JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileDataStreamer
false
Stream CSV data chunks from local file storage
continued
Name
Type
Required
Restrictions
Description
maxExplanations
integer
true
maximum: 100 minimum: 0
Number of explanations requested. Will be ordered by strength.
maxNgramExplanations
any
false
The maximum number of text ngram explanations to supply per row of the dataset. The default recommended maxNgramExplanations is all (no limit).
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
integer
false
minimum: 0
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
modelId
string
false
ID of the leaderboard model used by this job to process the predictions dataset
modelPackageId
string
false
ID of the model package from the registry used by this job to process the predictions dataset
monitoringAggregation
MonitoringAggregation
false
Defines the aggregation policy for monitoring jobs.
monitoringBatchPrefix
string¦null
false
Name of the batch to create with this job
monitoringColumns
MonitoringColumnsMapping
false
Column names mapping for monitoring
monitoringOutputSettings
MonitoringOutputSettings
false
Output settings for monitoring jobs
numConcurrent
integer
false
minimum: 0
Number of simultaneous requests to run against the prediction instance
outputSettings
any
false
The output option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureOutputAdaptor
false
Save CSV data chunks to Azure Blob Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPOutputAdaptor
false
Save CSV data chunks to Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryOutputAdaptor
false
Save CSV data chunks to Google BigQuery in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3OutputAdaptor
false
Saves CSV data chunks to Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeOutputAdaptor
false
Save CSV data chunks to Snowflake in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseOutputAdaptor
false
Save CSV data chunks to Azure Synapse in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
HttpOutputAdaptor
false
Save CSV data chunks to HTTP data endpoint
xor
Name
Type
Required
Restrictions
Description
» anonymous
JdbcOutputAdaptor
false
Save CSV data chunks via JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileOutputAdaptor
false
Save CSV data chunks to local file storage
continued
Name
Type
Required
Restrictions
Description
passthroughColumns
[string]
false
maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet
string
false
Pass through all columns from the original dataset
pinnedModelId
string
false
Specify a model ID used for scoring
predictionInstance
BatchJobPredictionInstance
false
Override the default prediction instance from the deployment when scoring this job.
predictionWarningEnabled
boolean¦null
false
Enable prediction warnings.
redactedFields
[string]
true
A list of qualified field names from intake and/or output settings that were redacted due to permissions and sharing settings. For example: intakeSettings.dataStoreId
skipDriftTracking
boolean
true
Skip drift tracking for this job.
thresholdHigh
number
false
Compute explanations for predictions above this threshold
thresholdLow
number
false
Compute explanations for predictions below this threshold
timeseriesSettings
any
false
Time Series settings, included if this job is a Time Series job.
oneOf
xor
Enumerated Values
Property
Value
batchJobType
[monitoring
, prediction
]
anonymous
[auto
, fixed
, dynamic
]
explanationAlgorithm
[shap
, xemp
]
anonymous
all
passthroughColumnsSet
all
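The chunkSize oneOf above accepts either a named strategy or a fixed byte count within the documented bounds. A small client-side validator, as a sketch (names are illustrative):

```python
# Named strategies and byte bounds from the schema's Enumerated Values
# and Restrictions above.
CHUNK_STRATEGIES = {"auto", "fixed", "dynamic"}
CHUNK_MIN_BYTES, CHUNK_MAX_BYTES = 20, 41943040

def validate_chunk_size(value):
    """Accept either a named chunking strategy or a fixed size in bytes."""
    if isinstance(value, str):
        if value not in CHUNK_STRATEGIES:
            raise ValueError(f"unknown chunking strategy {value!r}")
        return value
    if isinstance(value, int) and not isinstance(value, bool):
        if not CHUNK_MIN_BYTES <= value <= CHUNK_MAX_BYTES:
            raise ValueError("fixed chunk size must be between 20 and 41943040 bytes")
        return value
    raise TypeError("chunkSize must be a string strategy or an integer byte count")
```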
BatchJobLinks
{
"csvUpload" : "string" ,
"download" : "string" ,
"self" : "string"
}
Links useful for this job
Properties
Name
Type
Required
Restrictions
Description
csvUpload
string(url)
false
The URL used to upload the dataset for this job. Only available for localFile intake.
download
string¦null
false
The URL used to download the results from this job. Only available for localFile outputs. Will be null if the download is not yet available.
self
string(url)
true
The URL used to access this job.
BatchJobListResponse
{
"count" : 0 ,
"data" : [
{
"batchMonitoringJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"batchPredictionJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"elapsedTimeSec" : 0 ,
"failedRows" : 0 ,
"hidden" : "2019-08-24T14:15:22Z" ,
"id" : "string" ,
"intakeDatasetDisplayName" : "string" ,
"jobIntakeSize" : 0 ,
"jobOutputSize" : 0 ,
"jobSpec" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"links" : {
"csvUpload" : "string" ,
"download" : "string" ,
"self" : "string"
},
"logs" : [
"string"
],
"monitoringBatchId" : "string" ,
"percentageCompleted" : 100 ,
"queuePosition" : 0 ,
"queued" : true ,
"resultsDeleted" : true ,
"scoredRows" : 0 ,
"skippedRows" : 0 ,
"source" : "string" ,
"status" : "INITIALIZING" ,
"statusDetails" : "string"
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Properties
Name
Type
Required
Restrictions
Description
count
integer
false
Number of items returned on this page.
data
[BatchJobResponse ]
true
maxItems: 10000
An array of jobs
next
string(uri)¦null
true
URL pointing to the next page (if null, there is no next page).
previous
string(uri)¦null
true
URL pointing to the previous page (if null, there is no previous page).
totalCount
integer
true
The total number of items across all pages.
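Because next is null on the last page, all jobs can be collected by following the pagination links of BatchJobListResponse. A sketch with an injectable fetch function (real use would perform the authenticated GET; the function names are illustrative):

```python
def iter_batch_jobs(fetch_page, first_url):
    """Walk BatchJobListResponse pages by following each page's next link.

    fetch_page is any callable that maps a URL to the decoded JSON dict;
    in real use it would perform the authenticated GET request."""
    url = first_url
    while url is not None:
        page = fetch_page(url)
        yield from page["data"]
        url = page["next"]  # null (None) on the last page
```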
BatchJobPredictionInstance
{
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
}
Override the default prediction instance from the deployment when scoring this job.
Properties
Name
Type
Required
Restrictions
Description
apiKey
string
false
By default, prediction requests will use the API key of the user that created the job. This allows you to make requests on behalf of other users.
datarobotKey
string
false
If running a job against a prediction instance in the Managed AI Cloud, you must provide the organization level DataRobot-Key.
hostName
string
true
Override the default host name of the deployment with this.
sslEnabled
boolean
true
Use SSL (HTTPS) when communicating with the overridden prediction server.
BatchJobRemapping
{
"inputName" : "string" ,
"outputName" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
inputName
string
true
Rename column with this name
outputName
string¦null
true
Rename column to this name (leave as null to remove from the output)
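When migrating from the deprecated dict form of columnNamesRemapping to the BatchJobRemapping list form, the conversion is mechanical. A sketch (the helper name is illustrative):

```python
def remapping_to_list(mapping):
    """Convert the deprecated dict form of columnNamesRemapping into the
    BatchJobRemapping list form; an outputName of None removes the column."""
    return [{"inputName": src, "outputName": dst} for src, dst in mapping.items()]
```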
BatchJobResponse
{
"batchMonitoringJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"batchPredictionJobDefinition" : {
"createdBy" : "string" ,
"id" : "string" ,
"name" : "string"
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"elapsedTimeSec" : 0 ,
"failedRows" : 0 ,
"hidden" : "2019-08-24T14:15:22Z" ,
"id" : "string" ,
"intakeDatasetDisplayName" : "string" ,
"jobIntakeSize" : 0 ,
"jobOutputSize" : 0 ,
"jobSpec" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"links" : {
"csvUpload" : "string" ,
"download" : "string" ,
"self" : "string"
},
"logs" : [
"string"
],
"monitoringBatchId" : "string" ,
"percentageCompleted" : 100 ,
"queuePosition" : 0 ,
"queued" : true ,
"resultsDeleted" : true ,
"scoredRows" : 0 ,
"skippedRows" : 0 ,
"source" : "string" ,
"status" : "INITIALIZING" ,
"statusDetails" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
batchMonitoringJobDefinition
BatchJobDefinitionResponse
false
The Batch Monitoring Job Definition linking to this job, if any.
batchPredictionJobDefinition
BatchJobDefinitionResponse
false
The Batch Prediction Job Definition linking to this job, if any.
created
string(date-time)
true
When was this job created
createdBy
BatchJobCreatedBy
true
Who created this job
elapsedTimeSec
integer
true
minimum: 0
Number of seconds the job has been processing for
failedRows
integer
true
minimum: 0
Number of rows that have failed scoring
hidden
string(date-time)
false
When this job was last hidden; blank if visible
id
string
true
The ID of the Batch job
intakeDatasetDisplayName
string¦null
false
If applicable (e.g. for AI catalog), will contain the dataset name used for the intake dataset.
jobIntakeSize
integer¦null
true
minimum: 0
Number of bytes in the intake dataset for this job
jobOutputSize
integer¦null
true
minimum: 0
Number of bytes in the output dataset for this job
jobSpec
BatchJobSpecResponse
true
The job configuration used to create this job
links
BatchJobLinks
true
Links useful for this job
logs
[string]
true
The job log.
monitoringBatchId
string¦null
true
Id of the monitoring batch created by this job. Only present if the job runs on a deployment with batch monitoring enabled.
percentageCompleted
number
true
maximum: 100 minimum: 0
Indicates job progress, based on the number of already processed rows in the dataset
queuePosition
integer¦null
false
minimum: 0
To ensure a dedicated prediction instance is not overloaded, only one job will be run against it at a time. This is the number of jobs awaiting processing before this job starts running. May not be available in all environments.
queued
boolean
true
The job has been put on the queue for execution.
resultsDeleted
boolean
false
Indicates if the job was subject to garbage collection and had its artifacts deleted (output files, if any, and scoring data on local storage)
scoredRows
integer
true
minimum: 0
Number of rows that have been used in prediction computation
skippedRows
integer
true
minimum: 0
Number of rows that have been skipped during scoring. May contain a non-zero value only for time series predictions, when the provided dataset contains more than the required historical rows.
source
string
false
Source from which the batch job was started
status
string
true
The current job status
statusDetails
string
true
Explanation for current status
Enumerated Values
Property
Value
status
[INITIALIZING
, RUNNING
, COMPLETED
, ABORTED
, FAILED
]
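Per the status enumeration above, COMPLETED, ABORTED, and FAILED are terminal, while INITIALIZING and RUNNING are not. A polling sketch with injectable fetch and sleep functions (the helper names are illustrative):

```python
import time

# Terminal values of the status enumeration above.
TERMINAL_STATUSES = {"COMPLETED", "ABORTED", "FAILED"}

def is_finished(job):
    """True once the job has reached a terminal status."""
    return job["status"] in TERMINAL_STATUSES

def wait_for_job(fetch_job, job_url, poll_interval=5.0, sleep=time.sleep):
    """Poll the job's self link until it leaves INITIALIZING/RUNNING.

    fetch_job is a callable returning the decoded BatchJobResponse dict;
    in real use it would perform the authenticated GET."""
    while True:
        job = fetch_job(job_url)
        if is_finished(job):
            return job
        sleep(poll_interval)
```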
BatchJobSpecResponse
{
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
}
The job configuration used to create this job
Properties
Name
Type
Required
Restrictions
Description
abortOnError
boolean
true
Should this job abort if too many errors are encountered
batchJobType
string
false
Batch job type.
chunkSize
any
false
Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
integer
false
maximum: 41943040 minimum: 20
none
continued
Name
Type
Required
Restrictions
Description
columnNamesRemapping
any
false
Remap (rename or remove columns from) the output from this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
object
false
Provide a dictionary with key/value pairs to remap (deprecated)
xor
Name
Type
Required
Restrictions
Description
» anonymous
[BatchJobRemapping ]
false
maxItems: 1000
Provide a list of items to remap
continued
Name
Type
Required
Restrictions
Description
csvSettings
BatchJobCSVSettings
true
The CSV settings used for this job
deploymentId
string
false
ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling
boolean
true
Skip row by row error handling
explanationAlgorithm
string
false
Which algorithm will be used to calculate prediction explanations
explanationClassNames
[string]
false
maxItems: 10 minItems: 1
List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses
integer
false
maximum: 10 minimum: 1
Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus
boolean
true
Include prediction status column in the output
includeProbabilities
boolean
true
Include probabilities for all classes
includeProbabilitiesClasses
[string]
true
maxItems: 100
Include only probabilities for these specific class names.
intakeSettings
any
true
The intake option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureDataStreamer
false
Stream CSV data chunks from Azure
xor
Name
Type
Required
Restrictions
Description
» anonymous
DataStageDataStreamer
false
Stream CSV data chunks from data stage storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
CatalogDataStreamer
false
Stream CSV data chunks from AI catalog dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPDataStreamer
false
Stream CSV data chunks from Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryDataStreamer
false
Stream CSV data chunks from Big Query using GCS
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3DataStreamer
false
Stream CSV data chunks from Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeDataStreamer
false
Stream CSV data chunks from Snowflake
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseDataStreamer
false
Stream CSV data chunks from Azure Synapse
xor
Name
Type
Required
Restrictions
Description
» anonymous
DSSDataStreamer
false
Stream CSV data chunks from DSS dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
HTTPDataStreamer
false
Stream CSV data chunks from HTTP
xor
Name
Type
Required
Restrictions
Description
» anonymous
JDBCDataStreamer
false
Stream CSV data chunks from JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileDataStreamer
false
Stream CSV data chunks from local file storage
continued
Name
Type
Required
Restrictions
Description
maxExplanations
integer
true
maximum: 100 minimum: 0
Number of explanations requested. Will be ordered by strength.
maxNgramExplanations
any
false
The maximum number of text ngram explanations to supply per row of the dataset. The default and recommended maxNgramExplanations is all (no limit).
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
integer
false
minimum: 0
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
modelId
string
false
ID of the leaderboard model that is used in the job for processing the predictions dataset
modelPackageId
string
false
ID of the model package from the registry that is used in the job for processing the predictions dataset
monitoringAggregation
MonitoringAggregation
false
Defines the aggregation policy for monitoring jobs.
monitoringBatchPrefix
string¦null
false
Name of the batch to create with this job
monitoringColumns
MonitoringColumnsMapping
false
Column names mapping for monitoring
monitoringOutputSettings
MonitoringOutputSettings
false
Output settings for monitoring jobs
numConcurrent
integer
false
minimum: 1
Number of simultaneous requests to run against the prediction instance
outputSettings
any
false
The output option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureOutputAdaptor
false
Save CSV data chunks to Azure Blob Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPOutputAdaptor
false
Save CSV data chunks to Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryOutputAdaptor
false
Save CSV data chunks to Google BigQuery in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3OutputAdaptor
false
Save CSV data chunks to Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeOutputAdaptor
false
Save CSV data chunks to Snowflake in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseOutputAdaptor
false
Save CSV data chunks to Azure Synapse in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
HttpOutputAdaptor
false
Save CSV data chunks to HTTP data endpoint
xor
Name
Type
Required
Restrictions
Description
» anonymous
JdbcOutputAdaptor
false
Save CSV data chunks via JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileOutputAdaptor
false
Save CSV data chunks to local file storage
continued
Name
Type
Required
Restrictions
Description
passthroughColumns
[string]
false
maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet
string
false
Pass through all columns from the original dataset
pinnedModelId
string
false
Specify a model ID used for scoring
predictionInstance
BatchJobPredictionInstance
false
Override the default prediction instance from the deployment when scoring this job.
predictionWarningEnabled
boolean¦null
false
Enable prediction warnings.
redactedFields
[string]
true
A list of qualified field names from intakeSettings and/or outputSettings that were redacted due to permissions and sharing settings. For example: intakeSettings.dataStoreId
skipDriftTracking
boolean
true
Skip drift tracking for this job.
thresholdHigh
number
false
Compute explanations for predictions above this threshold
thresholdLow
number
false
Compute explanations for predictions below this threshold
timeseriesSettings
any
false
Time Series settings, included if this job is a Time Series job.
oneOf: BatchJobTimeSeriesSettingsForecast xor BatchJobTimeSeriesSettingsHistorical
Enumerated Values
Property
Value
batchJobType
[monitoring, prediction]
anonymous
[auto, fixed, dynamic]
explanationAlgorithm
[shap, xemp]
anonymous
all
passthroughColumnsSet
all
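The mutual-exclusion rule between explanationClassNames and explanationNumTopClasses described above can be checked client-side before submitting a job. This is a sketch; the helper name and the pre-validation step are assumptions, not part of the API:

```python
def resolve_explanation_params(class_names=None, num_top_classes=None):
    """Apply the documented defaulting rule for multiclass explanations.

    explanationClassNames and explanationNumTopClasses are mutually
    exclusive; if neither is given, explanationNumTopClasses=1 is assumed.
    """
    if class_names is not None and num_top_classes is not None:
        raise ValueError(
            "explanationClassNames and explanationNumTopClasses "
            "are mutually exclusive"
        )
    if class_names is not None:
        # The schema allows between 1 and 10 class names.
        if not 1 <= len(class_names) <= 10:
            raise ValueError("explanationClassNames takes 1-10 class names")
        return {"explanationClassNames": list(class_names)}
    if num_top_classes is not None:
        if not 1 <= num_top_classes <= 10:
            raise ValueError("explanationNumTopClasses must be 1-10")
        return {"explanationNumTopClasses": num_top_classes}
    return {"explanationNumTopClasses": 1}  # documented default
```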
BatchJobTimeSeriesSettingsForecast
{
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
Properties
Name
Type
Required
Restrictions
Description
forecastPoint
string(date-time)
false
Used for forecast predictions in order to override the inferred forecast point from the dataset.
relaxKnownInAdvanceFeaturesCheck
boolean
false
If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type
string
true
Forecast mode makes predictions using the forecastPoint or rows in the dataset without a target.
Enumerated Values
Property
Value
type
forecast
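Constructing the forecast settings shown above is straightforward; the only required field is type. A client-side sketch (the helper function is an assumption for illustration, not part of any SDK):

```python
from datetime import datetime, timezone

def forecast_settings(forecast_point=None):
    """Build a BatchJobTimeSeriesSettingsForecast payload (sketch).

    type is required; forecastPoint is optional and, when given,
    overrides the forecast point inferred from the dataset.
    """
    settings = {"type": "forecast"}
    if forecast_point is not None:
        # ISO-formatted datetime, as in the example payload above.
        settings["forecastPoint"] = forecast_point.strftime("%Y-%m-%dT%H:%M:%SZ")
    return settings

point = datetime(2019, 8, 24, 14, 15, 22, tzinfo=timezone.utc)
body = forecast_settings(point)
```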
BatchJobTimeSeriesSettingsHistorical
{
"predictionsEndDate" : "2019-08-24T14:15:22Z" ,
"predictionsStartDate" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "historical"
}
Properties
Name
Type
Required
Restrictions
Description
predictionsEndDate
string(date-time)
false
Used for historical predictions to override the date up to which predictions should be calculated. By default, the value is inferred automatically from the dataset.
predictionsStartDate
string(date-time)
false
Used for historical predictions to override the date from which predictions should be calculated. By default, the value is inferred automatically from the dataset.
relaxKnownInAdvanceFeaturesCheck
boolean
false
If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type
string
true
Historical mode enables bulk predictions which calculates predictions for all possible forecast points and forecast distances in the dataset within the predictionsStartDate/predictionsEndDate range.
Enumerated Values
Property
Value
type
historical
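The historical settings above take an optional date range. A client-side sketch that also sanity-checks the range before submission (the helper and the ordering check are assumptions; the server infers omitted dates from the dataset):

```python
def historical_settings(start=None, end=None):
    """Build a BatchJobTimeSeriesSettingsHistorical payload (sketch).

    Both ISO-formatted dates are optional (inferred from the dataset by
    default), but when both are supplied the start must not be after
    the end.
    """
    settings = {"type": "historical"}
    if start is not None:
        settings["predictionsStartDate"] = start
    if end is not None:
        settings["predictionsEndDate"] = end
    # ISO-8601 strings in the same timezone compare lexicographically.
    if start is not None and end is not None and start > end:
        raise ValueError("predictionsStartDate must not be after predictionsEndDate")
    return settings
```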
BatchMonitoringJobCreate
{
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionThreshold" : 1 ,
"predictionWarningEnabled" : true ,
"secondaryDatasetsConfigId" : "string" ,
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
}
Properties
Name
Type
Required
Restrictions
Description
abortOnError
boolean
true
Should this job abort if too many errors are encountered
batchJobType
string
false
Batch job type.
chunkSize
any
false
Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
integer
false
maximum: 41943040 minimum: 20
none
continued
Name
Type
Required
Restrictions
Description
columnNamesRemapping
any
false
Remap (rename or remove columns from) the output from this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
object
false
Provide a dictionary with key/value pairs to remap (deprecated)
xor
Name
Type
Required
Restrictions
Description
» anonymous
[BatchPredictionJobRemapping ]
false
maxItems: 1000
Provide a list of items to remap
continued
Name
Type
Required
Restrictions
Description
csvSettings
BatchPredictionJobCSVSettings
true
The CSV settings used for this job
deploymentId
string
false
ID of the deployment that is used in the job for processing the predictions dataset
disableRowLevelErrorHandling
boolean
true
Skip row-by-row error handling
explanationAlgorithm
string
false
Which algorithm will be used to calculate prediction explanations
explanationClassNames
[string]
false
maxItems: 10 minItems: 1
List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses
integer
false
maximum: 10 minimum: 1
Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus
boolean
true
Include prediction status column in the output
includeProbabilities
boolean
true
Include probabilities for all classes
includeProbabilitiesClasses
[string]
true
maxItems: 100
Include only probabilities for these specific class names.
intakeSettings
any
true
The intake option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureIntake
false
Stream CSV data chunks from Azure
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryIntake
false
Stream CSV data chunks from BigQuery using GCS
xor
Name
Type
Required
Restrictions
Description
» anonymous
DataStageIntake
false
Stream CSV data chunks from data stage storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
Catalog
false
Stream CSV data chunks from AI catalog dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
DSS
false
Stream CSV data chunks from DSS dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
FileSystemIntake
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPIntake
false
Stream CSV data chunks from Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
HTTPIntake
false
Stream CSV data chunks from HTTP
xor
Name
Type
Required
Restrictions
Description
» anonymous
JDBCIntake
false
Stream CSV data chunks from JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileIntake
false
Stream CSV data chunks from local file storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3Intake
false
Stream CSV data chunks from Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeIntake
false
Stream CSV data chunks from Snowflake
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseIntake
false
Stream CSV data chunks from Azure Synapse
continued
Name
Type
Required
Restrictions
Description
maxExplanations
integer
true
maximum: 100 minimum: 0
Number of explanations requested. Will be ordered by strength.
modelId
string
false
ID of the leaderboard model that is used in the job for processing the predictions dataset
modelPackageId
string
false
ID of the model package from the registry that is used in the job for processing the predictions dataset
monitoringAggregation
MonitoringAggregation
false
Defines the aggregation policy for monitoring jobs.
monitoringBatchPrefix
string¦null
false
Name of the batch to create with this job
monitoringColumns
MonitoringColumnsMapping
false
Column names mapping for monitoring
monitoringOutputSettings
MonitoringOutputSettings
false
Output settings for monitoring jobs
numConcurrent
integer
false
minimum: 1
Number of simultaneous requests to run against the prediction instance
outputSettings
any
false
The output option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureOutput
false
Save CSV data chunks to Azure Blob Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryOutput
false
Save CSV data chunks to Google BigQuery in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
FileSystemOutput
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPOutput
false
Save CSV data chunks to Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
HTTPOutput
false
Save CSV data chunks to HTTP data endpoint
xor
Name
Type
Required
Restrictions
Description
» anonymous
JDBCOutput
false
Save CSV data chunks via JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileOutput
false
Save CSV data chunks to local file storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3Output
false
Save CSV data chunks to Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeOutput
false
Save CSV data chunks to Snowflake in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseOutput
false
Save CSV data chunks to Azure Synapse in bulk
continued
Name
Type
Required
Restrictions
Description
passthroughColumns
[string]
false
maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet
string
false
Pass through all columns from the original dataset
pinnedModelId
string
false
Specify a model ID used for scoring
predictionInstance
BatchPredictionJobPredictionInstance
false
Override the default prediction instance from the deployment when scoring this job.
predictionThreshold
number
false
maximum: 1 minimum: 0
Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This value can be set between 0.0 and 1.0.
predictionWarningEnabled
boolean¦null
false
Enable prediction warnings.
secondaryDatasetsConfigId
string
false
Configuration id for secondary datasets to use when making a prediction.
skipDriftTracking
boolean
true
Skip drift tracking for this job.
thresholdHigh
number
false
Compute explanations for predictions above this threshold
thresholdLow
number
false
Compute explanations for predictions below this threshold
timeseriesSettings
any
false
Time Series settings, included if this job is a Time Series job.
oneOf: BatchJobTimeSeriesSettingsForecast xor BatchJobTimeSeriesSettingsHistorical
Enumerated Values
Property
Value
batchJobType
[monitoring, prediction]
anonymous
[auto, fixed, dynamic]
explanationAlgorithm
[shap, xemp]
passthroughColumnsSet
all
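A minimal BatchMonitoringJobCreate body only needs the properties marked required in the table above; everything else falls back to server-side defaults. A sketch of assembling such a payload (the helper name and the placeholder deployment ID are assumptions; values mirror the example payload):

```python
def minimal_monitoring_job(deployment_id, intake_settings):
    """Assemble a minimal BatchMonitoringJobCreate body (sketch).

    Only properties marked required in the schema are included.
    """
    return {
        "abortOnError": True,
        "batchJobType": "monitoring",
        "csvSettings": {"delimiter": ",", "encoding": "utf-8", "quotechar": '"'},
        "deploymentId": deployment_id,
        "disableRowLevelErrorHandling": False,
        "includePredictionStatus": False,
        "includeProbabilities": True,
        "includeProbabilitiesClasses": [],
        "intakeSettings": intake_settings,
        "maxExplanations": 0,  # schema range: 0-100
        "skipDriftTracking": False,
    }

# Placeholder deployment ID; intake type matches the example above.
body = minimal_monitoring_job("your-deployment-id", {"type": "localFile"})
```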
BatchMonitoringJobDefinitionsCreate
{
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"enabled" : true ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"name" : "string" ,
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionThreshold" : 1 ,
"predictionWarningEnabled" : true ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"secondaryDatasetsConfigId" : "string" ,
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
}
Properties
Name
Type
Required
Restrictions
Description
abortOnError
boolean
true
Should this job abort if too many errors are encountered
batchJobType
string
false
Batch job type.
chunkSize
any
false
Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
integer
false
maximum: 41943040 minimum: 20
none
continued
Name
Type
Required
Restrictions
Description
columnNamesRemapping
any
false
Remap (rename or remove columns from) the output from this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
object
false
Provide a dictionary with key/value pairs to remap (deprecated)
xor
Name
Type
Required
Restrictions
Description
» anonymous
[BatchPredictionJobRemapping ]
false
maxItems: 1000
Provide a list of items to remap
continued
Name
Type
Required
Restrictions
Description
csvSettings
BatchPredictionJobCSVSettings
true
The CSV settings used for this job
deploymentId
string
true
ID of the deployment that the monitoring job is associated with.
disableRowLevelErrorHandling
boolean
true
Skip row-by-row error handling
enabled
boolean
false
If this job definition is enabled as a scheduled job. Optional if no schedule is supplied.
explanationAlgorithm
string
false
Which algorithm will be used to calculate prediction explanations
explanationClassNames
[string]
false
maxItems: 10 minItems: 1
List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses
integer
false
maximum: 10 minimum: 1
Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus
boolean
true
Include prediction status column in the output
includeProbabilities
boolean
true
Include probabilities for all classes
includeProbabilitiesClasses
[string]
true
maxItems: 100
Include only probabilities for these specific class names.
intakeSettings
any
true
The intake option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureIntake
false
Stream CSV data chunks from Azure
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryIntake
false
Stream CSV data chunks from BigQuery using GCS
xor
Name
Type
Required
Restrictions
Description
» anonymous
DataStageIntake
false
Stream CSV data chunks from data stage storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
Catalog
false
Stream CSV data chunks from AI catalog dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
DSS
false
Stream CSV data chunks from DSS dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
FileSystemIntake
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPIntake
false
Stream CSV data chunks from Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
HTTPIntake
false
Stream CSV data chunks from HTTP
xor
Name
Type
Required
Restrictions
Description
» anonymous
JDBCIntake
false
Stream CSV data chunks from JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileIntake
false
Stream CSV data chunks from local file storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3Intake
false
Stream CSV data chunks from Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeIntake
false
Stream CSV data chunks from Snowflake
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseIntake
false
Stream CSV data chunks from Azure Synapse
continued
Name
Type
Required
Restrictions
Description
maxExplanations
integer
true
maximum: 100 minimum: 0
Number of explanations requested. Will be ordered by strength.
modelId
string
false
ID of the leaderboard model that is used in the job for processing the predictions dataset
modelPackageId
string
false
ID of the model package from the registry that is used in the job for processing the predictions dataset
monitoringAggregation
MonitoringAggregation
false
Defines the aggregation policy for monitoring jobs.
monitoringBatchPrefix
string¦null
false
Name of the batch to create with this job
monitoringColumns
MonitoringColumnsMapping
false
Column names mapping for monitoring
monitoringOutputSettings
MonitoringOutputSettings
false
Output settings for monitoring jobs
name
string
false
maxLength: 100 minLength: 1
A human-readable name for the definition; it must be unique across organisations. If left out, the backend will generate one for you.
numConcurrent
integer
false
minimum: 1
Number of simultaneous requests to run against the prediction instance
outputSettings
any
false
The output option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureOutput
false
Save CSV data chunks to Azure Blob Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryOutput
false
Save CSV data chunks to Google BigQuery in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
FileSystemOutput
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPOutput
false
Save CSV data chunks to Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
HTTPOutput
false
Save CSV data chunks to HTTP data endpoint
xor
Name
Type
Required
Restrictions
Description
» anonymous
JDBCOutput
false
Save CSV data chunks via JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileOutput
false
Save CSV data chunks to local file storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3Output
false
Save CSV data chunks to Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeOutput
false
Save CSV data chunks to Snowflake in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseOutput
false
Save CSV data chunks to Azure Synapse in bulk
continued
Name
Type
Required
Restrictions
Description
passthroughColumns
[string]
false
maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet
string
false
Pass through all columns from the original dataset
pinnedModelId
string
false
Specify a model ID used for scoring
predictionInstance
BatchPredictionJobPredictionInstance
false
Override the default prediction instance from the deployment when scoring this job.
predictionThreshold
number
false
maximum: 1 minimum: 0
Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This value can be set between 0.0 and 1.0.
predictionWarningEnabled
boolean¦null
false
Enable prediction warnings.
schedule
Schedule
false
The scheduling information defining how often and when the Job Scheduling service executes this job. Optional if enabled = False.
secondaryDatasetsConfigId
string
false
Configuration id for secondary datasets to use when making a prediction.
skipDriftTracking
boolean
true
Skip drift tracking for this job.
thresholdHigh
number
false
Compute explanations for predictions above this threshold
thresholdLow
number
false
Compute explanations for predictions below this threshold
timeseriesSettings
any
false
Time Series settings, included if this job is a Time Series job.
oneOf: BatchJobTimeSeriesSettingsForecast xor BatchJobTimeSeriesSettingsHistorical
Enumerated Values
Property
Value
batchJobType
[monitoring, prediction]
anonymous
[auto, fixed, dynamic]
explanationAlgorithm
[shap, xemp]
passthroughColumnsSet
all
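Job definitions add the enabled flag and the cron-like Schedule object to the payload, with the documented rule that an enabled definition needs a schedule (schedule is "Optional if enabled = False"). A client-side sketch; the helper names and the specific hourly schedule are assumptions for illustration:

```python
def hourly_schedule():
    """A Schedule object (sketch): run at minute 0 of every hour.

    Each field is a list of values; "*" matches every value, mirroring
    the schedule in the example payload above.
    """
    return {
        "minute": [0],
        "hour": ["*"],
        "dayOfMonth": ["*"],
        "dayOfWeek": ["*"],
        "month": ["*"],
    }

def validate_definition(defn):
    """Enforce the documented rule: enabled definitions need a schedule."""
    if defn.get("enabled") and "schedule" not in defn:
        raise ValueError("an enabled definition requires a schedule")
    return defn
```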
BatchMonitoringJobDefinitionsListResponse
{
"count" : 0 ,
"data" : [
{
"batchMonitoringJob" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 0 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"enabled" : false ,
"id" : "string" ,
"lastFailedRunTime" : "2019-08-24T14:15:22Z" ,
"lastScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"lastStartedJobStatus" : "INITIALIZING" ,
"lastStartedJobTime" : "2019-08-24T14:15:22Z" ,
"lastSuccessfulRunTime" : "2019-08-24T14:15:22Z" ,
"name" : "string" ,
"nextScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"updated" : "2019-08-24T14:15:22Z" ,
"updatedBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
}
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Properties
Name
Type
Required
Restrictions
Description
count
integer
false
Number of items returned on this page.
data
[BatchMonitoringJobDefinitionsResponse ]
true
maxItems: 10000
An array of scheduled jobs
next
string(uri)¦null
true
URL pointing to the next page (if null, there is no next page).
previous
string(uri)¦null
true
URL pointing to the previous page (if null, there is no previous page).
totalCount
integer
true
The total number of items across all pages.
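The next/previous/totalCount fields support cursor-style paging: follow next until it is null. A sketch of a client-side paging loop; fetch is a hypothetical stand-in for an HTTP GET that returns the parsed BatchMonitoringJobDefinitionsListResponse body:

```python
def iter_definitions(fetch, first_url):
    """Yield every job definition across all pages (sketch).

    `fetch` maps a URL to the parsed list-response body; paging stops
    when `next` is null.
    """
    url = first_url
    while url is not None:
        page = fetch(url)
        yield from page["data"]
        url = page["next"]
```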
BatchMonitoringJobDefinitionsResponse
{
"batchMonitoringJob" : {
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"maxNgramExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"numConcurrent" : 0 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionWarningEnabled" : true ,
"redactedFields" : [
"string"
],
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
},
"created" : "2019-08-24T14:15:22Z" ,
"createdBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
},
"enabled" : false ,
"id" : "string" ,
"lastFailedRunTime" : "2019-08-24T14:15:22Z" ,
"lastScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"lastStartedJobStatus" : "INITIALIZING" ,
"lastStartedJobTime" : "2019-08-24T14:15:22Z" ,
"lastSuccessfulRunTime" : "2019-08-24T14:15:22Z" ,
"name" : "string" ,
"nextScheduledRunTime" : "2019-08-24T14:15:22Z" ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"updated" : "2019-08-24T14:15:22Z" ,
"updatedBy" : {
"fullName" : "string" ,
"userId" : "string" ,
"username" : "string"
}
}
Properties
Name
Type
Required
Restrictions
Description
batchMonitoringJob
BatchJobDefinitionsSpecResponse
true
The Batch Monitoring Job specification to be put on the queue at intervals
created
string(date-time)
true
When was this job created
createdBy
BatchJobCreatedBy
true
Who created this job
enabled
boolean
true
If this job definition is enabled as a scheduled job.
id
string
true
The ID of the Batch job definition
lastFailedRunTime
string(date-time)¦null
false
Last time this job had a failed run
lastScheduledRunTime
string(date-time)¦null
false
Last time this job was scheduled to run (though not guaranteed it actually ran at that time)
lastStartedJobStatus
string¦null
true
The status of the latest job launched to the queue (if any).
lastStartedJobTime
string(date-time)¦null
true
The last time (if any) a job was launched.
lastSuccessfulRunTime
string(date-time)¦null
false
Last time this job had a successful run
name
string
true
A human-readable name for the definition; must be unique across organisations
nextScheduledRunTime
string(date-time)¦null
false
Next time this job is scheduled to run
schedule
Schedule
false
The scheduling information defining how often and when the Job Scheduling service should execute this job. Optional if enabled = False.
updated
string(date-time)
true
When was this job last updated
updatedBy
BatchJobCreatedBy
true
Who last updated this job
Enumerated Values
Property
Value
lastStartedJobStatus
[INITIALIZING
, RUNNING
, COMPLETED
, ABORTED
, FAILED
]
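The lastStartedJobStatus values above split into in-flight and terminal states, which is what a client polling a job definition cares about. A minimal sketch in Python; the helper name is illustrative and only classifies the enumerated statuses (fetching the definition itself would go through the GET endpoint shown earlier):

```python
# Statuses per the enumerated values above.
TERMINAL_STATUSES = {"COMPLETED", "ABORTED", "FAILED"}
IN_FLIGHT_STATUSES = {"INITIALIZING", "RUNNING"}

def is_finished(last_started_job_status):
    """Return True once the most recently launched job reached a terminal state.

    A null (None) status means no job has been launched yet.
    """
    if last_started_job_status is None:
        return False
    if last_started_job_status not in TERMINAL_STATUSES | IN_FLIGHT_STATUSES:
        raise ValueError(f"unknown status: {last_started_job_status}")
    return last_started_job_status in TERMINAL_STATUSES
```

A poll loop would sleep between calls and stop as soon as `is_finished(...)` returns True.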
BatchMonitoringJobDefinitionsUpdate
{
"abortOnError" : true ,
"batchJobType" : "monitoring" ,
"chunkSize" : "auto" ,
"columnNamesRemapping" : {},
"csvSettings" : {
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
},
"deploymentId" : "string" ,
"disableRowLevelErrorHandling" : false ,
"enabled" : true ,
"explanationAlgorithm" : "shap" ,
"explanationClassNames" : [
"string"
],
"explanationNumTopClasses" : 1 ,
"includePredictionStatus" : false ,
"includeProbabilities" : true ,
"includeProbabilitiesClasses" : [],
"intakeSettings" : {
"type" : "localFile"
},
"maxExplanations" : 0 ,
"modelId" : "string" ,
"modelPackageId" : "string" ,
"monitoringAggregation" : {
"retentionPolicy" : "samples" ,
"retentionValue" : 0
},
"monitoringBatchPrefix" : "string" ,
"monitoringColumns" : {
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"monitoringOutputSettings" : {
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
},
"name" : "string" ,
"numConcurrent" : 1 ,
"outputSettings" : {
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "azure" ,
"url" : "string"
},
"passthroughColumns" : [
"string"
],
"passthroughColumnsSet" : "all" ,
"pinnedModelId" : "string" ,
"predictionInstance" : {
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
},
"predictionThreshold" : 1 ,
"predictionWarningEnabled" : true ,
"schedule" : {
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
},
"secondaryDatasetsConfigId" : "string" ,
"skipDriftTracking" : false ,
"thresholdHigh" : 0 ,
"thresholdLow" : 0 ,
"timeseriesSettings" : {
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
}
Properties
Name
Type
Required
Restrictions
Description
abortOnError
boolean
false
Should this job abort if too many errors are encountered
batchJobType
string
false
Batch job type.
chunkSize
any
false
Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
integer
false
maximum: 41943040 minimum: 20
none
continued
Name
Type
Required
Restrictions
Description
columnNamesRemapping
any
false
Remap (rename or remove columns from) the output from this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
object
false
Provide a dictionary with key/value pairs to remap (deprecated)
xor
Name
Type
Required
Restrictions
Description
» anonymous
[BatchPredictionJobRemapping ]
false
maxItems: 1000
Provide a list of items to remap
continued
Name
Type
Required
Restrictions
Description
csvSettings
BatchPredictionJobCSVSettings
false
The CSV settings used for this job
deploymentId
string
false
ID of the deployment used by the job to process the predictions dataset
disableRowLevelErrorHandling
boolean
false
Skip row by row error handling
enabled
boolean
false
If this job definition is enabled as a scheduled job. Optional if no schedule is supplied.
explanationAlgorithm
string
false
Which algorithm will be used to calculate prediction explanations
explanationClassNames
[string]
false
maxItems: 10 minItems: 1
List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses
integer
false
maximum: 10 minimum: 1
Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus
boolean
false
Include prediction status column in the output
includeProbabilities
boolean
false
Include probabilities for all classes
includeProbabilitiesClasses
[string]
false
maxItems: 100
Include only probabilities for these specific class names.
intakeSettings
any
false
The intake option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureIntake
false
Stream CSV data chunks from Azure
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryIntake
false
Stream CSV data chunks from BigQuery using GCS
xor
Name
Type
Required
Restrictions
Description
» anonymous
DataStageIntake
false
Stream CSV data chunks from data stage storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
Catalog
false
Stream CSV data chunks from AI catalog dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
DSS
false
Stream CSV data chunks from DSS dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
FileSystemIntake
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPIntake
false
Stream CSV data chunks from Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
HTTPIntake
false
Stream CSV data chunks from HTTP
xor
Name
Type
Required
Restrictions
Description
» anonymous
JDBCIntake
false
Stream CSV data chunks from JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileIntake
false
Stream CSV data chunks from local file storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3Intake
false
Stream CSV data chunks from Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeIntake
false
Stream CSV data chunks from Snowflake
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseIntake
false
Stream CSV data chunks from Azure Synapse
continued
Name
Type
Required
Restrictions
Description
maxExplanations
integer
false
maximum: 100 minimum: 0
Number of explanations requested. Will be ordered by strength.
modelId
string
false
ID of the leaderboard model used by the job to process the predictions dataset
modelPackageId
string
false
ID of the model package from the registry used by the job to process the predictions dataset
monitoringAggregation
MonitoringAggregation
false
Defines the aggregation policy for monitoring jobs.
monitoringBatchPrefix
string¦null
false
Name of the batch to create with this job
monitoringColumns
MonitoringColumnsMapping
false
Column names mapping for monitoring
monitoringOutputSettings
MonitoringOutputSettings
false
Output settings for monitoring jobs
name
string
false
maxLength: 100 minLength: 1
A human-readable name for the definition; must be unique across organisations. If left out, the backend will generate one for you.
numConcurrent
integer
false
minimum: 1
Number of simultaneous requests to run against the prediction instance
outputSettings
any
false
The output option configured for this job
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
AzureOutput
false
Save CSV data chunks to Azure Blob Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
BigQueryOutput
false
Save CSV data chunks to Google BigQuery in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
FileSystemOutput
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
GCPOutput
false
Save CSV data chunks to Google Storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
HTTPOutput
false
Save CSV data chunks to HTTP data endpoint
xor
Name
Type
Required
Restrictions
Description
» anonymous
JDBCOutput
false
Save CSV data chunks via JDBC
xor
Name
Type
Required
Restrictions
Description
» anonymous
LocalFileOutput
false
Save CSV data chunks to local file storage
xor
Name
Type
Required
Restrictions
Description
» anonymous
S3Output
false
Saves CSV data chunks to Amazon Cloud Storage S3
xor
Name
Type
Required
Restrictions
Description
» anonymous
SnowflakeOutput
false
Save CSV data chunks to Snowflake in bulk
xor
Name
Type
Required
Restrictions
Description
» anonymous
SynapseOutput
false
Save CSV data chunks to Azure Synapse in bulk
continued
Name
Type
Required
Restrictions
Description
passthroughColumns
[string]
false
maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet
string
false
Pass through all columns from the original dataset
pinnedModelId
string
false
Specify a model ID used for scoring
predictionInstance
BatchPredictionJobPredictionInstance
false
Override the default prediction instance from the deployment when scoring this job.
predictionThreshold
number
false
maximum: 1 minimum: 0
Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This value can be set between 0.0 and 1.0.
predictionWarningEnabled
boolean¦null
false
Enable prediction warnings.
schedule
Schedule
false
The scheduling information defining how often and when the Job Scheduling service should execute this job. Optional if enabled = False.
secondaryDatasetsConfigId
string
false
Configuration id for secondary datasets to use when making a prediction.
skipDriftTracking
boolean
false
Skip drift tracking for this job.
thresholdHigh
number
false
Compute explanations for predictions above this threshold
thresholdLow
number
false
Compute explanations for predictions below this threshold
timeseriesSettings
any
false
Time series settings, included if this job is a time series job.
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
BatchPredictionJobTimeSeriesSettingsForecast
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
BatchPredictionJobTimeSeriesSettingsHistorical
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
BatchPredictionJobTimeSeriesSettingsTraining
false
none
Enumerated Values
Property
Value
batchJobType
[monitoring
, prediction
]
anonymous
[auto
, fixed
, dynamic
]
explanationAlgorithm
[shap
, xemp
]
passthroughColumnsSet
all
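Several of the restrictions above (numConcurrent ≥ 1, name length 1–100, chunkSize either a named strategy or a fixed byte count between 20 and 41943040) can be checked client-side before issuing the update request. A sketch of such a check; the function name is illustrative, not part of the API:

```python
# Named chunkSize strategies, per the enumerated values above.
NAMED_CHUNK_STRATEGIES = {"auto", "fixed", "dynamic"}

def validate_update_payload(payload):
    """Check a BatchMonitoringJobDefinitionsUpdate dict against documented restrictions."""
    if "numConcurrent" in payload and payload["numConcurrent"] < 1:
        raise ValueError("numConcurrent must be >= 1")
    if "name" in payload and not 1 <= len(payload["name"]) <= 100:
        raise ValueError("name must be 1-100 characters")
    chunk = payload.get("chunkSize")
    if chunk is not None:
        if isinstance(chunk, int):
            if not 20 <= chunk <= 41943040:
                raise ValueError("fixed chunkSize must be 20-41943040 bytes")
        elif chunk not in NAMED_CHUNK_STRATEGIES:
            raise ValueError("chunkSize must be a named strategy or an integer")
    return payload
```

The validated dict can then be serialized as the PATCH request body.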
BatchPredictionJobCSVSettings
{
"delimiter" : "," ,
"encoding" : "utf-8" ,
"quotechar" : "\""
}
The CSV settings used for this job
Properties
Name
Type
Required
Restrictions
Description
delimiter
any
true
CSV fields are delimited by this character. Use the string "tab" to denote TSV (TAB separated values).
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
maxLength: 1 minLength: 1
none
continued
Name
Type
Required
Restrictions
Description
encoding
string
true
The encoding to be used for intake and output. For example (but not limited to): "shift_jis", "latin_1" or "mskanji".
quotechar
string
true
maxLength: 1 minLength: 1
Fields containing the delimiter or newlines must be quoted using this character.
Enumerated Values
Property
Value
anonymous
tab
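Because of the "tab" sentinel, the delimiter field is not always a single character: a client has to translate it before handing the settings to a CSV parser. A minimal sketch using Python's csv module (the translation helper is illustrative):

```python
import csv
import io

def reader_from_settings(text, settings):
    """Build a csv.reader honoring BatchPredictionJobCSVSettings semantics."""
    delimiter = settings["delimiter"]
    if delimiter == "tab":  # documented sentinel meaning TSV
        delimiter = "\t"
    return csv.reader(
        io.StringIO(text),
        delimiter=delimiter,
        quotechar=settings["quotechar"],
    )

# A quoted field may contain the delimiter itself.
rows = list(reader_from_settings(
    'a\t"b\tc"\n',
    {"delimiter": "tab", "encoding": "utf-8", "quotechar": '"'},
))
```

Here `rows` is `[['a', 'b\tc']]`: the quoted tab survives as data rather than splitting the field.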
BatchPredictionJobDefinitionId
{
"jobDefinitionId" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
jobDefinitionId
string
true
ID of the Batch Prediction job definition
BatchPredictionJobPredictionInstance
{
"apiKey" : "string" ,
"datarobotKey" : "string" ,
"hostName" : "string" ,
"sslEnabled" : true
}
Override the default prediction instance from the deployment when scoring this job.
Properties
Name
Type
Required
Restrictions
Description
apiKey
string
false
By default, prediction requests will use the API key of the user that created the job. This allows you to make requests on behalf of other users.
datarobotKey
string
false
If running a job against a prediction instance in the Managed AI Cloud, you must provide the organization-level DataRobot-Key.
hostName
string
true
Override the default host name of the deployment with this.
sslEnabled
boolean
true
Use SSL (HTTPS) when communicating with the overridden prediction server.
BatchPredictionJobRemapping
{
"inputName" : "string" ,
"outputName" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
inputName
string
true
Rename column with this name
outputName
string¦null
true
Rename column to this name (leave as null to remove from the output)
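The rename-or-remove semantics of outputName can be sketched as a pure function over one output row (the helper name is illustrative):

```python
def apply_remapping(row, remappings):
    """Apply BatchPredictionJobRemapping entries to one output row.

    An outputName of None removes the column; otherwise the column is renamed.
    """
    out = dict(row)
    for item in remappings:
        if item["inputName"] not in out:
            continue
        value = out.pop(item["inputName"])
        if item["outputName"] is not None:
            out[item["outputName"]] = value
    return out
```

For example, renaming "prediction" to "score" while dropping "row_id" uses one entry with an outputName and one with null.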
BatchPredictionJobTimeSeriesSettingsForecast
{
"forecastPoint" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "forecast"
}
Properties
Name
Type
Required
Restrictions
Description
forecastPoint
string(date-time)
false
Used for forecast predictions in order to override the inferred forecast point from the dataset.
relaxKnownInAdvanceFeaturesCheck
boolean
false
If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type
string
true
Forecast mode makes predictions using the forecastPoint or the rows in the dataset without a target.
Enumerated Values
Property
Value
type
forecast
BatchPredictionJobTimeSeriesSettingsHistorical
{
"predictionsEndDate" : "2019-08-24T14:15:22Z" ,
"predictionsStartDate" : "2019-08-24T14:15:22Z" ,
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "historical"
}
Properties
Name
Type
Required
Restrictions
Description
predictionsEndDate
string(date-time)
false
Used for historical predictions to override the date up to which predictions should be calculated. By default, the value is inferred automatically from the dataset.
predictionsStartDate
string(date-time)
false
Used for historical predictions to override the date from which predictions should be calculated. By default, the value is inferred automatically from the dataset.
relaxKnownInAdvanceFeaturesCheck
boolean
false
If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type
string
true
Historical mode enables bulk predictions, calculating predictions for all possible forecast points and forecast distances in the dataset within the predictionsStartDate/predictionsEndDate range.
Enumerated Values
Property
Value
type
historical
BatchPredictionJobTimeSeriesSettingsTraining
{
"relaxKnownInAdvanceFeaturesCheck" : false ,
"type" : "training"
}
Properties
Name
Type
Required
Restrictions
Description
relaxKnownInAdvanceFeaturesCheck
boolean
false
If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type
string
true
Training mode, used for making predictions on subsets of the training data.
Enumerated Values
Property
Value
type
training
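The three timeseriesSettings variants above differ only in which extra fields they accept alongside the discriminating type. A sketch of per-type validation, with the field rules taken from the schemas above (the function name is illustrative):

```python
# Allowed fields per timeseriesSettings type, from the three schemas above.
ALLOWED_FIELDS = {
    "forecast": {"type", "forecastPoint", "relaxKnownInAdvanceFeaturesCheck"},
    "historical": {"type", "predictionsStartDate", "predictionsEndDate",
                   "relaxKnownInAdvanceFeaturesCheck"},
    "training": {"type", "relaxKnownInAdvanceFeaturesCheck"},
}

def validate_timeseries_settings(settings):
    """Reject fields that do not belong to the declared timeseries type."""
    ts_type = settings.get("type")
    if ts_type not in ALLOWED_FIELDS:
        raise ValueError(f"type must be one of {sorted(ALLOWED_FIELDS)}")
    extra = set(settings) - ALLOWED_FIELDS[ts_type]
    if extra:
        raise ValueError(f"unexpected fields for {ts_type}: {sorted(extra)}")
    return settings
```

A forecastPoint on a training-mode settings dict, for instance, is rejected before the request is ever sent.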
BigQueryDataStreamer
{
"bucket" : "string" ,
"credentialId" : "string" ,
"dataset" : "string" ,
"table" : "string" ,
"type" : "bigquery"
}
Stream CSV data chunks from BigQuery using GCS
Properties
Name
Type
Required
Restrictions
Description
bucket
string
true
The name of the GCS bucket for data export
credentialId
any
true
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
The ID of the GCP credentials
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
dataset
string
true
The name of the specified BigQuery dataset to read input data from
table
string
true
The name of the specified BigQuery table to read input data from
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
anonymous
[redacted]
type
bigquery
BigQueryIntake
{
"bucket" : "string" ,
"credentialId" : "string" ,
"dataset" : "string" ,
"table" : "string" ,
"type" : "bigquery"
}
Stream CSV data chunks from BigQuery using GCS
Properties
Name
Type
Required
Restrictions
Description
bucket
string
true
The name of the GCS bucket for data export
credentialId
string
true
The ID of the GCP credentials
dataset
string
true
The name of the specified BigQuery dataset to read input data from
table
string
true
The name of the specified BigQuery table to read input data from
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
type
bigquery
BigQueryOutput
{
"bucket" : "string" ,
"credentialId" : "string" ,
"dataset" : "string" ,
"table" : "string" ,
"type" : "bigquery"
}
Save CSV data chunks to Google BigQuery in bulk
Properties
Name
Type
Required
Restrictions
Description
bucket
string
true
The name of the GCS bucket for data loading
credentialId
string
true
The ID of the GCP credentials
dataset
string
true
The name of the specified BigQuery dataset to write data back
table
string
true
The name of the specified BigQuery table to write data back
type
string
true
Type name for this output type
Enumerated Values
Property
Value
type
bigquery
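BigQueryIntake and BigQueryOutput share the same set of required keys (bucket, credentialId, dataset, table, type), so one completeness check covers both directions. A sketch of such a pre-flight check (the function name is illustrative):

```python
# Required keys shared by BigQueryIntake and BigQueryOutput, per the tables above.
REQUIRED_BIGQUERY_KEYS = {"type", "bucket", "credentialId", "dataset", "table"}

def check_bigquery_settings(settings):
    """Verify a BigQuery intake/output settings dict carries every required key."""
    missing = REQUIRED_BIGQUERY_KEYS - set(settings)
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if settings["type"] != "bigquery":
        raise ValueError('type must be "bigquery"')
    return settings
```

The same dict can then be placed under either intakeSettings or outputSettings in the job payload.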
BigQueryOutputAdaptor
{
"bucket" : "string" ,
"credentialId" : "string" ,
"dataset" : "string" ,
"table" : "string" ,
"type" : "bigquery"
}
Save CSV data chunks to Google BigQuery in bulk
Properties
Name
Type
Required
Restrictions
Description
bucket
string
true
The name of gcs bucket for data loading
credentialId
any
true
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
The ID of the GCP credentials
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
dataset
string
true
The name of the specified BigQuery dataset to write data back
table
string
true
The name of the specified BigQuery table to write data back
type
string
true
Type name for this output type
Enumerated Values
Property
Value
anonymous
[redacted]
type
bigquery
Catalog
{
"datasetId" : "string" ,
"datasetVersionId" : "string" ,
"type" : "dataset"
}
Stream CSV data chunks from AI catalog dataset
Properties
Name
Type
Required
Restrictions
Description
datasetId
string
true
The ID of the AI catalog dataset
datasetVersionId
string
false
The ID of the AI catalog dataset version
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
type
dataset
CatalogDataStreamer
{
"datasetId" : "string" ,
"datasetVersionId" : "string" ,
"type" : "dataset"
}
Stream CSV data chunks from AI catalog dataset
Properties
Name
Type
Required
Restrictions
Description
datasetId
any
true
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
The ID of the AI catalog dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
datasetVersionId
string
false
The ID of the AI catalog dataset version
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
anonymous
[redacted]
type
dataset
DSS
{
"datasetId" : "string" ,
"partition" : "holdout" ,
"projectId" : "string" ,
"type" : "dss"
}
Stream CSV data chunks from DSS dataset
Properties
Name
Type
Required
Restrictions
Description
datasetId
string
false
The ID of the dataset
partition
string
false
Partition used to predict
projectId
string
true
The ID of the project
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
partition
[holdout
, validation
, allBacktests
, None
]
type
dss
DSSDataStreamer
{
"datasetId" : "string" ,
"partition" : "holdout" ,
"projectId" : "string" ,
"type" : "dss"
}
Stream CSV data chunks from DSS dataset
Properties
Name
Type
Required
Restrictions
Description
datasetId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
The ID of the dataset
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
partition
string
false
Partition used to predict
projectId
string
true
The ID of the project
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
anonymous
[redacted]
partition
[holdout
, validation
, allBacktests
, None
]
type
dss
DataStageDataStreamer
{
"dataStageId" : "string" ,
"type" : "dataStage"
}
Stream CSV data chunks from data stage storage
Properties
Name
Type
Required
Restrictions
Description
dataStageId
string
true
The ID of the data stage
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
type
dataStage
DataStageIntake
{
"dataStageId" : "string" ,
"type" : "dataStage"
}
Stream CSV data chunks from data stage storage
Properties
Name
Type
Required
Restrictions
Description
dataStageId
string
true
The ID of the data stage
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
type
dataStage
DeploymentAndGuardResponse
{
"configurationId" : "string" ,
"deploymentId" : "string" ,
"name" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
configurationId
string
true
ID of guard configuration.
deploymentId
string
true
ID of guard model deployment.
name
string
true
Name of guard configuration.
FileSystemDataStreamer
{
"path" : "string" ,
"type" : "filesystem"
}
Properties
Name
Type
Required
Restrictions
Description
path
string
true
Path to data on host filesystem
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
type
filesystem
FileSystemIntake
{
"path" : "string" ,
"type" : "filesystem"
}
Properties
Name
Type
Required
Restrictions
Description
path
string
true
Path to data on host filesystem
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
type
filesystem
FileSystemOutput
{
"path" : "string" ,
"type" : "filesystem"
}
Properties
Name
Type
Required
Restrictions
Description
path
string
true
Path to results on host filesystem
type
string
true
Type name for this output type
Enumerated Values
Property
Value
type
filesystem
FileSystemOutputAdaptor
{
"path" : "string" ,
"type" : "filesystem"
}
Properties
Name
Type
Required
Restrictions
Description
path
string
true
Path to results on host filesystem
type
string
true
Type name for this output type
Enumerated Values
Property
Value
type
filesystem
GCPDataStreamer
{
"credentialId" : "string" ,
"format" : "csv" ,
"type" : "gcp" ,
"url" : "string"
}
Stream CSV data chunks from Google Storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
Use the specified credential to access the url
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
format
string
false
Type of input file format
type
string
true
Type name for this intake type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
anonymous
[redacted]
format
[csv
, parquet
]
type
gcp
GCPIntake
{
"credentialId" : "string" ,
"format" : "csv" ,
"type" : "gcp" ,
"url" : "string"
}
Stream CSV data chunks from Google Storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
string¦null
false
Use the specified credential to access the url
format
string
false
Type of input file format
type
string
true
Type name for this intake type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
format
[csv
, parquet
]
type
gcp
GCPOutput
{
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "gcp" ,
"url" : "string"
}
Save CSV data chunks to Google Storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
string¦null
false
Use the specified credential to access the url
format
string
false
Type of output file format
partitionColumns
[string]
false
maxItems: 100
For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash, "/").
type
string
true
Type name for this output type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
format
[csv
, parquet
]
type
gcp
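The directory-scoring rule above (a url ending in a slash requires at least one partitionColumns entry, and at most 100 are allowed) lends itself to a small pre-flight check. A sketch, with an illustrative function name:

```python
def check_gcp_output(settings):
    """Enforce the documented Parquet directory-scoring rule for GCPOutput."""
    url = settings["url"]
    partition_columns = settings.get("partitionColumns") or []
    if url.endswith("/") and not partition_columns:
        raise ValueError("directory output (url ending in '/') needs partitionColumns")
    if len(partition_columns) > 100:
        raise ValueError("partitionColumns allows at most 100 entries")
    return settings
```

Running this before submitting the job turns a server-side rejection into an immediate local error.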
GCPOutputAdaptor
{
"credentialId" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"type" : "gcp" ,
"url" : "string"
}
Save CSV data chunks to Google Storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
Use the specified credential to access the url
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
format
string
false
Type of output file format
partitionColumns
[string]
false
maxItems: 100
For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash, "/").
type
string
true
Type name for this output type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
anonymous
[redacted]
format
[csv
, parquet
]
type
gcp
GuardConditionResponse
{
"comparand" : true ,
"comparator" : "greaterThan"
}
Condition to trigger intervention
Properties
Name
Type
Required
Restrictions
Description
comparand
any
true
Condition comparand (basis of comparison)
anyOf
Name
Type
Required
Restrictions
Description
» anonymous
boolean
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
number
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
[string]
false
maxItems: 10
none
continued
Name
Type
Required
Restrictions
Description
comparator
string
true
Condition comparator (operator)
Enumerated Values
Property
Value
comparator
[greaterThan
, lessThan
, equals
, notEquals
, is
, isNot
, matches
, doesNotMatch
, contains
, doesNotContain
]
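The comparator semantics can be sketched as a dispatch table. This is an assumption about how the comparators behave, not the service's actual implementation, and the regex-based matches/doesNotMatch comparators are omitted for brevity:

```python
# One predicate per comparator from the enumerated values above.
COMPARATORS = {
    "greaterThan": lambda value, comparand: value > comparand,
    "lessThan": lambda value, comparand: value < comparand,
    "equals": lambda value, comparand: value == comparand,
    "notEquals": lambda value, comparand: value != comparand,
    "is": lambda value, comparand: value is comparand,
    "isNot": lambda value, comparand: value is not comparand,
    "contains": lambda value, comparand: comparand in value,
    "doesNotContain": lambda value, comparand: comparand not in value,
}

def condition_met(value, condition):
    """Evaluate one guard condition ({"comparand": ..., "comparator": ...})."""
    comparator = condition["comparator"]
    if comparator not in COMPARATORS:
        raise ValueError(f"unsupported comparator: {comparator}")
    return COMPARATORS[comparator](value, condition["comparand"])
```

For example, a toxicity score of 0.9 against `{"comparand": 0.5, "comparator": "greaterThan"}` triggers the condition.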
GuardConfigurationConditionResponse
{
"comparand" : true ,
"comparator" : "greaterThan"
}
Condition to trigger intervention
Properties
Name
Type
Required
Restrictions
Description
comparand
any
true
Condition comparand (basis of comparison)
anyOf
Name
Type
Required
Restrictions
Description
» anonymous
boolean
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
number
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
[string]
false
maxItems: 10
none
continued
Name
Type
Required
Restrictions
Description
comparator
string
true
Condition comparator (operator)
Enumerated Values
Property
Value
comparator
[greaterThan
, lessThan
, equals
, notEquals
, is
, isNot
, matches
, doesNotMatch
, contains
, doesNotContain
]
GuardConfigurationCreate
{
"deploymentId" : "string" ,
"description" : "string" ,
"entityId" : "string" ,
"entityType" : "customModel" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"stages" : [
"prompt"
],
"templateId" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
deploymentId
string¦null
false
ID of deployed model, for model guards.
description
string
false
maxLength: 4096
Guard configuration description
entityId
string
true
ID of custom model or playground for this guard.
entityType
string
true
Type of associated entity.
intervention
GuardConfigurationInterventionResponse
false
Intervention configuration for the guard.
llmType
string¦null
false
Type of LLM used by this guard
modelInfo
GuardConfigurationPayloadModelInfo
false
Configuration info for guards using deployed models.
name
string
true
maxLength: 255
Guard configuration name
nemoInfo
GuardConfigurationNemoInfoResponse
false
Configuration info for NeMo guards.
openaiApiBase
string¦null
false
maxLength: 255
Azure OpenAI API Base URL
openaiApiKey
string¦null
false
maxLength: 255
Azure OpenAI API Key
openaiCredential
string¦null
false
ID of user credential containing an OpenAI token.
openaiDeploymentId
string¦null
false
maxLength: 255
Azure OpenAI Deployment ID
stages
[string]
true
maxItems: 16
The stages where the guard can run.
templateId
string
true
ID of template this guard is based on.
Enumerated Values
Property
Value
entityType
[customModel
, customModelVersion
, playground
]
llmType
[openAi
, azureOpenAi
]
GuardConfigurationFullPost
{
"deploymentId" : "string" ,
"description" : "string" ,
"errorMessage" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"parameters" : [
"s"
],
"stages" : [
"prompt"
],
"type" : "guardModel"
}
Complete guard configuration to push
Properties
Name
Type
Required
Restrictions
Description
deploymentId
string¦null
false
ID of deployed model, for model guards.
description
string
true
maxLength: 4096
Guard configuration description
errorMessage
string¦null
false
Error message if the guard configuration is invalid.
intervention
GuardConfigurationInterventionResponse
false
Intervention configuration for the guard.
isValid
boolean
false
Whether the guard is valid or not.
llmType
string¦null
false
Type of LLM used by this guard
modelInfo
GuardModelInfoResponse
false
Configuration info for guards using deployed models.
name
string
true
maxLength: 255
Guard configuration name
nemoInfo
GuardConfigurationNemoInfoResponse
false
Configuration info for NeMo guards.
ootbType
string¦null
false
Guard template "Out of the Box" metric type
openaiApiBase
string¦null
false
maxLength: 255
Azure OpenAI API Base URL
openaiApiKey
string¦null
false
maxLength: 255
Azure OpenAI API Key
openaiCredential
string¦null
false
ID of user credential containing an OpenAI token.
openaiDeploymentId
string¦null
false
maxLength: 255
Azure OpenAI Deployment ID
parameters
[string]
false
maxItems: 1
Parameter list (deprecated and unused).
stages
[string]
true
maxItems: 16
The stages where the guard is configured to run.
type
string
true
Guard configuration type
Enumerated Values
Property
Value
llmType
[openAi
, azureOpenAi
]
ootbType
[token_count
, faithfulness
, rouge_1
]
type
[guardModel
, nemo
, ootb
, pii
, userModel
]
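As a rough sketch, a minimal `GuardConfigurationFullPost` body can be assembled from the required fields above. Field names follow the schema; the deployment ID, name, and threshold values here are placeholders, not values from the source.

```python
# Sketch: assemble a minimal GuardConfigurationFullPost payload.
# Field names follow the schema above; ID values are placeholders.
guard_config = {
    "name": "toxicity-guard",               # required, maxLength 255
    "description": "Blocks toxic prompts",  # required, maxLength 4096
    "type": "guardModel",                   # guardModel/nemo/ootb/pii/userModel
    "stages": ["prompt"],                   # required, maxItems 16
    "deploymentId": "placeholder-id",       # model guards reference a deployment
    "intervention": {
        "action": "block",
        "conditions": [{"comparand": 0.5, "comparator": "greaterThan"}],
        "message": "This prompt was blocked.",
    },
}

# Local sanity checks mirroring the schema's "Required" column.
required = {"name", "description", "type", "stages"}
missing = required - guard_config.keys()
assert not missing, f"missing required fields: {missing}"
assert len(guard_config["name"]) <= 255
assert len(guard_config["stages"]) <= 16
```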
GuardConfigurationInterventionResponse
{
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
}
Intervention configuration for the guard.
Properties
Name
Type
Required
Restrictions
Description
action
string
true
Action to take if conditions are met
allowedActions
[string]
false
maxItems: 10
The actions this guard is allowed to take.
conditionLogic
string
false
Logic used to combine the conditions
conditions
[GuardConfigurationConditionResponse ]
true
maxItems: 1
List of conditions to trigger intervention
message
string
true
maxLength: 4096
Message to use if prompt or response is blocked
sendNotification
boolean
false
Create a notification event if intervention is triggered
Enumerated Values
Property
Value
action
[block
, report
, replace
]
conditionLogic
any
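To illustrate how a condition such as `{"comparand": 0.8, "comparator": "greaterThan"}` could trigger an intervention, here is a local, simplified evaluator. This is only an illustration of the condition shape above; the server-side semantics (and the full comparator set) may differ.

```python
def conditions_met(conditions, value, condition_logic="any"):
    """Illustrative check: does a guard score trigger intervention?

    Mirrors the condition shape above ({"comparand", "comparator"});
    only greaterThan/lessThan are sketched here.
    """
    ops = {
        "greaterThan": lambda v, c: v > c,
        "lessThan": lambda v, c: v < c,
    }
    results = [ops[c["comparator"]](value, c["comparand"]) for c in conditions]
    return any(results) if condition_logic == "any" else all(results)

conds = [{"comparand": 0.8, "comparator": "greaterThan"}]
assert conditions_met(conds, 0.95) is True   # score exceeds threshold
assert conditions_met(conds, 0.2) is False
```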
GuardConfigurationListResponse
{
"count" : 0 ,
"data" : [
{
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"deploymentId" : "string" ,
"description" : "string" ,
"entityId" : "string" ,
"entityType" : "customModel" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"stages" : [
"prompt"
],
"type" : "guardModel"
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Properties
Name
Type
Required
Restrictions
Description
count
integer
false
Number of items returned on this page.
data
[GuardConfigurationRetrieveResponse ]
true
maxItems: 200
List of guard configurations.
next
string(uri)¦null
true
URL pointing to the next page (if null, there is no next page).
previous
string(uri)¦null
true
URL pointing to the previous page (if null, there is no previous page).
totalCount
integer
true
The total number of items across all pages.
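A client can walk this list by following the `next` URL until it is null. The sketch below simulates that loop with an in-memory `fetch_page` stand-in for an HTTP GET; the page data is fabricated for illustration.

```python
# Sketch: walk a paginated list response by following "next" URLs.
# fetch_page is a stand-in for an HTTP GET returning the JSON above.
pages = {
    "page1": {"data": [{"id": "g1"}, {"id": "g2"}],
              "next": "page2", "previous": None,
              "count": 2, "totalCount": 3},
    "page2": {"data": [{"id": "g3"}],
              "next": None, "previous": "page1",
              "count": 1, "totalCount": 3},
}

def fetch_page(url):
    return pages[url]

def iter_guard_configurations(start_url):
    url = start_url
    while url is not None:
        page = fetch_page(url)
        yield from page["data"]
        url = page["next"]          # null next means no further pages

ids = [g["id"] for g in iter_guard_configurations("page1")]
assert ids == ["g1", "g2", "g3"]
```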
GuardConfigurationNemoInfoResponse
{
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
}
Configuration info for NeMo guards.
Properties
Name
Type
Required
Restrictions
Description
actions
string
false
maxLength: 4096
NeMo guardrails actions file
blockedTerms
string
true
maxLength: 4096
NeMo guardrails blocked terms list
credentialId
string¦null
false
NeMo guardrails credential ID (deprecated; use "openai_credential")
llmPrompts
string
false
maxLength: 4096
NeMo guardrails prompts
mainConfig
string
true
maxLength: 4096
Overall NeMo configuration YAML
railsConfig
string
true
maxLength: 4096
NeMo guardrails configuration Colang
GuardConfigurationPayloadModelInfo
{
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
}
Configuration info for guards using deployed models.
Properties
Name
Type
Required
Restrictions
Description
classNames
[string]
false
maxItems: 100
List of class names for multiclass models
inputColumnName
string
true
maxLength: 255
Input column name
modelId
string¦null
false
ID of registered model, for model guards.
modelName
string
false
maxLength: 255
Name of registered model, for model guards.
outputColumnName
string
true
maxLength: 255
Output column name
replacementTextColumnName
string
false
maxLength: 255
Name of the output column with replacement text. Required only if intervention.action is replace.
targetType
string¦null
false
Target type
Enumerated Values
Property
Value
targetType
[Binary
, Regression
, Multiclass
, TextGeneration
]
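The note above makes `replacementTextColumnName` conditionally required. A small client-side check, mirroring that rule, could look like this (an illustrative helper, not part of the API):

```python
def validate_model_info(model_info, intervention_action):
    """Illustrative validation of GuardConfigurationPayloadModelInfo.

    replacementTextColumnName is only required when the guard's
    intervention action is "replace", per the schema notes above.
    """
    errors = []
    for field in ("inputColumnName", "outputColumnName"):
        if not model_info.get(field):
            errors.append(f"{field} is required")
    if intervention_action == "replace" and not model_info.get("replacementTextColumnName"):
        errors.append("replacementTextColumnName is required when action is replace")
    return errors

info = {"inputColumnName": "text", "outputColumnName": "toxicity_score"}
assert validate_model_info(info, "block") == []
assert validate_model_info(info, "replace") == [
    "replacementTextColumnName is required when action is replace"
]
```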
GuardConfigurationPredictionEnvironmentsInUseListResponse
{
"count" : 0 ,
"data" : [
{
"id" : "string" ,
"name" : "string" ,
"usedBy" : [
{
"configurationId" : "string" ,
"deploymentId" : "string" ,
"name" : "string"
}
]
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Properties
Name
Type
Required
Restrictions
Description
count
integer
false
Number of items returned on this page.
data
[PredictionEnvironmentInUseResponse ]
true
maxItems: 32
list of prediction environments in use for this custom model version.
next
string(uri)¦null
true
URL pointing to the next page (if null, there is no next page).
previous
string(uri)¦null
true
URL pointing to the previous page (if null, there is no previous page).
totalCount
integer
true
The total number of items across all pages.
GuardConfigurationRetrieveResponse
{
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"deploymentId" : "string" ,
"description" : "string" ,
"entityId" : "string" ,
"entityType" : "customModel" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"stages" : [
"prompt"
],
"type" : "guardModel"
}
Properties
Name
Type
Required
Restrictions
Description
createdAt
string(date-time)
true
When the configuration was created.
creatorId
string¦null
false
ID of the user who created the Guard configuration.
creatorName
string
false
maxLength: 1000
Name of the user who created the Guard configuration.
deploymentId
string¦null
false
ID of deployed model, for model guards.
description
string
true
maxLength: 4096
Guard configuration description
entityId
string¦null
true
ID of custom model or playground for this guard.
entityType
string
true
Type of associated entity.
errorMessage
string¦null
false
Error message if the guard configuration is invalid.
id
string
true
Guard configuration object ID
intervention
GuardConfigurationInterventionResponse
false
Intervention configuration for the guard.
isValid
boolean
false
Whether the guard is valid or not.
llmType
string¦null
false
Type of LLM used by this guard
modelInfo
GuardModelInfoResponse
false
Configuration info for guards using deployed models.
name
string
true
maxLength: 255
Guard configuration name
nemoInfo
GuardConfigurationNemoInfoResponse
false
Configuration info for NeMo guards.
ootbType
string¦null
false
Guard template "Out of the Box" metric type
openaiApiBase
string¦null
false
maxLength: 255
Azure OpenAI API Base URL
openaiApiKey
string¦null
false
maxLength: 255
Azure OpenAI API Key
openaiCredential
string¦null
false
ID of user credential containing an OpenAI token.
openaiDeploymentId
string¦null
false
maxLength: 255
Azure OpenAI Deployment ID
stages
[string]
true
maxItems: 16
The stages where the guard is configured to run.
type
string
true
Guard configuration type
Enumerated Values
Property
Value
entityType
[customModel
, customModelVersion
, playground
]
llmType
[openAi
, azureOpenAi
]
ootbType
[token_count
, faithfulness
, rouge_1
]
type
[guardModel
, nemo
, ootb
, pii
, userModel
]
GuardConfigurationToCustomModelResponse
{
"customModelVersionId" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
customModelVersionId
string
true
ID of the new custom model version created.
GuardConfigurationToCustomModelVersion
{
"customModelId" : "string" ,
"data" : [
{
"deploymentId" : "string" ,
"description" : "string" ,
"errorMessage" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string" ,
"parameters" : [
"s"
],
"stages" : [
"prompt"
],
"type" : "guardModel"
}
],
"overallConfig" : {
"timeoutAction" : "block" ,
"timeoutSec" : 2
}
}
Properties
Name
Type
Required
Restrictions
Description
customModelId
string
true
ID of the custom model the user is working with.
data
[GuardConfigurationFullPost ]
true
maxItems: 200
List of complete guard configurations to push
overallConfig
OverallConfigUpdate
false
Overall moderation configuration to push (not specific to one guard)
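Putting it together, a push payload bundles one or more full guard configurations with an optional overall moderation config. A minimal sketch, with placeholder IDs:

```python
# Sketch: the push payload bundles guard configurations with an
# optional overall moderation config. IDs are placeholders.
push_payload = {
    "customModelId": "placeholder-custom-model-id",
    "data": [
        {
            "name": "token-limit",
            "description": "Limit prompt tokens",
            "type": "ootb",
            "ootbType": "token_count",
            "stages": ["prompt"],
        }
    ],
    "overallConfig": {"timeoutAction": "block", "timeoutSec": 2},
}

assert 1 <= len(push_payload["data"]) <= 200   # schema: maxItems 200
assert all({"name", "description", "type", "stages"} <= g.keys()
           for g in push_payload["data"])
```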
GuardConfigurationUpdate
{
"deploymentId" : "string" ,
"description" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"message" : "string" ,
"sendNotification" : false
},
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "string" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "string" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiCredential" : "string" ,
"openaiDeploymentId" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
deploymentId
string¦null
false
ID of deployed model, for model guards.
description
string
false
maxLength: 4096
Guard configuration description
intervention
GuardConfigurationInterventionResponse
false
Intervention configuration for the guard.
llmType
string¦null
false
Type of LLM used by this guard
modelInfo
GuardConfigurationPayloadModelInfo
false
Configuration info for guards using deployed models.
name
string
false
maxLength: 255
Guard configuration name
nemoInfo
GuardConfigurationNemoInfoResponse
false
Configuration info for NeMo guards.
openaiApiBase
string¦null
false
maxLength: 255
Azure OpenAI API Base URL
openaiApiKey
string¦null
false
maxLength: 255
Azure OpenAI API Key
openaiCredential
string¦null
false
ID of user credential containing an OpenAI token.
openaiDeploymentId
string¦null
false
maxLength: 255
Azure OpenAI Deployment ID
Enumerated Values
Property
Value
llmType
[openAi
, azureOpenAi
]
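Because every field in `GuardConfigurationUpdate` is optional, an update body only needs to carry the fields being changed. A hypothetical helper (the function name and check are illustrative, not part of the API) that builds such a body:

```python
# Sketch: GuardConfigurationUpdate has no required fields, so an
# update body can carry only what changes. Helper is illustrative.
def build_update(**changes):
    allowed = {
        "deploymentId", "description", "intervention", "llmType",
        "modelInfo", "name", "nemoInfo", "openaiApiBase",
        "openaiApiKey", "openaiCredential", "openaiDeploymentId",
    }
    unknown = set(changes) - allowed
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return changes

body = build_update(description="Tightened threshold", name="toxicity-guard-v2")
assert body == {"description": "Tightened threshold", "name": "toxicity-guard-v2"}
```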
GuardInterventionResponse
{
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"modifyMessage" : "string" ,
"sendNotification" : true
}
Intervention configuration for the guard.
Properties
Name
Type
Required
Restrictions
Description
action
string
true
Action to take if conditions are met
allowedActions
[string]
false
maxItems: 10
The actions this guard is allowed to take.
conditionLogic
string
false
Logic used to combine the conditions
conditions
[GuardConditionResponse ]
true
maxItems: 1
List of conditions to trigger intervention
modifyMessage
string
true
maxLength: 4096
Message to use if prompt or response is blocked
sendNotification
boolean
false
Create a notification event if intervention is triggered
Enumerated Values
Property
Value
action
[block
, report
, replace
]
conditionLogic
any
GuardModelInfoResponse
{
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
}
Configuration info for guards using deployed models.
Properties
Name
Type
Required
Restrictions
Description
classNames
[string]
false
maxItems: 100
List of class names for multiclass models
inputColumnName
string
true
maxLength: 255
Input column name
modelId
string¦null
false
ID of registered model, for model guards.
modelName
string
false
maxLength: 255
Name of registered model, for model guards.
outputColumnName
string
true
maxLength: 255
Output column name
replacementTextColumnName
string
false
maxLength: 255
Name of the output column with replacement text. Required only if intervention.action is replace.
targetType
string
true
Target type
Enumerated Values
Property
Value
targetType
[Binary
, Regression
, Multiclass
, TextGeneration
]
GuardNemoInfoResponse
{
"actions" : "" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
}
Configuration info for NeMo guards.
Properties
Name
Type
Required
Restrictions
Description
actions
string
false
maxLength: 4096
NeMo guardrails actions
blockedTerms
string
true
maxLength: 4096
NeMo guardrails blocked terms list
credentialId
string¦null
false
NeMo guardrails credential ID (deprecated; use "openai_api_key")
llmPrompts
string
false
maxLength: 4096
NeMo guardrails prompts
mainConfig
string
true
maxLength: 4096
Overall NeMo configuration YAML
railsConfig
string
true
maxLength: 4096
NeMo guardrails configuration Colang
GuardTemplateListResponse
{
"count" : 0 ,
"data" : [
{
"allowedStages" : [
"prompt"
],
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"description" : "string" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"modifyMessage" : "string" ,
"sendNotification" : true
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiDeploymentId" : "string" ,
"orgId" : "string" ,
"productionOnly" : true ,
"type" : "guardModel"
}
],
"next" : "http://example.com" ,
"previous" : "http://example.com" ,
"totalCount" : 0
}
Properties
Name
Type
Required
Restrictions
Description
count
integer
false
Number of items returned on this page.
data
[GuardTemplateRetrieveResponse ]
true
maxItems: 200
List of guard templates.
next
string(uri)¦null
true
URL pointing to the next page (if null, there is no next page).
previous
string(uri)¦null
true
URL pointing to the previous page (if null, there is no previous page).
totalCount
integer
true
The total number of items across all pages.
GuardTemplateRetrieveResponse
{
"allowedStages" : [
"prompt"
],
"createdAt" : "2019-08-24T14:15:22Z" ,
"creatorId" : "string" ,
"creatorName" : "string" ,
"description" : "string" ,
"errorMessage" : "string" ,
"id" : "string" ,
"intervention" : {
"action" : "block" ,
"allowedActions" : [
"block"
],
"conditionLogic" : "any" ,
"conditions" : [
{
"comparand" : true ,
"comparator" : "greaterThan"
}
],
"modifyMessage" : "string" ,
"sendNotification" : true
},
"isValid" : true ,
"llmType" : "openAi" ,
"modelInfo" : {
"classNames" : [
"string"
],
"inputColumnName" : "string" ,
"modelId" : "string" ,
"modelName" : "" ,
"outputColumnName" : "string" ,
"replacementTextColumnName" : "" ,
"targetType" : "Binary"
},
"name" : "string" ,
"nemoInfo" : {
"actions" : "" ,
"blockedTerms" : "string" ,
"credentialId" : "string" ,
"llmPrompts" : "" ,
"mainConfig" : "string" ,
"railsConfig" : "string"
},
"ootbType" : "token_count" ,
"openaiApiBase" : "string" ,
"openaiApiKey" : "string" ,
"openaiDeploymentId" : "string" ,
"orgId" : "string" ,
"productionOnly" : true ,
"type" : "guardModel"
}
Properties
Name
Type
Required
Restrictions
Description
allowedStages
[string]
true
maxItems: 16
The stages where the guard can run.
createdAt
string(date-time)
true
When the template was created.
creatorId
string¦null
false
ID of the user who created the Guard template.
creatorName
string
false
maxLength: 1000
Name of the user who created the Guard template.
description
string
true
maxLength: 4096
Guard template description
errorMessage
string¦null
false
Error message if the guard configuration is invalid.
id
string
true
Guard template object ID
intervention
GuardInterventionResponse
false
Intervention configuration for the guard.
isValid
boolean
false
True if the guard is fully configured and valid.
llmType
string¦null
false
Type of LLM used by this guard
modelInfo
GuardModelInfoResponse
false
Configuration info for guards using deployed models.
name
string
true
maxLength: 255
Guard template name
nemoInfo
GuardNemoInfoResponse
false
Configuration info for NeMo guards.
ootbType
string¦null
false
Guard template "Out of the Box" metric type
openaiApiBase
string¦null
false
maxLength: 255
Azure OpenAI API Base URL
openaiApiKey
string¦null
false
maxLength: 255
Azure OpenAI API Key
openaiDeploymentId
string¦null
false
maxLength: 255
Azure OpenAI Deployment ID
orgId
string¦null
false
Organization ID of the user who created the Guard template.
productionOnly
boolean¦null
false
Whether the guard is for production only, or can be used in both production and playground.
type
string
true
Guard template type
Enumerated Values
Property
Value
llmType
[openAi
, azureOpenAi
]
ootbType
[token_count
, faithfulness
, rouge_1
]
type
[guardModel
, nemo
, ootb
, pii
, userModel
]
HTTPDataStreamer
{
"type" : "http" ,
"url" : "string"
}
Stream CSV data chunks from HTTP
Properties
Name
Type
Required
Restrictions
Description
type
string
true
Type name for this intake type
url
string(url)
true
URL for the CSV file
Enumerated Values
HTTPIntake
{
"type" : "http" ,
"url" : "string"
}
Stream CSV data chunks from HTTP
Properties
Name
Type
Required
Restrictions
Description
type
string
true
Type name for this intake type
url
string(url)
true
URL for the CSV file
Enumerated Values
HTTPOutput
{
"headers" : {},
"method" : "POST" ,
"type" : "http" ,
"url" : "string"
}
Save CSV data chunks to HTTP data endpoint
Properties
Name
Type
Required
Restrictions
Description
headers
object
false
Extra headers to send with the request
method
string
true
Method to use when saving the CSV file
type
string
true
Type name for this output type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
method
[POST
, PUT
]
type
http
HttpOutputAdaptor
{
"headers" : {},
"method" : "POST" ,
"type" : "http" ,
"url" : "string"
}
Save CSV data chunks to HTTP data endpoint
Properties
Name
Type
Required
Restrictions
Description
headers
object
false
Extra headers to send with the request
method
string
true
Method to use when saving the CSV file
type
string
true
Type name for this output type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
method
[POST
, PUT
]
type
http
JDBCDataStreamer
{
"catalog" : "string" ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"fetchSize" : 1 ,
"query" : "string" ,
"schema" : "string" ,
"table" : "string" ,
"type" : "jdbc"
}
Stream CSV data chunks from JDBC
Properties
Name
Type
Required
Restrictions
Description
catalog
string
false
The name of the specified database catalog to read input data from.
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the credential holding information about a user with read access to the JDBC data source.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
dataStoreId
any
true
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
ID of the data store to connect to
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
fetchSize
integer
false
maximum: 1000000 minimum: 1
A user-specified fetch size, used to balance throughput and memory usage. Deprecated and ignored since v2.21.
query
string
false
A self-supplied SELECT statement for the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema
string
false
The name of the specified database schema to read input data from.
table
string
false
The name of the specified database table to read input data from.
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
anonymous
[redacted]
anonymous
[redacted]
type
jdbc
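The query template variables above are substituted server-side at run time, so the client simply embeds them verbatim. A sketch of a JDBC intake using one of them (the data store and credential IDs, table, and column names are placeholders):

```python
# Sketch: a JDBC intake using a scheduled-run template variable in
# the query. The server substitutes {{ last_completed_run_time }}
# at run time; nothing is resolved client-side.
jdbc_intake = {
    "type": "jdbc",
    "dataStoreId": "placeholder-data-store-id",
    "credentialId": "placeholder-credential-id",
    "query": (
        "SELECT * FROM scoring_data "
        "WHERE updated_at > '{{ last_completed_run_time }}'"
    ),
}

assert jdbc_intake["type"] == "jdbc"
assert "{{ last_completed_run_time }}" in jdbc_intake["query"]
```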
JDBCIntake
{
"catalog" : "string" ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"fetchSize" : 1 ,
"query" : "string" ,
"schema" : "string" ,
"table" : "string" ,
"type" : "jdbc"
}
Stream CSV data chunks from JDBC
Properties
Name
Type
Required
Restrictions
Description
catalog
string
false
The name of the specified database catalog to read input data from.
credentialId
string¦null
false
The ID of the credential holding information about a user with read access to the JDBC data source.
dataStoreId
string
true
ID of the data store to connect to
fetchSize
integer
false
maximum: 1000000 minimum: 1
A user-specified fetch size, used to balance throughput and memory usage. Deprecated and ignored since v2.21.
query
string
false
A self-supplied SELECT statement for the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema
string
false
The name of the specified database schema to read input data from.
table
string
false
The name of the specified database table to read input data from.
type
string
true
Type name for this intake type
Enumerated Values
JDBCOutput
{
"catalog" : "string" ,
"commitInterval" : 600 ,
"createTableIfNotExists" : false ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"schema" : "string" ,
"statementType" : "createTable" ,
"table" : "string" ,
"type" : "jdbc" ,
"updateColumns" : [
"string"
],
"whereColumns" : [
"string"
]
}
Save CSV data chunks via JDBC
Properties
Name
Type
Required
Restrictions
Description
catalog
string
false
The name of the specified database catalog to write output data to.
commitInterval
integer
false
maximum: 86400 minimum: 0
Defines the time interval, in seconds, between each commit to the JDBC source. If set to 0, the batch prediction operation will write the entire job before committing.
createTableIfNotExists
boolean
false
Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType
parameter.
credentialId
string¦null
false
The ID of the credential holding information about a user with write access to the JDBC data source.
dataStoreId
string
true
ID of the data store to connect to
schema
string
false
The name of the specified database schema to write the results to.
statementType
string
true
The statement type to use when writing the results. Deprecation warning: use of create_table is now discouraged; use one of the other statement types together with the createTableIfNotExists parameter set to true.
table
string
true
The name of the specified database table to write the results to. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
type
string
true
Type name for this output type
updateColumns
[string]
false
maxItems: 100
The column names to be updated if statementType is set to either update or upsert.
whereColumns
[string]
false
maxItems: 100
The column names to be used in the where clause if statementType is set to update or upsert.
Enumerated Values
Property
Value
statementType
[createTable
, create_table
, insert
, insertUpdate
, insert_update
, update
]
type
jdbc
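Following the deprecation note above, an upsert-style write pairs `insertUpdate` with `createTableIfNotExists` rather than the old `create_table` statement type. A sketch, with placeholder IDs and column names:

```python
# Sketch: a JDBC output configured for upsert-style writes. Per the
# deprecation note above, createTableIfNotExists replaces the old
# create_table statement type. IDs and names are placeholders.
jdbc_output = {
    "type": "jdbc",
    "dataStoreId": "placeholder-data-store-id",
    "table": "predictions_{{ current_run_timestamp }}",
    "statementType": "insertUpdate",
    "createTableIfNotExists": True,
    "updateColumns": ["prediction", "scored_at"],  # columns refreshed on match
    "whereColumns": ["row_id"],                    # match key for the upsert
}

assert jdbc_output["statementType"] != "create_table"  # deprecated value avoided
assert len(jdbc_output["updateColumns"]) <= 100
```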
JdbcOutputAdaptor
{
"catalog" : "string" ,
"commitInterval" : 600 ,
"createTableIfNotExists" : false ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"schema" : "string" ,
"statementType" : "createTable" ,
"table" : "string" ,
"type" : "jdbc" ,
"updateColumns" : [
"string"
],
"whereColumns" : [
"string"
]
}
Save CSV data chunks via JDBC
Properties
Name
Type
Required
Restrictions
Description
catalog
string
false
The name of the specified database catalog to write output data to.
commitInterval
integer
false
maximum: 86400 minimum: 0
Defines the time interval, in seconds, between each commit to the JDBC source. If set to 0, the batch prediction operation will write the entire job before committing.
createTableIfNotExists
boolean
false
Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType
parameter.
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the credential holding information about a user with write access to the JDBC data source.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
dataStoreId
any
true
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
ID of the data store to connect to
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
schema
string
false
The name of the specified database schema to write the results to.
statementType
string
true
The statement type to use when writing the results. Deprecation warning: use of create_table is now discouraged; use one of the other statement types together with the createTableIfNotExists parameter set to true.
table
string
true
The name of the specified database table to write the results to. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
type
string
true
Type name for this output type
updateColumns
[string]
false
maxItems: 100
The column names to be updated if statementType is set to either update or upsert.
whereColumns
[string]
false
maxItems: 100
The column names to be used in the where clause if statementType is set to update or upsert.
Enumerated Values
Property
Value
anonymous
[redacted]
anonymous
[redacted]
statementType
[createTable
, create_table
, insert
, insertUpdate
, insert_update
, update
]
type
jdbc
LocalFileDataStreamer
{
"async" : true ,
"multipart" : true ,
"type" : "local_file"
}
Stream CSV data chunks from local file storage
Properties
Name
Type
Required
Restrictions
Description
async
boolean¦null
false
The default behavior (async: true) submits the job to the queue and starts processing as soon as the upload begins. Setting it to false postpones submitting the job to the queue until all data has been uploaded. This is helpful if the user is on a slow connection and bottlenecked by upload speed; instead of blocking the queue, this allows others to submit jobs until the upload has finished.
multipart
boolean
false
Specify whether the data will be uploaded in multiple parts instead of as a single file
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
type
[local_file
, localFile
]
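The async flag above is essentially a queue-etiquette choice for slow uploads. A small illustrative helper (the function and its parameters are hypothetical, not part of the API) that encodes that trade-off:

```python
# Sketch: choosing local-file intake options. async=False keeps the
# job out of the queue until the upload completes, which is friendlier
# on slow connections; multipart uploads the data in several parts.
def local_file_intake(slow_connection=False, multipart=True):
    return {
        "type": "localFile",
        "async": not slow_connection,  # postpone queueing when uploads are slow
        "multipart": multipart,
    }

fast = local_file_intake()
slow = local_file_intake(slow_connection=True)
assert fast["async"] is True and slow["async"] is False
```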
LocalFileIntake
{
"async" : true ,
"multipart" : true ,
"type" : "local_file"
}
Stream CSV data chunks from local file storage
Properties
Name
Type
Required
Restrictions
Description
async
boolean¦null
false
The default behavior (async: true) submits the job to the queue and starts processing as soon as the upload begins. Setting it to false postpones submitting the job to the queue until all data has been uploaded. This is helpful if the user is on a slow connection and bottlenecked by upload speed; instead of blocking the queue, this allows others to submit jobs until the upload has finished.
multipart
boolean
false
Specify whether the data will be uploaded in multiple parts instead of as a single file
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
type
[local_file
, localFile
]
LocalFileOutput
Save CSV data chunks to local file storage
Properties
Name
Type
Required
Restrictions
Description
type
string
true
Type name for this output type
Enumerated Values
Property
Value
type
[local_file
, localFile
]
LocalFileOutputAdaptor
Save CSV data chunks to local file storage
Properties
Name
Type
Required
Restrictions
Description
type
string
true
Type name for this output type
Enumerated Values
Property
Value
type
[local_file
, localFile
]
MonitoringAggregation
{
"retentionPolicy" : "samples" ,
"retentionValue" : 0
}
Defines the aggregation policy for monitoring jobs.
Properties
Name
Type
Required
Restrictions
Description
retentionPolicy
string
false
Monitoring jobs retention policy for aggregation.
retentionValue
integer
false
Amount/percentage of samples to retain.
Enumerated Values
Property
Value
retentionPolicy
[samples
, percentage
]
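The retention semantics above can be sketched client-side. This is a minimal illustrative helper (the name `apply_retention` is hypothetical; the actual aggregation and retention happen server-side), assuming "samples" keeps at most `retentionValue` rows and "percentage" keeps roughly that percent of rows:

```python
def apply_retention(samples, retention_policy="samples", retention_value=0):
    """Sketch of the MonitoringAggregation retention policy.

    "samples": keep at most retention_value rows.
    "percentage": keep roughly retention_value percent of rows.
    Hypothetical helper for illustration only.
    """
    if retention_policy == "samples":
        return samples[:retention_value]
    if retention_policy == "percentage":
        keep = int(len(samples) * retention_value / 100)
        return samples[:keep]
    raise ValueError(f"unknown retentionPolicy: {retention_policy!r}")
```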
MonitoringColumnsMapping
{
"actedUponColumn" : "string" ,
"actualsTimestampColumn" : "string" ,
"actualsValueColumn" : "string" ,
"associationIdColumn" : "string" ,
"customMetricId" : "string" ,
"customMetricTimestampColumn" : "string" ,
"customMetricTimestampFormat" : "string" ,
"customMetricValueColumn" : "string" ,
"monitoredStatusColumn" : "string" ,
"predictionsColumns" : [
{
"className" : "string" ,
"columnName" : "string"
}
],
"reportDrift" : true ,
"reportPredictions" : true ,
"uniqueRowIdentifierColumns" : [
"string"
]
}
Column names mapping for monitoring
Properties
Name
Type
Required
Restrictions
Description
actedUponColumn
string
false
Name of column that contains value for acted_on.
actualsTimestampColumn
string
false
Name of column that contains actual timestamps.
actualsValueColumn
string
false
Name of column that contains actuals value.
associationIdColumn
string
false
Name of column that contains association Id.
customMetricId
string
false
Id of custom metric to process values for.
customMetricTimestampColumn
string
false
Name of column that contains custom metric values timestamps.
customMetricTimestampFormat
string
false
Format of timestamps from customMetricTimestampColumn.
customMetricValueColumn
string
false
Name of column that contains values for custom metric.
monitoredStatusColumn
string
false
Column name used to mark monitored rows.
predictionsColumns
any
false
Name of the column(s) which contain prediction values.
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
[PredictionColumMap ]
false
maxItems: 100
Map containing column name(s) and class name(s) for multiclass problem.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
Column name that contains the prediction for regressions problem.
continued
Name
Type
Required
Restrictions
Description
reportDrift
boolean
false
True to report drift, False otherwise.
reportPredictions
boolean
false
True to report prediction, False otherwise.
uniqueRowIdentifierColumns
[string]
false
maxItems: 100
Name(s) of the column(s) containing unique row identifiers.
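The `predictionsColumns` oneOf (a list of class/column maps for multiclass, or a single column-name string for regression) can be disambiguated as sketched below. The helper name `predictions_columns` is hypothetical and the check is purely illustrative; the API performs its own validation:

```python
def predictions_columns(value):
    """Normalize the oneOf `predictionsColumns` field (illustrative sketch).

    Multiclass: a list of {"className": ..., "columnName": ...} maps
    (maxItems: 100). Regression: a single column-name string.
    """
    if isinstance(value, str):
        return {"kind": "regression", "column": value}
    if isinstance(value, list):
        if len(value) > 100:
            raise ValueError("predictionsColumns: maxItems is 100")
        for item in value:
            if not {"className", "columnName"} <= item.keys():
                raise ValueError("each entry needs className and columnName")
        return {"kind": "multiclass", "columns": value}
    raise TypeError("predictionsColumns must be a string or a list of maps")
```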
MonitoringOutputSettings
{
"monitoredStatusColumn" : "string" ,
"uniqueRowIdentifierColumns" : [
"string"
]
}
Output settings for monitoring jobs
Properties
Name
Type
Required
Restrictions
Description
monitoredStatusColumn
string
true
Column name used to mark monitored rows.
uniqueRowIdentifierColumns
[string]
true
maxItems: 100
Name(s) of the column(s) containing unique row identifiers.
OverallConfigUpdate
{
"timeoutAction" : "block" ,
"timeoutSec" : 2
}
Overall moderation configuration to push (not specific to one guard)
Properties
Name
Type
Required
Restrictions
Description
timeoutAction
string
true
Action to take if timeout occurs
timeoutSec
integer
true
minimum: 2
Timeout value in seconds for any guard
Enumerated Values
Property
Value
timeoutAction
[block
, score
]
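The constraints on OverallConfigUpdate (timeoutAction limited to block/score, timeoutSec at least 2) can be checked before sending the payload. A minimal sketch, assuming a hypothetical `overall_config_update` builder; the service validates the request on its side as well:

```python
def overall_config_update(timeout_action, timeout_sec):
    """Build an OverallConfigUpdate payload, enforcing the documented
    constraints (illustrative sketch only)."""
    if timeout_action not in ("block", "score"):
        raise ValueError("timeoutAction must be 'block' or 'score'")
    if timeout_sec < 2:
        raise ValueError("timeoutSec minimum is 2")
    return {"timeoutAction": timeout_action, "timeoutSec": timeout_sec}
```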
OverallModerationConfigurationResponse
{
"entityId" : "string" ,
"entityType" : "customModel" ,
"timeoutAction" : "block" ,
"timeoutSec" : 2 ,
"updatedAt" : "2019-08-24T14:15:22Z" ,
"updaterId" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
entityId
string
true
ID of custom model or playground for this configuration.
entityType
string
true
Type of associated entity.
timeoutAction
string
true
Action to take if timeout occurs
timeoutSec
integer
true
minimum: 2
Timeout value in seconds for any guard
updatedAt
string(date-time)
false
When the configuration was updated.
updaterId
string¦null
true
ID of user who updated the configuration.
Enumerated Values
Property
Value
entityType
[customModel
, customModelVersion
, playground
]
timeoutAction
[block
, score
]
OverallModerationConfigurationUpdate
{
"entityId" : "string" ,
"entityType" : "customModel" ,
"timeoutAction" : "block" ,
"timeoutSec" : 0
}
Properties
Name
Type
Required
Restrictions
Description
entityId
string
true
ID of custom model or playground for this configuration.
entityType
string
true
Type of associated entity.
timeoutAction
string
true
Action to take if timeout occurs
timeoutSec
integer
true
minimum: 0
Timeout value in seconds for any guard
Enumerated Values
Property
Value
entityType
[customModel
, customModelVersion
, playground
]
timeoutAction
[block
, score
]
{
"baseImageId" : "string" ,
"created" : "2019-08-24T14:15:22Z" ,
"datarobotRuntimeImageTag" : "string" ,
"dockerImageId" : "string" ,
"filename" : "string" ,
"hash" : "string" ,
"hashAlgorithm" : "SHA256" ,
"imageSize" : 0 ,
"shortDockerImageId" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
baseImageId
string
true
Internal base image entity id for troubleshooting purposes
created
string(date-time)
true
ISO formatted image upload date
datarobotRuntimeImageTag
string¦null
false
For internal use only.
dockerImageId
string
true
A Docker image id (immutable, content-based) hash associated with the given image
filename
string
true
The name of the file when the download is requested
hash
string
true
Hash of the image content, intended for verifying content after download. The algorithm used for hashing is specified in the hashAlgorithm
field. Note that the hash is calculated over the compressed image data.
hashAlgorithm
string
true
An algorithm name used for calculating content hash
imageSize
integer
true
Size in bytes of the compressed PPS image data
shortDockerImageId
string
true
A 12-character shortened version of the dockerImageId
, as shown in the output of the 'docker images' command
Enumerated Values
Property
Value
hashAlgorithm
SHA256
PredictionColumMap
{
"className" : "string" ,
"columnName" : "string"
}
Properties
Name
Type
Required
Restrictions
Description
className
string
true
Class name.
columnName
string
true
Column name that contains the prediction for a specific class.
PredictionEnvironmentInUseResponse
{
"id" : "string" ,
"name" : "string" ,
"usedBy" : [
{
"configurationId" : "string" ,
"deploymentId" : "string" ,
"name" : "string"
}
]
}
Properties
Name
Type
Required
Restrictions
Description
id
string
true
ID of prediction environment.
name
string
true
Name of prediction environment.
usedBy
[DeploymentAndGuardResponse ]
true
maxItems: 32
Guards using this prediction environment.
S3DataStreamer
{
"credentialId" : "string" ,
"endpointUrl" : "string" ,
"format" : "csv" ,
"type" : "s3" ,
"url" : "string"
}
Stream CSV data chunks from Amazon S3 cloud storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
Use the specified credential to access the url
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
endpointUrl
string(url)
false
Endpoint URL for the S3 connection (omit to use the default)
format
string
false
Type of input file format
type
string
true
Type name for this intake type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
anonymous
[redacted]
format
[csv
, parquet
]
type
s3
S3Intake
{
"credentialId" : "string" ,
"endpointUrl" : "string" ,
"format" : "csv" ,
"type" : "s3" ,
"url" : "string"
}
Stream CSV data chunks from Amazon S3 cloud storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
string¦null
false
Use the specified credential to access the url
endpointUrl
string(url)
false
Endpoint URL for the S3 connection (omit to use the default)
format
string
false
Type of input file format
type
string
true
Type name for this intake type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
format
[csv
, parquet
]
type
s3
S3Output
{
"credentialId" : "string" ,
"endpointUrl" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"serverSideEncryption" : {
"algorithm" : "string" ,
"customerAlgorithm" : "string" ,
"customerKey" : "string" ,
"kmsEncryptionContext" : "string" ,
"kmsKeyId" : "string"
},
"type" : "s3" ,
"url" : "string"
}
Saves CSV data chunks to Amazon S3 cloud storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
string¦null
false
Use the specified credential to access the url
endpointUrl
string(url)
false
Endpoint URL for the S3 connection (omit to use the default)
format
string
false
Type of output file format
partitionColumns
[string]
false
maxItems: 100
For Parquet directory scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output URL ends with a slash, "/").
serverSideEncryption
ServerSideEncryption
false
Configure Server-Side Encryption for S3 output
type
string
true
Type name for this output type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
format
[csv
, parquet
]
type
s3
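Assembling an S3Output settings dict can be sketched as follows. This is an illustrative helper (the name `s3_output` and its checks are assumptions drawn from the field descriptions above; the API performs the authoritative validation), including the directory-scoring rule that `partitionColumns` is required when the URL ends with "/":

```python
def s3_output(url, file_format="csv", partition_columns=None, credential_id=None):
    """Build an S3Output settings dict (illustrative sketch).

    Per the partitionColumns description: when scoring to a directory
    (the output url ends with "/"), at least one partition column is
    required, and partitioning is for Parquet output only.
    """
    settings = {"type": "s3", "url": url, "format": file_format}
    if url.endswith("/") and not partition_columns:
        raise ValueError("partitionColumns is required when url ends with '/'")
    if partition_columns:
        if file_format != "parquet":
            raise ValueError("partitionColumns is for Parquet output only")
        if len(partition_columns) > 100:
            raise ValueError("partitionColumns: maxItems is 100")
        settings["partitionColumns"] = list(partition_columns)
    if credential_id is not None:
        settings["credentialId"] = credential_id
    return settings
```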
S3OutputAdaptor
{
"credentialId" : "string" ,
"endpointUrl" : "string" ,
"format" : "csv" ,
"partitionColumns" : [
"string"
],
"serverSideEncryption" : {
"algorithm" : "string" ,
"customerAlgorithm" : "string" ,
"customerKey" : "string" ,
"kmsEncryptionContext" : "string" ,
"kmsKeyId" : "string"
},
"type" : "s3" ,
"url" : "string"
}
Saves CSV data chunks to Amazon S3 cloud storage
Properties
Name
Type
Required
Restrictions
Description
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
Use the specified credential to access the url
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
endpointUrl
string(url)
false
Endpoint URL for the S3 connection (omit to use the default)
format
string
false
Type of output file format
partitionColumns
[string]
false
maxItems: 100
For Parquet directory scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output URL ends with a slash, "/").
serverSideEncryption
ServerSideEncryption
false
Configure Server-Side Encryption for S3 output
type
string
true
Type name for this output type
url
string(url)
true
URL for the CSV file
Enumerated Values
Property
Value
anonymous
[redacted]
format
[csv
, parquet
]
type
s3
Schedule
{
"dayOfMonth" : [
"*"
],
"dayOfWeek" : [
"*"
],
"hour" : [
"*"
],
"minute" : [
"*"
],
"month" : [
"*"
]
}
The scheduling information defining how often and when to execute this job to the Job Scheduling service. Optional if enabled = False.
Properties
Name
Type
Required
Restrictions
Description
dayOfMonth
[anyOf]
true
The date(s) of the month that the job will run. Allowed values are either [1 ... 31]
or ["*"]
for all days of the month. This field is additive with dayOfWeek
, meaning the job will run both on the date(s) defined in this field and the day specified by dayOfWeek
(for example, dates 1st, 2nd, 3rd, plus every Tuesday). If dayOfMonth
is set to ["*"]
and dayOfWeek
is defined, the scheduler will trigger on every day of the month that matches dayOfWeek
(for example, Tuesday the 2nd, 9th, 16th, 23rd, 30th). Invalid dates such as February 31st are ignored.
anyOf
Name
Type
Required
Restrictions
Description
» anonymous
number
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
dayOfWeek
[anyOf]
true
The day(s) of the week that the job will run. Allowed values are [0 .. 6]
, where (Sunday=0), or ["*"]
, for all days of the week. Strings, either 3-letter abbreviations or the full name of the day, can be used interchangeably (e.g., "sunday", "Sunday", "sun", or "Sun" all map to [0])
. This field is additive with dayOfMonth
, meaning the job will run both on the date specified by dayOfMonth
and the day defined in this field.
anyOf
Name
Type
Required
Restrictions
Description
» anonymous
number
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
hour
[anyOf]
true
The hour(s) of the day that the job will run. Allowed values are either ["*"]
meaning every hour of the day or [0 ... 23]
.
anyOf
Name
Type
Required
Restrictions
Description
» anonymous
number
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
minute
[anyOf]
true
The minute(s) of the day that the job will run. Allowed values are either ["*"]
meaning every minute of the day or [0 ... 59]
.
anyOf
Name
Type
Required
Restrictions
Description
» anonymous
number
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
month
[anyOf]
true
The month(s) of the year that the job will run. Allowed values are either [1 ... 12]
or ["*"]
for all months of the year. Strings, either 3-letter abbreviations or the full name of the month, can be used interchangeably (e.g., "jan" or "october"). Months that are not compatible with dayOfMonth
are ignored, for example {"dayOfMonth": [31], "month":["feb"]}
.
anyOf
Name
Type
Required
Restrictions
Description
» anonymous
number
false
none
or
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
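The additive dayOfMonth/dayOfWeek semantics described above can be sketched as a matcher. This is a simplified illustration (the function `schedule_matches` is hypothetical, handles only integer values or ["*"], and ignores string day/month names, which the real scheduler accepts):

```python
import datetime

def schedule_matches(schedule, when):
    """Check whether a datetime matches a Schedule dict (sketch of the
    documented semantics; values are ints or ["*"] in this sketch)."""
    def field_ok(values, actual):
        return values == ["*"] or actual in values

    if not (field_ok(schedule["minute"], when.minute)
            and field_ok(schedule["hour"], when.hour)
            and field_ok(schedule["month"], when.month)):
        return False

    dom, dow = schedule["dayOfMonth"], schedule["dayOfWeek"]
    # The document uses Sunday=0; Python's weekday() uses Monday=0.
    sunday0 = (when.weekday() + 1) % 7
    if dom == ["*"] and dow == ["*"]:
        return True
    if dom == ["*"]:
        return sunday0 in dow
    if dow == ["*"]:
        return when.day in dom
    # Additive: run on the listed dates AND on the listed weekdays.
    return when.day in dom or sunday0 in dow
```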
ServerSideEncryption
{
"algorithm" : "string" ,
"customerAlgorithm" : "string" ,
"customerKey" : "string" ,
"kmsEncryptionContext" : "string" ,
"kmsKeyId" : "string"
}
Configure Server-Side Encryption for S3 output
Properties
Name
Type
Required
Restrictions
Description
algorithm
string
false
The server-side encryption algorithm used when storing this object in Amazon S3 (for example, AES256, aws:kms).
customerAlgorithm
string
false
Specifies the algorithm to use when encrypting the object (for example, AES256).
customerKey
string
false
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and is then discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in customerAlgorithm and must be sent as a base64-encoded string.
kmsEncryptionContext
string
false
Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a base64-encoded UTF-8 string holding JSON with the encryption context key-value pairs.
kmsKeyId
string
false
Specifies the ID of the symmetric customer managed key to use for object encryption.
SnowflakeDataStreamer
{
"catalog" : "string" ,
"cloudStorageCredentialId" : "string" ,
"cloudStorageType" : "azure" ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"externalStage" : "string" ,
"query" : "string" ,
"schema" : "string" ,
"table" : "string" ,
"type" : "snowflake"
}
Stream CSV data chunks from Snowflake
Properties
Name
Type
Required
Restrictions
Description
catalog
string
false
The name of the specified database catalog to read input data from.
cloudStorageCredentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the credential holding information about a user with read access to the cloud storage.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
cloudStorageType
string
false
Type name for cloud storage
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the credential holding information about a user with read access to the Snowflake data source.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
dataStoreId
any
true
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
ID of the data store to connect to
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
externalStage
string
true
External storage
query
string
false
A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available that will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema
string
false
The name of the specified database schema to read input data from.
table
string
false
The name of the specified database table to read input data from.
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
anonymous
[redacted]
cloudStorageType
[azure
, gcp
, s3
]
anonymous
[redacted]
anonymous
[redacted]
type
snowflake
SnowflakeIntake
{
"catalog" : "string" ,
"cloudStorageCredentialId" : "string" ,
"cloudStorageType" : "azure" ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"externalStage" : "string" ,
"query" : "string" ,
"schema" : "string" ,
"table" : "string" ,
"type" : "snowflake"
}
Stream CSV data chunks from Snowflake
Properties
Name
Type
Required
Restrictions
Description
catalog
string
false
The name of the specified database catalog to read input data from.
cloudStorageCredentialId
string¦null
false
The ID of the credential holding information about a user with read access to the cloud storage.
cloudStorageType
string
false
Type name for cloud storage
credentialId
string¦null
false
The ID of the credential holding information about a user with read access to the Snowflake data source.
dataStoreId
string
true
ID of the data store to connect to
externalStage
string
true
External storage
query
string
false
A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available that will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema
string
false
The name of the specified database schema to read input data from.
table
string
false
The name of the specified database table to read input data from.
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
cloudStorageType
[azure
, gcp
, s3
]
type
snowflake
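The job-definition template variables mentioned in the query description are substituted with timestamps at run time. A minimal sketch of that substitution, assuming a hypothetical `render_query` helper (the real substitution is performed by the batch job service):

```python
def render_query(query, variables):
    """Substitute job-definition template variables such as
    {{ last_completed_run_time }} into an intake query (illustrative sketch)."""
    for name, value in variables.items():
        query = query.replace("{{ %s }}" % name, str(value))
    return query
```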
SnowflakeOutput
{
"catalog" : "string" ,
"cloudStorageCredentialId" : "string" ,
"cloudStorageType" : "azure" ,
"createTableIfNotExists" : false ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"externalStage" : "string" ,
"schema" : "string" ,
"statementType" : "insert" ,
"table" : "string" ,
"type" : "snowflake"
}
Save CSV data chunks to Snowflake in bulk
Properties
Name
Type
Required
Restrictions
Description
catalog
string
false
The name of the specified database catalog to write output data to.
cloudStorageCredentialId
string¦null
false
The ID of the credential holding information about a user with write access to the cloud storage.
cloudStorageType
string
false
Type name for cloud storage
createTableIfNotExists
boolean
false
Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType
parameter.
credentialId
string¦null
false
The ID of the credential holding information about a user with write access to the Snowflake data source.
dataStoreId
string
true
ID of the data store to connect to
externalStage
string
true
External storage
schema
string
false
The name of the specified database schema to write results to.
statementType
string
true
The statement type to use when writing the results.
table
string
true
The name of the specified database table to write results to.
type
string
true
Type name for this output type
Enumerated Values
Property
Value
cloudStorageType
[azure
, gcp
, s3
]
statementType
[insert
, create_table
, createTable
]
type
snowflake
SnowflakeOutputAdaptor
{
"catalog" : "string" ,
"cloudStorageCredentialId" : "string" ,
"cloudStorageType" : "azure" ,
"createTableIfNotExists" : false ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"externalStage" : "string" ,
"schema" : "string" ,
"statementType" : "insert" ,
"table" : "string" ,
"type" : "snowflake"
}
Save CSV data chunks to Snowflake in bulk
Properties
Name
Type
Required
Restrictions
Description
catalog
string
false
The name of the specified database catalog to write output data to.
cloudStorageCredentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the credential holding information about a user with write access to the cloud storage.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
cloudStorageType
string
false
Type name for cloud storage
createTableIfNotExists
boolean
false
Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType
parameter.
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the credential holding information about a user with write access to the Snowflake data source.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
dataStoreId
any
true
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
ID of the data store to connect to
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
externalStage
string
true
External storage
schema
string
false
The name of the specified database schema to write results to.
statementType
string
true
The statement type to use when writing the results.
table
string
true
The name of the specified database table to write results to.
type
string
true
Type name for this output type
Enumerated Values
Property
Value
anonymous
[redacted]
cloudStorageType
[azure
, gcp
, s3
]
anonymous
[redacted]
anonymous
[redacted]
statementType
[insert
, create_table
, createTable
]
type
snowflake
SynapseDataStreamer
{
"cloudStorageCredentialId" : "string" ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"externalDataSource" : "string" ,
"query" : "string" ,
"schema" : "string" ,
"table" : "string" ,
"type" : "synapse"
}
Stream CSV data chunks from Azure Synapse
Properties
Name
Type
Required
Restrictions
Description
cloudStorageCredentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the Azure credential holding information about a user with read access to the cloud storage.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the credential holding information about a user with read access to the JDBC data source.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
dataStoreId
any
true
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
ID of the data store to connect to
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
externalDataSource
string
true
External data source name
query
string
false
A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available that will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema
string
false
The name of the specified database schema to read input data from.
table
string
false
The name of the specified database table to read input data from.
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
anonymous
[redacted]
anonymous
[redacted]
anonymous
[redacted]
type
synapse
SynapseIntake
{
"cloudStorageCredentialId" : "string" ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"externalDataSource" : "string" ,
"query" : "string" ,
"schema" : "string" ,
"table" : "string" ,
"type" : "synapse"
}
Stream CSV data chunks from Azure Synapse
Properties
Name
Type
Required
Restrictions
Description
cloudStorageCredentialId
string¦null
false
The ID of the Azure credential holding information about a user with read access to the cloud storage.
credentialId
string¦null
false
The ID of the credential holding information about a user with read access to the JDBC data source.
dataStoreId
string
true
ID of the data store to connect to
externalDataSource
string
true
External data source name
query
string
false
A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available that will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema
string
false
The name of the specified database schema to read input data from.
table
string
false
The name of the specified database table to read input data from.
type
string
true
Type name for this intake type
Enumerated Values
Property
Value
type
synapse
SynapseOutput
{
"cloudStorageCredentialId" : "string" ,
"createTableIfNotExists" : false ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"externalDataSource" : "string" ,
"schema" : "string" ,
"statementType" : "insert" ,
"table" : "string" ,
"type" : "synapse"
}
Save CSV data chunks to Azure Synapse in bulk
Properties
Name
Type
Required
Restrictions
Description
cloudStorageCredentialId
string¦null
false
The ID of the credential holding information about a user with write access to the cloud storage.
createTableIfNotExists
boolean
false
Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType
parameter.
credentialId
string¦null
false
The ID of the credential holding information about a user with write access to the JDBC data source.
dataStoreId
string
true
ID of the data store to connect to
externalDataSource
string
true
External data source name
schema
string
false
The name of the specified database schema to write results to.
statementType
string
true
The statement type to use when writing the results.
table
string
true
The name of the specified database table to write results to.
type
string
true
Type name for this output type
Enumerated Values
Property
Value
statementType
[insert
, create_table
, createTable
]
type
synapse
SynapseOutputAdaptor
{
"cloudStorageCredentialId" : "string" ,
"createTableIfNotExists" : false ,
"credentialId" : "string" ,
"dataStoreId" : "string" ,
"externalDataSource" : "string" ,
"schema" : "string" ,
"statementType" : "insert" ,
"table" : "string" ,
"type" : "synapse"
}
Save CSV data chunks to Azure Synapse in bulk
Properties
Name
Type
Required
Restrictions
Description
cloudStorageCredentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the credential holding information about a user with write access to the cloud storage.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
createTableIfNotExists
boolean
false
Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType
parameter.
credentialId
any
false
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string¦null
false
The ID of the credential holding information about a user with write access to the JDBC data source.
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
dataStoreId
any
true
Either the populated value of the field or [redacted] due to permission settings
oneOf
Name
Type
Required
Restrictions
Description
» anonymous
string
false
ID of the data store to connect to
xor
Name
Type
Required
Restrictions
Description
» anonymous
string
false
none
continued
Name
Type
Required
Restrictions
Description
externalDataSource
string
true
External data source name
schema
string
false
The name of the specified database schema to write results to.
statementType
string
true
The statement type to use when writing the results.
table
string
true
The name of the specified database table to write results to.
type
string
true
Type name for this output type
Enumerated Values
Property
Value
anonymous
[redacted]
anonymous
[redacted]
anonymous
[redacted]
statementType
[insert
, create_table
, createTable
]
type
synapse
Updated September 18, 2024