Predictions

This page outlines the operations, endpoints, parameters, and example requests and responses for the Predictions API.

GET /api/v2/batchPredictionJobDefinitions/

List all available Batch Prediction job definitions

Code samples

# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/?offset=0&limit=100" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned
searchName query string false A human-readable name for the definition; must be unique across organizations.
deploymentId query string false Includes only definitions for this particular deployment

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "batchPredictionJob": {
        "abortOnError": true,
        "batchJobType": "monitoring",
        "chunkSize": "auto",
        "columnNamesRemapping": {},
        "csvSettings": {
          "delimiter": ",",
          "encoding": "utf-8",
          "quotechar": "\""
        },
        "deploymentId": "string",
        "disableRowLevelErrorHandling": false,
        "explanationAlgorithm": "shap",
        "explanationClassNames": [
          "string"
        ],
        "explanationNumTopClasses": 1,
        "includePredictionStatus": false,
        "includeProbabilities": true,
        "includeProbabilitiesClasses": [],
        "intakeSettings": {
          "type": "localFile"
        },
        "maxExplanations": 0,
        "maxNgramExplanations": 0,
        "modelId": "string",
        "modelPackageId": "string",
        "monitoringAggregation": {
          "retentionPolicy": "samples",
          "retentionValue": 0
        },
        "monitoringBatchPrefix": "string",
        "monitoringColumns": {
          "actedUponColumn": "string",
          "actualsTimestampColumn": "string",
          "actualsValueColumn": "string",
          "associationIdColumn": "string",
          "customMetricId": "string",
          "customMetricTimestampColumn": "string",
          "customMetricTimestampFormat": "string",
          "customMetricValueColumn": "string",
          "monitoredStatusColumn": "string",
          "predictionsColumns": [
            {
              "className": "string",
              "columnName": "string"
            }
          ],
          "uniqueRowIdentifierColumns": [
            "string"
          ]
        },
        "monitoringOutputSettings": {
          "monitoredStatusColumn": "string",
          "uniqueRowIdentifierColumns": [
            "string"
          ]
        },
        "numConcurrent": 0,
        "outputSettings": {
          "credentialId": "string",
          "format": "csv",
          "partitionColumns": [
            "string"
          ],
          "type": "azure",
          "url": "string"
        },
        "passthroughColumns": [
          "string"
        ],
        "passthroughColumnsSet": "all",
        "pinnedModelId": "string",
        "predictionInstance": {
          "apiKey": "string",
          "datarobotKey": "string",
          "hostName": "string",
          "sslEnabled": true
        },
        "predictionWarningEnabled": true,
        "redactedFields": [
          "string"
        ],
        "skipDriftTracking": false,
        "thresholdHigh": 0,
        "thresholdLow": 0,
        "timeseriesSettings": {
          "forecastPoint": "2019-08-24T14:15:22Z",
          "relaxKnownInAdvanceFeaturesCheck": false,
          "type": "forecast"
        }
      },
      "created": "2019-08-24T14:15:22Z",
      "createdBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      },
      "enabled": false,
      "id": "string",
      "lastFailedRunTime": "2019-08-24T14:15:22Z",
      "lastScheduledRunTime": "2019-08-24T14:15:22Z",
      "lastStartedJobStatus": "INITIALIZING",
      "lastStartedJobTime": "2019-08-24T14:15:22Z",
      "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
      "name": "string",
      "nextScheduledRunTime": "2019-08-24T14:15:22Z",
      "schedule": {
        "dayOfMonth": [
          "*"
        ],
        "dayOfWeek": [
          "*"
        ],
        "hour": [
          "*"
        ],
        "minute": [
          "*"
        ],
        "month": [
          "*"
        ]
      },
      "updated": "2019-08-24T14:15:22Z",
      "updatedBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      }
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0
}

Responses

Status Meaning Description Schema
200 OK List of all available jobs BatchPredictionJobDefinitionsListResponse
422 Unprocessable Entity Your input data or query arguments were invalid or incompatible None
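
The next and previous fields in the response are ready-made paging links. As a sketch (assuming jq is available), you can also walk all pages by advancing offset:

# Sketch: page through all definitions, 100 at a time (assumes jq is installed)
OFFSET=0
while :; do
  PAGE=$(curl -s "https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/?offset=${OFFSET}&limit=100" \
    -H "Accept: application/json" \
    -H "Authorization: Bearer {access-token}")
  echo "$PAGE" | jq -r '.data[].name'
  [ "$(echo "$PAGE" | jq -r '.next')" = "null" ] && break
  OFFSET=$((OFFSET + 100))
done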

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/batchPredictionJobDefinitions/

Create a Batch Prediction job definition: a configuration for a Batch Prediction job that can be executed manually on request or, if enabled, run on a schedule. The API payload is the same as for /batchPredictions, along with the optional enabled and schedule items.

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "enabled": true,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "name": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "credentialId": "string",
    "format": "csv",
    "partitionColumns": [
      "string"
    ],
    "type": "azure",
    "url": "string"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionThreshold": 1,
  "predictionWarningEnabled": true,
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}
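
For instance, a minimal sketch of a definition that runs daily at 04:00. The deployment ID and intake/output settings are placeholders; scheduled runs would normally point at remote storage rather than localFile:

# Minimal sketch: a definition scheduled daily at 04:00 (placeholder settings)
curl -X POST https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "name": "nightly-scoring",
    "deploymentId": "{deploymentId}",
    "enabled": true,
    "intakeSettings": {"type": "localFile"},
    "outputSettings": {"type": "localFile"},
    "schedule": {
      "minute": [0],
      "hour": [4],
      "dayOfMonth": ["*"],
      "dayOfWeek": ["*"],
      "month": ["*"]
    }
  }'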

Parameters

Name In Type Required Description
body body BatchPredictionJobDefinitionsCreate false none

Example responses

202 Response

{
  "batchPredictionJob": {
    "abortOnError": true,
    "batchJobType": "monitoring",
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "maxNgramExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringAggregation": {
      "retentionPolicy": "samples",
      "retentionValue": 0
    },
    "monitoringBatchPrefix": "string",
    "monitoringColumns": {
      "actedUponColumn": "string",
      "actualsTimestampColumn": "string",
      "actualsValueColumn": "string",
      "associationIdColumn": "string",
      "customMetricId": "string",
      "customMetricTimestampColumn": "string",
      "customMetricTimestampFormat": "string",
      "customMetricValueColumn": "string",
      "monitoredStatusColumn": "string",
      "predictionsColumns": [
        {
          "className": "string",
          "columnName": "string"
        }
      ],
      "uniqueRowIdentifierColumns": [
        "string"
      ]
    },
    "monitoringOutputSettings": {
      "monitoredStatusColumn": "string",
      "uniqueRowIdentifierColumns": [
        "string"
      ]
    },
    "numConcurrent": 0,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "enabled": false,
  "id": "string",
  "lastFailedRunTime": "2019-08-24T14:15:22Z",
  "lastScheduledRunTime": "2019-08-24T14:15:22Z",
  "lastStartedJobStatus": "INITIALIZING",
  "lastStartedJobTime": "2019-08-24T14:15:22Z",
  "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
  "name": "string",
  "nextScheduledRunTime": "2019-08-24T14:15:22Z",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "updated": "2019-08-24T14:15:22Z",
  "updatedBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  }
}

Responses

Status Meaning Description Schema
202 Accepted Job details for the created Batch Prediction job definition BatchPredictionJobDefinitionsResponse
403 Forbidden You are not authorized to create a job definition on this deployment due to your permissions role None
422 Unprocessable Entity You tried to create a job definition with incompatible or missing parameters, so a fully functioning job definition could not be created None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

DELETE /api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/

Delete a Batch Prediction job definition

Code samples

# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/ \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
jobDefinitionId path string true ID of the Batch Prediction job definition

Responses

Status Meaning Description Schema
204 No Content none None
403 Forbidden You are not authorized to delete this job definition due to your permissions role None
404 Not Found Job was deleted, never existed or you do not have access to it None
409 Conflict Job could not be deleted, as there are currently running jobs in the queue. None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/

Retrieve a Batch Prediction job definition

Code samples

# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/ \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
jobDefinitionId path string true ID of the Batch Prediction job definition

Example responses

200 Response

{
  "batchPredictionJob": {
    "abortOnError": true,
    "batchJobType": "monitoring",
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "maxNgramExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringAggregation": {
      "retentionPolicy": "samples",
      "retentionValue": 0
    },
    "monitoringBatchPrefix": "string",
    "monitoringColumns": {
      "actedUponColumn": "string",
      "actualsTimestampColumn": "string",
      "actualsValueColumn": "string",
      "associationIdColumn": "string",
      "customMetricId": "string",
      "customMetricTimestampColumn": "string",
      "customMetricTimestampFormat": "string",
      "customMetricValueColumn": "string",
      "monitoredStatusColumn": "string",
      "predictionsColumns": [
        {
          "className": "string",
          "columnName": "string"
        }
      ],
      "uniqueRowIdentifierColumns": [
        "string"
      ]
    },
    "monitoringOutputSettings": {
      "monitoredStatusColumn": "string",
      "uniqueRowIdentifierColumns": [
        "string"
      ]
    },
    "numConcurrent": 0,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "enabled": false,
  "id": "string",
  "lastFailedRunTime": "2019-08-24T14:15:22Z",
  "lastScheduledRunTime": "2019-08-24T14:15:22Z",
  "lastStartedJobStatus": "INITIALIZING",
  "lastStartedJobTime": "2019-08-24T14:15:22Z",
  "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
  "name": "string",
  "nextScheduledRunTime": "2019-08-24T14:15:22Z",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "updated": "2019-08-24T14:15:22Z",
  "updatedBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  }
}

Responses

Status Meaning Description Schema
200 OK Job details for the requested Batch Prediction job definition BatchPredictionJobDefinitionsResponse
404 Not Found Job was deleted, never existed or you do not have access to it None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

PATCH /api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/

Update a Batch Prediction job definition

Code samples

# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "enabled": true,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "name": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "credentialId": "string",
    "format": "csv",
    "partitionColumns": [
      "string"
    ],
    "type": "azure",
    "url": "string"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionThreshold": 1,
  "predictionWarningEnabled": true,
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}
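
For example, a sketch that only disables scheduled execution (assuming the endpoint accepts a partial payload):

# Sketch: disable a definition (assumes partial updates are accepted)
curl -X PATCH https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"enabled": false}'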

Parameters

Name In Type Required Description
jobDefinitionId path string true ID of the Batch Prediction job definition
body body BatchPredictionJobDefinitionsUpdate false none

Example responses

200 Response

{
  "batchPredictionJob": {
    "abortOnError": true,
    "batchJobType": "monitoring",
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "maxNgramExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringAggregation": {
      "retentionPolicy": "samples",
      "retentionValue": 0
    },
    "monitoringBatchPrefix": "string",
    "monitoringColumns": {
      "actedUponColumn": "string",
      "actualsTimestampColumn": "string",
      "actualsValueColumn": "string",
      "associationIdColumn": "string",
      "customMetricId": "string",
      "customMetricTimestampColumn": "string",
      "customMetricTimestampFormat": "string",
      "customMetricValueColumn": "string",
      "monitoredStatusColumn": "string",
      "predictionsColumns": [
        {
          "className": "string",
          "columnName": "string"
        }
      ],
      "uniqueRowIdentifierColumns": [
        "string"
      ]
    },
    "monitoringOutputSettings": {
      "monitoredStatusColumn": "string",
      "uniqueRowIdentifierColumns": [
        "string"
      ]
    },
    "numConcurrent": 0,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "enabled": false,
  "id": "string",
  "lastFailedRunTime": "2019-08-24T14:15:22Z",
  "lastScheduledRunTime": "2019-08-24T14:15:22Z",
  "lastStartedJobStatus": "INITIALIZING",
  "lastStartedJobTime": "2019-08-24T14:15:22Z",
  "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
  "name": "string",
  "nextScheduledRunTime": "2019-08-24T14:15:22Z",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "updated": "2019-08-24T14:15:22Z",
  "updatedBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  }
}

Responses

Status Meaning Description Schema
200 OK Job details for the updated Batch Prediction job definition BatchPredictionJobDefinitionsResponse
403 Forbidden You are not authorized to alter the contents of this job definition due to your permissions role None
404 Not Found Job was deleted, never existed or you do not have access to it None
409 Conflict The chosen job definition name already exists within your organization None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/portable/

Retrieve a Batch Prediction job definition for Portable Batch Predictions

Code samples

# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/portable/ \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
jobDefinitionId path string true ID of the Batch Prediction job definition

Responses

Status Meaning Description Schema
200 OK Snippet for Portable Batch Predictions None
404 Not Found Job was deleted, never existed or you do not have access to it None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/batchPredictions/

Get a collection of Batch Prediction jobs, optionally filtered by status

Code samples

# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/batchPredictions/?offset=0&limit=100" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned
status query any false Includes only jobs that have a status value matching this flag. Repeat the parameter to filter on multiple statuses.
source query any false Includes only jobs that have a source value matching this flag. Repeat the parameter to filter on multiple sources. Prefix values with a dash (-) to exclude those sources.
deploymentId query string false Includes only jobs for this particular deployment
modelId query string false ID of the Leaderboard model used by the job to process the prediction dataset
jobId query string false Includes only the job with this specific ID
orderBy query string false Sort order applied to the Batch Prediction list. Prefix the attribute name with a dash to sort in descending order, e.g. "-created".
allJobs query boolean false [DEPRECATED - replaced with RBAC permission model] - No effect
cutoffHours query integer false Only list jobs created at most this many hours ago.
startDateTime query string(date-time) false ISO-formatted datetime of the earliest time the job was added (inclusive). For example "2008-08-24T12:00:00Z". If set, cutoffHours is ignored.
endDateTime query string(date-time) false ISO-formatted datetime of the latest time the job was added (inclusive). For example "2008-08-24T12:00:00Z".
batchPredictionJobDefinitionId query string false Includes only jobs for this particular definition
hostname query any false Includes only jobs for this particular prediction instance hostname
intakeType query any false Includes only jobs with these particular intake types
outputType query any false Includes only jobs with these particular output types

Enumerated Values

Parameter Value
orderBy [created, -created, status, -status]
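
For example, combining repeated status filters with deployment filtering and sorting (the status values here are ones shown elsewhere on this page, e.g. COMPLETED and ABORTED):

# Illustrative: completed or aborted jobs for one deployment, newest first
curl -G "https://app.datarobot.com/api/v2/batchPredictions/" \
  -d offset=0 -d limit=100 \
  -d status=COMPLETED -d status=ABORTED \
  -d deploymentId={deploymentId} \
  -d orderBy=-created \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"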

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "batchPredictionJobDefinition": {
        "createdBy": "string",
        "id": "string",
        "name": "string"
      },
      "created": "2019-08-24T14:15:22Z",
      "createdBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      },
      "elapsedTimeSec": 0,
      "failedRows": 0,
      "hidden": "2019-08-24T14:15:22Z",
      "id": "string",
      "intakeDatasetDisplayName": "string",
      "jobIntakeSize": 0,
      "jobOutputSize": 0,
      "jobSpec": {
        "abortOnError": true,
        "chunkSize": "auto",
        "columnNamesRemapping": {},
        "csvSettings": {
          "delimiter": ",",
          "encoding": "utf-8",
          "quotechar": "\""
        },
        "deploymentId": "string",
        "disableRowLevelErrorHandling": false,
        "explanationAlgorithm": "shap",
        "explanationClassNames": [
          "string"
        ],
        "explanationNumTopClasses": 1,
        "includePredictionStatus": false,
        "includeProbabilities": true,
        "includeProbabilitiesClasses": [],
        "intakeSettings": {
          "type": "localFile"
        },
        "maxExplanations": 0,
        "modelId": "string",
        "modelPackageId": "string",
        "monitoringBatchPrefix": "string",
        "numConcurrent": 1,
        "outputSettings": {
          "credentialId": "string",
          "format": "csv",
          "partitionColumns": [
            "string"
          ],
          "type": "azure",
          "url": "string"
        },
        "passthroughColumns": [
          "string"
        ],
        "passthroughColumnsSet": "all",
        "pinnedModelId": "string",
        "predictionInstance": {
          "apiKey": "string",
          "datarobotKey": "string",
          "hostName": "string",
          "sslEnabled": true
        },
        "predictionThreshold": 1,
        "predictionWarningEnabled": true,
        "redactedFields": [
          "string"
        ],
        "skipDriftTracking": false,
        "thresholdHigh": 0,
        "thresholdLow": 0,
        "timeseriesSettings": {
          "forecastPoint": "2019-08-24T14:15:22Z",
          "relaxKnownInAdvanceFeaturesCheck": false,
          "type": "forecast"
        }
      },
      "links": {
        "csvUpload": "string",
        "download": "string",
        "self": "string"
      },
      "logs": [
        "string"
      ],
      "percentageCompleted": 100,
      "queuePosition": 0,
      "queued": true,
      "resultsDeleted": true,
      "scoredRows": 0,
      "skippedRows": 0,
      "source": "string",
      "status": "INITIALIZING",
      "statusDetails": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0
}

Responses

Status Meaning Description Schema
200 OK A list of Batch Prediction job objects BatchPredictionJobListResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/batchPredictions/

Submit a job configuration; the job will be added to the queue.

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "credentialId": "string",
    "format": "csv",
    "partitionColumns": [
      "string"
    ],
    "type": "azure",
    "url": "string"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionThreshold": 1,
  "predictionWarningEnabled": true,
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}
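
For instance, a minimal sketch that scores a locally uploaded file against a deployment (placeholder ID; the dataset is streamed afterwards via the job's csvUpload link):

# Minimal sketch: a localFile-to-localFile scoring job
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "deploymentId": "{deploymentId}",
    "intakeSettings": {"type": "localFile"},
    "outputSettings": {"type": "localFile"}
  }'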

Parameters

Name In Type Required Description
body body BatchPredictionJobCreate false none

Example responses

202 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionThreshold": 1,
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
202 Accepted Job details for the created Batch Prediction job BatchPredictionJobResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/batchPredictions/fromExisting/

Copy an existing job and submit it to the queue.

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/fromExisting/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "partNumber": 0,
  "predictionJobId": "string"
}

Parameters

Name In Type Required Description
body body BatchPredictionJobId false none

Example responses

202 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionThreshold": 1,
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
202 Accepted Job details for the created Batch Prediction job BatchPredictionJobResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/batchPredictions/fromJobDefinition/

Launch a one-time Batch Prediction job based on a previously supplied job definition, referenced by its ID, and place it on the queue.

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/fromJobDefinition/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "jobDefinitionId": "string"
}
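
For example, to trigger a run of an existing definition:

# Launch a one-time run of a stored definition (placeholder ID)
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/fromJobDefinition/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"jobDefinitionId": "{jobDefinitionId}"}'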

Parameters

Name In Type Required Description
body body BatchPredictionJobDefinitionId false none

Example responses

202 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionThreshold": 1,
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
202 Accepted Job details for the created Batch Prediction job BatchPredictionJobResponse
404 Not Found Job was deleted, never existed or you do not have access to it None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

DELETE /api/v2/batchPredictions/{predictionJobId}/

If the job is running, it will be aborted. The job is then removed: all underlying data is deleted and the job no longer appears in the list of jobs.

Code samples

# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/ \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Responses

Status Meaning Description Schema
202 Accepted Job cancelled None
404 Not Found Job does not exist or was not submitted to the queue. None
409 Conflict Job cannot be aborted for some reason. Possible reasons: job is already aborted or completed. None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/batchPredictions/{predictionJobId}/

Retrieve a Batch Prediction job.

Code samples

# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/ \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Example responses

200 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionThreshold": 1,
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
200 OK Job details for the requested Batch Prediction job BatchPredictionJobResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

PATCH /api/v2/batchPredictions/{predictionJobId}/

Once a job has finished execution, regardless of the result, its parameters can be changed to allow better filtering of the job list upon retrieval. This endpoint can also be used to update the scoring status of an externally run job.

Code samples

# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "aborted": "2019-08-24T14:15:22Z",
  "completed": "2019-08-24T14:15:22Z",
  "failedRows": 0,
  "hidden": true,
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "logs": [
    "string"
  ],
  "scoredRows": 0,
  "skippedRows": 0,
  "started": "2019-08-24T14:15:22Z",
  "status": "INITIALIZING"
}
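
For example, an external scoring process might report its final state like this (a sketch; values are placeholders):

# Sketch: mark an externally scored job as completed
curl -X PATCH https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"status": "COMPLETED", "scoredRows": 1000, "completed": "2019-08-24T14:15:22Z"}'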

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job
body body BatchPredictionJobUpdate false none

Example responses

200 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionThreshold": 1,
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
200 OK Job updated BatchPredictionJobResponse
404 Not Found Job does not exist or was not submitted to the queue. None
409 Conflict Job cannot be hidden for some reason. Possible reasons: job is not in a deletable state. None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

PUT /api/v2/batchPredictions/{predictionJobId}/csvUpload/

Stream CSV data to the prediction job. Only available for jobs that use the localFile intake option.

Code samples

# You can also use wget
# scoring_data.csv is a placeholder for your local dataset
curl -X PUT https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/csvUpload/ \
  -H "Content-Type: text/csv" \
  -H "Authorization: Bearer {access-token}" \
  --data-binary @scoring_data.csv

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Responses

Status Meaning Description Schema
202 Accepted Job data was successfully submitted None
404 Not Found Job does not exist or does not require data None
406 Not Acceptable Not acceptable MIME type None
409 Conflict Dataset upload has already begun None
422 Unprocessable Entity Job was "ABORTED" due to too many errors in the data None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/batchPredictions/{predictionJobId}/csvUpload/finalizeMultipart/

Finalize a multipart upload, indicating that no further chunks will be sent

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/csvUpload/finalizeMultipart/ \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Responses

Status Meaning Description Schema
202 Accepted Acknowledgement that the request was accepted or an error message None
404 Not Found Job was deleted, never existed or you do not have access to it None
409 Conflict Only multipart jobs can be finalized. None
422 Unprocessable Entity No data was uploaded None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

PUT /api/v2/batchPredictions/{predictionJobId}/csvUpload/part/{partNumber}/

Stream CSV data to the prediction job in multiple parts. Only available for jobs that use the localFile intake option.

Code samples

# You can also use wget
# part0.csv is a placeholder for one chunk of your dataset
curl -X PUT https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/csvUpload/part/{partNumber}/ \
  -H "Content-Type: text/csv" \
  -H "Authorization: Bearer {access-token}" \
  --data-binary @part0.csv

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job
partNumber path integer true The index of the CSV part being uploaded in the multipart upload

Responses

Status Meaning Description Schema
202 Accepted Job data was successfully submitted None
404 Not Found Job does not exist or does not require data None
406 Not Acceptable Not acceptable MIME type None
409 Conflict Dataset upload has already begun None
422 Unprocessable Entity Job was "ABORTED" due to too many errors in the data None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
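
Putting the two multipart endpoints together, a sketch of a chunked upload (part numbering is assumed to start at 0):

# Sketch: upload a dataset in two parts, then finalize
curl -X PUT "https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/csvUpload/part/0/" \
  -H "Content-Type: text/csv" \
  -H "Authorization: Bearer {access-token}" \
  --data-binary @part0.csv

curl -X PUT "https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/csvUpload/part/1/" \
  -H "Content-Type: text/csv" \
  -H "Authorization: Bearer {access-token}" \
  --data-binary @part1.csv

curl -X POST "https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/csvUpload/finalizeMultipart/" \
  -H "Authorization: Bearer {access-token}"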

GET /api/v2/batchPredictions/{predictionJobId}/download/

Download the scored data. This is only valid for jobs scored using the "localFile" output option

Code samples

# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/batchPredictions/{predictionJobId}/download/ \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Responses

Status Meaning Description Schema
200 OK Job was downloaded correctly None
404 Not Found Job does not exist or is not completed None
406 Not Acceptable Not acceptable MIME type None
422 Unprocessable Entity Job was "ABORTED" due to too many errors in the data None

Response Headers

Status Header Type Format Description
200 Content-Disposition string Contains an auto-generated filename for this download ("attachment;filename=result-.csv").
200 Content-Type string MIME type of the returned data

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
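
Putting the pieces together, a minimal end-to-end sketch for a localFile job: create the job, stream the dataset to links.csvUpload, poll links.self until the status is COMPLETED, then fetch links.download (assumes jq; IDs and filenames are placeholders):

# End-to-end sketch: create, upload, poll, download
JOB=$(curl -s -X POST https://app.datarobot.com/api/v2/batchPredictions/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"deploymentId": "{deploymentId}",
       "intakeSettings": {"type": "localFile"},
       "outputSettings": {"type": "localFile"}}')

# Stream the dataset to the job
curl -X PUT "$(echo "$JOB" | jq -r '.links.csvUpload')" \
  -H "Content-Type: text/csv" \
  -H "Authorization: Bearer {access-token}" \
  --data-binary @scoring_data.csv

# Poll until scoring completes
SELF=$(echo "$JOB" | jq -r '.links.self')
until [ "$(curl -s "$SELF" -H "Authorization: Bearer {access-token}" | jq -r '.status')" = "COMPLETED" ]; do
  sleep 5
done

# Download the results
curl -s "$(echo "$JOB" | jq -r '.links.download')" \
  -H "Authorization: Bearer {access-token}" -o results.csv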

GET /api/v2/projects/{projectId}/predictJobs/

List all prediction jobs for a project

Code samples

# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/predictJobs/ \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
status query string false If provided, only jobs with the same status will be included in the results; otherwise, queued and inprogress jobs (but not errored jobs) will be returned.
projectId path string true The project ID.

Enumerated Values

Parameter Value
status [queue, inprogress, error]

Example responses

200 Response

[
  {
    "id": "string",
    "isBlocked": true,
    "message": "string",
    "modelId": "string",
    "projectId": "string",
    "status": "queue"
  }
]

Responses

Status Meaning Description Schema
200 OK A list of prediction jobs for a project Inline
404 Not Found Job was not found None

Response Schema

Status Code 200

Name Type Required Restrictions Description
anonymous [PredictJobDetailsResponse] false none
» id string true The job ID
» isBlocked boolean true True if the job is waiting for its dependencies to be resolved first
» message string true An optional message about the job
» modelId string true The ID of the model
» projectId string true The ID of the project the job belongs to
» status string true The status of the job

Enumerated Values

Property Value
status [queue, inprogress, error, ABORTED, COMPLETED]

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

DELETE /api/v2/projects/{projectId}/predictJobs/{jobId}/

Cancel a queued prediction job

Code samples

# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/projects/{projectId}/predictJobs/{jobId}/ \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
projectId path string true The project ID.
jobId path string true The job ID

Responses

Status Meaning Description Schema
204 No Content The job has been successfully cancelled None
404 Not Found Job was not found or the job has already completed None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictJobs/{jobId}/

Look up a particular prediction job

Code samples

# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/predictJobs/{jobId}/ \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
projectId path string true The project ID.
jobId path string true The job ID

Example responses

200 Response

{
  "id": "string",
  "isBlocked": true,
  "message": "string",
  "modelId": "string",
  "projectId": "string",
  "status": "queue"
}

Responses

Status Meaning Description Schema
200 OK The job has been successfully retrieved and has not yet finished. PredictJobDetailsResponse
303 See Other The job has been successfully retrieved and has completed. See the Location header. The response JSON is also included. None

Response Headers

Status Header Type Format Description
200 Location string url Present only when the requested job has finished; contains a URL from which the completed predictions may be retrieved, as with GET /api/v2/projects/{projectId}/predictions/{predictionId}/
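
A simple way to watch for completion is to inspect the status line and Location header without following the redirect; a sketch:

# Sketch: check whether the job has finished (303 + Location means done)
curl -si "https://app.datarobot.com/api/v2/projects/{projectId}/predictJobs/{jobId}/" \
  -H "Authorization: Bearer {access-token}" | grep -iE '^(HTTP|Location)'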

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictionDatasets/

List prediction datasets uploaded to a project.

Code samples

# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/?offset=0&limit=0" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped.
limit query integer true At most this many results are returned. If 0, all results.
projectId path string true The project ID to query.

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "catalogId": "string",
      "catalogVersionId": "string",
      "containsTargetValues": true,
      "created": "2019-08-24T14:15:22Z",
      "dataEndDate": "2019-08-24T14:15:22Z",
      "dataQualityWarnings": {
        "hasKiaMissingValuesInForecastWindow": true,
        "insufficientRowsForEvaluatingModels": true,
        "singleClassActualValueColumn": true
      },
      "dataStartDate": "2019-08-24T14:15:22Z",
      "detectedActualValueColumns": [
        {
          "missingCount": 0,
          "name": "string"
        }
      ],
      "forecastPoint": "string",
      "forecastPointRange": [
        "2019-08-24T14:15:22Z"
      ],
      "id": "string",
      "maxForecastDate": "2019-08-24T14:15:22Z",
      "name": "string",
      "numColumns": 0,
      "numRows": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "secondaryDatasetsConfigId": "string"
    }
  ],
  "next": "string",
  "previous": "string"
}

Responses

Status Meaning Description Schema
200 OK Request to list the uploaded prediction datasets was successful. PredictionDatasetListControllerResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/predictionDatasets/dataSourceUploads/

Upload a dataset for predictions from a DataSource.

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/dataSourceUploads/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "actualValueColumn": "string",
  "credentialData": {
    "credentialType": "basic",
    "password": "string",
    "user": "string"
  },
  "credentialId": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "dataSourceId": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "password": "string",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "useKerberos": false,
  "user": "string"
}
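
For instance, a minimal request sketch that pulls prediction data from a stored data source using a saved credential (dataSourceId and credentialId are placeholders):

curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/dataSourceUploads/" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"dataSourceId": "{dataSourceId}", "credentialId": "{credentialId}"}'

On success, poll the URL in the 202 Location header to track the upload.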

Parameters

Name In Type Required Description
projectId path string true The project ID to which the data source will be uploaded.
body body PredictionDataSource false none

Responses

Status Meaning Description Schema
202 Accepted Upload successfully started. See the Location header. None

Response Headers

Status Header Type Format Description
202 Location string A url that can be polled to check the status.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/predictionDatasets/datasetUploads/

Create a prediction dataset from a Dataset Asset referenced by AI Catalog item/version ID.

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/datasetUploads/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "actualValueColumn": "string",
  "credentialData": {
    "credentialType": "basic",
    "password": "string",
    "user": "string"
  },
  "credentialId": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "datasetId": "string",
  "datasetVersionId": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "password": "string",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "useKerberos": false,
  "user": "string"
}
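
A minimal sketch that creates a prediction dataset from a specific AI Catalog item and version (both IDs are placeholders):

curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/datasetUploads/" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"datasetId": "{datasetId}", "datasetVersionId": "{datasetVersionId}"}'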

Parameters

Name In Type Required Description
projectId path string true The project ID.
body body PredictionFromCatalogDataset false none

Example responses

202 Response

{
  "datasetId": "string"
}

Responses

Status Meaning Description Schema
202 Accepted Creation has successfully started. See the Location header. CreatePredictionDatasetResponse
422 Unprocessable Entity The target is not set yet, or time series options were specified for a non-time-series project. None

Response Headers

Status Header Type Format Description
202 Location string A url that can be polled to check the status.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/predictionDatasets/fileUploads/

Upload a file for predictions from an attached file.

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/fileUploads/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "actualValueColumn": "string",
  "credentials": "string",
  "file": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": "false",
  "secondaryDatasetsConfigId": "string"
}
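
A minimal multipart sketch (predictions.csv is a placeholder local file; curl's -F switch builds the multipart/form-data body and sets the Content-Type header accordingly):

curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/fileUploads/" \
  -H "Authorization: Bearer {access-token}" \
  -F "file=@predictions.csv"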

Parameters

Name In Type Required Description
projectId path string true The project ID to which the data will be uploaded for prediction.
body body PredictionFileUpload false none

Responses

Status Meaning Description Schema
202 Accepted Upload successfully started. See the Location header. None

Response Headers

Status Header Type Format Description
202 Location string A url that can be polled to check the status.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/predictionDatasets/urlUploads/

Upload a file for predictions from a URL.

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/urlUploads/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "actualValueColumn": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "forecastPoint": "2019-08-24T14:15:22Z",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "url": "string"
}
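
A minimal sketch (the url value is a placeholder for a file reachable by DataRobot):

curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/urlUploads/" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"url": "https://example.com/predictions.csv"}'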

Parameters

Name In Type Required Description
projectId path string true The project ID to which the data will be uploaded for prediction.
body body PredictionURLUpload false none

Responses

Status Meaning Description Schema
202 Accepted Upload successfully started. See the Location header. None

Response Headers

Status Header Type Format Description
202 Location string A url that can be polled to check the status.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

DELETE /api/v2/projects/{projectId}/predictionDatasets/{datasetId}/

Delete a dataset that was uploaded for prediction.

Code samples

# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/{datasetId}/ \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
projectId path string true The project ID that owns the data.
datasetId path string true The dataset ID to delete.

Responses

Status Meaning Description Schema
204 No Content The dataset has been successfully deleted. None
404 Not Found No dataset with the specified datasetId found. None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictionDatasets/{datasetId}/

Get the metadata of a specific dataset. This only works for datasets uploaded to an existing project for prediction.

Code samples

# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/{datasetId}/ \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
projectId path string true The project ID that owns the data.
datasetId path string true The dataset ID to query for.

Example responses

200 Response

{
  "actualValueColumn": "string",
  "catalogId": "string",
  "catalogVersionId": "string",
  "containsTargetValues": true,
  "created": "2019-08-24T14:15:22Z",
  "dataEndDate": "2019-08-24T14:15:22Z",
  "dataQualityWarnings": {
    "hasKiaMissingValuesInForecastWindow": true,
    "insufficientRowsForEvaluatingModels": true,
    "singleClassActualValueColumn": true
  },
  "dataStartDate": "2019-08-24T14:15:22Z",
  "detectedActualValueColumns": [
    {
      "missingCount": 0,
      "name": "string"
    }
  ],
  "forecastPoint": "string",
  "forecastPointRange": [
    "2019-08-24T14:15:22Z"
  ],
  "id": "string",
  "maxForecastDate": "2019-08-24T14:15:22Z",
  "name": "string",
  "numColumns": 0,
  "numRows": 0,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "projectId": "string",
  "secondaryDatasetsConfigId": "string"
}

Responses

Status Meaning Description Schema
200 OK Request to retrieve the metadata of a specified dataset was successful. PredictionDatasetRetrieveResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictions/

Get a list of prediction records.

Deprecated in v2.21: use GET /api/v2/projects/{projectId}/predictionsMetadata/ instead. The only difference is that the parameter datasetId is renamed to predictionDatasetId in both the request and the response.

Code samples

# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/predictions/?offset=0&limit=1000" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned. To specify no limit, use 0. The default may change and a maximum limit may be imposed without notice.
datasetId query string false Dataset id used to create the predictions
modelId query string false Model id
projectId path string true The project of the predictions.

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "datasetId": "string",
      "explanationAlgorithm": "string",
      "featureDerivationWindowCounts": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "id": "string",
      "includesPredictionIntervals": true,
      "maxExplanations": 0,
      "modelId": "string",
      "predictionDatasetId": "string",
      "predictionIntervalsSize": 0,
      "predictionThreshold": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "shapWarnings": {
        "maxNormalizedMismatch": 0,
        "mismatchRowCount": 0
      },
      "url": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Responses

Status Meaning Description Schema
200 OK The json array of prediction metadata objects. RetrieveListPredictionMetadataObjectsResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/predictions/

There are two ways to make predictions. The recommended way is to first upload your dataset to the project and then predict against that dataset using the corresponding datasetId. To follow that pattern, send the JSON request body.

Note that requesting prediction intervals will automatically trigger backtesting if backtests were not already completed for this model.

The legacy method, which is deprecated, is to send the file directly with the prediction request. To predict against a file 10MB or larger, you must use the uploaded-dataset workflow above. For smaller files, however, the following multipart/form-data fields can be used:

  • file: a dataset to make predictions on
  • modelId: the model to use to make predictions

Note: If using the legacy method of uploading data to this endpoint, a new dataset will be created behind the scenes. For performance reasons, it is much better to create the dataset first and then use the supported dataset-based method of this endpoint. The legacy method remains only to preserve existing workflows.

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/predictions/ \
  -H "Content-Type: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "actualValueColumn": "string",
  "datasetId": "string",
  "explanationAlgorithm": "shap",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "includeFdwCounts": false,
  "includePredictionIntervals": true,
  "maxExplanations": 1,
  "modelId": "string",
  "predictionIntervalsSize": 1,
  "predictionThreshold": 1,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z"
}
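
Putting the recommended flow together, a minimal sketch (placeholder IDs; the dataset must already be uploaded to the project):

curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/predictions/" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"datasetId": "{datasetId}", "modelId": "{modelId}"}'

The 202 Location header then points at the predict job, which can be polled as described above.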

Parameters

Name In Type Required Description
projectId path string true The project to make predictions within.
Content-Type header string true The content type of the request. multipart/form-data is the legacy, deprecated method for sending a small file with the prediction request.
body body CreatePredictionFromDataset false none

Enumerated Values

Parameter Value
Content-Type [application/json, multipart/form-data]

Responses

Status Meaning Description Schema
202 Accepted Prediction has successfully been requested. See Location header. None
422 Unprocessable Entity The request cannot be processed. None

Response Headers

Status Header Type Format Description
202 Location string A url that can be polled to check the status of the predictions as with GET /api/v2/projects/{projectId}/predictJobs/{jobId}/

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictions/{predictionId}/

Retrieve predictions that have previously been computed, encoded as either JSON or CSV. If CSV output was requested, the returned CSV data will contain the following columns:

  • For regression projects: row_id and prediction.
  • For binary classification projects: row_id, prediction, class_<positive_class_label> and class_<negative_class_label>.
  • For multiclass projects: row_id, prediction and a class_<class_label> for each class.
  • For multilabel projects: row_id and, for each class, prediction_<class_label> and class_<class_label>.
  • For time-series, these additional columns will be added: forecast_point, forecast_distance, timestamp, and series_id.

New in v2.21:

  • If explanationAlgorithm = shap, these additional columns will be added: triplets of (Explanation_<i>_feature_name, Explanation_<i>_feature_value, and Explanation_<i>_strength) for i ranging from 1 to maxExplanations, plus shap_remaining_total and shap_base_value. Binary classification projects will also have explained_class, the class for which positive SHAP values imply an increased probability.

Code samples

# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/predictions/{predictionId}/ \
  -H "Accept: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
shapMulticlassLevel query string false Required in multiclass projects with SHAP prediction explanations. This parameter specifies which of the target classes (levels) you would like to retrieve explanations for. This will NOT affect a non-multiclass project.
predictionId path string true The id of the prediction record to retrieve. If you have the jobId, you can retrieve the predictionId using GET /api/v2/projects/{projectId}/predictJobs/{jobId}/.
projectId path string true The id of the project the prediction belongs to.
Accept header string false Requested MIME type for the returned data

Enumerated Values

Parameter Value
Accept [application/json, text/csv]
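
For example, to download the finished predictions as CSV rather than JSON (a minimal sketch with placeholder IDs):

curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/predictions/{predictionId}/" \
  -H "Accept: text/csv" \
  -H "Authorization: Bearer {access-token}"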

Example responses

200 Response

{
  "actualValueColumn": "string",
  "explanationAlgorithm": "string",
  "featureDerivationWindowCounts": 0,
  "includesPredictionIntervals": true,
  "maxExplanations": 0,
  "positiveClass": "string",
  "predictionIntervalsSize": 0,
  "predictions": [
    {
      "actualValue": "string",
      "forecastDistance": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "originalFormatTimestamp": "string",
      "positiveProbability": 0,
      "prediction": 0,
      "predictionExplanationMetadata": [
        {
          "shapRemainingTotal": 0
        }
      ],
      "predictionExplanations": [
        {
          "feature": "string",
          "featureValue": 0,
          "label": "string",
          "strength": 0
        }
      ],
      "predictionIntervalLowerBound": 0,
      "predictionIntervalUpperBound": 0,
      "predictionThreshold": 1,
      "predictionValues": [
        {
          "label": "string",
          "threshold": 1,
          "value": 0
        }
      ],
      "rowId": 0,
      "segmentId": "string",
      "seriesId": "string",
      "target": "string",
      "timestamp": "2019-08-24T14:15:22Z"
    }
  ],
  "shapBaseValue": 0,
  "shapWarnings": [
    {
      "maxNormalizedMismatch": 0,
      "mismatchRowCount": 0
    }
  ],
  "task": "Regression"
}

Responses

Status Meaning Description Schema
200 OK Predictions that have previously been computed. PredictionRetrieveResponse
404 Not Found No prediction data found. None

Response Headers

Status Header Type Format Description
200 Content-Type string MIME type of the returned data

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictionsMetadata/

Get a list of prediction metadata records for a project.

Code samples

# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/predictionsMetadata/?offset=0&limit=1000" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned. To specify no limit, use 0. The default may change and a maximum limit may be imposed without notice.
predictionDatasetId query string false Dataset id used to create the predictions
modelId query string false Model id
projectId path string true The project of the predictions.

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "datasetId": "string",
      "explanationAlgorithm": "string",
      "featureDerivationWindowCounts": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "id": "string",
      "includesPredictionIntervals": true,
      "maxExplanations": 0,
      "modelId": "string",
      "predictionDatasetId": "string",
      "predictionIntervalsSize": 0,
      "predictionThreshold": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "shapWarnings": {
        "maxNormalizedMismatch": 0,
        "mismatchRowCount": 0
      },
      "url": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Responses

Status Meaning Description Schema
200 OK The json array of prediction metadata objects. RetrieveListPredictionMetadataObjectsResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictionsMetadata/{predictionId}/

Retrieve a single prediction metadata object by its ID; its url field points at the complete set of predictions.

Code samples

# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/predictionsMetadata/{predictionId}/ \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
predictionId path string true The id of the prediction record to retrieve. If you have the jobId, you can retrieve the predictionId using GET /api/v2/projects/{projectId}/predictJobs/{jobId}/.
projectId path string true The id of the project the prediction belongs to.

Example responses

200 Response

{
  "actualValueColumn": "string",
  "datasetId": "string",
  "explanationAlgorithm": "string",
  "featureDerivationWindowCounts": 0,
  "forecastPoint": "2019-08-24T14:15:22Z",
  "id": "string",
  "includesPredictionIntervals": true,
  "maxExplanations": 0,
  "modelId": "string",
  "predictionDatasetId": "string",
  "predictionIntervalsSize": 0,
  "predictionThreshold": 0,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "projectId": "string",
  "shapWarnings": {
    "maxNormalizedMismatch": 0,
    "mismatchRowCount": 0
  },
  "url": "string"
}

Responses

Status Meaning Description Schema
200 OK Prediction metadata object. RetrievePredictionMetadataObject
404 Not Found Training predictions not found. None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/trainingPredictions/

Get a list of training prediction records

Code samples

# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/trainingPredictions/?offset=0&limit=0" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned
projectId path string true Project ID to retrieve training predictions for

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "dataSubset": "all",
      "explanationAlgorithm": "shap",
      "id": "string",
      "maxExplanations": 100,
      "modelId": "string",
      "shapWarnings": [
        {
          "partitionName": "string",
          "value": {
            "maxNormalizedMismatch": 0,
            "mismatchRowCount": 0
          }
        }
      ],
      "url": "http://example.com"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Responses

Status Meaning Description Schema
200 OK A list of training prediction jobs TrainingPredictionsListResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/trainingPredictions/

Create training data predictions

Code samples

# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/trainingPredictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}"

Body parameter

{
  "dataSubset": "all",
  "explanationAlgorithm": "string",
  "maxExplanations": 1,
  "modelId": "string"
}
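
A minimal sketch that requests training predictions on all rows for one model (modelId is a placeholder):

curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/trainingPredictions/" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"modelId": "{modelId}", "dataSubset": "all"}'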

Parameters

Name In Type Required Description
projectId path string true Project ID to compute training predictions for
body body CreateTrainingPrediction false none

Responses

Status Meaning Description Schema
202 Accepted Submitted successfully. See Location header. None
422 Unprocessable Entity Returned when the model, time series configuration, or blender does not support SHAP-based prediction explanations; when validation fails (error message from StackedPredictionRequestValidationError); or when the training predictions job could not be created because a request with the same parameters was already submitted. None

Response Headers

Status Header Type Format Description
202 Location string URL for tracking async job status.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/trainingPredictions/{predictionId}/

Retrieve training predictions that have previously been computed

Code samples

# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/trainingPredictions/{predictionId}/?offset=0&limit=0" \
  -H "Accept: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned
projectId path string true Project ID to retrieve training predictions for
predictionId path string true Prediction ID to retrieve training predictions for
Accept header string false Requested MIME type for the returned data

Enumerated Values

Parameter Value
Accept [application/json, text/csv]

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "forecastDistance": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "partitionId": "string",
      "prediction": 0,
      "predictionExplanations": [
        {
          "feature": "string",
          "featureValue": 0,
          "label": "string",
          "strength": 0
        }
      ],
      "predictionThreshold": 1,
      "predictionValues": [
        {
          "label": "string",
          "threshold": 1,
          "value": 0
        }
      ],
      "rowId": 0,
      "seriesId": "string",
      "shapMetadata": {
        "shapBaseValue": 0,
        "shapRemainingTotal": 0,
        "warnings": [
          {
            "maxNormalizedMismatch": 0,
            "mismatchRowCount": 0
          }
        ]
      },
      "timestamp": "2019-08-24T14:15:22Z"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Responses

Status Meaning Description Schema
200 OK Training predictions encoded either as JSON or CSV string
404 Not Found Job does not exist or is not completed None

Response Headers

Status Header Type Format Description
200 Content-Type string MIME type of the returned data

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/scheduledJobs/

Get a list of scheduled batch prediction jobs a user can view

Code samples

# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/scheduledJobs/?offset=0&limit=20" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"

Parameters

Name In Type Required Description
offset query integer true The number of scheduled jobs to skip. Defaults to 0.
limit query integer true The number of scheduled jobs (max 100) to return. Defaults to 20.
orderBy query string false The order in which to sort the scheduled jobs. Defaults to ordering by last successful run timestamp, descending.
search query string false Case-insensitive search against scheduled job name or type name.
deploymentId query string false Filter by the prediction integration deployment ID. Ignored for non-prediction-integration type IDs.
typeId query string false Filter by scheduled job type ID.
queryByUser query string false Which user field to filter with.
filterEnabled query string false Filter jobs using the enabled field. If true, only enabled jobs are returned; if false, only disabled jobs. By default, both enabled and disabled jobs are returned.

Enumerated Values

Parameter Value
typeId datasetRefresh
queryByUser [createdBy, updatedBy]
filterEnabled [false, False, true, True]
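
For instance, a minimal sketch that lists only enabled jobs matching a name fragment ("nightly" is a placeholder search term; -G with --data-urlencode keeps the query string shell-safe):

curl -G "https://app.datarobot.com/api/v2/scheduledJobs/" \
  --data-urlencode "offset=0" \
  --data-urlencode "limit=20" \
  --data-urlencode "filterEnabled=true" \
  --data-urlencode "search=nightly" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"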

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "createdBy": "string",
      "deploymentId": "string",
      "enabled": true,
      "id": "string",
      "name": "string",
      "schedule": {
        "dayOfMonth": [
          "*"
        ],
        "dayOfWeek": [
          "*"
        ],
        "hour": [
          "*"
        ],
        "minute": [
          "*"
        ],
        "month": [
          "*"
        ]
      },
      "scheduledJobId": "string",
      "status": {
        "lastFailedRun": "2019-08-24T14:15:22Z",
        "lastSuccessfulRun": "2019-08-24T14:15:22Z",
        "nextRunTime": "2019-08-24T14:15:22Z",
        "queuePosition": 0,
        "running": true
      },
      "typeId": "string",
      "updatedAt": "2019-08-24T14:15:22Z"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0,
  "updatedAt": "2019-08-24T14:15:22Z",
  "updatedBy": "string"
}

Responses

Status Meaning Description Schema
200 OK A list of scheduled batch prediction jobs ScheduledJobsListResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

Schemas

ActualValueColumnInfo

{
  "missingCount": 0,
  "name": "string"
}

Properties

Name Type Required Restrictions Description
missingCount integer true Count of the missing values in the column.
name string true Name of the column.

AzureDataStreamer

{
  "credentialId": "string",
  "format": "csv",
  "type": "azure",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
format string false Type of input file format
type string true Type name for this intake type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format [csv, parquet]
type azure

AzureIntake

{
  "credentialId": "string",
  "format": "csv",
  "type": "azure",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false Use the specified credential to access the url
format string false Type of input file format
type string true Type name for this intake type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
format [csv, parquet]
type azure
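
As a concrete illustration, a hedged AzureIntake sketch (the blob URL and credential ID are placeholders):

{
  "type": "azure",
  "url": "https://myaccount.blob.core.windows.net/container/intake.csv",
  "format": "csv",
  "credentialId": "{credentialId}"
}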

AzureOutput

{
  "credentialId": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "type": "azure",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false Use the specified credential to access the url
format string false Type of output file format
partitionColumns [string] false maxItems: 100
For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash, "/").
type string true Type name for this output type
url string(url) true URL for the file or directory

Enumerated Values

Property Value
format [csv, parquet]
type azure
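
For example, a hedged AzureOutput sketch for Parquet directory-scoring partitioned by one column (URL and column name are placeholders; note the trailing slash that marks directory scoring):

{
  "type": "azure",
  "url": "https://myaccount.blob.core.windows.net/container/scored/",
  "format": "parquet",
  "partitionColumns": ["date"],
  "credentialId": "{credentialId}"
}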

AzureOutputAdaptor

{
  "credentialId": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "type": "azure",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
format string false Type of output file format
partitionColumns [string] false maxItems: 100
For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash, "/").
type string true Type name for this output type
url string(url) true URL for the file or directory

Enumerated Values

Property Value
anonymous [redacted]
format [csv, parquet]
type azure

AzureServicePrincipalCredentials

{
  "azureTenantId": "string",
  "clientId": "string",
  "clientSecret": "string",
  "credentialType": "azure_service_principal"
}

Properties

Name Type Required Restrictions Description
azureTenantId string true Tenant ID of the Azure AD service principal.
clientId string true Client ID of the Azure AD service principal.
clientSecret string true Client Secret of the Azure AD service principal.
credentialType string true The type of these credentials, 'azure_service_principal' here.

Enumerated Values

Property Value
credentialType azure_service_principal

BasicCredentials

{
  "credentialType": "basic",
  "password": "string",
  "user": "string"
}

Properties

Name Type Required Restrictions Description
credentialType string true The type of these credentials, 'basic' here.
password string true The password for database authentication. The password is encrypted at rest and never saved / stored.
user string true The username for database authentication.

Enumerated Values

Property Value
credentialType basic

BatchJobCSVSettings

{
  "delimiter": ",",
  "encoding": "utf-8",
  "quotechar": "\""
}

Properties

Name Type Required Restrictions Description
delimiter any true CSV fields are delimited by this character. Use the string "tab" to denote TSV (TAB separated values).

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous string false maxLength: 1
minLength: 1
none

continued

Name Type Required Restrictions Description
encoding string true The encoding to be used for intake and output. For example (but not limited to): "shift_jis", "latin_1" or "mskanji".
quotechar string true maxLength: 1
minLength: 1
Fields containing the delimiter or newlines must be quoted using this character.

Enumerated Values

Property Value
anonymous tab
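
For example, a csvSettings sketch for TAB-separated intake and output:

{
  "delimiter": "tab",
  "encoding": "utf-8",
  "quotechar": "\""
}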

BatchJobCreatedBy

{
  "fullName": "string",
  "userId": "string",
  "username": "string"
}

Properties

Name Type Required Restrictions Description
fullName string¦null true The full name of the user who created this job (if defined by the user)
userId string true The User ID of the user who created this job
username string true The username (e-mail address) of the user who created this job

BatchJobPredictionInstance

{
  "apiKey": "string",
  "datarobotKey": "string",
  "hostName": "string",
  "sslEnabled": true
}

Properties

Name Type Required Restrictions Description
apiKey string false By default, prediction requests will use the API key of the user that created the job. This allows you to make requests on behalf of other users.
datarobotKey string false If running a job against a prediction instance in the Managed AI Cloud, you must provide the organization level DataRobot-Key.
hostName string true Override the default host name of the deployment with this.
sslEnabled boolean true Use SSL (HTTPS) when communicating with the overridden prediction server.

BatchJobRemapping

{
  "inputName": "string",
  "outputName": "string"
}

Properties

Name Type Required Restrictions Description
inputName string true Rename column with this name
outputName string¦null true Rename column to this name (leave as null to remove from the output)
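
Concretely, a columnNamesRemapping list sketch that renames one column and removes another (both column names are hypothetical):

[
  { "inputName": "raw_score", "outputName": "score" },
  { "inputName": "internal_id", "outputName": null }
]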

BatchJobTimeSeriesSettingsForecast

{
  "forecastPoint": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": false,
  "type": "forecast"
}

Properties

Name Type Required Restrictions Description
forecastPoint string(date-time) false Used for forecast predictions in order to override the inferred forecast point from the dataset.
relaxKnownInAdvanceFeaturesCheck boolean false If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type string true Forecast mode makes predictions using the forecastPoint, or using the rows in the dataset without a target.

Enumerated Values

Property Value
type forecast

BatchJobTimeSeriesSettingsHistorical

{
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": false,
  "type": "historical"
}

Properties

Name Type Required Restrictions Description
predictionsEndDate string(date-time) false Used for historical predictions to override the date up to which predictions should be calculated. By default, the value is inferred automatically from the dataset.
predictionsStartDate string(date-time) false Used for historical predictions to override the date from which predictions should be calculated. By default, the value is inferred automatically from the dataset.
relaxKnownInAdvanceFeaturesCheck boolean false If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type string true Historical mode enables bulk predictions, calculating predictions for all possible forecast points and forecast distances in the dataset within the predictionsStartDate/predictionsEndDate range.

Enumerated Values

Property Value
type historical

BatchPredictionCreatedBy

{
  "fullName": "string",
  "userId": "string",
  "username": "string"
}

Properties

Name Type Required Restrictions Description
fullName string¦null true The full name of the user who created this job (if defined by the user)
userId string true The User ID of the user who created this job
username string true The username (e-mail address) of the user who created this job

BatchPredictionJobCSVSettings

{
  "delimiter": ",",
  "encoding": "utf-8",
  "quotechar": "\""
}

Properties

Name Type Required Restrictions Description
delimiter any true CSV fields are delimited by this character. Use the string "tab" to denote TSV (TAB separated values).

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous string false maxLength: 1
minLength: 1
none

continued

Name Type Required Restrictions Description
encoding string true The encoding to be used for intake and output. For example (but not limited to): "shift_jis", "latin_1" or "mskanji".
quotechar string true maxLength: 1
minLength: 1
Fields containing the delimiter or newlines must be quoted using this character.

Enumerated Values

Property Value
anonymous tab

BatchPredictionJobCreate

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "credentialId": "string",
    "format": "csv",
    "partitionColumns": [
      "string"
    ],
    "type": "azure",
    "url": "string"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionThreshold": 1,
  "predictionWarningEnabled": true,
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean true Should this job abort if too many errors are encountered
chunkSize any false Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous integer false maximum: 41943040
minimum: 20
none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchPredictionJobRemapping] false maxItems: 1000
Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchPredictionJobCSVSettings true The CSV settings used for this job
deploymentId string false ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling boolean true Skip row by row error handling
explanationAlgorithm string false Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false maxItems: 10
minItems: 1
List of class names that will be explained for each row, for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses integer false maximum: 10
minimum: 1
Number of top predicted classes that will be explained for each row, for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus boolean true Include prediction status column in the output
includeProbabilities boolean true Include probabilities for all classes
includeProbabilitiesClasses [string] true maxItems: 100
Include only probabilities for these specific class names.
intakeSettings any true The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureIntake false Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous BigQueryIntake false Stream CSV data chunks from Big Query using GCS

xor

Name Type Required Restrictions Description
» anonymous DataStageIntake false Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous Catalog false Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous DSS false Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemIntake false none

xor

Name Type Required Restrictions Description
» anonymous GCPIntake false Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPIntake false Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCIntake false Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileIntake false Stream CSV data chunks from local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Intake false Stream CSV data chunks from Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeIntake false Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseIntake false Stream CSV data chunks from Azure Synapse

continued

Name Type Required Restrictions Description
maxExplanations integer true maximum: 100
minimum: 0
Number of explanations requested. Will be ordered by strength.
modelId string false ID of the leaderboard model used by this job to process the predictions dataset
modelPackageId string false ID of the model package from the registry used by this job to process the predictions dataset
monitoringBatchPrefix string¦null false Name of the batch to create with this job
numConcurrent integer false minimum: 1
Number of simultaneous requests to run against the prediction instance
outputSettings any false The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureOutput false Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutput false Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous FileSystemOutput false none

xor

Name Type Required Restrictions Description
» anonymous GCPOutput false Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPOutput false Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JDBCOutput false Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutput false Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Output false Saves CSV data chunks to Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutput false Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutput false Save CSV data chunks to Azure Synapse in bulk

xor

Name Type Required Restrictions Description
» anonymous Tableau false Save CSV data chunks to local file storage as .hyper file

continued

Name Type Required Restrictions Description
passthroughColumns [string] false maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet string false Pass through all columns from the original dataset
pinnedModelId string false Specify a model ID used for scoring
predictionInstance BatchPredictionJobPredictionInstance false Override the default prediction instance from the deployment when scoring this job.
predictionThreshold number false maximum: 1
minimum: 0
Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This value can be set between 0.0 and 1.0.
predictionWarningEnabled boolean¦null false Enable prediction warnings.
skipDriftTracking boolean true Skip drift tracking for this job.
thresholdHigh number false Compute explanations for predictions above this threshold
thresholdLow number false Compute explanations for predictions below this threshold
timeseriesSettings any false Time series settings, included if this job is a time series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecast false none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsHistorical false none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsTraining false none

Enumerated Values

Property Value
anonymous [auto, fixed, dynamic]
explanationAlgorithm [shap, xemp]
passthroughColumnsSet all

BatchPredictionJobDefinitionId

{
  "jobDefinitionId": "string"
}

Properties

Name Type Required Restrictions Description
jobDefinitionId string true ID of the Batch Prediction job definition

BatchPredictionJobDefinitionJobSpecResponse

{
  "abortOnError": true,
  "batchJobType": "monitoring",
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "maxNgramExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringAggregation": {
    "retentionPolicy": "samples",
    "retentionValue": 0
  },
  "monitoringBatchPrefix": "string",
  "monitoringColumns": {
    "actedUponColumn": "string",
    "actualsTimestampColumn": "string",
    "actualsValueColumn": "string",
    "associationIdColumn": "string",
    "customMetricId": "string",
    "customMetricTimestampColumn": "string",
    "customMetricTimestampFormat": "string",
    "customMetricValueColumn": "string",
    "monitoredStatusColumn": "string",
    "predictionsColumns": [
      {
        "className": "string",
        "columnName": "string"
      }
    ],
    "uniqueRowIdentifierColumns": [
      "string"
    ]
  },
  "monitoringOutputSettings": {
    "monitoredStatusColumn": "string",
    "uniqueRowIdentifierColumns": [
      "string"
    ]
  },
  "numConcurrent": 0,
  "outputSettings": {
    "credentialId": "string",
    "format": "csv",
    "partitionColumns": [
      "string"
    ],
    "type": "azure",
    "url": "string"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionWarningEnabled": true,
  "redactedFields": [
    "string"
  ],
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean true Should this job abort if too many errors are encountered
batchJobType string false Batch job type.
chunkSize any false Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous integer false maximum: 41943040
minimum: 20
none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchJobRemapping] false maxItems: 1000
Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchJobCSVSettings true The CSV settings used for this job
deploymentId string false ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling boolean true Skip row by row error handling
explanationAlgorithm string false Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false maxItems: 10
minItems: 1
List of class names that will be explained for each row, for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses integer false maximum: 10
minimum: 1
Number of top predicted classes that will be explained for each row, for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus boolean true Include prediction status column in the output
includeProbabilities boolean true Include probabilities for all classes
includeProbabilitiesClasses [string] true maxItems: 100
Include only probabilities for these specific class names.
intakeSettings any true The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureDataStreamer false Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous DataStageDataStreamer false Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous CatalogDataStreamer false Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous GCPDataStreamer false Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryDataStreamer false Stream CSV data chunks from Big Query using GCS

xor

Name Type Required Restrictions Description
» anonymous S3DataStreamer false Stream CSV data chunks from Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeDataStreamer false Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseDataStreamer false Stream CSV data chunks from Azure Synapse

xor

Name Type Required Restrictions Description
» anonymous DSSDataStreamer false Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemDataStreamer false none

xor

Name Type Required Restrictions Description
» anonymous HTTPDataStreamer false Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCDataStreamer false Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileDataStreamer false Stream CSV data chunks from local file storage

continued

Name Type Required Restrictions Description
maxExplanations integer true maximum: 100
minimum: 0
Number of explanations requested. Will be ordered by strength.
maxNgramExplanations any false The maximum number of text ngram explanations to supply per row of the dataset. The default recommended maxNgramExplanations is all (no limit)

oneOf

Name Type Required Restrictions Description
» anonymous integer false minimum: 0
none

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
modelId string false ID of the leaderboard model used by this job to process the predictions dataset
modelPackageId string false ID of the model package from the registry used by this job to process the predictions dataset
monitoringAggregation MonitoringAggregation false Defines the aggregation policy for monitoring jobs.
monitoringBatchPrefix string¦null false Name of the batch to create with this job
monitoringColumns MonitoringColumnsMapping false Column names mapping for monitoring
monitoringOutputSettings MonitoringOutputSettings false Output settings for monitoring jobs
numConcurrent integer true minimum: 0
Number of simultaneous requests to run against the prediction instance
outputSettings any false The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureOutputAdaptor false Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous GCPOutputAdaptor false Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutputAdaptor false Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous S3OutputAdaptor false Saves CSV data chunks to Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutputAdaptor false Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutputAdaptor false Save CSV data chunks to Azure Synapse in bulk

xor

Name Type Required Restrictions Description
» anonymous FileSystemOutputAdaptor false none

xor

Name Type Required Restrictions Description
» anonymous HttpOutputAdaptor false Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JdbcOutputAdaptor false Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutputAdaptor false Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous TableauOutputAdaptor false Save CSV data chunks to local file storage as .hyper file

continued

Name Type Required Restrictions Description
passthroughColumns [string] false maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet string false Pass through all columns from the original dataset
pinnedModelId string false Specify a model ID used for scoring
predictionInstance BatchJobPredictionInstance false Override the default prediction instance from the deployment when scoring this job.
predictionWarningEnabled boolean¦null false Enable prediction warnings.
redactedFields [string] true A list of qualified field names from intakeSettings and/or outputSettings that were redacted due to permissions and sharing settings. For example: intakeSettings.dataStoreId
skipDriftTracking boolean true Skip drift tracking for this job.
thresholdHigh number false Compute explanations for predictions above this threshold
thresholdLow number false Compute explanations for predictions below this threshold
timeseriesSettings any false Time series settings, included if this job is a time series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchJobTimeSeriesSettingsForecast false none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecastWithPolicy false none

xor

Name Type Required Restrictions Description
» anonymous BatchJobTimeSeriesSettingsHistorical false none

Enumerated Values

Property Value
batchJobType [monitoring, prediction]
anonymous [auto, fixed, dynamic]
explanationAlgorithm [shap, xemp]
anonymous all
passthroughColumnsSet all

BatchPredictionJobDefinitionResponse

{
  "createdBy": "string",
  "id": "string",
  "name": "string"
}

Properties

Name Type Required Restrictions Description
createdBy string true The ID of the creator of this job definition
id string true The ID of the Batch Prediction job definition
name string true A human-readable name for the definition, must be unique across organisations

BatchPredictionJobDefinitionsCreate

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "enabled": true,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "name": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "credentialId": "string",
    "format": "csv",
    "partitionColumns": [
      "string"
    ],
    "type": "azure",
    "url": "string"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionThreshold": 1,
  "predictionWarningEnabled": true,
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean true Whether this job should abort if too many errors are encountered
chunkSize any false Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous integer false maximum: 41943040
minimum: 20
none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchPredictionJobRemapping] false maxItems: 1000
Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchPredictionJobCSVSettings true The CSV settings used for this job
deploymentId string true ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling boolean true Skip row-by-row error handling
enabled boolean false If this job definition is enabled as a scheduled job. Optional if no schedule is supplied.
explanationAlgorithm string false Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false maxItems: 10
minItems: 1
List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed
explanationNumTopClasses integer false maximum: 10
minimum: 1
Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed
includePredictionStatus boolean true Include prediction status column in the output
includeProbabilities boolean true Include probabilities for all classes
includeProbabilitiesClasses [string] true maxItems: 100
Include only probabilities for these specific class names.
intakeSettings any true The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureIntake false Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous BigQueryIntake false Stream CSV data chunks from BigQuery using GCS

xor

Name Type Required Restrictions Description
» anonymous DataStageIntake false Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous Catalog false Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous DSS false Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemIntake false none

xor

Name Type Required Restrictions Description
» anonymous GCPIntake false Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPIntake false Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCIntake false Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileIntake false Stream CSV data chunks from local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Intake false Stream CSV data chunks from Amazon S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeIntake false Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseIntake false Stream CSV data chunks from Azure Synapse

continued

Name Type Required Restrictions Description
maxExplanations integer true maximum: 100
minimum: 0
Number of explanations requested. Will be ordered by strength.
modelId string false ID of the leaderboard model used by this job to process the predictions dataset
modelPackageId string false ID of the model package from the registry used by this job to process the predictions dataset
monitoringBatchPrefix string¦null false Name of the batch to create with this job
name string false maxLength: 100
minLength: 1
A human-readable name for the definition; must be unique across organisations. If left out, the backend will generate one for you.
numConcurrent integer false minimum: 1
Number of simultaneous requests to run against the prediction instance
outputSettings any false The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureOutput false Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutput false Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous FileSystemOutput false none

xor

Name Type Required Restrictions Description
» anonymous GCPOutput false Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPOutput false Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JDBCOutput false Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutput false Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Output false Save CSV data chunks to Amazon S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutput false Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutput false Save CSV data chunks to Azure Synapse in bulk

xor

Name Type Required Restrictions Description
» anonymous Tableau false Save CSV data chunks to local file storage as a .hyper file

continued

Name Type Required Restrictions Description
passthroughColumns [string] false maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet string false Pass through all columns from the original dataset
pinnedModelId string false Specify a model ID used for scoring
predictionInstance BatchPredictionJobPredictionInstance false Override the default prediction instance from the deployment when scoring this job.
predictionThreshold number false maximum: 1
minimum: 0
Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This value can be set between 0.0 and 1.0.
predictionWarningEnabled boolean¦null false Enable prediction warnings.
schedule Schedule false The scheduling information, submitted to the Job Scheduling service, defining how often and when to execute this job. Optional if enabled = False.
skipDriftTracking boolean true Skip drift tracking for this job.
thresholdHigh number false Compute explanations for predictions above this threshold
thresholdLow number false Compute explanations for predictions below this threshold
timeseriesSettings any false Time Series settings, included if this job is a Time Series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchJobTimeSeriesSettingsForecast false none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecastWithPolicy false none

xor

Name Type Required Restrictions Description
» anonymous BatchJobTimeSeriesSettingsHistorical false none

Enumerated Values

Property Value
anonymous [auto, fixed, dynamic]
explanationAlgorithm [shap, xemp]
passthroughColumnsSet all
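
The schema above maps directly onto a request body. As an illustrative sketch only, a scheduled definition could be created roughly as follows, assuming the collection endpoint also accepts POST; all IDs are placeholders, and fields marked required above may still be defaulted by the API, so include them explicitly if the request is rejected:

curl -X POST https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "name": "Nightly scoring",
    "enabled": true,
    "deploymentId": "{deploymentId}",
    "intakeSettings": {"type": "dataset", "datasetId": "{datasetId}"},
    "outputSettings": {"type": "localFile"},
    "schedule": {
      "minute": [0],
      "hour": [2],
      "dayOfMonth": ["*"],
      "dayOfWeek": ["*"],
      "month": ["*"]
    }
  }'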

BatchPredictionJobDefinitionsListResponse

{
  "count": 0,
  "data": [
    {
      "batchPredictionJob": {
        "abortOnError": true,
        "batchJobType": "monitoring",
        "chunkSize": "auto",
        "columnNamesRemapping": {},
        "csvSettings": {
          "delimiter": ",",
          "encoding": "utf-8",
          "quotechar": "\""
        },
        "deploymentId": "string",
        "disableRowLevelErrorHandling": false,
        "explanationAlgorithm": "shap",
        "explanationClassNames": [
          "string"
        ],
        "explanationNumTopClasses": 1,
        "includePredictionStatus": false,
        "includeProbabilities": true,
        "includeProbabilitiesClasses": [],
        "intakeSettings": {
          "type": "localFile"
        },
        "maxExplanations": 0,
        "maxNgramExplanations": 0,
        "modelId": "string",
        "modelPackageId": "string",
        "monitoringAggregation": {
          "retentionPolicy": "samples",
          "retentionValue": 0
        },
        "monitoringBatchPrefix": "string",
        "monitoringColumns": {
          "actedUponColumn": "string",
          "actualsTimestampColumn": "string",
          "actualsValueColumn": "string",
          "associationIdColumn": "string",
          "customMetricId": "string",
          "customMetricTimestampColumn": "string",
          "customMetricTimestampFormat": "string",
          "customMetricValueColumn": "string",
          "monitoredStatusColumn": "string",
          "predictionsColumns": [
            {
              "className": "string",
              "columnName": "string"
            }
          ],
          "uniqueRowIdentifierColumns": [
            "string"
          ]
        },
        "monitoringOutputSettings": {
          "monitoredStatusColumn": "string",
          "uniqueRowIdentifierColumns": [
            "string"
          ]
        },
        "numConcurrent": 0,
        "outputSettings": {
          "credentialId": "string",
          "format": "csv",
          "partitionColumns": [
            "string"
          ],
          "type": "azure",
          "url": "string"
        },
        "passthroughColumns": [
          "string"
        ],
        "passthroughColumnsSet": "all",
        "pinnedModelId": "string",
        "predictionInstance": {
          "apiKey": "string",
          "datarobotKey": "string",
          "hostName": "string",
          "sslEnabled": true
        },
        "predictionWarningEnabled": true,
        "redactedFields": [
          "string"
        ],
        "skipDriftTracking": false,
        "thresholdHigh": 0,
        "thresholdLow": 0,
        "timeseriesSettings": {
          "forecastPoint": "2019-08-24T14:15:22Z",
          "relaxKnownInAdvanceFeaturesCheck": false,
          "type": "forecast"
        }
      },
      "created": "2019-08-24T14:15:22Z",
      "createdBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      },
      "enabled": false,
      "id": "string",
      "lastFailedRunTime": "2019-08-24T14:15:22Z",
      "lastScheduledRunTime": "2019-08-24T14:15:22Z",
      "lastStartedJobStatus": "INITIALIZING",
      "lastStartedJobTime": "2019-08-24T14:15:22Z",
      "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
      "name": "string",
      "nextScheduledRunTime": "2019-08-24T14:15:22Z",
      "schedule": {
        "dayOfMonth": [
          "*"
        ],
        "dayOfWeek": [
          "*"
        ],
        "hour": [
          "*"
        ],
        "minute": [
          "*"
        ],
        "month": [
          "*"
        ]
      },
      "updated": "2019-08-24T14:15:22Z",
      "updatedBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      }
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0
}

Properties

Name Type Required Restrictions Description
count integer false Number of items returned on this page.
data [BatchPredictionJobDefinitionsResponse] true An array of scheduled job definitions
next string(uri)¦null true URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true URL pointing to the previous page (if null, there is no previous page).
totalCount integer true The total number of items across all pages.

BatchPredictionJobDefinitionsResponse

{
  "batchPredictionJob": {
    "abortOnError": true,
    "batchJobType": "monitoring",
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "maxNgramExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringAggregation": {
      "retentionPolicy": "samples",
      "retentionValue": 0
    },
    "monitoringBatchPrefix": "string",
    "monitoringColumns": {
      "actedUponColumn": "string",
      "actualsTimestampColumn": "string",
      "actualsValueColumn": "string",
      "associationIdColumn": "string",
      "customMetricId": "string",
      "customMetricTimestampColumn": "string",
      "customMetricTimestampFormat": "string",
      "customMetricValueColumn": "string",
      "monitoredStatusColumn": "string",
      "predictionsColumns": [
        {
          "className": "string",
          "columnName": "string"
        }
      ],
      "uniqueRowIdentifierColumns": [
        "string"
      ]
    },
    "monitoringOutputSettings": {
      "monitoredStatusColumn": "string",
      "uniqueRowIdentifierColumns": [
        "string"
      ]
    },
    "numConcurrent": 0,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "enabled": false,
  "id": "string",
  "lastFailedRunTime": "2019-08-24T14:15:22Z",
  "lastScheduledRunTime": "2019-08-24T14:15:22Z",
  "lastStartedJobStatus": "INITIALIZING",
  "lastStartedJobTime": "2019-08-24T14:15:22Z",
  "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
  "name": "string",
  "nextScheduledRunTime": "2019-08-24T14:15:22Z",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "updated": "2019-08-24T14:15:22Z",
  "updatedBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  }
}

Properties

Name Type Required Restrictions Description
batchPredictionJob BatchPredictionJobDefinitionJobSpecResponse true The Batch Prediction Job specification to be put on the queue in intervals
created string(date-time) true When this job was created
createdBy BatchJobCreatedBy true Who created this job
enabled boolean true If this job definition is enabled as a scheduled job.
id string true The ID of the Batch Prediction job definition
lastFailedRunTime string(date-time)¦null false Last time this job had a failed run
lastScheduledRunTime string(date-time)¦null false Last time this job was scheduled to run (though not guaranteed it actually ran at that time)
lastStartedJobStatus string¦null true The status of the latest job launched to the queue (if any).
lastStartedJobTime string(date-time)¦null true The last time (if any) a job was launched.
lastSuccessfulRunTime string(date-time)¦null false Last time this job had a successful run
name string true A human-readable name for the definition; must be unique across organisations
nextScheduledRunTime string(date-time)¦null false Next time this job is scheduled to run
schedule Schedule false The scheduling information, submitted to the Job Scheduling service, defining how often and when to execute this job. Optional if enabled = False.
updated string(date-time) true When this job was last updated
updatedBy BatchJobCreatedBy true Who last updated this job

Enumerated Values

Property Value
lastStartedJobStatus [INITIALIZING, RUNNING, COMPLETED, ABORTED, FAILED]

BatchPredictionJobDefinitionsUpdate

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "enabled": true,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "name": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "credentialId": "string",
    "format": "csv",
    "partitionColumns": [
      "string"
    ],
    "type": "azure",
    "url": "string"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionThreshold": 1,
  "predictionWarningEnabled": true,
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean false Whether this job should abort if too many errors are encountered
chunkSize any false Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous integer false maximum: 41943040
minimum: 20
none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchPredictionJobRemapping] false maxItems: 1000
Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchPredictionJobCSVSettings false The CSV settings used for this job
deploymentId string false ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling boolean false Skip row-by-row error handling
enabled boolean false If this job definition is enabled as a scheduled job. Optional if no schedule is supplied.
explanationAlgorithm string false Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false maxItems: 10
minItems: 1
List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed
explanationNumTopClasses integer false maximum: 10
minimum: 1
Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed
includePredictionStatus boolean false Include prediction status column in the output
includeProbabilities boolean false Include probabilities for all classes
includeProbabilitiesClasses [string] false maxItems: 100
Include only probabilities for these specific class names.
intakeSettings any false The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureIntake false Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous BigQueryIntake false Stream CSV data chunks from BigQuery using GCS

xor

Name Type Required Restrictions Description
» anonymous DataStageIntake false Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous Catalog false Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous DSS false Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemIntake false none

xor

Name Type Required Restrictions Description
» anonymous GCPIntake false Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPIntake false Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCIntake false Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileIntake false Stream CSV data chunks from local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Intake false Stream CSV data chunks from Amazon S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeIntake false Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseIntake false Stream CSV data chunks from Azure Synapse

continued

Name Type Required Restrictions Description
maxExplanations integer false maximum: 100
minimum: 0
Number of explanations requested. Will be ordered by strength.
modelId string false ID of the leaderboard model used by this job to process the predictions dataset
modelPackageId string false ID of the model package from the registry used by this job to process the predictions dataset
monitoringBatchPrefix string¦null false Name of the batch to create with this job
name string false maxLength: 100
minLength: 1
A human-readable name for the definition; must be unique across organisations. If left out, the backend will generate one for you.
numConcurrent integer false minimum: 1
Number of simultaneous requests to run against the prediction instance
outputSettings any false The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureOutput false Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutput false Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous FileSystemOutput false none

xor

Name Type Required Restrictions Description
» anonymous GCPOutput false Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPOutput false Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JDBCOutput false Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutput false Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Output false Save CSV data chunks to Amazon S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutput false Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutput false Save CSV data chunks to Azure Synapse in bulk

xor

Name Type Required Restrictions Description
» anonymous Tableau false Save CSV data chunks to local file storage as a .hyper file

continued

Name Type Required Restrictions Description
passthroughColumns [string] false maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet string false Pass through all columns from the original dataset
pinnedModelId string false Specify a model ID used for scoring
predictionInstance BatchPredictionJobPredictionInstance false Override the default prediction instance from the deployment when scoring this job.
predictionThreshold number false maximum: 1
minimum: 0
Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This value can be set between 0.0 and 1.0.
predictionWarningEnabled boolean¦null false Enable prediction warnings.
schedule Schedule false The scheduling information, submitted to the Job Scheduling service, defining how often and when to execute this job. Optional if enabled = False.
skipDriftTracking boolean false Skip drift tracking for this job.
thresholdHigh number false Compute explanations for predictions above this threshold
thresholdLow number false Compute explanations for predictions below this threshold
timeseriesSettings any false Time Series settings, included if this job is a Time Series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchJobTimeSeriesSettingsForecast false none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecastWithPolicy false none

xor

Name Type Required Restrictions Description
» anonymous BatchJobTimeSeriesSettingsHistorical false none

Enumerated Values

Property Value
anonymous [auto, fixed, dynamic]
explanationAlgorithm [shap, xemp]
passthroughColumnsSet all
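
Because every field in the update schema is optional, a sketch of pausing a scheduled definition can be as small as the following, assuming definitions are updated with PATCH on their individual resource; {definitionId} is a placeholder:

curl -X PATCH https://app.datarobot.com/api/v2/batchPredictionJobDefinitions/{definitionId}/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{"enabled": false}'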

BatchPredictionJobId

{
  "partNumber": 0,
  "predictionJobId": "string"
}

Properties

Name Type Required Restrictions Description
partNumber integer true minimum: 0
The number of the CSV part being uploaded when using multipart upload
predictionJobId string true ID of the Batch Prediction job

BatchPredictionJobLinks

{
  "csvUpload": "string",
  "download": "string",
  "self": "string"
}

Properties

Name Type Required Restrictions Description
csvUpload string(url) false The URL used to upload the dataset for this job. Only available for localFile intake.
download string¦null false The URL used to download the results from this job. Only available for localFile outputs. Will be null if the download is not yet available.
self string(url) true The URL used to access this job.
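
For localFile intake and output, these links drive the job lifecycle. A minimal sketch, assuming the csvUpload URL accepts a PUT of the raw CSV body (verify the method for your version):

# Upload the scoring data to the csvUpload link returned for the job
curl -X PUT "{csvUpload}" \
  -H "Content-Type: text/csv" \
  -H "Authorization: Bearer {access-token}" \
  --data-binary @scoring_data.csv

# Poll the self link until the job completes, then fetch the results;
# download is null until they are available
curl -X GET "{download}" \
  -H "Authorization: Bearer {access-token}" \
  -o predictions.csv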

BatchPredictionJobListResponse

{
  "count": 0,
  "data": [
    {
      "batchPredictionJobDefinition": {
        "createdBy": "string",
        "id": "string",
        "name": "string"
      },
      "created": "2019-08-24T14:15:22Z",
      "createdBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      },
      "elapsedTimeSec": 0,
      "failedRows": 0,
      "hidden": "2019-08-24T14:15:22Z",
      "id": "string",
      "intakeDatasetDisplayName": "string",
      "jobIntakeSize": 0,
      "jobOutputSize": 0,
      "jobSpec": {
        "abortOnError": true,
        "chunkSize": "auto",
        "columnNamesRemapping": {},
        "csvSettings": {
          "delimiter": ",",
          "encoding": "utf-8",
          "quotechar": "\""
        },
        "deploymentId": "string",
        "disableRowLevelErrorHandling": false,
        "explanationAlgorithm": "shap",
        "explanationClassNames": [
          "string"
        ],
        "explanationNumTopClasses": 1,
        "includePredictionStatus": false,
        "includeProbabilities": true,
        "includeProbabilitiesClasses": [],
        "intakeSettings": {
          "type": "localFile"
        },
        "maxExplanations": 0,
        "modelId": "string",
        "modelPackageId": "string",
        "monitoringBatchPrefix": "string",
        "numConcurrent": 1,
        "outputSettings": {
          "credentialId": "string",
          "format": "csv",
          "partitionColumns": [
            "string"
          ],
          "type": "azure",
          "url": "string"
        },
        "passthroughColumns": [
          "string"
        ],
        "passthroughColumnsSet": "all",
        "pinnedModelId": "string",
        "predictionInstance": {
          "apiKey": "string",
          "datarobotKey": "string",
          "hostName": "string",
          "sslEnabled": true
        },
        "predictionThreshold": 1,
        "predictionWarningEnabled": true,
        "redactedFields": [
          "string"
        ],
        "skipDriftTracking": false,
        "thresholdHigh": 0,
        "thresholdLow": 0,
        "timeseriesSettings": {
          "forecastPoint": "2019-08-24T14:15:22Z",
          "relaxKnownInAdvanceFeaturesCheck": false,
          "type": "forecast"
        }
      },
      "links": {
        "csvUpload": "string",
        "download": "string",
        "self": "string"
      },
      "logs": [
        "string"
      ],
      "percentageCompleted": 100,
      "queuePosition": 0,
      "queued": true,
      "resultsDeleted": true,
      "scoredRows": 0,
      "skippedRows": 0,
      "source": "string",
      "status": "INITIALIZING",
      "statusDetails": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0
}

Properties

Name Type Required Restrictions Description
count integer false Number of items returned on this page.
data [BatchPredictionJobResponse] true An array of jobs
next string(uri)¦null true URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true URL pointing to the previous page (if null, there is no previous page).
totalCount integer true The total number of items across all pages.

BatchPredictionJobPredictionInstance

{
  "apiKey": "string",
  "datarobotKey": "string",
  "hostName": "string",
  "sslEnabled": true
}

Properties

Name Type Required Restrictions Description
apiKey string false By default, prediction requests will use the API key of the user that created the job. This allows you to make requests on behalf of other users.
datarobotKey string false If running a job against a prediction instance in the Managed AI Cloud, you must provide the organization-level DataRobot-Key.
hostName string true Override the default host name of the deployment with this.
sslEnabled boolean true Use SSL (HTTPS) when communicating with the overridden prediction server.
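
Put together, an override of the deployment's default prediction instance inside a job payload might look like the following sketch; the host name and keys are placeholders:

"predictionInstance": {
  "hostName": "192.0.2.10",
  "sslEnabled": true,
  "datarobotKey": "{organization-datarobot-key}",
  "apiKey": "{api-key}"
}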

BatchPredictionJobRemapping

{
  "inputName": "string",
  "outputName": "string"
}

Properties

Name Type Required Restrictions Description
inputName string true Rename column with this name
outputName string¦null true Rename column to this name (leave as null to remove from the output)
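
In the list form of columnNamesRemapping, each item follows this schema. The sketch below, with hypothetical column names, renames one column and removes another by mapping it to null:

"columnNamesRemapping": [
  {"inputName": "readmission_probability", "outputName": "score"},
  {"inputName": "patient_ssn", "outputName": null}
]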

BatchPredictionJobResponse

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "credentialId": "string",
      "format": "csv",
      "partitionColumns": [
        "string"
      ],
      "type": "azure",
      "url": "string"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionThreshold": 1,
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Properties

Name Type Required Restrictions Description
batchPredictionJobDefinition BatchPredictionJobDefinitionResponse false The Batch Prediction job definition linked to this job, if any.
created string(date-time) true When was this job created
createdBy BatchPredictionCreatedBy true Who created this job
elapsedTimeSec integer true minimum: 0
Number of seconds the job has been processing for
failedRows integer true minimum: 0
Number of rows that have failed scoring
hidden string(date-time) false When this job was last hidden; blank if visible
id string true The ID of the Batch Prediction job
intakeDatasetDisplayName string¦null false If applicable (e.g. for AI catalog), will contain the dataset name used for the intake dataset.
jobIntakeSize integer¦null true minimum: 0
Number of bytes in the intake dataset for this job
jobOutputSize integer¦null true minimum: 0
Number of bytes in the output dataset for this job
jobSpec BatchPredictionJobSpecResponse true The job configuration used to create this job
links BatchPredictionJobLinks true Links useful for this job
logs [string] true The job log.
percentageCompleted number true maximum: 100
minimum: 0
Indicates job progress, based on the number of rows of the dataset that have already been processed
queuePosition integer¦null false minimum: 0
To ensure a dedicated prediction instance is not overloaded, only one job will be run against it at a time. This is the number of jobs awaiting processing before this job starts running. May not be available in all environments.
queued boolean true The job has been put on the queue for execution.
resultsDeleted boolean false Indicates if the job was subject to garbage collection and had its artifacts deleted (output files, if any, and scoring data on local storage)
scoredRows integer true minimum: 0
Number of rows that have been used in prediction computation
skippedRows integer true minimum: 0
Number of rows that have been skipped during scoring. May be non-zero only for time series predictions, when the provided dataset contains more historical rows than required.
source string false Source from which batch job was started
status string true The current job status
statusDetails string true Explanation for current status

Enumerated Values

Property Value
status [INITIALIZING, RUNNING, COMPLETED, ABORTED, FAILED]

BatchPredictionJobSpecResponse

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "credentialId": "string",
    "format": "csv",
    "partitionColumns": [
      "string"
    ],
    "type": "azure",
    "url": "string"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionThreshold": 1,
  "predictionWarningEnabled": true,
  "redactedFields": [
    "string"
  ],
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean true Whether this job should abort if too many errors are encountered
chunkSize any false Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous integer false maximum: 41943040
minimum: 20
none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchPredictionJobRemapping] false maxItems: 1000
Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchPredictionJobCSVSettings true The CSV settings used for this job
deploymentId string false ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling boolean true Skip row-by-row error handling
explanationAlgorithm string false Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false maxItems: 10
minItems: 1
List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed
explanationNumTopClasses integer false maximum: 10
minimum: 1
Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed
includePredictionStatus boolean true Include prediction status column in the output
includeProbabilities boolean true Include probabilities for all classes
includeProbabilitiesClasses [string] true maxItems: 100
Include only probabilities for these specific class names.
intakeSettings any true The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureDataStreamer false Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous DataStageDataStreamer false Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous CatalogDataStreamer false Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous GCPDataStreamer false Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryDataStreamer false Stream CSV data chunks from BigQuery using GCS

xor

Name Type Required Restrictions Description
» anonymous S3DataStreamer false Stream CSV data chunks from Amazon S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeDataStreamer false Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseDataStreamer false Stream CSV data chunks from Azure Synapse

xor

Name Type Required Restrictions Description
» anonymous DSSDataStreamer false Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemDataStreamer false none

xor

Name Type Required Restrictions Description
» anonymous HTTPDataStreamer false Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCDataStreamer false Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileDataStreamer false Stream CSV data chunks from local file storage

continued

Name Type Required Restrictions Description
maxExplanations integer true maximum: 100
minimum: 0
Number of explanations requested. Will be ordered by strength.
modelId string false ID of the leaderboard model used by this job to process the predictions dataset
modelPackageId string false ID of the model package from the registry used by this job to process the predictions dataset
monitoringBatchPrefix string¦null false Name of the batch to create with this job
numConcurrent integer false minimum: 1
Number of simultaneous requests to run against the prediction instance
outputSettings any false The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureOutputAdaptor false Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous GCPOutputAdaptor false Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutputAdaptor false Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous S3OutputAdaptor false Save CSV data chunks to Amazon S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutputAdaptor false Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutputAdaptor false Save CSV data chunks to Azure Synapse in bulk

xor

Name Type Required Restrictions Description
» anonymous FileSystemOutputAdaptor false none

xor

Name Type Required Restrictions Description
» anonymous HttpOutputAdaptor false Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JdbcOutputAdaptor false Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutputAdaptor false Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous TableauOutputAdaptor false Save CSV data chunks to local file storage as a .hyper file

continued

Name Type Required Restrictions Description
passthroughColumns [string] false maxItems: 100
Pass through columns from the original dataset
passthroughColumnsSet string false Pass through all columns from the original dataset
pinnedModelId string false Specify a model ID used for scoring
predictionInstance BatchPredictionJobPredictionInstance false Override the default prediction instance from the deployment when scoring this job.
predictionThreshold number false maximum: 1
minimum: 0
Threshold is the point that sets the class boundary for a predicted value. The model classifies an observation below the threshold as FALSE, and an observation above the threshold as TRUE. In other words, DataRobot automatically assigns the positive class label to any prediction exceeding the threshold. This value can be set between 0.0 and 1.0.
predictionWarningEnabled boolean¦null false Enable prediction warnings.
redactedFields [string] true A list of qualified field names from intakeSettings and/or outputSettings that were redacted due to permissions and sharing settings. For example: intakeSettings.dataStoreId
skipDriftTracking boolean true Skip drift tracking for this job.
thresholdHigh number false Compute explanations for predictions above this threshold
thresholdLow number false Compute explanations for predictions below this threshold
timeseriesSettings any false Time Series settings, included if this job is a Time Series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecast false none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsHistorical false none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsTraining false none

Enumerated Values

Property Value
anonymous [auto, fixed, dynamic]
explanationAlgorithm [shap, xemp]
passthroughColumnsSet all

BatchPredictionJobTimeSeriesSettingsForecast

{
  "forecastPoint": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": false,
  "type": "forecast"
}

Properties

Name Type Required Restrictions Description
forecastPoint string(date-time) false Used for forecast predictions in order to override the inferred forecast point from the dataset.
relaxKnownInAdvanceFeaturesCheck boolean false If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type string true Forecast mode makes predictions using the forecastPoint or rows in the dataset without a target.

Enumerated Values

Property Value
type forecast
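
For example, a forecast configuration passed as timeseriesSettings on a job could look like this sketch; omit forecastPoint to let it be inferred from the dataset:

"timeseriesSettings": {
  "type": "forecast",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": false
}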

BatchPredictionJobTimeSeriesSettingsForecastWithPolicy

{
  "forecastPointPolicy": {
    "configuration": {
      "offset": "string"
    },
    "type": "jobRunTimeBased"
  },
  "relaxKnownInAdvanceFeaturesCheck": false,
  "type": "forecast"
}

Properties

Name Type Required Restrictions Description
forecastPointPolicy JobRunTimeBasedForecastPointPolicy true Forecast point policy
relaxKnownInAdvanceFeaturesCheck boolean false If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type string true Forecast mode makes predictions using the forecastPoint or rows in the dataset without a target.

Enumerated Values

Property Value
type forecast

BatchPredictionJobTimeSeriesSettingsHistorical

{
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": false,
  "type": "historical"
}

Properties

Name Type Required Restrictions Description
predictionsEndDate string(date-time) false Used for historical predictions to override the date up to which predictions are calculated. By default, the value is inferred automatically from the dataset.
predictionsStartDate string(date-time) false Used for historical predictions to override the date from which predictions are calculated. By default, the value is inferred automatically from the dataset.
relaxKnownInAdvanceFeaturesCheck boolean false If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type string true Historical mode enables bulk predictions, calculating predictions for all possible forecast points and forecast distances in the dataset within the predictionsStartDate/predictionsEndDate range.

Enumerated Values

Property Value
type historical
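
A historical (bulk) configuration bounds the forecast points by date instead; in this sketch the dates are placeholders, and either may be omitted to be inferred from the dataset:

"timeseriesSettings": {
  "type": "historical",
  "predictionsStartDate": "2019-01-01T00:00:00Z",
  "predictionsEndDate": "2019-08-24T14:15:22Z"
}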

BatchPredictionJobTimeSeriesSettingsTraining

{
  "relaxKnownInAdvanceFeaturesCheck": false,
  "type": "training"
}

Properties

Name Type Required Restrictions Description
relaxKnownInAdvanceFeaturesCheck boolean false If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type string true Forecast mode used for making predictions on subsets of training data.

Enumerated Values

Property Value
type training

BatchPredictionJobUpdate

{
  "aborted": "2019-08-24T14:15:22Z",
  "completed": "2019-08-24T14:15:22Z",
  "failedRows": 0,
  "hidden": true,
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "logs": [
    "string"
  ],
  "scoredRows": 0,
  "skippedRows": 0,
  "started": "2019-08-24T14:15:22Z",
  "status": "INITIALIZING"
}

Properties

Name Type Required Restrictions Description
aborted string(date-time)¦null false Time when the job was aborted
completed string(date-time)¦null false Time when the job completed scoring
failedRows integer false Number of rows that have failed scoring
hidden boolean false Hides or unhides the job from the job list
jobIntakeSize integer¦null false Number of bytes in the intake dataset for this job
jobOutputSize integer¦null false Number of bytes in the output dataset for this job
logs [string] false The job log.
scoredRows integer false Number of rows that have been used in prediction computation
skippedRows integer false Number of rows that have been skipped during scoring. May be non-zero only for time series predictions, when the provided dataset contains more historical rows than required.
started string(date-time)¦null false Time when job scoring began
status string false The current job status

Enumerated Values

Property Value
status [INITIALIZING, RUNNING, COMPLETED, ABORTED, FAILED]

BigQueryDataStreamer

{
  "bucket": "string",
  "credentialId": "string",
  "dataset": "string",
  "table": "string",
  "type": "bigquery"
}

Properties

Name Type Required Restrictions Description
bucket string true The name of the GCS bucket used for data export
credentialId any true Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false The ID of the GCP credentials

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dataset string true The name of the specified BigQuery dataset to read input data from
table string true The name of the specified BigQuery table to read input data from
type string true Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
type bigquery

BigQueryIntake

{
  "bucket": "string",
  "credentialId": "string",
  "dataset": "string",
  "table": "string",
  "type": "bigquery"
}

Properties

Name Type Required Restrictions Description
bucket string true The name of the GCS bucket used for data export
credentialId string true The ID of the GCP credentials
dataset string true The name of the specified BigQuery dataset to read input data from
table string true The name of the specified BigQuery table to read input data from
type string true Type name for this intake type

Enumerated Values

Property Value
type bigquery
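
As a sketch, a BigQuery intake configuration combines the table to read from with the GCS bucket used for staging the export; all resource names below are placeholders:

"intakeSettings": {
  "type": "bigquery",
  "dataset": "{bigquery-dataset}",
  "table": "{bigquery-table}",
  "bucket": "{gcs-staging-bucket}",
  "credentialId": "{gcpCredentialId}"
}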

BigQueryOutput

{
  "bucket": "string",
  "credentialId": "string",
  "dataset": "string",
  "table": "string",
  "type": "bigquery"
}

Properties

Name Type Required Restrictions Description
bucket string true The name of the GCS bucket used for data loading
credentialId string true The ID of the GCP credentials
dataset string true The name of the specified BigQuery dataset to write data back to
table string true The name of the specified BigQuery table to write data back to
type string true Type name for this output type

Enumerated Values

Property Value
type bigquery

BigQueryOutputAdaptor

{
  "bucket": "string",
  "credentialId": "string",
  "dataset": "string",
  "table": "string",
  "type": "bigquery"
}

Properties

Name Type Required Restrictions Description
bucket string true The name of the GCS bucket used for data loading
credentialId any true Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false The ID of the GCP credentials

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dataset string true The name of the specified BigQuery dataset to write data back to
table string true The name of the specified BigQuery table to write data back to
type string true Type name for this output type

Enumerated Values

Property Value
anonymous [redacted]
type bigquery

Catalog

{
  "datasetId": "string",
  "datasetVersionId": "string",
  "type": "dataset"
}

Properties

Name Type Required Restrictions Description
datasetId string true The ID of the AI catalog dataset
datasetVersionId string false The ID of the AI catalog dataset version
type string true Type name for this intake type

Enumerated Values

Property Value
type dataset
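
For example, scoring against an AI catalog dataset only requires the dataset ID; the version ID is optional. Both values below are placeholders:

"intakeSettings": {
  "type": "dataset",
  "datasetId": "{datasetId}",
  "datasetVersionId": "{datasetVersionId}"
}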

CatalogDataStreamer

{
  "datasetId": "string",
  "datasetVersionId": "string",
  "type": "dataset"
}

Properties

Name Type Required Restrictions Description
datasetId any true Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false The ID of the AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
datasetVersionId string false The ID of the AI catalog dataset version
type string true Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
type dataset

CreatePredictionDatasetResponse

{
  "datasetId": "string"
}

Properties

Name Type Required Restrictions Description
datasetId string true The ID of the newly created prediction dataset.

CreatePredictionFromDataset

{
  "actualValueColumn": "string",
  "datasetId": "string",
  "explanationAlgorithm": "shap",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "includeFdwCounts": false,
  "includePredictionIntervals": true,
  "maxExplanations": 1,
  "modelId": "string",
  "predictionIntervalsSize": 1,
  "predictionThreshold": 1,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false For time series projects only. The name of the actual value column; valid for prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset. This value is optional.
datasetId string true The dataset to compute predictions for - must have previously been uploaded.
explanationAlgorithm string false If set to shap, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations).
forecastPoint string(date-time) false For time series projects only. The time in the dataset relative to which predictions are generated. This value is optional. If not specified, the default is the value in the row with the latest timestamp. Specifying this value for a project that is not a time series project will result in an error.
includeFdwCounts boolean false For time series projects with partial history only. Indicates if feature derivation window counts (featureDerivationWindowCounts) will be part of the response.
includePredictionIntervals boolean false Specifies whether prediction intervals should be calculated for this request. Defaults to True if predictionIntervalsSize is specified, otherwise defaults to False.
maxExplanations integer false maximum: 100
minimum: 1
Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. In the case of 'shap': If not set, explanations are returned for all features. If the number of features is greater than the 'maxExplanations', the sum of remaining values will also be returned as 'shapRemainingTotal'. Defaults to null for datasets narrower than 100 columns, defaults to 100 for datasets wider than 100 columns. Cannot be set if 'explanationAlgorithm' is omitted.
modelId string true The model to make predictions on.
predictionIntervalsSize integer false maximum: 100
minimum: 1
Represents the percentile to use for the size of the prediction intervals. Defaults to 80 if includePredictionIntervals is True.
predictionThreshold number false maximum: 1
minimum: 0
Threshold used for binary classification in predictions. Accepts values from 0.0 to 1.0. If not specified, model default prediction threshold will be used.
predictionsEndDate string(date-time) false The end date for bulk predictions, exclusive. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false The start date for bulk predictions. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.

Enumerated Values

Property Value
explanationAlgorithm shap
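
As a sketch of how these fields fit together, the request below asks for predictions on a previously uploaded dataset with SHAP explanations enabled. The POST /api/v2/projects/{projectId}/predictions/ path and all IDs are placeholders assumed for illustration; substitute your own project, model, and dataset IDs.

# Request predictions with SHAP explanations (assumed endpoint; substitute real IDs)
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/predictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "datasetId": "{datasetId}",
    "modelId": "{modelId}",
    "explanationAlgorithm": "shap",
    "maxExplanations": 5
  }'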

CreateTrainingPrediction

{
  "dataSubset": "all",
  "explanationAlgorithm": "string",
  "maxExplanations": 1,
  "modelId": "string"
}

Properties

Name Type Required Restrictions Description
dataSubset string true Subset of data predicted on: The value "all" returns predictions for all rows in the dataset including data used for training, validation, holdout and any rows discarded. This is not available for large datasets or projects created with Date/Time partitioning. The value "validationAndHoldout" returns predictions for the rows used to calculate the validation score and the holdout score. Not available for large projects or Date/Time projects for models trained into the validation set. The value "holdout" returns predictions for the rows used to calculate the holdout score. Not available for projects created without a holdout or for models trained into holdout for large datasets or created with Date/Time partitioning. The value "allBacktests" returns predictions for the rows used to calculate the backtesting scores for Date/Time projects. The value "validation" returns predictions for the rows used to calculate the validation score.
explanationAlgorithm string false If set to "shap", the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations)
maxExplanations integer false maximum: 100
minimum: 1
Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. In the case of "shap": If not set, explanations are returned for all features. If the number of features is greater than the "maxExplanations", the sum of remaining values will also be returned as "shapRemainingTotal". Defaults to null for datasets narrower than 100 columns, defaults to 100 for datasets wider than 100 columns. Cannot be set if "explanationAlgorithm" is omitted.
modelId string true The model to make predictions on

Enumerated Values

Property Value
dataSubset [all, validationAndHoldout, holdout, allBacktests, validation, crossValidation]
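
As a hedged example, a request for training predictions on the holdout partition might look like the following; the trainingPredictions path is an assumption for illustration.

# Compute training predictions on the holdout rows (assumed endpoint; substitute real IDs)
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/trainingPredictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "dataSubset": "holdout",
    "modelId": "{modelId}"
  }'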

CredentialId

{
  "catalogVersionId": "string",
  "credentialId": "string",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
catalogVersionId string false The ID of the latest version of the catalog entry.
credentialId string true The ID of the set of credentials to use instead of user and password. When a credential ID is supplied, username and password become optional.
url string false The link to retrieve more detailed information about the entity that uses this catalog dataset.

DSS

{
  "datasetId": "string",
  "partition": "holdout",
  "projectId": "string",
  "type": "dss"
}

Properties

Name Type Required Restrictions Description
datasetId string false The ID of the dataset
partition string false Partition used to predict
projectId string true The ID of the project
type string true Type name for this intake type

Enumerated Values

Property Value
partition [holdout, validation, allBacktests, None]
type dss

DSSDataStreamer

{
  "datasetId": "string",
  "partition": "holdout",
  "projectId": "string",
  "type": "dss"
}

Properties

Name Type Required Restrictions Description
datasetId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false The ID of the dataset

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
partition string false Partition used to predict
projectId string true The ID of the project
type string true Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
partition [holdout, validation, allBacktests, None]
type dss

DataQualityWarningsRecord

{
  "hasKiaMissingValuesInForecastWindow": true,
  "insufficientRowsForEvaluatingModels": true,
  "singleClassActualValueColumn": true
}

Properties

Name Type Required Restrictions Description
hasKiaMissingValuesInForecastWindow boolean false If true, known-in-advance features in this dataset have missing values in the forecast window. Absence of the known-in-advance values can negatively impact prediction quality. Only applies for time series projects.
insufficientRowsForEvaluatingModels boolean false If true, the dataset has a target column present indicating it can be used to evaluate model performance but too few rows to be trustworthy in so doing. If false, either it has no target column at all or it has sufficient rows for model evaluation. Only applies for regression, binary classification, multiclass classification projects and time series unsupervised projects.
singleClassActualValueColumn boolean false If true, actual value column has only one class and such insights as ROC curve can not be calculated. Only applies for binary classification projects or unsupervised projects.

DataStageDataStreamer

{
  "dataStageId": "string",
  "type": "dataStage"
}

Properties

Name Type Required Restrictions Description
dataStageId string true The ID of the data stage
type string true Type name for this intake type

Enumerated Values

Property Value
type dataStage

DataStageIntake

{
  "dataStageId": "string",
  "type": "dataStage"
}

Properties

Name Type Required Restrictions Description
dataStageId string true The ID of the data stage
type string true Type name for this intake type

Enumerated Values

Property Value
type dataStage

DatabricksAccessTokenCredentials

{
  "credentialType": "databricks_access_token_account",
  "databricksAccessToken": "string"
}

Properties

Name Type Required Restrictions Description
credentialType string true The type of these credentials, 'databricks_access_token_account' here.
databricksAccessToken string true minLength: 1
Databricks personal access token.

Enumerated Values

Property Value
credentialType databricks_access_token_account

FileSystemDataStreamer

{
  "path": "string",
  "type": "filesystem"
}

Properties

Name Type Required Restrictions Description
path string true Path to data on host filesystem
type string true Type name for this intake type

Enumerated Values

Property Value
type filesystem

FileSystemIntake

{
  "path": "string",
  "type": "filesystem"
}

Properties

Name Type Required Restrictions Description
path string true Path to data on host filesystem
type string true Type name for this intake type

Enumerated Values

Property Value
type filesystem

FileSystemOutput

{
  "path": "string",
  "type": "filesystem"
}

Properties

Name Type Required Restrictions Description
path string true Path to results on host filesystem
type string true Type name for this output type

Enumerated Values

Property Value
type filesystem

FileSystemOutputAdaptor

{
  "path": "string",
  "type": "filesystem"
}

Properties

Name Type Required Restrictions Description
path string true Path to results on host filesystem
type string true Type name for this output type

Enumerated Values

Property Value
type filesystem

GCPDataStreamer

{
  "credentialId": "string",
  "format": "csv",
  "type": "gcp",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
format string false Type of input file format
type string true Type name for this intake type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format [csv, parquet]
type gcp

GCPIntake

{
  "credentialId": "string",
  "format": "csv",
  "type": "gcp",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false Use the specified credential to access the url
format string false Type of input file format
type string true Type name for this intake type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
format [csv, parquet]
type gcp
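
For orientation, a batch prediction job could point its intake at a file in Google Cloud Storage as sketched below. The POST /api/v2/batchPredictions/ path, bucket URL, and IDs are assumptions for illustration.

# Score a Parquet file read from Google Cloud Storage (assumed endpoint and IDs)
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "deploymentId": "{deploymentId}",
    "intakeSettings": {
      "type": "gcp",
      "url": "gs://my-bucket/scoring/input.parquet",
      "format": "parquet",
      "credentialId": "{credentialId}"
    }
  }'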

GCPKey

{
  "authProviderX509CertUrl": "http://example.com",
  "authUri": "http://example.com",
  "clientEmail": "string",
  "clientId": "string",
  "clientX509CertUrl": "http://example.com",
  "privateKey": "string",
  "privateKeyId": "string",
  "projectId": "string",
  "tokenUri": "http://example.com",
  "type": "service_account"
}

Properties

Name Type Required Restrictions Description
authProviderX509CertUrl string(uri) false Auth provider X509 certificate URL.
authUri string(uri) false Auth URI.
clientEmail string false Client email address.
clientId string false Client ID.
clientX509CertUrl string(uri) false Client X509 certificate URL.
privateKey string false Private key.
privateKeyId string false Private key ID
projectId string false Project ID.
tokenUri string(uri) false Token URI.
type string true GCP account type.

Enumerated Values

Property Value
type service_account

GCPOutput

{
  "credentialId": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "type": "gcp",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false Use the specified credential to access the url
format string false Type of output file format
partitionColumns [string] false maxItems: 100
For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash ("/")).
type string true Type name for this output type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
format [csv, parquet]
type gcp
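
A minimal sketch of directory scoring with partitioning follows; note the trailing slash on the output url, which is what makes partitionColumns required. The endpoint, bucket URLs, and column name are illustrative assumptions.

# Write partitioned Parquet results to a GCS directory (assumed endpoint and IDs)
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "deploymentId": "{deploymentId}",
    "intakeSettings": {"type": "gcp", "url": "gs://my-bucket/scoring/input.csv", "credentialId": "{credentialId}"},
    "outputSettings": {
      "type": "gcp",
      "url": "gs://my-bucket/results/",
      "format": "parquet",
      "partitionColumns": ["region"],
      "credentialId": "{credentialId}"
    }
  }'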

GCPOutputAdaptor

{
  "credentialId": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "type": "gcp",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
format string false Type of output file format
partitionColumns [string] false maxItems: 100
For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash ("/")).
type string true Type name for this output type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format [csv, parquet]
type gcp

GoogleServiceAccountCredentials

{
  "configId": "string",
  "credentialType": "gcp",
  "gcpKey": {
    "authProviderX509CertUrl": "http://example.com",
    "authUri": "http://example.com",
    "clientEmail": "string",
    "clientId": "string",
    "clientX509CertUrl": "http://example.com",
    "privateKey": "string",
    "privateKeyId": "string",
    "projectId": "string",
    "tokenUri": "http://example.com",
    "type": "service_account"
  },
  "googleConfigId": "string"
}

Properties

Name Type Required Restrictions Description
configId string false ID of Secure configurations shared by admin. Alternative to googleConfigId (deprecated). If specified, cannot include gcpKey.
credentialType string true The type of these credentials, 'gcp' here.
gcpKey GCPKey false The Google Cloud Platform (GCP) key. Output is the downloaded JSON resulting from creating a service account User Managed Key (in the IAM & admin > Service accounts section of GCP). Required if googleConfigId/configId is not specified. Cannot include this parameter if googleConfigId/configId is specified.
googleConfigId string false ID of Secure configurations shared by admin. This is deprecated. Please use configId instead. If specified, cannot include gcpKey.

Enumerated Values

Property Value
credentialType gcp

HTTPDataStreamer

{
  "type": "http",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
type string true Type name for this intake type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
type http

HTTPIntake

{
  "type": "http",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
type string true Type name for this intake type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
type http

HTTPOutput

{
  "headers": {},
  "method": "POST",
  "type": "http",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
headers object false Extra headers to send with the request
method string true Method to use when saving the CSV file
type string true Type name for this output type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
method [POST, PUT]
type http

HttpOutputAdaptor

{
  "headers": {},
  "method": "POST",
  "type": "http",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
headers object false Extra headers to send with the request
method string true Method to use when saving the CSV file
type string true Type name for this output type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
method [POST, PUT]
type http

JDBCDataStreamer

{
  "catalog": "string",
  "credentialId": "string",
  "dataStoreId": "string",
  "fetchSize": 1,
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "jdbc"
}

Properties

Name Type Required Restrictions Description
catalog string false The name of the specified database catalog to read input data from.
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the credential holding information about a user with read access to the JDBC data source.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dataStoreId any true Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
fetchSize integer false maximum: 1000000
minimum: 1
A user-specified fetch size. Changing it can be used to balance throughput and memory usage. Deprecated and ignored since v2.21.
query string false A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data not achievable through specification of "table" and/or "schema" parameters exclusively. If this job is executed with a job definition, then template variables are available which will be substituted for timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false The name of the specified database schema to read input data from.
table string false The name of the specified database table to read input data from.
type string true Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
anonymous [redacted]
type jdbc

JDBCIntake

{
  "catalog": "string",
  "credentialId": "string",
  "dataStoreId": "string",
  "fetchSize": 1,
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "jdbc"
}

Properties

Name Type Required Restrictions Description
catalog string false The name of the specified database catalog to read input data from.
credentialId string¦null false The ID of the credential holding information about a user with read access to the JDBC data source.
dataStoreId string true ID of the data store to connect to
fetchSize integer false maximum: 1000000
minimum: 1
A user-specified fetch size. Changing it can be used to balance throughput and memory usage. Deprecated and ignored since v2.21.
query string false A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data not achievable through specification of "table" and/or "schema" parameters exclusively. If this job is executed with a job definition, then template variables are available which will be substituted for timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false The name of the specified database schema to read input data from.
table string false The name of the specified database table to read input data from.
type string true Type name for this intake type

Enumerated Values

Property Value
type jdbc
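
As a sketch of the templated query described above, a job-definition intake could select only rows added since the last completed run. The endpoint, table name, and IDs are illustrative assumptions, and whether the substituted timestamp needs quoting depends on your database.

# JDBC intake with a template variable substituted at run time (assumed endpoint and IDs)
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "deploymentId": "{deploymentId}",
    "intakeSettings": {
      "type": "jdbc",
      "dataStoreId": "{dataStoreId}",
      "credentialId": "{credentialId}",
      "query": "SELECT * FROM scoring_rows WHERE updated_at > {{ last_completed_run_time }}"
    }
  }'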

JDBCOutput

{
  "catalog": "string",
  "commitInterval": 600,
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "schema": "string",
  "statementType": "createTable",
  "table": "string",
  "type": "jdbc",
  "updateColumns": [
    "string"
  ],
  "whereColumns": [
    "string"
  ]
}

Properties

Name Type Required Restrictions Description
catalog string false The name of the specified database catalog to write output data to.
commitInterval integer false maximum: 86400
minimum: 0
Defines the time interval, in seconds, between commits to the JDBC source. If set to 0, the batch prediction operation will write the entire job before committing.
createTableIfNotExists boolean false Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId string¦null false The ID of the credential holding information about a user with write access to the JDBC data source.
dataStoreId string true ID of the data store to connect to
schema string false The name of the specified database schema to write the results to.
statementType string true The statement type to use when writing the results. Deprecation warning: use of create_table is now discouraged. Use one of the other statement types together with the parameter createTableIfNotExists set to true.
table string true The name of the specified database table to write the results to. If this job is executed with a job definition, then template variables are available which will be substituted for timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
type string true Type name for this output type
updateColumns [string] false maxItems: 100
The column names to be updated if statementType is set to either update or upsert.
whereColumns [string] false maxItems: 100
The column names to be used in the where clause if statementType is set to update or upsert.

Enumerated Values

Property Value
statementType [createTable, create_table, insert, insertUpdate, insert_update, update]
type jdbc
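
Following the deprecation note above, a hedged sketch of writing results back over JDBC uses insert together with createTableIfNotExists rather than createTable. The endpoint, schema, and table names are illustrative assumptions.

# JDBC output that creates the results table on first run if needed (assumed endpoint and IDs)
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "deploymentId": "{deploymentId}",
    "intakeSettings": {"type": "jdbc", "dataStoreId": "{dataStoreId}", "credentialId": "{credentialId}", "table": "scoring_rows"},
    "outputSettings": {
      "type": "jdbc",
      "dataStoreId": "{dataStoreId}",
      "credentialId": "{credentialId}",
      "schema": "analytics",
      "table": "scoring_results",
      "statementType": "insert",
      "createTableIfNotExists": true
    }
  }'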

JdbcOutputAdaptor

{
  "catalog": "string",
  "commitInterval": 600,
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "schema": "string",
  "statementType": "createTable",
  "table": "string",
  "type": "jdbc",
  "updateColumns": [
    "string"
  ],
  "whereColumns": [
    "string"
  ]
}

Properties

Name Type Required Restrictions Description
catalog string false The name of the specified database catalog to write output data to.
commitInterval integer false maximum: 86400
minimum: 0
Defines the time interval, in seconds, between commits to the JDBC source. If set to 0, the batch prediction operation will write the entire job before committing.
createTableIfNotExists boolean false Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the credential holding information about a user with write access to the JDBC data source.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dataStoreId any true Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
schema string false The name of the specified database schema to write the results to.
statementType string true The statement type to use when writing the results. Deprecation warning: use of create_table is now discouraged. Use one of the other statement types together with the parameter createTableIfNotExists set to true.
table string true The name of the specified database table to write the results to. If this job is executed with a job definition, then template variables are available which will be substituted for timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
type string true Type name for this output type
updateColumns [string] false maxItems: 100
The column names to be updated if statementType is set to either update or upsert.
whereColumns [string] false maxItems: 100
The column names to be used in the where clause if statementType is set to update or upsert.

Enumerated Values

Property Value
anonymous [redacted]
anonymous [redacted]
statementType [createTable, create_table, insert, insertUpdate, insert_update, update]
type jdbc

JobRunTimeBasedForecastPointPolicy

{
  "configuration": {
    "offset": "string"
  },
  "type": "jobRunTimeBased"
}

Properties

Name Type Required Restrictions Description
configuration JobRunTimeBasedForecastPointPolicySettings false Configuration for shifting the forecast point derived from the job run time.
type string true Type of the forecast point policy. The forecast point will be based on the scheduled run time of the job, or on the current moment in UTC if the job was launched manually. The run time can be adjusted backwards or forwards.

Enumerated Values

Property Value
type jobRunTimeBased

JobRunTimeBasedForecastPointPolicySettings

{
  "offset": "string"
}

Properties

Name Type Required Restrictions Description
offset string(offset) true Offset to apply to the scheduled run time of the job, in ISO-8601 duration format, to obtain a relative forecast point. Example of a positive offset: 'P2DT5H3M'; example of a negative offset: '-P2DT5H4M'.
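
For example, a policy that places the forecast point one day before the scheduled run time would look like the sketch below; the offset value is illustrative.

{
  "type": "jobRunTimeBased",
  "configuration": {
    "offset": "-P1D"
  }
}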

LocalFileDataStreamer

{
  "async": true,
  "multipart": true,
  "type": "local_file"
}

Properties

Name Type Required Restrictions Description
async boolean¦null false The default behavior (async: true) will still submit the job to the queue and start processing as soon as the upload is started. Setting it to false will postpone submitting the job to the queue until all data has been uploaded. This is helpful if the user is on a bad connection and bottlenecked by the upload speed. Instead of blocking the queue, this will allow others to submit to the queue until the upload has finished.
multipart boolean false Specify whether the data will be uploaded in multiple parts instead of a single file
type string true Type name for this intake type

Enumerated Values

Property Value
type [local_file, localFile]

LocalFileIntake

{
  "async": true,
  "multipart": true,
  "type": "local_file"
}

Properties

Name Type Required Restrictions Description
async boolean¦null false The default behavior (async: true) will still submit the job to the queue and start processing as soon as the upload is started. Setting it to false will postpone submitting the job to the queue until all data has been uploaded. This is helpful if the user is on a bad connection and bottlenecked by the upload speed. Instead of blocking the queue, this will allow others to submit to the queue until the upload has finished.
multipart boolean false Specify whether the data will be uploaded in multiple parts instead of a single file
type string true Type name for this intake type

Enumerated Values

Property Value
type [local_file, localFile]
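
A minimal sketch of a local-file job follows, with async set to false so the job is not queued until the upload finishes. The endpoint and ID are assumptions for illustration, and the data itself is uploaded in a separate step (see the upload link returned in the job creation response).

# Create a batch prediction job that waits for a local file upload (assumed endpoint and IDs)
curl -X POST https://app.datarobot.com/api/v2/batchPredictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
    "deploymentId": "{deploymentId}",
    "intakeSettings": {"type": "localFile", "async": false}
  }'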

LocalFileOutput

{
  "type": "local_file"
}

Properties

Name Type Required Restrictions Description
type string true Type name for this output type

Enumerated Values

Property Value
type [local_file, localFile]

LocalFileOutputAdaptor

{
  "type": "local_file"
}

Properties

Name Type Required Restrictions Description
type string true Type name for this output type

Enumerated Values

Property Value
type [local_file, localFile]

MonitoringAggregation

{
  "retentionPolicy": "samples",
  "retentionValue": 0
}

Properties

Name Type Required Restrictions Description
retentionPolicy string false Monitoring jobs retention policy for aggregation.
retentionValue integer false Amount/percentage of samples to retain.

Enumerated Values

Property Value
retentionPolicy [samples, percentage]

MonitoringColumnsMapping

{
  "actedUponColumn": "string",
  "actualsTimestampColumn": "string",
  "actualsValueColumn": "string",
  "associationIdColumn": "string",
  "customMetricId": "string",
  "customMetricTimestampColumn": "string",
  "customMetricTimestampFormat": "string",
  "customMetricValueColumn": "string",
  "monitoredStatusColumn": "string",
  "predictionsColumns": [
    {
      "className": "string",
      "columnName": "string"
    }
  ],
  "uniqueRowIdentifierColumns": [
    "string"
  ]
}

Properties

Name Type Required Restrictions Description
actedUponColumn string false Name of column that contains value for acted_on.
actualsTimestampColumn string false Name of column that contains actual timestamps.
actualsValueColumn string false Name of column that contains actuals value.
associationIdColumn string false Name of column that contains the association ID.
customMetricId string false ID of the custom metric to process values for.
customMetricTimestampColumn string false Name of column that contains custom metric values timestamps.
customMetricTimestampFormat string false Format of timestamps from customMetricTimestampColumn.
customMetricValueColumn string false Name of column that contains values for custom metric.
monitoredStatusColumn string false Column name used to mark monitored rows.
predictionsColumns any false Name of the column(s) which contain prediction values.

oneOf

Name Type Required Restrictions Description
» anonymous [PredictionColumMap] false Map containing column name(s) and class name(s) for a multiclass problem.

xor

Name Type Required Restrictions Description
» anonymous string false Column name that contains the prediction for a regression problem.

continued

Name Type Required Restrictions Description
uniqueRowIdentifierColumns [string] false Name(s) of the column(s) containing unique row identifiers.
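
The predictionsColumns field accepts either shape described above: a single column name for a regression deployment, or a list of class-to-column mappings for multiclass. A hedged sketch with illustrative column and class names:

{
  "associationIdColumn": "row_id",
  "predictionsColumns": "prediction"
}

{
  "associationIdColumn": "row_id",
  "predictionsColumns": [
    {"className": "high", "columnName": "prediction_high"},
    {"className": "low", "columnName": "prediction_low"}
  ]
}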

MonitoringOutputSettings

{
  "monitoredStatusColumn": "string",
  "uniqueRowIdentifierColumns": [
    "string"
  ]
}

Properties

Name Type Required Restrictions Description
monitoredStatusColumn string true Column name used to mark monitored rows.
uniqueRowIdentifierColumns [string] true Name(s) of the column(s) containing unique row identifiers.

OAuthCredentials

{
  "credentialType": "oauth",
  "oauthAccessToken": null,
  "oauthClientId": null,
  "oauthClientSecret": null,
  "oauthRefreshToken": "string"
}

Properties

Name Type Required Restrictions Description
credentialType string true The type of these credentials, 'oauth' here.
oauthAccessToken string¦null false The oauth access token.
oauthClientId string¦null false The oauth client ID.
oauthClientSecret string¦null false The oauth client secret.
oauthRefreshToken string true The oauth refresh token.

Enumerated Values

Property Value
credentialType oauth

PasswordCredentials

{
  "catalogVersionId": "string",
  "password": "string",
  "url": "string",
  "user": "string"
}

Properties

Name Type Required Restrictions Description
catalogVersionId string false The ID of the latest version of the catalog entry.
password string true The password (in cleartext) for database authentication. The password will be encrypted on the server side in scope of HTTP request and never saved or stored.
url string false The link to retrieve more detailed information about the entity that uses this catalog dataset.
user string true The username for database authentication.

PredictJobDetailsResponse

{
  "id": "string",
  "isBlocked": true,
  "message": "string",
  "modelId": "string",
  "projectId": "string",
  "status": "queue"
}

Properties

Name Type Required Restrictions Description
id string true The ID of the job
isBlocked boolean true True if a job is waiting for its dependencies to be resolved first.
message string true An optional message about the job
modelId string true The ID of the model
projectId string true The ID of the project the job belongs to
status string true The status of the job

Enumerated Values

Property Value
status [queue, inprogress, error, ABORTED, COMPLETED]
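
A job in this form is typically polled until its status leaves queue/inprogress; the path below is an assumption for illustration. Once the job has completed, the service generally points the client at the finished predictions resource.

# Check the status of a prediction job (assumed endpoint; substitute real IDs)
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/predictJobs/{jobId}/ \
  -H "Accept: application/json" \
  -H "Authorization: Bearer {access-token}"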

PredictionArrayObjectValues

{
  "label": "string",
  "threshold": 1,
  "value": 0
}

Properties

Name Type Required Restrictions Description
label any true For regression problems, this will be the name of the target column, 'Anomaly Score', or an ignored field. For classification projects, this will be the name of the class.

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous number false none

continued

Name Type Required Restrictions Description
threshold number false maximum: 1
minimum: 0
Threshold used in multilabel classification for this class.
value number true The predicted probability of the class identified by the label.

PredictionColumMap

{
  "className": "string",
  "columnName": "string"
}

Properties

Name Type Required Restrictions Description
className string true Class name.
columnName string true Column name that contains the prediction for a specific class.

PredictionDataSource

{
  "actualValueColumn": "string",
  "credentialData": {
    "credentialType": "basic",
    "password": "string",
    "user": "string"
  },
  "credentialId": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "dataSourceId": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "password": "string",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "useKerberos": false,
  "user": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false The actual value column name, valid for the prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset.
credentialData any false The credentials to authenticate with the database, to use instead of user/password or credential ID.

oneOf

Name Type Required Restrictions Description
» anonymous BasicCredentials false none

xor

Name Type Required Restrictions Description
» anonymous S3Credentials false none

xor

Name Type Required Restrictions Description
» anonymous OAuthCredentials false none

continued

Name Type Required Restrictions Description
credentialId string false The credential ID to use for database authentication.
credentials [oneOf] false maxItems: 30
A list of credentials for the secondary datasets used in a feature discovery project.

oneOf

Name Type Required Restrictions Description
» anonymous PasswordCredentials false none

xor

Name Type Required Restrictions Description
» anonymous CredentialId false none

continued

Name Type Required Restrictions Description
dataSourceId string true The ID of the data source.
forecastPoint string(date-time) false For time series projects only. The time in the dataset relative to which predictions are generated. This value is optional. If not specified the default value is the value in the row with the latest specified timestamp. Specifying this value for a project that is not a time series project will result in an error.
password string false The password (in cleartext) for database authentication. The password will be encrypted on the server side in scope of HTTP request and never saved or stored. DEPRECATED: please use credentialId or credentialData instead.
predictionsEndDate string(date-time) false The end date for bulk predictions, exclusive. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false The start date for bulk predictions. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.
relaxKnownInAdvanceFeaturesCheck boolean false For time series projects only. If true, missing values in the known in advance features are allowed in the forecast window at the prediction time. This value is optional. If omitted or false, missing values are not allowed.
secondaryDatasetsConfigId string false For feature discovery projects only. The ID of the alternative secondary dataset config to use during prediction.
useKerberos boolean false If true, use kerberos authentication for database authentication. Default is false.
user string false The username for database authentication. DEPRECATED: please use credentialId or credentialData instead.

PredictionDatasetListControllerResponse

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "catalogId": "string",
      "catalogVersionId": "string",
      "containsTargetValues": true,
      "created": "2019-08-24T14:15:22Z",
      "dataEndDate": "2019-08-24T14:15:22Z",
      "dataQualityWarnings": {
        "hasKiaMissingValuesInForecastWindow": true,
        "insufficientRowsForEvaluatingModels": true,
        "singleClassActualValueColumn": true
      },
      "dataStartDate": "2019-08-24T14:15:22Z",
      "detectedActualValueColumns": [
        {
          "missingCount": 0,
          "name": "string"
        }
      ],
      "forecastPoint": "string",
      "forecastPointRange": [
        "2019-08-24T14:15:22Z"
      ],
      "id": "string",
      "maxForecastDate": "2019-08-24T14:15:22Z",
      "name": "string",
      "numColumns": 0,
      "numRows": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "secondaryDatasetsConfigId": "string"
    }
  ],
  "next": "string",
  "previous": "string"
}

Properties

Name Type Required Restrictions Description
count integer true minimum: 0
The number of items returned on this page.
data [PredictionDatasetRetrieveResponse] true Each has the same schema as if retrieving the dataset individually from GET /api/v2/projects/{projectId}/predictionDatasets/{datasetId}/
next string¦null true A URL pointing to the next page (if null, there is no next page).
previous string¦null true A URL pointing to the previous page (if null, there is no previous page).

PredictionDatasetRetrieveResponse

{
  "actualValueColumn": "string",
  "catalogId": "string",
  "catalogVersionId": "string",
  "containsTargetValues": true,
  "created": "2019-08-24T14:15:22Z",
  "dataEndDate": "2019-08-24T14:15:22Z",
  "dataQualityWarnings": {
    "hasKiaMissingValuesInForecastWindow": true,
    "insufficientRowsForEvaluatingModels": true,
    "singleClassActualValueColumn": true
  },
  "dataStartDate": "2019-08-24T14:15:22Z",
  "detectedActualValueColumns": [
    {
      "missingCount": 0,
      "name": "string"
    }
  ],
  "forecastPoint": "string",
  "forecastPointRange": [
    "2019-08-24T14:15:22Z"
  ],
  "id": "string",
  "maxForecastDate": "2019-08-24T14:15:22Z",
  "name": "string",
  "numColumns": 0,
  "numRows": 0,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "projectId": "string",
  "secondaryDatasetsConfigId": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string¦null false Optional, only available for unsupervised projects, in case dataset was uploaded with actual value column specified. Name of the column which will be used to calculate the classification metrics and insights.
catalogId string¦null true The ID of the AI catalog entry used to create the prediction dataset, or None if not created from the AI catalog.
catalogVersionId string¦null true The ID of the AI catalog version used to create the prediction dataset, or None if not created from the AI catalog.
containsTargetValues boolean¦null false If True, dataset contains target values and can be used to calculate the classification metrics and insights. Only applies for supervised projects.
created string(date-time) true The date string of when the dataset was created, of the format YYYY-mm-ddTHH:MM:SS.ssssssZ, like 2016-06-09T11:32:34.170338Z.
dataEndDate string(date-time) false Only available for time series projects, a date string representing the maximum primary date of the prediction dataset.
dataQualityWarnings DataQualityWarningsRecord true A JSON object of available warnings about potential problems in this prediction dataset. Empty if no warnings.
dataStartDate string(date-time) false Only available for time series projects, a date string representing the minimum primary date of the prediction dataset.
detectedActualValueColumns [ActualValueColumnInfo] false Only available for unsupervised projects, a list of detected actualValueColumnInfo objects which can be used to calculate the classification metrics and insights.
forecastPoint string¦null true The date string of the forecastPoint of this prediction dataset. Only non-null for time series projects.
forecastPointRange [string] false Only available for time series projects, the start and end of the range of dates available for use as the forecast point, detected based on the uploaded prediction dataset.
id string true The ID of this dataset.
maxForecastDate string(date-time) false Only available for time series projects, a date string representing the maximum forecast date of this prediction dataset.
name string true The name of the dataset when it was uploaded.
numColumns integer true The number of columns in this dataset.
numRows integer true The number of rows in this dataset.
predictionsEndDate string(date-time)¦null true The date string of the prediction end date of this prediction dataset. Used for bulk predictions. Note that this parameter is for generating historical predictions using the training data. Only non-null for time series projects.
predictionsStartDate string(date-time)¦null true The date string of the prediction start date of this prediction dataset. Used for bulk predictions. Note that this parameter is for generating historical predictions using the training data. Only non-null for time series projects.
projectId string true The project ID that owns this dataset.
secondaryDatasetsConfigId string false Only available for Feature discovery projects. Id of the secondary dataset config used by the dataset for the prediction.

PredictionExplanationsMetadataValues

{
  "shapRemainingTotal": 0
}

Properties

Name Type Required Restrictions Description
shapRemainingTotal integer false Will be present only if explanationAlgorithm = 'shap' and maxExplanations is nonzero. The total of SHAP values for features beyond the maxExplanations. This can be identically 0 in all rows, if maxExplanations is greater than the number of features and thus all features are returned.

PredictionExplanationsObject

{
  "feature": "string",
  "featureValue": 0,
  "label": "string",
  "strength": 0
}

Properties

Name Type Required Restrictions Description
feature string true The name of the feature contributing to the prediction.
featureValue any true The value the feature took on for this row. The type corresponds to the feature (bool, int, float, str, etc.).

oneOf

Name Type Required Restrictions Description
» anonymous integer false none

xor

Name Type Required Restrictions Description
» anonymous boolean false none

xor

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous number false none

continued

Name Type Required Restrictions Description
label any true Describes what output was driven by this prediction explanation. For regression projects, it is the name of the target feature. For classification projects, it is the class whose probability increasing would correspond to a positive strength of this prediction explanation. For predictions made using anomaly detection models, it is the Anomaly Score.

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous number false none

continued

Name Type Required Restrictions Description
strength number¦null false Algorithm-specific explanation value attributed to feature in this row. If explanationAlgorithm = shap, this is the SHAP value.

PredictionFileUpload

{
  "actualValueColumn": "string",
  "credentials": "string",
  "file": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": "false",
  "secondaryDatasetsConfigId": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false Actual value column name, valid for the prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset.
credentials string false A list of credentials for the secondary datasets used in a feature discovery project
file string(binary) true The dataset file to upload for prediction.
forecastPoint string(date-time) false For time series projects only. The time in the dataset relative to which predictions are generated. If not specified the default value is the value in the row with the latest specified timestamp. Specifying this value for a project that is not a time series project will result in an error.
predictionsEndDate string(date-time) false Used for time series projects only. The end date for bulk predictions. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false Used for time series projects only. The start date for bulk predictions. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.
relaxKnownInAdvanceFeaturesCheck string false A boolean flag. If true, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or false, missing values are not allowed. For time series projects only.
secondaryDatasetsConfigId string false Optional, for feature discovery projects only. The Id of the alternative secondary dataset config to use during prediction.

Enumerated Values

Property Value
relaxKnownInAdvanceFeaturesCheck [false, False, true, True]
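
Since this schema describes a multipart form rather than a JSON body, an upload might look like the sketch below; the fileUploads path and file name are assumptions for illustration.

# Upload a local file as a prediction dataset (assumed endpoint; substitute real IDs)
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/fileUploads/ \
  -H "Authorization: Bearer {access-token}" \
  -F "file=@scoring_data.csv" \
  -F "relaxKnownInAdvanceFeaturesCheck=true"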

PredictionFromCatalogDataset

{
  "actualValueColumn": "string",
  "credentialData": {
    "credentialType": "basic",
    "password": "string",
    "user": "string"
  },
  "credentialId": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "datasetId": "string",
  "datasetVersionId": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "password": "string",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "useKerberos": false,
  "user": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false Actual value column name, valid for the prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset.
credentialData any false The credentials to authenticate with the database, to be used instead of credential ID.

oneOf

Name Type Required Restrictions Description
» anonymous BasicCredentials false none

xor

Name Type Required Restrictions Description
» anonymous S3Credentials false none

xor

Name Type Required Restrictions Description
» anonymous OAuthCredentials false none

xor

Name Type Required Restrictions Description
» anonymous SnowflakeKeyPairCredentials false none

xor

Name Type Required Restrictions Description
» anonymous GoogleServiceAccountCredentials false none

xor

Name Type Required Restrictions Description
» anonymous DatabricksAccessTokenCredentials false none

xor

Name Type Required Restrictions Description
» anonymous AzureServicePrincipalCredentials false none

continued

Name Type Required Restrictions Description
credentialId string false The ID of the set of credentials to authenticate with the database.
credentials [oneOf] false maxItems: 30
List of credentials for the secondary datasets used in a feature discovery project.

oneOf

Name Type Required Restrictions Description
» anonymous PasswordCredentials false none

xor

Name Type Required Restrictions Description
» anonymous CredentialId false none

continued

Name Type Required Restrictions Description
datasetId string true The ID of the dataset entry to use for prediction dataset.
datasetVersionId string false The ID of the dataset version to use for the prediction dataset. If not specified, the latest version associated with datasetId is used.
forecastPoint string(date-time) false For time series projects only. The time in the dataset relative to which predictions are generated. This value is optional. If not specified the default value is the value in the row with the latest specified timestamp. Specifying this value for a project that is not a time series project will result in an error.
password string false The password (in cleartext) for database authentication. The password will be encrypted on the server side in scope of the HTTP request and never saved or stored. DEPRECATED: please use credentialId or credentialData instead.
predictionsEndDate string(date-time) false The end date for bulk predictions, exclusive. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false The start date for bulk predictions. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.
relaxKnownInAdvanceFeaturesCheck boolean false For time series projects only. If True, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or False, missing values are not allowed.
secondaryDatasetsConfigId string false For feature discovery projects only. The Id of the alternative secondary dataset config to use during prediction.
useKerberos boolean false If true, use kerberos authentication for database authentication. Default is false.
user string false The username for database authentication. DEPRECATED: please use credentialId or credentialData instead.

PredictionObject

{
  "actualValue": "string",
  "forecastDistance": 0,
  "forecastPoint": "2019-08-24T14:15:22Z",
  "originalFormatTimestamp": "string",
  "positiveProbability": 0,
  "prediction": 0,
  "predictionExplanationMetadata": [
    {
      "shapRemainingTotal": 0
    }
  ],
  "predictionExplanations": [
    {
      "feature": "string",
      "featureValue": 0,
      "label": "string",
      "strength": 0
    }
  ],
  "predictionIntervalLowerBound": 0,
  "predictionIntervalUpperBound": 0,
  "predictionThreshold": 1,
  "predictionValues": [
    {
      "label": "string",
      "threshold": 1,
      "value": 0
    }
  ],
  "rowId": 0,
  "segmentId": "string",
  "seriesId": "string",
  "target": "string",
  "timestamp": "2019-08-24T14:15:22Z"
}

Properties

Name Type Required Restrictions Description
actualValue string¦null false In the case of an unsupervised time series project with a dataset using predictionsStartDate and predictionsEndDate for bulk predictions and a specified actual value column, the predictions will be a JSON array in the same format as with a forecast point with one additional element - actualValues. It is the actual value in the row.
forecastDistance integer¦null false (if time series project) The number of time units this prediction is away from the forecastPoint. The unit of time is determined by the timeUnit of the datetime partition column.
forecastPoint string(date-time)¦null false (if time series project) The forecastPoint of the predictions. Either provided or inferred.
originalFormatTimestamp string false The timestamp of this row in the prediction dataset. Unlike the timestamp field, this field will keep the same DateTime formatting as the uploaded prediction dataset. (This column is shown if enabled by your administrator.)
positiveProbability number¦null false minimum: 0
For binary classification, the probability the row belongs to the positive class.
prediction any true The prediction of the model.

oneOf

Name Type Required Restrictions Description
» anonymous number false If using a regressor model, will be the numeric value of the target.

xor

Name Type Required Restrictions Description
» anonymous string false If using a binary or multiclass classifier model, will be the predicted class.

xor

Name Type Required Restrictions Description
» anonymous [string] false If using a multilabel classifier model, will be a list of predicted classes.

continued

Name Type Required Restrictions Description
predictionExplanationMetadata [PredictionExplanationsMetadataValues] false Array containing algorithm-specific values. Varies depending on the value of explanationAlgorithm.
predictionExplanations [PredictionExplanationsObject]¦null false Array contains predictionExplanation objects. The total elements in the array are bounded by maxExplanations and feature count. It will be present only if explanationAlgorithm is not null (prediction explanations were requested).
predictionIntervalLowerBound number false Present if includePredictionIntervals is True. Indicates a lower bound of the estimate of error based on test data.
predictionIntervalUpperBound number false Present if includePredictionIntervals is True. Indicates an upper bound of the estimate of error based on test data.
predictionThreshold number false maximum: 1
minimum: 0
Threshold used for binary classification in predictions.
predictionValues [PredictionArrayObjectValues] false A list of predicted values for this row.
rowId integer true minimum: 0
The row in the prediction dataset this prediction corresponds to.
segmentId string false The ID of the segment value for a segmented project.
seriesId string¦null false The ID of the series value for a multiseries project. For time series projects that are not multiseries, this will be NaN.
target string¦null false In the case of a time series project with a dataset using predictionsStartDate and predictionsEndDate for bulk predictions, the predictions will be a JSON array in the same format as with a forecast point with one additional element - target. It is the target value in the row.
timestamp string(date-time)¦null false (if time series project) The timestamp of this row in the prediction dataset.

PredictionRetrieveResponse

{
  "actualValueColumn": "string",
  "explanationAlgorithm": "string",
  "featureDerivationWindowCounts": 0,
  "includesPredictionIntervals": true,
  "maxExplanations": 0,
  "positiveClass": "string",
  "predictionIntervalsSize": 0,
  "predictions": [
    {
      "actualValue": "string",
      "forecastDistance": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "originalFormatTimestamp": "string",
      "positiveProbability": 0,
      "prediction": 0,
      "predictionExplanationMetadata": [
        {
          "shapRemainingTotal": 0
        }
      ],
      "predictionExplanations": [
        {
          "feature": "string",
          "featureValue": 0,
          "label": "string",
          "strength": 0
        }
      ],
      "predictionIntervalLowerBound": 0,
      "predictionIntervalUpperBound": 0,
      "predictionThreshold": 1,
      "predictionValues": [
        {
          "label": "string",
          "threshold": 1,
          "value": 0
        }
      ],
      "rowId": 0,
      "segmentId": "string",
      "seriesId": "string",
      "target": "string",
      "timestamp": "2019-08-24T14:15:22Z"
    }
  ],
  "shapBaseValue": 0,
  "shapWarnings": [
    {
      "maxNormalizedMismatch": 0,
      "mismatchRowCount": 0
    }
  ],
  "task": "Regression"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string¦null false For time series unsupervised projects only. Will be present only if the prediction dataset has an actual value column. The name of the column with actuals that was used to calculate the scores and insights.
explanationAlgorithm string¦null false The selected algorithm to use for prediction explanations. At present, the only acceptable value is 'shap', which selects the SHapley Additive exPlanations (SHAP) explainer. Defaults to null (no prediction explanations).
featureDerivationWindowCounts integer¦null false For time series projects with partial history only. Indicates how many points were used during feature derivation within the feature derivation window.
includesPredictionIntervals boolean false For time series projects only. Indicates if prediction intervals will be part of the response. Defaults to False.
maxExplanations integer¦null false The maximum number of prediction explanation values to be returned with each row in the predictions JSON array. Null indicates 'no limit'. Will be present only if explanationAlgorithm was set.
positiveClass any true For binary classification, the class of the target deemed the positive class. For all other project types this field will be null.

oneOf

Name Type Required Restrictions Description
» anonymous string false none

xor

Name Type Required Restrictions Description
» anonymous integer false none

xor

Name Type Required Restrictions Description
» anonymous number false none

continued

Name Type Required Restrictions Description
predictionIntervalsSize integer¦null false For time series projects only. Will be present only if includePredictionIntervals is True. Indicates the percentile used for prediction intervals calculation. Defaults to 80.
predictions [PredictionObject] true The json array of predictions. The predictions in the response will have slightly different formats, depending on the project type.
shapBaseValue number¦null false Will be present only if explanationAlgorithm = 'shap'. The model's average prediction over the training data. SHAP values are deviations from the base value.
shapWarnings [ShapWarningValues]¦null false Will be present if explanationAlgorithm was set to shap and there were additivity failures during SHAP values calculation.
task string true The prediction task.

Enumerated Values

Property Value
task [Regression, Binary, Multiclass, Multilabel]

PredictionURLUpload

{
  "actualValueColumn": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "forecastPoint": "2019-08-24T14:15:22Z",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false The name of the actual value column. Valid for prediction files when the project is unsupervised and the dataset is treated as a bulk predictions dataset. This value is optional.
credentials [oneOf] false maxItems: 30
A list of credentials for the secondary datasets used in a feature discovery project

oneOf

Name Type Required Restrictions Description
» anonymous PasswordCredentials false none

xor

Name Type Required Restrictions Description
» anonymous CredentialId false none

continued

Name Type Required Restrictions Description
forecastPoint string(date-time) false For time series projects only. The time in the dataset relative to which predictions are generated. If not specified, the default value is the value in the row with the latest specified timestamp. Specifying this value for a project that is not a time series project will result in an error.
predictionsEndDate string(date-time) false Used for time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false Used for time series projects only. The start date for bulk predictions. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.
relaxKnownInAdvanceFeaturesCheck boolean false For time series projects only. If true, missing values in the known in advance features are allowed in the forecast window at the prediction time. This value is optional. If omitted or false, missing values are not allowed.
secondaryDatasetsConfigId string false For feature discovery projects only. The ID of the alternative secondary dataset config to use during prediction.
url string(url) true The URL to download the dataset from.
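
For example, the payload below requests bulk historical predictions on a time series project by providing predictionsStartDate and predictionsEndDate together and omitting forecastPoint, since the two modes are mutually exclusive. This is a minimal sketch: the project ID and dataset URL are placeholders, and the endpoint path is assumed to be the project-scoped URL-upload route.

# Upload a prediction dataset from a URL for bulk historical predictions
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/predictionDatasets/urlUploads/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
        "url": "https://example.com/datasets/history.csv",
        "predictionsStartDate": "2019-01-01T00:00:00Z",
        "predictionsEndDate": "2019-06-01T00:00:00Z"
      }'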

RetrieveListPredictionMetadataObjectsResponse

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "datasetId": "string",
      "explanationAlgorithm": "string",
      "featureDerivationWindowCounts": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "id": "string",
      "includesPredictionIntervals": true,
      "maxExplanations": 0,
      "modelId": "string",
      "predictionDatasetId": "string",
      "predictionIntervalsSize": 0,
      "predictionThreshold": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "shapWarnings": {
        "maxNormalizedMismatch": 0,
        "mismatchRowCount": 0
      },
      "url": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Properties

Name Type Required Restrictions Description
count integer true The number of items returned on this page.
data [RetrievePredictionMetadataObject] true An array of the metadata records.
next string(uri)¦null true URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true URL pointing to the previous page (if null, there is no previous page).

RetrievePredictionMetadataObject

{
  "actualValueColumn": "string",
  "datasetId": "string",
  "explanationAlgorithm": "string",
  "featureDerivationWindowCounts": 0,
  "forecastPoint": "2019-08-24T14:15:22Z",
  "id": "string",
  "includesPredictionIntervals": true,
  "maxExplanations": 0,
  "modelId": "string",
  "predictionDatasetId": "string",
  "predictionIntervalsSize": 0,
  "predictionThreshold": 0,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "projectId": "string",
  "shapWarnings": {
    "maxNormalizedMismatch": 0,
    "mismatchRowCount": 0
  },
  "url": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string¦null false For time series unsupervised projects only. The actual value column, which can be used to calculate classification metrics and insights.
datasetId string¦null false Deprecated alias for predictionDatasetId.
explanationAlgorithm string¦null false The selected algorithm to use for prediction explanations. At present, the only acceptable value is shap, which selects the SHapley Additive exPlanations (SHAP) explainer. Defaults to null (no prediction explanations).
featureDerivationWindowCounts integer¦null false For time series projects with partial history only. Indicates how many points were used during feature derivation.
forecastPoint string(date-time)¦null false For time series projects only. The time in the dataset relative to which predictions were generated.
id string true The id of the prediction record.
includesPredictionIntervals boolean true Whether the predictions include prediction intervals.
maxExplanations integer¦null false The maximum number of prediction explanation values to be returned with each row in the predictions JSON array. Null indicates no limit. Will be present only if explanationAlgorithm was set.
modelId string true The model id used for predictions.
predictionDatasetId string¦null false The dataset ID where the prediction data comes from. The field is available via the /api/v2/projects/<projectId>/predictionsMetadata/ route and replaces datasetId in the deprecated /api/v2/projects/<projectId>/predictions/ endpoint.
predictionIntervalsSize integer¦null true For time series projects only. If prediction intervals were computed, what percentile they represent. Will be None if includePredictionIntervals is False.
predictionThreshold number¦null false Threshold used for binary classification in predictions.
predictionsEndDate string(date-time)¦null false For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter was used for generating historical predictions using the training data, not for future predictions.
predictionsStartDate string(date-time)¦null false For time series projects only. The start date for bulk predictions. Note that this parameter was used for generating historical predictions using the training data, not for future predictions.
projectId string true The project id of the predictions.
shapWarnings ShapWarnings false Will be present if explanationAlgorithm was set to shap and there were additivity failures during SHAP values calculation.
url string true The URL at which you can download the predictions.

S3Credentials

{
  "awsAccessKeyId": "string",
  "awsSecretAccessKey": "string",
  "awsSessionToken": null,
  "configId": "string",
  "credentialType": "s3"
}

Properties

Name Type Required Restrictions Description
awsAccessKeyId string false The S3 AWS access key ID. Required if configId is not specified. Cannot include this parameter if configId is specified.
awsSecretAccessKey string false The S3 AWS secret access key. Required if configId is not specified. Cannot include this parameter if configId is specified.
awsSessionToken string¦null false The S3 AWS session token for AWS temporary credentials. Cannot include this parameter if configId is specified.
configId string false The ID of a secure configuration of credentials shared by an admin. If specified, cannot include awsAccessKeyId, awsSecretAccessKey, or awsSessionToken.
credentialType string true The type of these credentials, 's3' here.

Enumerated Values

Property Value
credentialType s3
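
Because the inline key fields and configId are mutually exclusive, an S3Credentials object takes one of two shapes. Both fragments below are illustrative, with placeholder values.

Inline AWS keys (configId omitted):

{
  "credentialType": "s3",
  "awsAccessKeyId": "{awsAccessKeyId}",
  "awsSecretAccessKey": "{awsSecretAccessKey}",
  "awsSessionToken": null
}

Admin-shared secure configuration (inline key fields omitted):

{
  "credentialType": "s3",
  "configId": "{configId}"
}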

S3DataStreamer

{
  "credentialId": "string",
  "endpointUrl": "string",
  "format": "csv",
  "type": "s3",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
endpointUrl string(url) false Endpoint URL for the S3 connection (omit to use the default)
format string false Type of input file format
type string true Type name for this intake type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format [csv, parquet]
type s3

S3Intake

{
  "credentialId": "string",
  "endpointUrl": "string",
  "format": "csv",
  "type": "s3",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false Use the specified credential to access the url
endpointUrl string(url) false Endpoint URL for the S3 connection (omit to use the default)
format string false Type of input file format
type string true Type name for this intake type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
format [csv, parquet]
type s3
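
As an illustration, an S3 intake can be supplied as the intakeSettings of a batch prediction job. A minimal sketch, assuming the batch prediction creation route and placeholder IDs:

curl -X POST https://app.datarobot.com/api/v2/batchPredictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
        "deploymentId": "{deploymentId}",
        "intakeSettings": {
          "type": "s3",
          "url": "s3://my-bucket/scoring/input.csv",
          "format": "csv",
          "credentialId": "{credentialId}"
        }
      }'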

S3Output

{
  "credentialId": "string",
  "endpointUrl": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "serverSideEncryption": {
    "algorithm": "string",
    "customerAlgorithm": "string",
    "customerKey": "string",
    "kmsEncryptionContext": "string",
    "kmsKeyId": "string"
  },
  "type": "s3",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false Use the specified credential to access the url
endpointUrl string(url) false Endpoint URL for the S3 connection (omit to use the default)
format string false Type of output file format
partitionColumns [string] false maxItems: 100
For Parquet directory-scoring only. The column names of the intake data on which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash ("/")).
serverSideEncryption ServerSideEncryption false Configure Server-Side Encryption for S3 output
type string true Type name for this output type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
format [csv, parquet]
type s3
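
For Parquet directory scoring, the output url must end with a slash and at least one partition column must be provided. An illustrative outputSettings fragment with placeholder values:

{
  "type": "s3",
  "format": "parquet",
  "url": "s3://my-bucket/scoring/output/",
  "partitionColumns": ["region", "date"],
  "credentialId": "{credentialId}"
}

Rows are written into subdirectories partitioned by region first and then date, matching the order in which the columns are listed.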

S3OutputAdaptor

{
  "credentialId": "string",
  "endpointUrl": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "serverSideEncryption": {
    "algorithm": "string",
    "customerAlgorithm": "string",
    "customerKey": "string",
    "kmsEncryptionContext": "string",
    "kmsKeyId": "string"
  },
  "type": "s3",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
endpointUrl string(url) false Endpoint URL for the S3 connection (omit to use the default)
format string false Type of output file format
partitionColumns [string] false maxItems: 100
For Parquet directory-scoring only. The column names of the intake data on which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash ("/")).
serverSideEncryption ServerSideEncryption false Configure Server-Side Encryption for S3 output
type string true Type name for this output type
url string(url) true URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format [csv, parquet]
type s3

Schedule

{
  "dayOfMonth": [
    "*"
  ],
  "dayOfWeek": [
    "*"
  ],
  "hour": [
    "*"
  ],
  "minute": [
    "*"
  ],
  "month": [
    "*"
  ]
}

Properties

Name Type Required Restrictions Description
dayOfMonth [anyOf] true The date(s) of the month that the job will run. Allowed values are either [1 ... 31] or ["*"] for all days of the month. This field is additive with dayOfWeek, meaning the job will run both on the date(s) defined in this field and the day specified by dayOfWeek (for example, dates 1st, 2nd, 3rd, plus every Tuesday). If dayOfMonth is set to ["*"] and dayOfWeek is defined, the scheduler will trigger on every day of the month that matches dayOfWeek (for example, Tuesday the 2nd, 9th, 16th, 23rd, 30th). Invalid dates such as February 31st are ignored.

anyOf

Name Type Required Restrictions Description
» anonymous number false none

or

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dayOfWeek [anyOf] true The day(s) of the week that the job will run. Allowed values are [0 .. 6], where Sunday=0, or ["*"] for all days of the week. Strings, either 3-letter abbreviations or the full name of the day, can be used interchangeably (e.g., "sunday", "Sunday", "sun", or "Sun" all map to [0]). This field is additive with dayOfMonth, meaning the job will run both on the date specified by dayOfMonth and the day defined in this field.

anyOf

Name Type Required Restrictions Description
» anonymous number false none

or

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
hour [anyOf] true The hour(s) of the day that the job will run. Allowed values are either ["*"] meaning every hour of the day or [0 ... 23].

anyOf

Name Type Required Restrictions Description
» anonymous number false none

or

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
minute [anyOf] true The minute(s) of the day that the job will run. Allowed values are either ["*"], meaning every minute of the day, or [0 ... 59].

anyOf

Name Type Required Restrictions Description
» anonymous number false none

or

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
month [anyOf] true The month(s) of the year that the job will run. Allowed values are either [1 ... 12] or ["*"] for all months of the year. Strings, either 3-letter abbreviations or the full name of the month, can be used interchangeably (e.g., "jan" or "october"). Months that are not compatible with dayOfMonth are ignored, for example {"dayOfMonth": [31], "month":["feb"]}.

anyOf

Name Type Required Restrictions Description
» anonymous number false none

or

Name Type Required Restrictions Description
» anonymous string false none
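
To make the additive dayOfMonth/dayOfWeek behavior concrete, the following illustrative Schedule runs at 05:30 on the first of every month and additionally on every Monday:

{
  "minute": [30],
  "hour": [5],
  "dayOfMonth": [1],
  "dayOfWeek": ["mon"],
  "month": ["*"]
}

To run only on Mondays, set dayOfMonth to ["*"]; the scheduler then triggers on every day of the month that falls on a Monday.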

ScheduledJobResponse

{
  "createdBy": "string",
  "deploymentId": "string",
  "enabled": true,
  "id": "string",
  "name": "string",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "scheduledJobId": "string",
  "status": {
    "lastFailedRun": "2019-08-24T14:15:22Z",
    "lastSuccessfulRun": "2019-08-24T14:15:22Z",
    "nextRunTime": "2019-08-24T14:15:22Z",
    "queuePosition": 0,
    "running": true
  },
  "typeId": "string",
  "updatedAt": "2019-08-24T14:15:22Z"
}

Properties

Name Type Required Restrictions Description
createdBy string¦null false User name of the creator
deploymentId string¦null false ID of the deployment this scheduled job is created from.
enabled boolean true True if the job is enabled and false if the job is disabled.
id string true ID of scheduled prediction job
name string¦null false Name of the scheduled job.
schedule Schedule true Schedule describing when to refresh the dataset; the smallest schedule allowed is daily. Can be null if the job was created without a schedule.
scheduledJobId string true ID of this scheduled job.
status ScheduledJobStatus true Object containing status information about the scheduled job.
typeId string true Job type of the scheduled job
updatedAt string(date-time)¦null false Time of last modification

ScheduledJobStatus

{
  "lastFailedRun": "2019-08-24T14:15:22Z",
  "lastSuccessfulRun": "2019-08-24T14:15:22Z",
  "nextRunTime": "2019-08-24T14:15:22Z",
  "queuePosition": 0,
  "running": true
}

Properties

Name Type Required Restrictions Description
lastFailedRun string(date-time)¦null false Date and time of the last failed run.
lastSuccessfulRun string(date-time)¦null false Date and time of the last successful run.
nextRunTime string(date-time)¦null false Date and time of the next run.
queuePosition integer¦null false minimum: 0
Position of the job in the queue. The value is 0 if the job is about to run, greater than 0 if the job is currently queued, or None if the job is not currently running.
running boolean true true or false depending on whether the job is currently running.

ScheduledJobsListResponse

{
  "count": 0,
  "data": [
    {
      "createdBy": "string",
      "deploymentId": "string",
      "enabled": true,
      "id": "string",
      "name": "string",
      "schedule": {
        "dayOfMonth": [
          "*"
        ],
        "dayOfWeek": [
          "*"
        ],
        "hour": [
          "*"
        ],
        "minute": [
          "*"
        ],
        "month": [
          "*"
        ]
      },
      "scheduledJobId": "string",
      "status": {
        "lastFailedRun": "2019-08-24T14:15:22Z",
        "lastSuccessfulRun": "2019-08-24T14:15:22Z",
        "nextRunTime": "2019-08-24T14:15:22Z",
        "queuePosition": 0,
        "running": true
      },
      "typeId": "string",
      "updatedAt": "2019-08-24T14:15:22Z"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0,
  "updatedAt": "2019-08-24T14:15:22Z",
  "updatedBy": "string"
}

Properties

Name Type Required Restrictions Description
count integer false Number of items returned on this page.
data [ScheduledJobResponse] true maxItems: 100
List of scheduled jobs
next string(uri)¦null true URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true URL pointing to the previous page (if null, there is no previous page).
totalCount integer true The total number of items across all pages.
updatedAt string(date-time) false Time of last modification
updatedBy string false User ID of last modifier

ServerSideEncryption

{
  "algorithm": "string",
  "customerAlgorithm": "string",
  "customerKey": "string",
  "kmsEncryptionContext": "string",
  "kmsKeyId": "string"
}

Properties

Name Type Required Restrictions Description
algorithm string false The server-side encryption algorithm used when storing this object in Amazon S3 (for example, AES256, aws:kms).
customerAlgorithm string false Specifies the algorithm to use when encrypting the object (for example, AES256).
customerKey string false Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in customerAlgorithm. The key must be sent as a base64-encoded string.
kmsEncryptionContext string false Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a base64-encoded UTF-8 string holding JSON with the encryption context key-value pairs.
kmsKeyId string false Specifies the ID of the symmetric customer managed key to use for object encryption.
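
The fields map to the two S3 server-side encryption modes: algorithm (optionally with kmsKeyId and kmsEncryptionContext) selects encryption with S3- or KMS-managed keys, while the customer* fields supply a customer-provided key. Two illustrative fragments with placeholder values:

SSE-KMS:

{
  "algorithm": "aws:kms",
  "kmsKeyId": "{kmsKeyId}"
}

SSE-C with a customer-provided key:

{
  "customerAlgorithm": "AES256",
  "customerKey": "{base64-encoded-key}"
}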

ShapWarning

{
  "partitionName": "string",
  "value": {
    "maxNormalizedMismatch": 0,
    "mismatchRowCount": 0
  }
}

Properties

Name Type Required Restrictions Description
partitionName string true The partition used for the prediction record.
value ShapWarningItems true The warnings related to this partition

ShapWarningItems

{
  "maxNormalizedMismatch": 0,
  "mismatchRowCount": 0
}

Properties

Name Type Required Restrictions Description
maxNormalizedMismatch number true The maximal relative normalized mismatch value.
mismatchRowCount integer true The count of rows for which additivity check failed.

ShapWarningValues

{
  "maxNormalizedMismatch": 0,
  "mismatchRowCount": 0
}

Properties

Name Type Required Restrictions Description
maxNormalizedMismatch number true The maximal relative normalized mismatch value.
mismatchRowCount integer true The count of rows for which additivity check failed.

ShapWarnings

{
  "maxNormalizedMismatch": 0,
  "mismatchRowCount": 0
}

Properties

Name Type Required Restrictions Description
maxNormalizedMismatch number true The maximal relative normalized mismatch value.
mismatchRowCount integer true The count of rows for which additivity check failed.

SnowflakeDataStreamer

{
  "catalog": "string",
  "cloudStorageCredentialId": "string",
  "cloudStorageType": "azure",
  "credentialId": "string",
  "dataStoreId": "string",
  "externalStage": "string",
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "snowflake"
}

Properties

Name Type Required Restrictions Description
catalog string false The name of the specified database catalog to read input data from.
cloudStorageCredentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the credential holding information about a user with read access to the cloud storage.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
cloudStorageType string false Type name for cloud storage
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the credential holding information about a user with read access to the Snowflake data source.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dataStoreId any true Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
externalStage string true External storage
query string false A self-supplied SELECT statement for the dataset you wish to score. Useful for a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false The name of the specified database schema to read input data from.
table string false The name of the specified database table to read input data from.
type string true Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
cloudStorageType [azure, gcp, s3]
anonymous [redacted]
anonymous [redacted]
type snowflake

SnowflakeIntake

{
  "catalog": "string",
  "cloudStorageCredentialId": "string",
  "cloudStorageType": "azure",
  "credentialId": "string",
  "dataStoreId": "string",
  "externalStage": "string",
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "snowflake"
}

Properties

Name Type Required Restrictions Description
catalog string false The name of the specified database catalog to read input data from.
cloudStorageCredentialId string¦null false The ID of the credential holding information about a user with read access to the cloud storage.
cloudStorageType string false Type name for cloud storage
credentialId string¦null false The ID of the credential holding information about a user with read access to the Snowflake data source.
dataStoreId string true ID of the data store to connect to
externalStage string true External storage
query string false A self-supplied SELECT statement for the dataset you wish to score. Useful for a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false The name of the specified database schema to read input data from.
table string false The name of the specified database table to read input data from.
type string true Type name for this intake type

Enumerated Values

Property Value
cloudStorageType [azure, gcp, s3]
type snowflake
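
When a Snowflake intake runs as part of a job definition, the query template variables can scope each run to newly arrived rows. An illustrative intakeSettings fragment with placeholder IDs:

{
  "type": "snowflake",
  "dataStoreId": "{dataStoreId}",
  "credentialId": "{credentialId}",
  "externalStage": "my_external_stage",
  "query": "SELECT * FROM scoring_data WHERE updated_at > '{{ last_completed_run_time }}'"
}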

SnowflakeKeyPairCredentials

{
  "configId": "string",
  "credentialType": "snowflake_key_pair_user_account",
  "passphrase": "string",
  "privateKeyStr": "string",
  "user": "string"
}

Properties

Name Type Required Restrictions Description
configId string false The ID of the saved shared credentials. If specified, cannot include user, privateKeyStr or passphrase.
credentialType string true The type of these credentials, 'snowflake_key_pair_user_account' here.
passphrase string false Optional passphrase to decrypt private key. Cannot include this parameter if configId is specified.
privateKeyStr string false Private key for key pair authentication. Required if configId is not specified. Cannot include this parameter if configId is specified.
user string false Username for this credential. Required if configId is not specified. Cannot include this parameter if configId is specified.

Enumerated Values

Property Value
credentialType snowflake_key_pair_user_account

SnowflakeOutput

{
  "catalog": "string",
  "cloudStorageCredentialId": "string",
  "cloudStorageType": "azure",
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "externalStage": "string",
  "schema": "string",
  "statementType": "insert",
  "table": "string",
  "type": "snowflake"
}

Properties

Name Type Required Restrictions Description
catalog string false The name of the specified database catalog to write output data to.
cloudStorageCredentialId string¦null false The ID of the credential holding information about a user with write access to the cloud storage.
cloudStorageType string false Type name for cloud storage
createTableIfNotExists boolean false Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId string¦null false The ID of the credential holding information about a user with write access to the Snowflake data source.
dataStoreId string true ID of the data store to connect to
externalStage string true External storage
schema string false The name of the specified database schema to write results to.
statementType string true The statement type to use when writing the results.
table string true The name of the specified database table to write results to.
type string true Type name for this output type

Enumerated Values

Property Value
cloudStorageType [azure, gcp, s3]
statementType [insert, create_table, createTable]
type snowflake
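
An illustrative Snowflake outputSettings fragment that creates the results table if it does not already exist and then inserts rows (all IDs are placeholders):

{
  "type": "snowflake",
  "dataStoreId": "{dataStoreId}",
  "credentialId": "{credentialId}",
  "externalStage": "my_external_stage",
  "schema": "PUBLIC",
  "table": "SCORING_RESULTS",
  "statementType": "insert",
  "createTableIfNotExists": true
}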

SnowflakeOutputAdaptor

{
  "catalog": "string",
  "cloudStorageCredentialId": "string",
  "cloudStorageType": "azure",
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "externalStage": "string",
  "schema": "string",
  "statementType": "insert",
  "table": "string",
  "type": "snowflake"
}

Properties

Name Type Required Restrictions Description
catalog string false The name of the specified database catalog to write output data to.
cloudStorageCredentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the credential holding information about a user with write access to the cloud storage.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
cloudStorageType string false Type name for cloud storage
createTableIfNotExists boolean false Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the credential holding information about a user with write access to the Snowflake data source.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dataStoreId any true Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
externalStage string true External storage
schema string false The name of the specified database schema to write results to.
statementType string true The statement type to use when writing the results.
table string true The name of the specified database table to write results to.
type string true Type name for this output type

Enumerated Values

Property Value
anonymous [redacted]
cloudStorageType [azure, gcp, s3]
anonymous [redacted]
anonymous [redacted]
statementType [insert, create_table, createTable]
type snowflake

SynapseDataStreamer

{
  "cloudStorageCredentialId": "string",
  "credentialId": "string",
  "dataStoreId": "string",
  "externalDataSource": "string",
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "synapse"
}

Properties

Name Type Required Restrictions Description
cloudStorageCredentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the Azure credential holding information about a user with read access to the cloud storage.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the credential holding information about a user with read access to the JDBC data source.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dataStoreId any true Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
externalDataSource string true External data source name
query string false A self-supplied SELECT statement for the dataset you wish to score. Useful for a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false The name of the specified database schema to read input data from.
table string false The name of the specified database table to read input data from.
type string true Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
anonymous [redacted]
anonymous [redacted]
type synapse

SynapseIntake

{
  "cloudStorageCredentialId": "string",
  "credentialId": "string",
  "dataStoreId": "string",
  "externalDataSource": "string",
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "synapse"
}

Properties

Name Type Required Restrictions Description
cloudStorageCredentialId string¦null false The ID of the Azure credential holding information about a user with read access to the cloud storage.
credentialId string¦null false The ID of the credential holding information about a user with read access to the JDBC data source.
dataStoreId string true ID of the data store to connect to
externalDataSource string true External data source name
query string false A self-supplied SELECT statement for the dataset you wish to score. Useful for a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false The name of the specified database schema to read input data from.
table string false The name of the specified database table to read input data from.
type string true Type name for this intake type

Enumerated Values

Property Value
type synapse

SynapseOutput

{
  "cloudStorageCredentialId": "string",
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "externalDataSource": "string",
  "schema": "string",
  "statementType": "insert",
  "table": "string",
  "type": "synapse"
}

Properties

Name Type Required Restrictions Description
cloudStorageCredentialId string¦null false The ID of the credential holding information about a user with write access to the cloud storage.
createTableIfNotExists boolean false Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId string¦null false The ID of the credential holding information about a user with write access to the JDBC data source.
dataStoreId string true ID of the data store to connect to
externalDataSource string true External data source name
schema string false The name of the specified database schema to write results to.
statementType string true The statement type to use when writing the results.
table string true The name of the specified database table to write results to.
type string true Type name for this output type

Enumerated Values

Property Value
statementType [insert, create_table, createTable]
type synapse

SynapseOutputAdaptor

{
  "cloudStorageCredentialId": "string",
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "externalDataSource": "string",
  "schema": "string",
  "statementType": "insert",
  "table": "string",
  "type": "synapse"
}

Properties

Name Type Required Restrictions Description
cloudStorageCredentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the credential holding information about a user with write access to the cloud storage.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
createTableIfNotExists boolean false Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false The ID of the credential holding information about a user with write access to the JDBC data source.

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dataStoreId any true Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
externalDataSource string true External data source name
schema string false The name of the specified database schema to write results to.
statementType string true The statement type to use when writing the results.
table string true The name of the specified database table to write results to.
type string true Type name for this output type

Enumerated Values

Property Value
anonymous [redacted]
anonymous [redacted]
anonymous [redacted]
statementType [insert, create_table, createTable]
type synapse

Tableau

{
  "contentUrl": "string",
  "credentialId": "string",
  "dataSourceId": "string",
  "overwrite": true,
  "siteName": "string",
  "type": "tableau",
  "url": "http://example.com"
}

Properties

Name Type Required Restrictions Description
contentUrl string false Deprecated, use siteName instead
credentialId string false Use the specified credential to access the url
dataSourceId string true The ID of your Tableau data source
overwrite boolean true Whether to overwrite the dataset or append to it
siteName string false Your Tableau site name
type string true Type name for this output type
url string(uri) true The URL to your online Tableau server

Enumerated Values

Property Value
type tableau
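
An illustrative Tableau outputSettings fragment, using siteName rather than the deprecated contentUrl (all values are placeholders):

{
  "type": "tableau",
  "url": "https://tableau.example.com",
  "siteName": "my-site",
  "dataSourceId": "{dataSourceId}",
  "credentialId": "{credentialId}",
  "overwrite": true
}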

TableauOutputAdaptor

{
  "contentUrl": "string",
  "credentialId": "string",
  "dataSourceId": "string",
  "overwrite": true,
  "siteName": "string",
  "type": "tableau",
  "url": "http://example.com"
}

Properties

Name Type Required Restrictions Description
contentUrl string false Deprecated, use siteName instead
credentialId any false Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none

continued

Name Type Required Restrictions Description
dataSourceId string true The ID of your Tableau data source
overwrite boolean true Whether to overwrite the dataset or append to it
siteName string false Your Tableau site name
type string true Type name for this output type
url string(uri) true The URL to your online Tableau server

Enumerated Values

Property Value
anonymous [redacted]
type tableau

TrainingPredictionsListResponse

{
  "count": 0,
  "data": [
    {
      "dataSubset": "all",
      "explanationAlgorithm": "shap",
      "id": "string",
      "maxExplanations": 100,
      "modelId": "string",
      "shapWarnings": [
        {
          "partitionName": "string",
          "value": {
            "maxNormalizedMismatch": 0,
            "mismatchRowCount": 0
          }
        }
      ],
      "url": "http://example.com"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Properties

Name Type Required Restrictions Description
count integer false Number of items returned on this page.
data [TrainingPredictions] true A list of training prediction jobs
next string(uri)¦null true URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true URL pointing to the previous page (if null, there is no previous page).
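
List responses like this one are paginated through the next and previous links. A minimal sketch of walking every page with curl and jq, assuming the project-scoped training predictions list route:

# Follow "next" links until the last page is reached
URL="https://app.datarobot.com/api/v2/projects/{projectId}/trainingPredictions/"
while [ -n "$URL" ]; do
  PAGE=$(curl -s -H "Authorization: Bearer {access-token}" "$URL")
  echo "$PAGE" | jq -r '.data[].id'
  URL=$(echo "$PAGE" | jq -r '.next // empty')
done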

TrainingPredictionsRetrieveResponse

{
  "count": 0,
  "data": [
    {
      "forecastDistance": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "partitionId": "string",
      "prediction": 0,
      "predictionExplanations": [
        {
          "feature": "string",
          "featureValue": 0,
          "label": "string",
          "strength": 0
        }
      ],
      "predictionThreshold": 1,
      "predictionValues": [
        {
          "label": "string",
          "threshold": 1,
          "value": 0
        }
      ],
      "rowId": 0,
      "seriesId": "string",
      "shapMetadata": {
        "shapBaseValue": 0,
        "shapRemainingTotal": 0,
        "warnings": [
          {
            "maxNormalizedMismatch": 0,
            "mismatchRowCount": 0
          }
        ]
      },
      "timestamp": "2019-08-24T14:15:22Z"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Properties

Name Type Required Restrictions Description
count integer false Number of items returned on this page.
data [TrainingPredictionRow] true A list of training prediction rows
next string(uri)¦null true URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true URL pointing to the previous page (if null, there is no previous page).

TrainingPredictionRow

{
  "forecastDistance": 0,
  "forecastPoint": "2019-08-24T14:15:22Z",
  "partitionId": "string",
  "prediction": 0,
  "predictionExplanations": [
    {
      "feature": "string",
      "featureValue": 0,
      "label": "string",
      "strength": 0
    }
  ],
  "predictionThreshold": 1,
  "predictionValues": [
    {
      "label": "string",
      "threshold": 1,
      "value": 0
    }
  ],
  "rowId": 0,
  "seriesId": "string",
  "shapMetadata": {
    "shapBaseValue": 0,
    "shapRemainingTotal": 0,
    "warnings": [
      {
        "maxNormalizedMismatch": 0,
        "mismatchRowCount": 0
      }
    ]
  },
  "timestamp": "2019-08-24T14:15:22Z"
}

Properties

Name Type Required Restrictions Description
forecastDistance integer¦null false (if time series project) The number of time units this prediction is away from the forecastPoint. The unit of time is determined by the timeUnit of the datetime partition column.
forecastPoint string(date-time)¦null false (if time series project) The forecastPoint of the predictions. Either provided or inferred.
partitionId string true The partition used for the prediction record
prediction any true The prediction of the model.

oneOf

Name Type Required Restrictions Description
» anonymous number false If using a regressor model, will be the numeric value of the target.

xor

Name Type Required Restrictions Description
» anonymous string false If using a binary or multiclass classifier model, will be the predicted class.

xor

Name Type Required Restrictions Description
» anonymous [string] false If using a multilabel classifier model, will be a list of predicted classes.

continued

Name Type Required Restrictions Description
predictionExplanations [PredictionExplanationsObject]¦null false Array containing predictionExplanation objects. The total number of elements in the array is bounded by maxExplanations and the feature count. It will be present only if explanationAlgorithm is not null (prediction explanations were requested).
predictionThreshold number false maximum: 1
minimum: 0
Threshold used for binary classification in predictions.
predictionValues [PredictionArrayObjectValues] false A list of predicted values for this row.
rowId integer true minimum: 0
The row in the prediction dataset this prediction corresponds to.
seriesId string¦null false The ID of the series value for a multiseries project. For time series projects that are not multiseries, this will be NaN.
shapMetadata TrainingPredictionShapMetadata false The additional information necessary to understand SHAP-based prediction explanations. Only present if explanationAlgorithm="shap" was set in the compute request.
timestamp string(date-time)¦null false (if time series project) The timestamp of this row in the prediction dataset.

TrainingPredictionShapMetadata

{
  "shapBaseValue": 0,
  "shapRemainingTotal": 0,
  "warnings": [
    {
      "maxNormalizedMismatch": 0,
      "mismatchRowCount": 0
    }
  ]
}

Properties

Name Type Required Restrictions Description
shapBaseValue number true The model's average prediction over the training data. SHAP values are deviations from the base value.
shapRemainingTotal integer true The total of SHAP values for features beyond the maxExplanations. This can be identically 0 in all rows, if maxExplanations is greater than the number of features and thus all features are returned.
warnings [ShapWarningItems] true SHAP values calculation warnings

TrainingPredictions

{
  "dataSubset": "all",
  "explanationAlgorithm": "shap",
  "id": "string",
  "maxExplanations": 100,
  "modelId": "string",
  "shapWarnings": [
    {
      "partitionName": "string",
      "value": {
        "maxNormalizedMismatch": 0,
        "mismatchRowCount": 0
      }
    }
  ],
  "url": "http://example.com"
}

Properties

Name Type Required Restrictions Description
dataSubset string true Subset of data predicted on
explanationAlgorithm string¦null false The method used for calculating prediction explanations
id string true ID of the training prediction job
maxExplanations integer¦null false maximum: 100
minimum: 0
The number of top contributors that are included in prediction explanations. Defaults to null for datasets narrower than 100 columns and to 100 for datasets wider than 100 columns.
modelId string true ID of the model
shapWarnings [ShapWarning]¦null false Will be present if "explanationAlgorithm" was set to "shap" and there were additivity failures during SHAP values calculation
url string(uri) true The location of these predictions

Enumerated Values

Property Value
dataSubset [all, validationAndHoldout, holdout, allBacktests, validation, crossValidation]
explanationAlgorithm shap
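
To produce such a record, training predictions are requested for a model and data subset. A minimal sketch, assuming the project-scoped training predictions route and placeholder IDs:

# Request SHAP-based training predictions on validation and holdout data
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/trainingPredictions/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  -d '{
        "modelId": "{modelId}",
        "dataSubset": "validationAndHoldout",
        "explanationAlgorithm": "shap",
        "maxExplanations": 10
      }'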

Updated March 18, 2024