Predictions

This page outlines the operations, endpoints, parameters, and example requests and responses for Predictions.

GET /api/v2/batchPredictionJobDefinitions/

List all available Batch Prediction job definitions

Code samples

# You can also use wget
curl -X GET 'http://10.97.68.125/api/v2/batchPredictionJobDefinitions/?offset=0&limit=100' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned
deploymentId query string false Includes only definitions for this particular deployment

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "batchPredictionJob": {
        "abortOnError": true,
        "chunkSize": "auto",
        "columnNamesRemapping": {},
        "csvSettings": {
          "delimiter": ",",
          "encoding": "utf-8",
          "quotechar": "\""
        },
        "deploymentId": "string",
        "disableRowLevelErrorHandling": false,
        "explanationAlgorithm": "shap",
        "explanationClassNames": [
          "string"
        ],
        "explanationNumTopClasses": 1,
        "includePredictionStatus": false,
        "includeProbabilities": true,
        "includeProbabilitiesClasses": [],
        "intakeSettings": {
          "type": "localFile"
        },
        "maxExplanations": 0,
        "modelId": "string",
        "modelPackageId": "string",
        "monitoringBatchPrefix": "string",
        "numConcurrent": 1,
        "outputSettings": {
          "type": "localFile"
        },
        "passthroughColumns": [
          "string"
        ],
        "passthroughColumnsSet": "all",
        "pinnedModelId": "string",
        "predictionInstance": {
          "apiKey": "string",
          "datarobotKey": "string",
          "hostName": "string",
          "sslEnabled": true
        },
        "predictionWarningEnabled": true,
        "redactedFields": [
          "string"
        ],
        "skipDriftTracking": false,
        "thresholdHigh": 0,
        "thresholdLow": 0,
        "timeseriesSettings": {
          "forecastPoint": "2019-08-24T14:15:22Z",
          "relaxKnownInAdvanceFeaturesCheck": false,
          "type": "forecast"
        }
      },
      "created": "2019-08-24T14:15:22Z",
      "createdBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      },
      "enabled": false,
      "id": "string",
      "lastFailedRunTime": "2019-08-24T14:15:22Z",
      "lastScheduledRunTime": "2019-08-24T14:15:22Z",
      "lastStartedJobStatus": "INITIALIZING",
      "lastStartedJobTime": "2019-08-24T14:15:22Z",
      "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
      "name": "string",
      "nextScheduledRunTime": "2019-08-24T14:15:22Z",
      "schedule": {
        "dayOfMonth": [
          "*"
        ],
        "dayOfWeek": [
          "*"
        ],
        "hour": [
          "*"
        ],
        "minute": [
          "*"
        ],
        "month": [
          "*"
        ]
      },
      "updated": "2019-08-24T14:15:22Z",
      "updatedBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      }
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0
}

Responses

Status Meaning Description Schema
200 OK List of all available jobs BatchPredictionJobDefinitionsListResponse
422 Unprocessable Entity Your input data or query arguments were invalid or inconsistent None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/batchPredictionJobDefinitions/

Create a Batch Prediction job definition: a configuration for a Batch Prediction job that can be executed manually on request or, if enabled, run on a schedule. The API payload is the same as for /batchPredictions/, plus the optional enabled and schedule items.

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/batchPredictionJobDefinitions/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "enabled": true,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "name": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "type": "localFile"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionWarningEnabled": true,
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Parameters

Name In Type Required Description
body body BatchPredictionJobDefinitionsCreate false none

Example responses

202 Response

{
  "batchPredictionJob": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "enabled": false,
  "id": "string",
  "lastFailedRunTime": "2019-08-24T14:15:22Z",
  "lastScheduledRunTime": "2019-08-24T14:15:22Z",
  "lastStartedJobStatus": "INITIALIZING",
  "lastStartedJobTime": "2019-08-24T14:15:22Z",
  "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
  "name": "string",
  "nextScheduledRunTime": "2019-08-24T14:15:22Z",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "updated": "2019-08-24T14:15:22Z",
  "updatedBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  }
}

Responses

Status Meaning Description Schema
202 Accepted Job details for the created Batch Prediction job definition BatchPredictionJobDefinitionsResponse
403 Forbidden You are not authorized to create a job definition on this deployment due to your permission role None
422 Unprocessable Entity You tried to create a job definition with incompatible or missing parameters None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
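
As a sketch, a payload for a definition that scores from S3 every night at 02:30 could look like the following. The deployment ID, credential ID, and S3 URLs are placeholders, and the s3 intake/output type is assumed to be available on your installation:

{
  "deploymentId": "5dc5b1015e6e762a6241f9aa",
  "enabled": true,
  "name": "nightly-scoring",
  "intakeSettings": {
    "type": "s3",
    "url": "s3://my-bucket/scoring/input.csv",
    "credentialId": "5dc5b1015e6e762a6241f9bb"
  },
  "outputSettings": {
    "type": "s3",
    "url": "s3://my-bucket/scoring/output.csv",
    "credentialId": "5dc5b1015e6e762a6241f9bb"
  },
  "schedule": {
    "minute": [30],
    "hour": [2],
    "dayOfMonth": ["*"],
    "month": ["*"],
    "dayOfWeek": ["*"]
  }
}

The schedule keys each take a list of values, cron-style: literal numbers select specific minutes, hours, and so on, while "*" matches every value for that field.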

DELETE /api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/

Delete a Batch Prediction job definition

Code samples

# You can also use wget
curl -X DELETE http://10.97.68.125/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
jobDefinitionId path string true ID of the Batch Prediction job definition

Responses

Status Meaning Description Schema
204 No Content none None
403 Forbidden You are not authorized to delete this job definition due to your permission role None
404 Not Found Job was deleted, never existed, or you do not have access to it None
409 Conflict Job could not be deleted because there are currently running jobs in the queue None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/

Retrieve a Batch Prediction job definition

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
jobDefinitionId path string true ID of the Batch Prediction job definition

Example responses

200 Response

{
  "batchPredictionJob": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "enabled": false,
  "id": "string",
  "lastFailedRunTime": "2019-08-24T14:15:22Z",
  "lastScheduledRunTime": "2019-08-24T14:15:22Z",
  "lastStartedJobStatus": "INITIALIZING",
  "lastStartedJobTime": "2019-08-24T14:15:22Z",
  "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
  "name": "string",
  "nextScheduledRunTime": "2019-08-24T14:15:22Z",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "updated": "2019-08-24T14:15:22Z",
  "updatedBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  }
}

Responses

Status Meaning Description Schema
200 OK Job details for the requested Batch Prediction job definition BatchPredictionJobDefinitionsResponse
404 Not Found Job was deleted, never existed, or you do not have access to it None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

PATCH /api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/

Update a Batch Prediction job definition

Code samples

# You can also use wget
curl -X PATCH http://10.97.68.125/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "enabled": true,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "name": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "type": "localFile"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionWarningEnabled": true,
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Parameters

Name In Type Required Description
jobDefinitionId path string true ID of the Batch Prediction job definition
body body BatchPredictionJobDefinitionsUpdate false none

Example responses

200 Response

{
  "batchPredictionJob": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "enabled": false,
  "id": "string",
  "lastFailedRunTime": "2019-08-24T14:15:22Z",
  "lastScheduledRunTime": "2019-08-24T14:15:22Z",
  "lastStartedJobStatus": "INITIALIZING",
  "lastStartedJobTime": "2019-08-24T14:15:22Z",
  "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
  "name": "string",
  "nextScheduledRunTime": "2019-08-24T14:15:22Z",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "updated": "2019-08-24T14:15:22Z",
  "updatedBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  }
}

Responses

Status Meaning Description Schema
200 OK Job details for the updated Batch Prediction job definition BatchPredictionJobDefinitionsResponse
403 Forbidden You are not authorized to alter the contents of this job definition due to your permission role None
404 Not Found Job was deleted, never existed, or you do not have access to it None
409 Conflict A job definition with this name already exists within your organization None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
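
Because this is a PATCH request, a sketch like the following, which sends only the field being changed, should be enough to pause a scheduled definition (assuming the endpoint accepts partial payloads; the definition ID is a placeholder):

# Disable the schedule for an existing job definition
curl -X PATCH http://10.97.68.125/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{"enabled": false}'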

GET /api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/portable/

Retrieve a Batch Prediction job definition for Portable Batch Predictions

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/batchPredictionJobDefinitions/{jobDefinitionId}/portable/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
jobDefinitionId path string true ID of the Batch Prediction job definition

Responses

Status Meaning Description Schema
200 OK Snippet for Portable Batch Predictions None
404 Not Found Job was deleted, never existed, or you do not have access to it None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/batchPredictions/

Get a collection of Batch Prediction jobs, optionally filtered by status

Code samples

# You can also use wget
curl -X GET 'http://10.97.68.125/api/v2/batchPredictions/?offset=0&limit=100&allJobs=false' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned
status query any false Includes only jobs whose status matches this value. Repeat the parameter to filter on multiple statuses.
source query any false Includes only jobs whose source matches this value. Repeat the parameter to filter on multiple sources. Prefix a value with a dash (-) to exclude that source.
deploymentId query string false Includes only jobs for this particular deployment
modelId query string false ID of the Leaderboard model used by the job to process the prediction dataset
jobId query string false Includes only the job with this specific ID
orderBy query string false Sort order applied to the Batch Prediction list. Prefix the attribute name with a dash to sort in descending order, e.g. "-created".
allJobs query boolean true (For organization admins) Include jobs for all users in the organization.
cutoffHours query integer false Only list jobs created at most this many hours ago.
startDateTime query string(date-time) false ISO-formatted datetime of the earliest time the job was added (inclusive). For example "2008-08-24T12:00:00Z". If set, cutoffHours is ignored.
endDateTime query string(date-time) false ISO-formatted datetime of the latest time the job was added (inclusive). For example "2008-08-24T12:00:00Z".
batchPredictionJobDefinitionId query string false Includes only jobs for this particular definition
hostname query any false Includes only jobs for this particular prediction instance hostname
intakeType query any false Includes only jobs with this particular intake type
outputType query any false Includes only jobs with this particular output type

Enumerated Values

Parameter Value
orderBy created
orderBy -created
orderBy status
orderBy -status

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "batchPredictionJobDefinition": {
        "createdBy": "string",
        "id": "string",
        "name": "string"
      },
      "created": "2019-08-24T14:15:22Z",
      "createdBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      },
      "elapsedTimeSec": 0,
      "failedRows": 0,
      "hidden": "2019-08-24T14:15:22Z",
      "id": "string",
      "intakeDatasetDisplayName": "string",
      "jobIntakeSize": 0,
      "jobOutputSize": 0,
      "jobSpec": {
        "abortOnError": true,
        "chunkSize": "auto",
        "columnNamesRemapping": {},
        "csvSettings": {
          "delimiter": ",",
          "encoding": "utf-8",
          "quotechar": "\""
        },
        "deploymentId": "string",
        "disableRowLevelErrorHandling": false,
        "explanationAlgorithm": "shap",
        "explanationClassNames": [
          "string"
        ],
        "explanationNumTopClasses": 1,
        "includePredictionStatus": false,
        "includeProbabilities": true,
        "includeProbabilitiesClasses": [],
        "intakeSettings": {
          "type": "localFile"
        },
        "maxExplanations": 0,
        "modelId": "string",
        "modelPackageId": "string",
        "monitoringBatchPrefix": "string",
        "numConcurrent": 1,
        "outputSettings": {
          "type": "localFile"
        },
        "passthroughColumns": [
          "string"
        ],
        "passthroughColumnsSet": "all",
        "pinnedModelId": "string",
        "predictionInstance": {
          "apiKey": "string",
          "datarobotKey": "string",
          "hostName": "string",
          "sslEnabled": true
        },
        "predictionWarningEnabled": true,
        "redactedFields": [
          "string"
        ],
        "skipDriftTracking": false,
        "thresholdHigh": 0,
        "thresholdLow": 0,
        "timeseriesSettings": {
          "forecastPoint": "2019-08-24T14:15:22Z",
          "relaxKnownInAdvanceFeaturesCheck": false,
          "type": "forecast"
        }
      },
      "links": {
        "csvUpload": "string",
        "download": "string",
        "self": "string"
      },
      "logs": [
        "string"
      ],
      "percentageCompleted": 100,
      "queuePosition": 0,
      "queued": true,
      "resultsDeleted": true,
      "scoredRows": 0,
      "skippedRows": 0,
      "source": "string",
      "status": "INITIALIZING",
      "statusDetails": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0
}

Responses

Status Meaning Description Schema
200 OK A list of Batch Prediction job objects BatchPredictionJobListResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
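
To filter on more than one status, repeat the status parameter; quote the URL so the shell does not treat & as a background operator. The status values below are illustrative:

# List the most recently created jobs that are running or completed
curl -X GET 'http://10.97.68.125/api/v2/batchPredictions/?offset=0&limit=100&allJobs=false&status=RUNNING&status=COMPLETED&orderBy=-created' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'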

POST /api/v2/batchPredictions/

Submit a Batch Prediction job configuration; the job is then submitted to the queue

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/batchPredictions/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "type": "localFile"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionWarningEnabled": true,
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Parameters

Name In Type Required Description
body body BatchPredictionJobCreate false none

Example responses

202 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
202 Accepted Job details for the created Batch Prediction job BatchPredictionJobResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
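
For a job with localFile intake, creating the job is only the first step: the response's links.csvUpload URL is where the scoring data must be sent. A minimal end-to-end sketch, assuming jq is installed and using a placeholder deployment ID:

# 1. Create the job and capture the returned links
JOB=$(curl -s -X POST http://10.97.68.125/api/v2/batchPredictions/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{"deploymentId": "5dc5b1015e6e762a6241f9aa",
       "intakeSettings": {"type": "localFile"},
       "outputSettings": {"type": "localFile"}}')

# 2. Upload the scoring data to the csvUpload link
curl -X PUT "$(echo "$JOB" | jq -r '.links.csvUpload')" \
  -H 'Content-Type: text/csv' \
  -H 'Authorization: Bearer {access-token}' \
  --data-binary @to_score.csv

# 3. Poll the links.self URL until the job completes, then fetch links.download
#    (see GET /api/v2/batchPredictions/{predictionJobId}/ and .../download/ below).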

POST /api/v2/batchPredictions/fromExisting/

Copies an existing job and submits it to the queue.

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/batchPredictions/fromExisting/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "partNumber": 0,
  "predictionJobId": "string"
}

Parameters

Name In Type Required Description
body body BatchPredictionJobId false none

Example responses

202 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
202 Accepted Job details for the created Batch Prediction job BatchPredictionJobResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
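
For example, to re-run an existing job (the job ID is a placeholder):

curl -X POST http://10.97.68.125/api/v2/batchPredictions/fromExisting/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{"predictionJobId": "5dc5b1015e6e762a6241f9cc"}'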

POST /api/v2/batchPredictions/fromJobDefinition/

Launch a one-time Batch Prediction job from a previously created job definition, referenced by its job definition ID, and place it on the queue.

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/batchPredictions/fromJobDefinition/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "jobDefinitionId": "string"
}

Parameters

Name In Type Required Description
body body BatchPredictionJobDefinitionId false none

Example responses

202 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
202 Accepted Job details for the created Batch Prediction job BatchPredictionJobResponse
404 Not Found Job was deleted, never existed, or you do not have access to it None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
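
A sketch of launching a run from a stored definition (the definition ID is a placeholder):

curl -X POST http://10.97.68.125/api/v2/batchPredictions/fromJobDefinition/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{"jobDefinitionId": "5dc5b1015e6e762a6241f9bb"}'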

DELETE /api/v2/batchPredictions/{predictionJobId}/

If the job is running, it is aborted first. The job is then removed: all underlying data is deleted and the job disappears from the list of jobs.

Code samples

# You can also use wget
curl -X DELETE http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Responses

Status Meaning Description Schema
202 Accepted Job cancelled None
404 Not Found Job does not exist or was not submitted to the queue. None
409 Conflict Job cannot be aborted; for example, it has already been aborted or completed. None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/batchPredictions/{predictionJobId}/

Retrieve a Batch Prediction job.

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Example responses

200 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
200 OK Job details for the requested Batch Prediction job BatchPredictionJobResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

PATCH /api/v2/batchPredictions/{predictionJobId}/

Once a job has finished execution, regardless of the result, its parameters can be changed to allow better filtering when the job list is retrieved. This endpoint can also be used to update a job's scoring status externally.

Code samples

# You can also use wget
curl -X PATCH http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "aborted": "2019-08-24T14:15:22Z",
  "completed": "2019-08-24T14:15:22Z",
  "failedRows": 0,
  "hidden": true,
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "logs": [
    "string"
  ],
  "scoredRows": 0,
  "skippedRows": 0,
  "started": "2019-08-24T14:15:22Z",
  "status": "INITIALIZING"
}

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job
body body BatchPredictionJobUpdate false none

Example responses

200 Response

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Responses

Status Meaning Description Schema
200 OK Job updated BatchPredictionJobResponse
404 Not Found Job does not exist or was not submitted to the queue. None
409 Conflict Job cannot be hidden; for example, it is not in a deletable state. None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

PUT /api/v2/batchPredictions/{predictionJobId}/csvUpload/

Stream CSV data to the prediction job. Only available for jobs that use the localFile intake option.

Code samples

# You can also use wget
curl -X PUT http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/csvUpload/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Responses

Status Meaning Description Schema
202 Accepted Job data was successfully submitted None
404 Not Found Job does not exist or does not require data None
406 Not Acceptable Not acceptable MIME type None
409 Conflict Dataset upload has already begun None
422 Unprocessable Entity Job was "ABORTED" due to too many errors in the data None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
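
A sketch of the upload itself, assuming the scoring data is CSV and that text/csv is an accepted MIME type (the 406 response above suggests the MIME type is checked):

curl -X PUT http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/csvUpload/ \
  -H 'Content-Type: text/csv' \
  -H 'Authorization: Bearer {access-token}' \
  --data-binary @to_score.csv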

POST /api/v2/batchPredictions/{predictionJobId}/csvUpload/finalizeMultipart/

Finalize a multipart upload, indicating that no further chunks will be sent

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/csvUpload/finalizeMultipart/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Responses

Status Meaning Description Schema
202 Accepted Acknowledgement that the request was accepted or an error message None
404 Not Found Job was deleted, never existed, or you do not have access to it None
409 Conflict Only multipart jobs can be finalized. None
422 Unprocessable Entity No data was uploaded None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

PUT /api/v2/batchPredictions/{predictionJobId}/csvUpload/part/{partNumber}/

Stream CSV data to the prediction job in multiple parts. Only available for jobs that use the localFile intake option.

Code samples

# You can also use wget
curl -X PUT http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/csvUpload/part/{partNumber}/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job
partNumber path integer true The number of which csv part is being uploaded when using multipart upload

Responses

Status Meaning Description Schema
202 Accepted Job data was successfully submitted None
404 Not Found Job does not exist or does not require data None
406 Not Acceptable Not acceptable MIME type None
409 Conflict Dataset upload has already begun None
422 Unprocessable Entity Job was "ABORTED" due to too many errors in the data None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
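
A sketch of a two-part upload followed by finalization; the file names are placeholders, and numbering parts from 0 is an assumption:

curl -X PUT http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/csvUpload/part/0/ \
  -H 'Content-Type: text/csv' \
  -H 'Authorization: Bearer {access-token}' \
  --data-binary @part0.csv

curl -X PUT http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/csvUpload/part/1/ \
  -H 'Content-Type: text/csv' \
  -H 'Authorization: Bearer {access-token}' \
  --data-binary @part1.csv

# No further parts: finalize the multipart upload
curl -X POST http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/csvUpload/finalizeMultipart/ \
  -H 'Authorization: Bearer {access-token}'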

GET /api/v2/batchPredictions/{predictionJobId}/download/

Download the prediction results. This is only valid for jobs scored using the "localFile" output option.

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/download/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
predictionJobId path string true ID of the Batch Prediction job

Responses

Status Meaning Description Schema
200 OK Job was downloaded correctly None
404 Not Found Job does not exist or is not completed None
406 Not Acceptable Not acceptable MIME type None
422 Unprocessable Entity Job was "ABORTED" due to too many errors in the data None

Response Headers

Status Header Type Format Description
200 Content-Disposition string Contains an auto generated filename for this download ("attachment;filename=result-.csv").
200 Content-Type string MIME type of the returned data

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
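
For example, to save the scored output to a local file:

curl -X GET http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/download/ \
  -H 'Authorization: Bearer {access-token}' \
  --output predictions.csv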

GET /api/v2/projects/{projectId}/predictJobs/

List all prediction jobs for a project

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/predictJobs/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
status query string false If provided, only jobs with the same status will be included in the results; otherwise, queued and inprogress jobs (but not errored jobs) will be returned.
projectId path string true The project ID.

Enumerated Values

Parameter Value
status queue
status inprogress
status error

Example responses

200 Response

[
  {
    "id": "string",
    "isBlocked": true,
    "message": "string",
    "modelId": "string",
    "projectId": "string",
    "status": "queue"
  }
]

Responses

Status Meaning Description Schema
200 OK A list of prediction jobs for a project Inline
404 Not Found Job was not found None

Response Schema

Status Code 200

Name Type Required Restrictions Description
anonymous [PredictJobDetailsResponse] false none none
» id string true none The ID of the job
» isBlocked boolean true none True if the job is waiting for its dependencies to be resolved first.
» message string true none An optional message about the job
» modelId string true none The ID of the model
» projectId string true none The project the job belongs to
» status string true none The status of the job

Enumerated Values

Property Value
status queue
status inprogress
status error
status ABORTED
status COMPLETED

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

DELETE /api/v2/projects/{projectId}/predictJobs/{jobId}/

Cancel a queued prediction job

Code samples

# You can also use wget
curl -X DELETE http://10.97.68.125/api/v2/projects/{projectId}/predictJobs/{jobId}/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
projectId path string true The project ID.
jobId path string true The job ID

Responses

Status Meaning Description Schema
204 No Content The job has been successfully cancelled None
404 Not Found Job was not found or the job has already completed None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictJobs/{jobId}/

Look up a particular prediction job

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/predictJobs/{jobId}/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
projectId path string true The project ID.
jobId path string true The job ID

Example responses

200 Response

{
  "id": "string",
  "isBlocked": true,
  "message": "string",
  "modelId": "string",
  "projectId": "string",
  "status": "queue"
}

Responses

Status Meaning Description Schema
200 OK The job has been successfully retrieved and has not yet finished. PredictJobDetailsResponse
303 See Other The job has been successfully retrieved and has been completed. See the Location header. The response JSON is also included. None

Response Headers

Status Header Type Format Description
200 Location string url Present only when the requested job has finished. Contains a URL from which the completed predictions may be retrieved, as with GET /api/v2/projects/{projectId}/predictions/{predictionId}/

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
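
Because a finished job answers with a 303 redirect, curl's -L flag can be used to follow the Location header straight to the completed predictions; without -L, you receive the 303 response and can fetch the Location URL yourself:

curl -X GET -L http://10.97.68.125/api/v2/projects/{projectId}/predictJobs/{jobId}/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'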

GET /api/v2/projects/{projectId}/predictionDatasets/

List prediction datasets uploaded to a project.

Code samples

# You can also use wget
curl -X GET 'http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/?offset=0&limit=0' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped.
limit query integer true At most this many results are returned. If 0, all results.
projectId path string true The project ID to query.

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "catalogId": "string",
      "catalogVersionId": "string",
      "containsTargetValues": true,
      "created": "2019-08-24T14:15:22Z",
      "dataEndDate": "2019-08-24T14:15:22Z",
      "dataQualityWarnings": {
        "hasKiaMissingValuesInForecastWindow": true,
        "insufficientRowsForEvaluatingModels": true,
        "singleClassActualValueColumn": true
      },
      "dataStartDate": "2019-08-24T14:15:22Z",
      "detectedActualValueColumns": [
        {
          "missingCount": 0,
          "name": "string"
        }
      ],
      "forecastPoint": "string",
      "forecastPointRange": [
        "2019-08-24T14:15:22Z"
      ],
      "id": "string",
      "maxForecastDate": "2019-08-24T14:15:22Z",
      "name": "string",
      "numColumns": 0,
      "numRows": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "secondaryDatasetsConfigId": "string"
    }
  ],
  "next": "string",
  "previous": "string"
}

Responses

Status Meaning Description Schema
200 OK Request to list the uploaded prediction datasets was successful. PredictionDatasetListControllerResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/predictionDatasets/dataSourceUploads/

Upload a dataset for predictions from a DataSource.

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/dataSourceUploads/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "actualValueColumn": "string",
  "credentialData": {
    "credentialType": "basic",
    "password": "string",
    "user": "string"
  },
  "credentialId": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "dataSourceId": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "password": "string",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "useKerberos": false,
  "user": "string"
}

Parameters

Name In Type Required Description
projectId path string true The project ID to which the data source will be uploaded.
body body PredictionDataSource false none

Responses

Status Meaning Description Schema
202 Accepted Upload successfully started. See the Location header. None

Response Headers

Status Header Type Format Description
202 Location string A url that can be polled to check the status.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
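
A typical request needs only the data source and a stored credential; both IDs below are placeholders:

curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/dataSourceUploads/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{"dataSourceId": "5dc5b1015e6e762a6241f9dd",
       "credentialId": "5dc5b1015e6e762a6241f9ee"}'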

POST /api/v2/projects/{projectId}/predictionDatasets/datasetUploads/

Create a prediction dataset from a Dataset Asset referenced by AI Catalog item/version ID.

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/datasetUploads/ \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "actualValueColumn": "string",
  "credentialData": {
    "credentialType": "basic",
    "password": "string",
    "user": "string"
  },
  "credentialId": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "datasetId": "string",
  "datasetVersionId": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "password": "string",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "useKerberos": false,
  "user": "string"
}

Parameters

Name In Type Required Description
projectId path string true The project ID.
body body PredictionFromCatalogDataset false none

Example responses

202 Response

{
  "datasetId": "string"
}

Responses

Status Meaning Description Schema
202 Accepted Creation has successfully started. See the Location header. CreatePredictionDatasetResponse
422 Unprocessable Entity The target is not set yet, or time series options were specified for a non-time-series project. None

Response Headers

Status Header Type Format Description
202 Location string A url that can be polled to check the status.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/predictionDatasets/fileUploads/

Upload a dataset for predictions from an attached file.

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/fileUploads/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "actualValueColumn": "string",
  "credentials": "string",
  "file": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": "false",
  "secondaryDatasetsConfigId": "string"
}

Parameters

Name In Type Required Description
projectId path string true The project ID to which the data will be uploaded for prediction.
body body PredictionFileUpload false none

Responses

Status Meaning Description Schema
202 Accepted Upload successfully started. See the Location header. None

Response Headers

Status Header Type Format Description
202 Location string A url that can be polled to check the status.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
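
The generated sample above shows a JSON body, but a file upload of this kind is normally sent as a multipart form. A sketch, assuming the endpoint accepts multipart/form-data with a file field, as the file body parameter suggests:

curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/fileUploads/ \
  -H 'Authorization: Bearer {access-token}' \
  -F 'file=@to_score.csv'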

POST /api/v2/projects/{projectId}/predictionDatasets/urlUploads/

Upload a file for predictions from a URL.

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/urlUploads/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "actualValueColumn": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "forecastPoint": "2019-08-24T14:15:22Z",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "url": "string"
}

Parameters

Name In Type Required Description
projectId path string true The project ID to which the data will be uploaded for prediction.
body body PredictionURLUpload false none

Responses

Status Meaning Description Schema
202 Accepted Upload successfully started. See the Location header. None

Response Headers

Status Header Type Format Description
202 Location string A url that can be polled to check the status.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth
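
For example, to create a prediction dataset from a publicly reachable file (the URL is a placeholder):

curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/urlUploads/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{"url": "https://example.com/to_score.csv"}'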

DELETE /api/v2/projects/{projectId}/predictionDatasets/{datasetId}/

Delete a dataset that was uploaded for prediction.

Code samples

# You can also use wget
curl -X DELETE http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/{datasetId}/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
projectId path string true The project ID that owns the data.
datasetId path string true The dataset ID to delete.

Responses

Status Meaning Description Schema
204 No Content The dataset has been successfully deleted. None
404 Not Found No dataset with the specified datasetId found. None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictionDatasets/{datasetId}/

Get the metadata of a specific dataset. This only works for datasets uploaded to an existing project for prediction.

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/{datasetId}/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
projectId path string true The project ID that owns the data.
datasetId path string true The dataset ID to query for.

Example responses

200 Response

{
  "actualValueColumn": "string",
  "catalogId": "string",
  "catalogVersionId": "string",
  "containsTargetValues": true,
  "created": "2019-08-24T14:15:22Z",
  "dataEndDate": "2019-08-24T14:15:22Z",
  "dataQualityWarnings": {
    "hasKiaMissingValuesInForecastWindow": true,
    "insufficientRowsForEvaluatingModels": true,
    "singleClassActualValueColumn": true
  },
  "dataStartDate": "2019-08-24T14:15:22Z",
  "detectedActualValueColumns": [
    {
      "missingCount": 0,
      "name": "string"
    }
  ],
  "forecastPoint": "string",
  "forecastPointRange": [
    "2019-08-24T14:15:22Z"
  ],
  "id": "string",
  "maxForecastDate": "2019-08-24T14:15:22Z",
  "name": "string",
  "numColumns": 0,
  "numRows": 0,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "projectId": "string",
  "secondaryDatasetsConfigId": "string"
}

Responses

Status Meaning Description Schema
200 OK Request to retrieve the metadata of a specified dataset was successful. PredictionDatasetRetrieveResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictions/

Get a list of prediction records.

Deprecated in v2.21: Use GET /api/v2/projects/{projectId}/predictionsMetadata/ instead. The only difference is that the parameter datasetId is renamed to predictionDatasetId in both the request and the response.
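
For new code, the equivalent call against the replacement endpoint looks like the following sketch (note the renamed query parameter; the dataset ID is hypothetical):

# Equivalent listing via the replacement endpoint; the dataset ID is hypothetical
curl -X GET 'http://10.97.68.125/api/v2/projects/{projectId}/predictionsMetadata/?offset=0&limit=1000&predictionDatasetId=5f3e10a1c0e7d6b9a0000001' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'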

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/predictions/?offset=0&limit=1000 \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned. To specify no limit, use 0. The default may change and a maximum limit may be imposed without notice.
datasetId query string false Dataset id used to create the predictions
modelId query string false Model id
projectId path string true The project of the predictions.

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "datasetId": "string",
      "explanationAlgorithm": "string",
      "featureDerivationWindowCounts": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "id": "string",
      "includesPredictionIntervals": true,
      "maxExplanations": 0,
      "modelId": "string",
      "predictionDatasetId": "string",
      "predictionIntervalsSize": 0,
      "predictionThreshold": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "shapWarnings": {
        "maxNormalizedMismatch": 0,
        "mismatchRowCount": 0
      },
      "url": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Responses

Status Meaning Description Schema
200 OK The json array of prediction metadata objects. RetrieveListPredictionMetadataObjectsResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/predictions/

There are two ways of making predictions. The recommended way is to first upload your dataset to the project and then, using the corresponding datasetId, predict against that dataset. To follow that pattern, send the JSON request body.

Note that requesting prediction intervals will automatically trigger backtesting if backtests were not already completed for this model.

The legacy method, which is deprecated, is to send the file directly with the prediction request. If you need to predict against a file 10MB in size or larger, you must use the dataset-upload workflow described above. However, the following multipart/form-data fields can be used with small files:

  • file: a dataset to make predictions on
  • modelId: the model to use to make predictions

Note: If the legacy method of uploading data to this endpoint is used, a new dataset is created behind the scenes. For performance reasons, it is much better to create the dataset first and then use the supported method of making predictions against this endpoint. However, to preserve existing workflows, the legacy method remains available.
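
For completeness, a minimal sketch of the deprecated multipart request (the file name and model ID are hypothetical):

# Legacy, deprecated: send a small file (under 10MB) directly with the request
curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictions/ \
  -H 'Authorization: Bearer {access-token}' \
  -F 'file=@to-predict.csv' \
  -F 'modelId=5f3e10a1c0e7d6b9a0000002'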

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictions/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "actualValueColumn": "string",
  "datasetId": "string",
  "explanationAlgorithm": "shap",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "includeFdwCounts": false,
  "includePredictionIntervals": true,
  "maxExplanations": 1,
  "modelId": "string",
  "predictionIntervalsSize": 1,
  "predictionThreshold": 1,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z"
}

Parameters

Name In Type Required Description
projectId path string true The project to make predictions within.
Content-Type header string true Content types available for making the request. multipart/form-data is the legacy, deprecated method for sending a small file with the prediction request.
body body CreatePredictionFromDataset false none

Enumerated Values

Parameter Value
Content-Type application/json
Content-Type multipart/form-data

Responses

Status Meaning Description Schema
202 Accepted Prediction has successfully been requested. See Location header. None
422 Unprocessable Entity The request cannot be processed. None

Response Headers

Status Header Type Format Description
202 Location string A URL that can be polled to check the status of the predictions, as with GET /api/v2/projects/{projectId}/predictJobs/{jobId}/.

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictions/{predictionId}/

Retrieve predictions that have previously been computed, encoded as either JSON or CSV. If CSV output was requested, the returned CSV data will contain the following columns:

  • For regression projects: row_id and prediction.
  • For binary classification projects: row_id, prediction, class_<positive_class_label> and class_<negative_class_label>.
  • For multiclass projects: row_id, prediction and a class_<class_label> for each class.
  • For multilabel projects: row_id and for each class prediction_<class_label> and class_<class_label>.
  • For time-series, these additional columns will be added: forecast_point, forecast_distance, timestamp, and series_id.

New in v2.21: If `explanationAlgorithm` = 'shap', these additional columns will be added: triplets of (`Explanation_<i>_feature_name`, `Explanation_<i>_feature_value`, and `Explanation_<i>_strength`) for `i` ranging from 1 to `maxExplanations`, plus `shap_remaining_total` and `shap_base_value`. Binary classification projects will also have `explained_class`, the class for which positive SHAP values imply an increased probability.

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/predictions/{predictionId}/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'
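
To receive the CSV encoding instead, request it via the Accept header (a sketch; the output columns follow the lists above):

# Retrieve the same predictions encoded as CSV
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/predictions/{predictionId}/ \
  -H 'Accept: text/csv' \
  -H 'Authorization: Bearer {access-token}'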

Parameters

Name In Type Required Description
shapMulticlassLevel query string false Required in multiclass projects with SHAP prediction explanations. This parameter specifies which of the target classes (levels) you would like to retrieve explanations for. This will NOT affect a non-multiclass project.
predictionId path string true The id of the prediction record to retrieve. If you have the jobId, you can retrieve the predictionId using GET /api/v2/projects/{projectId}/predictJobs/{jobId}/.
projectId path string true The id of the project the prediction belongs to.
Accept header string false Requested MIME type for the returned data

Enumerated Values

Parameter Value
Accept application/json
Accept text/csv

Example responses

200 Response

{
  "actualValueColumn": "string",
  "explanationAlgorithm": "string",
  "featureDerivationWindowCounts": 0,
  "includesPredictionIntervals": true,
  "maxExplanations": 0,
  "positiveClass": "string",
  "predictionIntervalsSize": 0,
  "predictions": [
    {
      "actualValue": "string",
      "forecastDistance": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "originalFormatTimestamp": "string",
      "positiveProbability": 0,
      "prediction": 0,
      "predictionExplanationMetadata": [
        {
          "shapRemainingTotal": 0
        }
      ],
      "predictionExplanations": [
        {
          "feature": "string",
          "featureValue": 0,
          "label": "string",
          "strength": 0
        }
      ],
      "predictionIntervalLowerBound": 0,
      "predictionIntervalUpperBound": 0,
      "predictionThreshold": 1,
      "predictionValues": [
        {
          "label": "string",
          "threshold": 1,
          "value": 0
        }
      ],
      "rowId": 0,
      "segmentId": "string",
      "seriesId": "string",
      "target": "string",
      "timestamp": "2019-08-24T14:15:22Z"
    }
  ],
  "shapBaseValue": 0,
  "shapWarnings": [
    {
      "maxNormalizedMismatch": 0,
      "mismatchRowCount": 0
    }
  ],
  "task": "Regression"
}

Responses

Status Meaning Description Schema
200 OK Predictions that have previously been computed. PredictionRetrieveResponse
404 Not Found No prediction data found. None

Response Headers

Status Header Type Format Description
200 Content-Type string MIME type of the returned data

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictionsMetadata/

Get a list of prediction metadata records.

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/predictionsMetadata/?offset=0&limit=1000 \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned. To specify no limit, use 0. The default may change and a maximum limit may be imposed without notice.
predictionDatasetId query string false Dataset id used to create the predictions
modelId query string false Model id
projectId path string true The project of the predictions.
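
For example, to narrow the listing to predictions computed by one model against one dataset, the filters can be combined (both IDs are hypothetical):

# List prediction metadata filtered by model and dataset; both IDs are hypothetical
curl -X GET 'http://10.97.68.125/api/v2/projects/{projectId}/predictionsMetadata/?offset=0&limit=100&modelId=5f3e10a1c0e7d6b9a0000002&predictionDatasetId=5f3e10a1c0e7d6b9a0000001' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'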

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "datasetId": "string",
      "explanationAlgorithm": "string",
      "featureDerivationWindowCounts": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "id": "string",
      "includesPredictionIntervals": true,
      "maxExplanations": 0,
      "modelId": "string",
      "predictionDatasetId": "string",
      "predictionIntervalsSize": 0,
      "predictionThreshold": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "shapWarnings": {
        "maxNormalizedMismatch": 0,
        "mismatchRowCount": 0
      },
      "url": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Responses

Status Meaning Description Schema
200 OK The json array of prediction metadata objects. RetrieveListPredictionMetadataObjectsResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/predictionsMetadata/{predictionId}/

Retrieve a single prediction metadata object by its ID.

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/predictionsMetadata/{predictionId}/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
predictionId path string true The id of the prediction record to retrieve. If you have the jobId, you can retrieve the predictionId using GET /api/v2/projects/{projectId}/predictJobs/{jobId}/.
projectId path string true The id of the project the prediction belongs to.

Example responses

200 Response

{
  "actualValueColumn": "string",
  "datasetId": "string",
  "explanationAlgorithm": "string",
  "featureDerivationWindowCounts": 0,
  "forecastPoint": "2019-08-24T14:15:22Z",
  "id": "string",
  "includesPredictionIntervals": true,
  "maxExplanations": 0,
  "modelId": "string",
  "predictionDatasetId": "string",
  "predictionIntervalsSize": 0,
  "predictionThreshold": 0,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "projectId": "string",
  "shapWarnings": {
    "maxNormalizedMismatch": 0,
    "mismatchRowCount": 0
  },
  "url": "string"
}

Responses

Status Meaning Description Schema
200 OK Prediction metadata object. RetrievePredictionMetadataObject
404 Not Found Training predictions not found. None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/trainingPredictions/

Get a list of training prediction records

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/trainingPredictions/?offset=0&limit=0 \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned
projectId path string true Project ID to retrieve training predictions for

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "dataSubset": "all",
      "explanationAlgorithm": "shap",
      "id": "string",
      "maxExplanations": 100,
      "modelId": "string",
      "shapWarnings": [
        {
          "partitionName": "string",
          "value": {
            "maxNormalizedMismatch": 0,
            "mismatchRowCount": 0
          }
        }
      ],
      "url": "http://example.com"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Responses

Status Meaning Description Schema
200 OK A list of training prediction jobs TrainingPredictionsListResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

POST /api/v2/projects/{projectId}/trainingPredictions/

Create training data predictions

Code samples

# You can also use wget
curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/trainingPredictions/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "dataSubset": "all",
  "explanationAlgorithm": "string",
  "maxExplanations": 1,
  "modelId": "string"
}

Parameters

Name In Type Required Description
projectId path string true Project ID to compute training predictions for
body body CreateTrainingPrediction false none

Responses

Status Meaning Description Schema
200 OK Submitted successfully. See Location header. None

Response Headers

Status Header Type Format Description
200 Location string URL for tracking async job status.
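
Since the job is asynchronous, a client typically captures the Location header and polls it. A minimal shell sketch (the request body values are hypothetical):

# Submit the job and print the tracking URL from the Location header
curl -si -X POST http://10.97.68.125/api/v2/projects/{projectId}/trainingPredictions/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{"dataSubset": "all", "modelId": "5f3e10a1c0e7d6b9a0000002"}' | grep -i '^location:'

# Poll the returned URL until the job completes
curl -X GET '{location-url}' \
  -H 'Authorization: Bearer {access-token}'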

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/projects/{projectId}/trainingPredictions/{predictionId}/

Retrieve training predictions that have previously been computed

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/projects/{projectId}/trainingPredictions/{predictionId}/?offset=0&limit=0 \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
offset query integer true This many results will be skipped
limit query integer true At most this many results are returned
projectId path string true Project ID to retrieve training predictions for
predictionId path string true Prediction ID to retrieve training predictions for
Accept header string false Requested MIME type for the returned data

Enumerated Values

Parameter Value
Accept application/json
Accept text/csv
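
As with regular predictions, the CSV encoding can be requested via the Accept header, and large result sets can be paged with offset and limit (a sketch):

# Retrieve training predictions as CSV, one page of 1000 rows at a time
curl -X GET 'http://10.97.68.125/api/v2/projects/{projectId}/trainingPredictions/{predictionId}/?offset=0&limit=1000' \
  -H 'Accept: text/csv' \
  -H 'Authorization: Bearer {access-token}'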

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "forecastDistance": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "partitionId": "string",
      "prediction": 0,
      "predictionExplanations": [
        {
          "feature": "string",
          "featureValue": 0,
          "label": "string",
          "strength": 0
        }
      ],
      "predictionThreshold": 1,
      "predictionValues": [
        {
          "label": "string",
          "threshold": 1,
          "value": 0
        }
      ],
      "rowId": 0,
      "seriesId": "string",
      "shapMetadata": {
        "shapBaseValue": 0,
        "shapRemainingTotal": 0,
        "warnings": [
          {
            "maxNormalizedMismatch": 0,
            "mismatchRowCount": 0
          }
        ]
      },
      "timestamp": "2019-08-24T14:15:22Z"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Responses

Status Meaning Description Schema
200 OK Training predictions encoded either as JSON or CSV string
404 Not Found Job does not exist or is not completed None

Response Headers

Status Header Type Format Description
200 Content-Type string MIME type of the returned data

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/scheduledJobs/

Get a list of scheduled batch prediction jobs that the user can view

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/scheduledJobs/?offset=0&limit=20 \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
offset query integer true The number of scheduled jobs to skip. Defaults to 0.
limit query integer true The number of scheduled jobs to return (max 100). Defaults to 20.
orderBy query string false The order in which to sort the scheduled jobs. Defaults to the last successful run timestamp, in descending order.
search query string false Case-insensitive search against scheduled job names or type names.
deploymentId query string false Filter by the prediction integration deployment ID. Ignored for non-prediction-integration type IDs.
typeId query string false Filter by scheduled job type ID.
integrationTypeName query string false Filter by integration type name.
queryByUser query string false Which user field to filter with.
filterEnabled query string false Filter jobs by the enabled field. If true, only enabled jobs are returned; if false, only disabled jobs are returned. By default, both enabled and disabled jobs are returned.

Enumerated Values

Parameter Value
typeId predictionIntegration
typeId datasetRefresh
integrationTypeName sql
integrationTypeName tableau
integrationTypeName snowflake
integrationTypeName kdb
queryByUser createdBy
queryByUser updatedBy
filterEnabled false
filterEnabled False
filterEnabled true
filterEnabled True
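
Combining these filters is straightforward; for example, to find enabled Snowflake integration jobs whose names match a search term (the term is hypothetical):

# List enabled Snowflake scheduled jobs whose names match "churn"
curl -X GET 'http://10.97.68.125/api/v2/scheduledJobs/?offset=0&limit=20&integrationTypeName=snowflake&filterEnabled=true&search=churn' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'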

Example responses

200 Response

{
  "count": 0,
  "data": [
    {
      "createdBy": "string",
      "deploymentId": "string",
      "enabled": true,
      "id": "string",
      "integrationTypeId": "string",
      "integrationTypeName": "sql",
      "name": "string",
      "schedule": {
        "dayOfMonth": [
          "*"
        ],
        "dayOfWeek": [
          "*"
        ],
        "hour": [
          "*"
        ],
        "minute": [
          "*"
        ],
        "month": [
          "*"
        ]
      },
      "scheduledJobId": "string",
      "status": {
        "lastFailedRun": "2019-08-24T14:15:22Z",
        "lastSuccessfulRun": "2019-08-24T14:15:22Z",
        "nextRunTime": "2019-08-24T14:15:22Z",
        "queuePosition": 0,
        "running": true
      },
      "typeId": "string",
      "updatedAt": "2019-08-24T14:15:22Z"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0,
  "updatedAt": "2019-08-24T14:15:22Z",
  "updatedBy": "string"
}

Responses

Status Meaning Description Schema
200 OK A list of scheduled batch prediction jobs ScheduledJobsListResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

DELETE /api/v2/scheduledJobs/{jobId}/

Delete scheduled job

Code samples

# You can also use wget
curl -X DELETE http://10.97.68.125/api/v2/scheduledJobs/{jobId}/ \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
jobId path string true The ID of the job being requested.

Responses

Status Meaning Description Schema
204 No Content Job deleted None
404 Not Found Job was not found None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

GET /api/v2/scheduledJobs/{jobId}/

Get a scheduled batch prediction job

Code samples

# You can also use wget
curl -X GET http://10.97.68.125/api/v2/scheduledJobs/{jobId}/ \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer {access-token}'

Parameters

Name In Type Required Description
jobId path string true The ID of the job being requested.

Example responses

200 Response

{
  "createdBy": "string",
  "deploymentId": "string",
  "enabled": true,
  "id": "string",
  "integrationTypeId": "string",
  "integrationTypeName": "sql",
  "name": "string",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "scheduledJobId": "string",
  "status": {
    "lastFailedRun": "2019-08-24T14:15:22Z",
    "lastSuccessfulRun": "2019-08-24T14:15:22Z",
    "nextRunTime": "2019-08-24T14:15:22Z",
    "queuePosition": 0,
    "running": true
  },
  "typeId": "string",
  "updatedAt": "2019-08-24T14:15:22Z"
}

Responses

Status Meaning Description Schema
200 OK A scheduled batch prediction job ScheduledJobResponse

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

PATCH /api/v2/scheduledJobs/{jobId}/

Run or stop a previously created scheduled integration job

Code samples

# You can also use wget
curl -X PATCH http://10.97.68.125/api/v2/scheduledJobs/{jobId}/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}'

Body parameter

{
  "status": {
    "running": true
  }
}
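
The same body with running set to false stops a job that is currently running. A sketch:

# Stop a job that is currently running
curl -X PATCH http://10.97.68.125/api/v2/scheduledJobs/{jobId}/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{"status": {"running": false}}'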

Parameters

Name In Type Required Description
jobId path string true The ID of the job being requested.
body body ScheduledJobRunStop false none

Responses

Status Meaning Description Schema
204 No Content Job was either started or stopped None
403 Forbidden User does not have permission to run the job None
404 Not Found Scheduled job does not exist None
422 Unprocessable Entity Scheduled job is already stopped or is already running None

To perform this operation, you must be authenticated by means of one of the following methods:

BearerAuth

Schemas

ActualValueColumnInfo

{
  "missingCount": 0,
  "name": "string"
}

Properties

Name Type Required Restrictions Description
missingCount integer true none Count of the missing values in the column.
name string true none Name of the column.

AzureDataStreamer

{
  "credentialId": "string",
  "format": "csv",
  "type": "azure",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
format string false none Type of input file format
type string true none Type name for this intake type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format csv
format parquet
type azure

AzureIntake

{
  "credentialId": "string",
  "format": "csv",
  "type": "azure",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false none Use the specified credential to access the url
format string false none Type of input file format
type string true none Type name for this intake type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
format csv
format parquet
type azure

AzureOutput

{
  "credentialId": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "type": "azure",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false none Use the specified credential to access the url
format string false none Type of output file format
partitionColumns [string] false none For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (that is, if the output url ends with a slash ("/")).
type string true none Type name for this output type
url string(url) true none URL for the file or directory

Enumerated Values

Property Value
format csv
format parquet
type azure

AzureOutputAdaptor

{
  "credentialId": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "type": "azure",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
format string false none Type of output file format
partitionColumns [string] false none For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (that is, if the output url ends with a slash ("/")).
type string true none Type name for this output type
url string(url) true none URL for the file or directory

Enumerated Values

Property Value
anonymous [redacted]
format csv
format parquet
type azure

BasicCredentials

{
  "credentialType": "basic",
  "password": "string",
  "user": "string"
}

Properties

Name Type Required Restrictions Description
credentialType string true none The type of these credentials, 'basic' here.
password string true none The password for database authentication. The password is encrypted at rest and never saved / stored.
user string true none The username for database authentication.

Enumerated Values

Property Value
credentialType basic

BatchPredictionCreatedBy

{
  "fullName": "string",
  "userId": "string",
  "username": "string"
}

Properties

Name Type Required Restrictions Description
fullName string¦null true none The full name of the user who created this job (if defined by the user)
userId string true none The User ID of the user who created this job
username string true none The username (e-mail address) of the user who created this job

BatchPredictionJobCSVSettings

{
  "delimiter": ",",
  "encoding": "utf-8",
  "quotechar": "\""
}

Properties

Name Type Required Restrictions Description
delimiter any true none CSV fields are delimited by this character. Use the string "tab" to denote TSV (TAB separated values).

oneOf

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
encoding string true none The encoding to be used for intake and output. For example (but not limited to): "shift_jis", "latin_1" or "mskanji".
quotechar string true none Fields containing the delimiter or newlines must be quoted using this character.

Enumerated Values

Property Value
anonymous tab
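
For example, a job reading and writing TAB-separated values would use the named "tab" delimiter, with the remaining fields left at their usual values (a sketch):

{
  "delimiter": "tab",
  "encoding": "utf-8",
  "quotechar": "\""
}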

BatchPredictionJobCreate

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "type": "localFile"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionWarningEnabled": true,
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean true none Should this job abort if too many errors are encountered
chunkSize any false none Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous integer false none none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false none Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false none Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchPredictionJobRemapping] false none Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchPredictionJobCSVSettings true none The CSV settings used for this job
deploymentId string false none ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling boolean true none Skip row-by-row error handling
explanationAlgorithm string false none Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false none List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses integer false none Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus boolean true none Include prediction status column in the output
includeProbabilities boolean true none Include probabilities for all classes
includeProbabilitiesClasses [string] true none Include only probabilities for these specific class names.
intakeSettings any true none The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureIntake false none Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous BigQueryIntake false none Stream CSV data chunks from Big Query using GCS

xor

Name Type Required Restrictions Description
» anonymous DataStageIntake false none Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous Catalog false none Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous DSS false none Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemIntake false none none

xor

Name Type Required Restrictions Description
» anonymous GCPIntake false none Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPIntake false none Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCIntake false none Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileIntake false none Stream CSV data chunks from local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Intake false none Stream CSV data chunks from Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeIntake false none Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseIntake false none Stream CSV data chunks from Azure Synapse

continued

Name Type Required Restrictions Description
maxExplanations integer true none Number of explanations requested. Will be ordered by strength.
modelId string false none ID of the leaderboard model used by this job to process the predictions dataset
modelPackageId string false none ID of the model package (from the Model Registry) used by this job to process the predictions dataset
monitoringBatchPrefix string¦null false none Name of the batch to create with this job
numConcurrent integer false none Number of simultaneous requests to run against the prediction instance
outputSettings any true none The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureOutput false none Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutput false none Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous FileSystemOutput false none none

xor

Name Type Required Restrictions Description
» anonymous GCPOutput false none Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPOutput false none Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JDBCOutput false none Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutput false none Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Output false none Saves CSV data chunks to Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutput false none Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutput false none Save CSV data chunks to Azure Synapse in bulk

xor

Name Type Required Restrictions Description
» anonymous Tableau false none Save CSV data chunks to local file storage as .hyper file

continued

Name Type Required Restrictions Description
passthroughColumns [string] false none Pass through columns from the original dataset
passthroughColumnsSet string false none Pass through all columns from the original dataset
pinnedModelId string false none Specify a model ID used for scoring
predictionInstance BatchPredictionJobPredictionInstance false none Override the default prediction instance from the deployment when scoring this job.
predictionWarningEnabled boolean¦null false none Enable prediction warnings.
skipDriftTracking boolean true none Skip drift tracking for this job.
thresholdHigh number false none Compute explanations for predictions above this threshold
thresholdLow number false none Compute explanations for predictions below this threshold
timeseriesSettings any false none Time series settings to include if this job is a time series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecast false none none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsHistorical false none none

Enumerated Values

Property Value
anonymous auto
anonymous fixed
anonymous dynamic
explanationAlgorithm shap
explanationAlgorithm xemp
passthroughColumnsSet all
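
Tying the required fields together, a minimal deployment-based job that streams intake from Azure (per the AzureIntake schema above) and writes to local file storage might look like the following sketch (the IDs, credential, and URL are hypothetical):

{
  "deploymentId": "5f3e10a1c0e7d6b9a0000003",
  "abortOnError": true,
  "disableRowLevelErrorHandling": false,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "maxExplanations": 0,
  "skipDriftTracking": false,
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "intakeSettings": {
    "type": "azure",
    "url": "https://myaccount.blob.core.windows.net/container/to-predict.csv",
    "credentialId": "5f3e10a1c0e7d6b9a0000004"
  },
  "outputSettings": {
    "type": "localFile"
  }
}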

BatchPredictionJobDefinitionId

{
  "jobDefinitionId": "string"
}

Properties

Name Type Required Restrictions Description
jobDefinitionId string true none ID of the Batch Prediction job definition

BatchPredictionJobDefinitionJobSpecResponse

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "type": "localFile"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionWarningEnabled": true,
  "redactedFields": [
    "string"
  ],
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean true none Should this job abort if too many errors are encountered
chunkSize any false none Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous integer false none none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false none Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false none Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchPredictionJobRemapping] false none Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchPredictionJobCSVSettings true none The CSV settings used for this job
deploymentId string false none ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling boolean true none Skip row-by-row error handling
explanationAlgorithm string false none Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false none List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses integer false none Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus boolean true none Include prediction status column in the output
includeProbabilities boolean true none Include probabilities for all classes
includeProbabilitiesClasses [string] true none Include only probabilities for these specific class names.
intakeSettings any true none The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous DataStageDataStreamer false none Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous DSSDataStreamer false none Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous CatalogDataStreamer false none Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemDataStreamer false none none

xor

Name Type Required Restrictions Description
» anonymous HTTPDataStreamer false none Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCDataStreamer false none Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileDataStreamer false none Stream CSV data chunks from local file storage

xor

Name Type Required Restrictions Description
» anonymous AzureDataStreamer false none Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous GCPDataStreamer false none Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryDataStreamer false none Stream CSV data chunks from Big Query using GCS

xor

Name Type Required Restrictions Description
» anonymous S3DataStreamer false none Stream CSV data chunks from Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeDataStreamer false none Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseDataStreamer false none Stream CSV data chunks from Azure Synapse

continued

Name Type Required Restrictions Description
maxExplanations integer true none Number of explanations requested. Will be ordered by strength.
modelId string false none ID of the leaderboard model used by this job to process the predictions dataset
modelPackageId string false none ID of the model package (from the Model Registry) used by this job to process the predictions dataset
monitoringBatchPrefix string¦null false none Name of the batch to create with this job
numConcurrent integer false none Number of simultaneous requests to run against the prediction instance
outputSettings any true none The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous FileSystemOutputAdaptor false none none

xor

Name Type Required Restrictions Description
» anonymous HttpOutputAdaptor false none Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JdbcOutputAdaptor false none Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutputAdaptor false none Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous TableauOutputAdaptor false none Save CSV data chunks to local file storage as .hyper file

xor

Name Type Required Restrictions Description
» anonymous AzureOutputAdaptor false none Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous GCPOutputAdaptor false none Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutputAdaptor false none Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous S3OutputAdaptor false none Saves CSV data chunks to Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutputAdaptor false none Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutputAdaptor false none Save CSV data chunks to Azure Synapse in bulk

continued

Name Type Required Restrictions Description
passthroughColumns [string] false none Pass through columns from the original dataset
passthroughColumnsSet string false none Pass through all columns from the original dataset
pinnedModelId string false none Specify a model ID used for scoring
predictionInstance BatchPredictionJobPredictionInstance false none Override the default prediction instance from the deployment when scoring this job.
predictionWarningEnabled boolean¦null false none Enable prediction warnings.
redactedFields [string] true none A list of qualified field names from intakeSettings and/or outputSettings that were redacted due to permissions and sharing settings. For example: intakeSettings.dataStoreId
skipDriftTracking boolean true none Skip drift tracking for this job.
thresholdHigh number false none Compute explanations for predictions above this threshold
thresholdLow number false none Compute explanations for predictions below this threshold
timeseriesSettings any false none Time series settings to include if this job is a time series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecast false none none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecastWithPolicy false none none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsHistorical false none none

Enumerated Values

Property Value
anonymous auto
anonymous fixed
anonymous dynamic
explanationAlgorithm shap
explanationAlgorithm xemp
passthroughColumnsSet all

BatchPredictionJobDefinitionResponse

{
  "createdBy": "string",
  "id": "string",
  "name": "string"
}

Properties

Name Type Required Restrictions Description
createdBy string true none The ID of creator of this job definition
id string true none The ID of the Batch Prediction job definition
name string true none A human-readable name for the definition; must be unique across organisations

BatchPredictionJobDefinitionsCreate

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "enabled": true,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "name": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "type": "localFile"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionWarningEnabled": true,
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean true none Should this job abort if too many errors are encountered
chunkSize any false none Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous integer false none none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false none Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false none Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchPredictionJobRemapping] false none Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchPredictionJobCSVSettings true none The CSV settings used for this job
deploymentId string false none ID of the deployment used by this job to process the predictions dataset
disableRowLevelErrorHandling boolean true none Skip row-by-row error handling
enabled boolean false none Whether this job definition is enabled as a scheduled job. Optional if no schedule is supplied.
explanationAlgorithm string false none Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false none List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed.
explanationNumTopClasses integer false none Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed.
includePredictionStatus boolean true none Include prediction status column in the output
includeProbabilities boolean true none Include probabilities for all classes
includeProbabilitiesClasses [string] true none Include only probabilities for these specific class names.
intakeSettings any true none The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureIntake false none Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous BigQueryIntake false none Stream CSV data chunks from Big Query using GCS

xor

Name Type Required Restrictions Description
» anonymous DataStageIntake false none Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous Catalog false none Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous DSS false none Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemIntake false none none

xor

Name Type Required Restrictions Description
» anonymous GCPIntake false none Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPIntake false none Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCIntake false none Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileIntake false none Stream CSV data chunks from local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Intake false none Stream CSV data chunks from Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeIntake false none Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseIntake false none Stream CSV data chunks from Azure Synapse

continued

Name Type Required Restrictions Description
maxExplanations integer true none Number of explanations requested. Will be ordered by strength.
modelId string false none ID of the leaderboard model used by this job to process the predictions dataset
modelPackageId string false none ID of the model package (from the Model Registry) used by this job to process the predictions dataset
monitoringBatchPrefix string¦null false none Name of the batch to create with this job
name string false none A human-readable name for the definition; must be unique across organisations. If left out, the backend will generate one for you.
numConcurrent integer false none Number of simultaneous requests to run against the prediction instance
outputSettings any true none The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureOutput false none Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutput false none Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous FileSystemOutput false none none

xor

Name Type Required Restrictions Description
» anonymous GCPOutput false none Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPOutput false none Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JDBCOutput false none Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutput false none Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Output false none Saves CSV data chunks to Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutput false none Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutput false none Save CSV data chunks to Azure Synapse in bulk

xor

Name Type Required Restrictions Description
» anonymous Tableau false none Save CSV data chunks to local file storage as .hyper file

continued

Name Type Required Restrictions Description
passthroughColumns [string] false none Pass through columns from the original dataset
passthroughColumnsSet string false none Pass through all columns from the original dataset
pinnedModelId string false none Specify a model ID used for scoring
predictionInstance BatchPredictionJobPredictionInstance false none Override the default prediction instance from the deployment when scoring this job.
predictionWarningEnabled boolean¦null false none Enable prediction warnings.
schedule Schedule false none The scheduling information defining how often and when to execute this job to the Job Scheduling service. Optional if enabled = False.
skipDriftTracking boolean true none Skip drift tracking for this job.
thresholdHigh number false none Compute explanations for predictions above this threshold
thresholdLow number false none Compute explanations for predictions below this threshold
timeseriesSettings any false none Time series settings to include if this job is a time series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecast false none none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecastWithPolicy false none none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsHistorical false none none

Enumerated Values

Property Value
anonymous auto
anonymous fixed
anonymous dynamic
explanationAlgorithm shap
explanationAlgorithm xemp
passthroughColumnsSet all

BatchPredictionJobDefinitionsListResponse

{
  "count": 0,
  "data": [
    {
      "batchPredictionJob": {
        "abortOnError": true,
        "chunkSize": "auto",
        "columnNamesRemapping": {},
        "csvSettings": {
          "delimiter": ",",
          "encoding": "utf-8",
          "quotechar": "\""
        },
        "deploymentId": "string",
        "disableRowLevelErrorHandling": false,
        "explanationAlgorithm": "shap",
        "explanationClassNames": [
          "string"
        ],
        "explanationNumTopClasses": 1,
        "includePredictionStatus": false,
        "includeProbabilities": true,
        "includeProbabilitiesClasses": [],
        "intakeSettings": {
          "type": "localFile"
        },
        "maxExplanations": 0,
        "modelId": "string",
        "modelPackageId": "string",
        "monitoringBatchPrefix": "string",
        "numConcurrent": 1,
        "outputSettings": {
          "type": "localFile"
        },
        "passthroughColumns": [
          "string"
        ],
        "passthroughColumnsSet": "all",
        "pinnedModelId": "string",
        "predictionInstance": {
          "apiKey": "string",
          "datarobotKey": "string",
          "hostName": "string",
          "sslEnabled": true
        },
        "predictionWarningEnabled": true,
        "redactedFields": [
          "string"
        ],
        "skipDriftTracking": false,
        "thresholdHigh": 0,
        "thresholdLow": 0,
        "timeseriesSettings": {
          "forecastPoint": "2019-08-24T14:15:22Z",
          "relaxKnownInAdvanceFeaturesCheck": false,
          "type": "forecast"
        }
      },
      "created": "2019-08-24T14:15:22Z",
      "createdBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      },
      "enabled": false,
      "id": "string",
      "lastFailedRunTime": "2019-08-24T14:15:22Z",
      "lastScheduledRunTime": "2019-08-24T14:15:22Z",
      "lastStartedJobStatus": "INITIALIZING",
      "lastStartedJobTime": "2019-08-24T14:15:22Z",
      "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
      "name": "string",
      "nextScheduledRunTime": "2019-08-24T14:15:22Z",
      "schedule": {
        "dayOfMonth": [
          "*"
        ],
        "dayOfWeek": [
          "*"
        ],
        "hour": [
          "*"
        ],
        "minute": [
          "*"
        ],
        "month": [
          "*"
        ]
      },
      "updated": "2019-08-24T14:15:22Z",
      "updatedBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      }
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0
}

Properties

Name Type Required Restrictions Description
count integer false none Number of items returned on this page.
data [BatchPredictionJobDefinitionsResponse] true none An array of scheduled jobs
next string(uri)¦null true none URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true none URL pointing to the previous page (if null, there is no previous page).
totalCount integer true none The total number of items across all pages.

BatchPredictionJobDefinitionsResponse

{
  "batchPredictionJob": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "enabled": false,
  "id": "string",
  "lastFailedRunTime": "2019-08-24T14:15:22Z",
  "lastScheduledRunTime": "2019-08-24T14:15:22Z",
  "lastStartedJobStatus": "INITIALIZING",
  "lastStartedJobTime": "2019-08-24T14:15:22Z",
  "lastSuccessfulRunTime": "2019-08-24T14:15:22Z",
  "name": "string",
  "nextScheduledRunTime": "2019-08-24T14:15:22Z",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "updated": "2019-08-24T14:15:22Z",
  "updatedBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  }
}

Properties

Name Type Required Restrictions Description
batchPredictionJob BatchPredictionJobDefinitionJobSpecResponse true none The Batch Prediction Job specification to be put on the queue at intervals
created string(date-time) true none When was this job created
createdBy BatchPredictionCreatedBy true none Who created this job
enabled boolean true none If this job definition is enabled as a scheduled job.
id string true none The ID of the Batch Prediction job definition
lastFailedRunTime string(date-time)¦null false none Last time this job had a failed run
lastScheduledRunTime string(date-time)¦null false none Last time this job was scheduled to run (though not guaranteed it actually ran at that time)
lastStartedJobStatus string¦null true none The status of the latest job launched to the queue (if any).
lastStartedJobTime string(date-time)¦null true none The last time (if any) a job was launched.
lastSuccessfulRunTime string(date-time)¦null false none Last time this job had a successful run
name string true none A human-readable name for the definition; must be unique across organisations
nextScheduledRunTime string(date-time)¦null false none Next time this job is scheduled to run
schedule Schedule false none The scheduling information defining how often and when this job is submitted to the Job Scheduling service for execution. Optional if enabled = False.
updated string(date-time) true none When was this job last updated
updatedBy BatchPredictionCreatedBy true none Who updated this job last

Enumerated Values

Property Value
lastStartedJobStatus INITIALIZING
lastStartedJobStatus RUNNING
lastStartedJobStatus COMPLETED
lastStartedJobStatus ABORTED
lastStartedJobStatus FAILED

BatchPredictionJobDefinitionsUpdate

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "enabled": true,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "name": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "type": "localFile"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionWarningEnabled": true,
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean false none Should this job abort if too many errors are encountered
chunkSize any false none Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous integer false none none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false none Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false none Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchPredictionJobRemapping] false none Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchPredictionJobCSVSettings false none The CSV settings used for this job
deploymentId string false none ID of the deployment used by the job to process the predictions dataset
disableRowLevelErrorHandling boolean false none Skip row by row error handling
enabled boolean false none If this job definition is enabled as a scheduled job. Optional if no schedule is supplied.
explanationAlgorithm string false none Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false none List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed
explanationNumTopClasses integer false none Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed
includePredictionStatus boolean false none Include prediction status column in the output
includeProbabilities boolean false none Include probabilities for all classes
includeProbabilitiesClasses [string] false none Include only probabilities for these specific class names.
intakeSettings any false none The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureIntake false none Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous BigQueryIntake false none Stream CSV data chunks from BigQuery using GCS

xor

Name Type Required Restrictions Description
» anonymous DataStageIntake false none Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous Catalog false none Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous DSS false none Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemIntake false none none

xor

Name Type Required Restrictions Description
» anonymous GCPIntake false none Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPIntake false none Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCIntake false none Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileIntake false none Stream CSV data chunks from local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Intake false none Stream CSV data chunks from Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeIntake false none Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseIntake false none Stream CSV data chunks from Azure Synapse

continued

Name Type Required Restrictions Description
maxExplanations integer false none Number of explanations requested. Will be ordered by strength.
modelId string false none ID of the leaderboard model used by the job to process the predictions dataset
modelPackageId string false none ID of the model package from the registry used by the job to process the predictions dataset
monitoringBatchPrefix string¦null false none Name of the batch to create with this job
name string false none A human-readable name for the definition; must be unique across organisations. If left out, the backend will generate one for you.
numConcurrent integer false none Number of simultaneous requests to run against the prediction instance
outputSettings any false none The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous AzureOutput false none Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutput false none Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous FileSystemOutput false none none

xor

Name Type Required Restrictions Description
» anonymous GCPOutput false none Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous HTTPOutput false none Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JDBCOutput false none Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutput false none Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous S3Output false none Save CSV data chunks to Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutput false none Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutput false none Save CSV data chunks to Azure Synapse in bulk

xor

Name Type Required Restrictions Description
» anonymous Tableau false none Save CSV data chunks to local file storage as .hyper file

continued

Name Type Required Restrictions Description
passthroughColumns [string] false none Pass through columns from the original dataset
passthroughColumnsSet string false none Pass through all columns from the original dataset
pinnedModelId string false none Specify a model ID used for scoring
predictionInstance BatchPredictionJobPredictionInstance false none Override the default prediction instance from the deployment when scoring this job.
predictionWarningEnabled boolean¦null false none Enable prediction warnings.
schedule Schedule false none The scheduling information defining how often and when this job is submitted to the Job Scheduling service for execution. Optional if enabled = False.
skipDriftTracking boolean false none Skip drift tracking for this job.
thresholdHigh number false none Compute explanations for predictions above this threshold
thresholdLow number false none Compute explanations for predictions below this threshold
timeseriesSettings any false none Time Series settings, included if this job is a Time Series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecast false none none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecastWithPolicy false none none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsHistorical false none none

Enumerated Values

Property Value
anonymous auto
anonymous fixed
anonymous dynamic
explanationAlgorithm shap
explanationAlgorithm xemp
passthroughColumnsSet all

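As a sketch of how this schema is typically used (the PATCH route and the {definitionId} placeholder are assumptions for illustration; only the request body is defined on this page), enabling a definition and setting a daily 01:30 schedule might look like:

# Hypothetical request; verify the route for your version
curl -X PATCH http://10.97.68.125/api/v2/batchPredictionJobDefinitions/{definitionId}/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{
    "enabled": true,
    "schedule": {
      "minute": [30],
      "hour": [1],
      "dayOfMonth": ["*"],
      "dayOfWeek": ["*"],
      "month": ["*"]
    }
  }'
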
BatchPredictionJobId

{
  "partNumber": 0,
  "predictionJobId": "string"
}

Properties

Name Type Required Restrictions Description
partNumber integer true none The number of the CSV part being uploaded when using multipart upload
predictionJobId string true none ID of the Batch Prediction job

BatchPredictionJobLinks

{
  "csvUpload": "string",
  "download": "string",
  "self": "string"
}

Properties

Name Type Required Restrictions Description
csvUpload string(url) false none The URL used to upload the dataset for this job. Only available for localFile intake.
download string¦null false none The URL used to download the results from this job. Only available for localFile outputs. Will be null if the download is not yet available.
self string(url) true none The URL used to access this job.

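A minimal usage sketch for these links (the PUT method and text/csv content type are assumptions; substitute the csvUpload and download URLs returned for your job):

# Hypothetical example: upload the dataset for a localFile-intake job,
# then download the results once they become available
curl -X PUT "{csvUpload}" \
  -H 'Content-Type: text/csv' \
  -H 'Authorization: Bearer {access-token}' \
  --data-binary @scoring_input.csv

curl -L "{download}" \
  -H 'Authorization: Bearer {access-token}' \
  -o scoring_output.csv
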
BatchPredictionJobListResponse

{
  "count": 0,
  "data": [
    {
      "batchPredictionJobDefinition": {
        "createdBy": "string",
        "id": "string",
        "name": "string"
      },
      "created": "2019-08-24T14:15:22Z",
      "createdBy": {
        "fullName": "string",
        "userId": "string",
        "username": "string"
      },
      "elapsedTimeSec": 0,
      "failedRows": 0,
      "hidden": "2019-08-24T14:15:22Z",
      "id": "string",
      "intakeDatasetDisplayName": "string",
      "jobIntakeSize": 0,
      "jobOutputSize": 0,
      "jobSpec": {
        "abortOnError": true,
        "chunkSize": "auto",
        "columnNamesRemapping": {},
        "csvSettings": {
          "delimiter": ",",
          "encoding": "utf-8",
          "quotechar": "\""
        },
        "deploymentId": "string",
        "disableRowLevelErrorHandling": false,
        "explanationAlgorithm": "shap",
        "explanationClassNames": [
          "string"
        ],
        "explanationNumTopClasses": 1,
        "includePredictionStatus": false,
        "includeProbabilities": true,
        "includeProbabilitiesClasses": [],
        "intakeSettings": {
          "type": "localFile"
        },
        "maxExplanations": 0,
        "modelId": "string",
        "modelPackageId": "string",
        "monitoringBatchPrefix": "string",
        "numConcurrent": 1,
        "outputSettings": {
          "type": "localFile"
        },
        "passthroughColumns": [
          "string"
        ],
        "passthroughColumnsSet": "all",
        "pinnedModelId": "string",
        "predictionInstance": {
          "apiKey": "string",
          "datarobotKey": "string",
          "hostName": "string",
          "sslEnabled": true
        },
        "predictionWarningEnabled": true,
        "redactedFields": [
          "string"
        ],
        "skipDriftTracking": false,
        "thresholdHigh": 0,
        "thresholdLow": 0,
        "timeseriesSettings": {
          "forecastPoint": "2019-08-24T14:15:22Z",
          "relaxKnownInAdvanceFeaturesCheck": false,
          "type": "forecast"
        }
      },
      "links": {
        "csvUpload": "string",
        "download": "string",
        "self": "string"
      },
      "logs": [
        "string"
      ],
      "percentageCompleted": 100,
      "queuePosition": 0,
      "queued": true,
      "resultsDeleted": true,
      "scoredRows": 0,
      "skippedRows": 0,
      "source": "string",
      "status": "INITIALIZING",
      "statusDetails": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0
}

Properties

Name Type Required Restrictions Description
count integer false none Number of items returned on this page.
data [BatchPredictionJobResponse] true none An array of jobs
next string(uri)¦null true none URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true none URL pointing to the previous page (if null, there is no previous page).
totalCount integer true none The total number of items across all pages.

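Since next is null on the last page, a client can walk all pages by following it. A minimal shell sketch (the /api/v2/batchPredictions/ route is an assumption based on this response type, and jq is used for JSON parsing):

# Hypothetical pagination loop over batch prediction jobs
URL='http://10.97.68.125/api/v2/batchPredictions/?offset=0&limit=100'
while [ "$URL" != "null" ]; do
  PAGE=$(curl -s "$URL" -H 'Authorization: Bearer {access-token}')
  echo "$PAGE" | jq -r '.data[].id'   # process each job on this page
  URL=$(echo "$PAGE" | jq -r '.next') # "null" when there is no next page
done
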
BatchPredictionJobPredictionInstance

{
  "apiKey": "string",
  "datarobotKey": "string",
  "hostName": "string",
  "sslEnabled": true
}

Properties

Name Type Required Restrictions Description
apiKey string false none By default, prediction requests will use the API key of the user that created the job. This allows you to make requests on behalf of other users.
datarobotKey string false none If running a job against a prediction instance in the Managed AI Cloud, you must provide the organization level DataRobot-Key.
hostName string true none Override the default host name of the deployment with this.
sslEnabled boolean true none Use SSL (HTTPS) when communicating with the overridden prediction server.

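For example, routing a job to a dedicated prediction server could look like the following (the host name and key placeholders are illustrative):

"predictionInstance": {
  "hostName": "dedicated-predictions.example.com",
  "sslEnabled": true,
  "datarobotKey": "{datarobot-key}",
  "apiKey": "{api-key}"
}
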
BatchPredictionJobRemapping

{
  "inputName": "string",
  "outputName": "string"
}

Properties

Name Type Required Restrictions Description
inputName string true none Rename column with this name
outputName string¦null true none Rename column to this name (leave as null to remove from the output)

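Used inside columnNamesRemapping, a list of these items renames and removes columns; for example (column names are illustrative):

"columnNamesRemapping": [
  {"inputName": "readmitted_probability", "outputName": "risk_score"},
  {"inputName": "internal_row_id", "outputName": null}
]
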
BatchPredictionJobResponse

{
  "batchPredictionJobDefinition": {
    "createdBy": "string",
    "id": "string",
    "name": "string"
  },
  "created": "2019-08-24T14:15:22Z",
  "createdBy": {
    "fullName": "string",
    "userId": "string",
    "username": "string"
  },
  "elapsedTimeSec": 0,
  "failedRows": 0,
  "hidden": "2019-08-24T14:15:22Z",
  "id": "string",
  "intakeDatasetDisplayName": "string",
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "jobSpec": {
    "abortOnError": true,
    "chunkSize": "auto",
    "columnNamesRemapping": {},
    "csvSettings": {
      "delimiter": ",",
      "encoding": "utf-8",
      "quotechar": "\""
    },
    "deploymentId": "string",
    "disableRowLevelErrorHandling": false,
    "explanationAlgorithm": "shap",
    "explanationClassNames": [
      "string"
    ],
    "explanationNumTopClasses": 1,
    "includePredictionStatus": false,
    "includeProbabilities": true,
    "includeProbabilitiesClasses": [],
    "intakeSettings": {
      "type": "localFile"
    },
    "maxExplanations": 0,
    "modelId": "string",
    "modelPackageId": "string",
    "monitoringBatchPrefix": "string",
    "numConcurrent": 1,
    "outputSettings": {
      "type": "localFile"
    },
    "passthroughColumns": [
      "string"
    ],
    "passthroughColumnsSet": "all",
    "pinnedModelId": "string",
    "predictionInstance": {
      "apiKey": "string",
      "datarobotKey": "string",
      "hostName": "string",
      "sslEnabled": true
    },
    "predictionWarningEnabled": true,
    "redactedFields": [
      "string"
    ],
    "skipDriftTracking": false,
    "thresholdHigh": 0,
    "thresholdLow": 0,
    "timeseriesSettings": {
      "forecastPoint": "2019-08-24T14:15:22Z",
      "relaxKnownInAdvanceFeaturesCheck": false,
      "type": "forecast"
    }
  },
  "links": {
    "csvUpload": "string",
    "download": "string",
    "self": "string"
  },
  "logs": [
    "string"
  ],
  "percentageCompleted": 100,
  "queuePosition": 0,
  "queued": true,
  "resultsDeleted": true,
  "scoredRows": 0,
  "skippedRows": 0,
  "source": "string",
  "status": "INITIALIZING",
  "statusDetails": "string"
}

Properties

Name Type Required Restrictions Description
batchPredictionJobDefinition BatchPredictionJobDefinitionResponse false none The Batch Prediction Job Definition linking to this job, if any.
created string(date-time) true none When was this job created
createdBy BatchPredictionCreatedBy true none Who created this job
elapsedTimeSec integer true none Number of seconds the job has been processing for
failedRows integer true none Number of rows that have failed scoring
hidden string(date-time) false none When this job was last hidden; blank if visible
id string true none The ID of the Batch Prediction job
intakeDatasetDisplayName string¦null false none If applicable (e.g. for AI catalog), will contain the dataset name used for the intake dataset.
jobIntakeSize integer¦null true none Number of bytes in the intake dataset for this job
jobOutputSize integer¦null true none Number of bytes in the output dataset for this job
jobSpec BatchPredictionJobSpecResponse true none The job configuration used to create this job
links BatchPredictionJobLinks true none Links useful for this job
logs [string] true none The job log.
percentageCompleted number true none Indicates job progress, based on the number of rows already processed in the dataset
queuePosition integer¦null false none To ensure a dedicated prediction instance is not overloaded, only one job will be run against it at a time. This is the number of jobs awaiting processing before this job starts running. May not be available in all environments.
queued boolean true none The job has been put on the queue for execution.
resultsDeleted boolean false none Indicates if the job was subject to garbage collection and had its artifacts deleted (output files, if any, and scoring data on local storage)
scoredRows integer true none Number of rows that have been used in prediction computation
skippedRows integer true none Number of rows that have been skipped during scoring. May be non-zero only for time series predictions, when the provided dataset contains more historical rows than required.
source string false none Source from which batch job was started
status string true none The current job status
statusDetails string true none Explanation for current status

Enumerated Values

Property Value
status INITIALIZING
status RUNNING
status COMPLETED
status ABORTED
status FAILED

BatchPredictionJobSpecResponse

{
  "abortOnError": true,
  "chunkSize": "auto",
  "columnNamesRemapping": {},
  "csvSettings": {
    "delimiter": ",",
    "encoding": "utf-8",
    "quotechar": "\""
  },
  "deploymentId": "string",
  "disableRowLevelErrorHandling": false,
  "explanationAlgorithm": "shap",
  "explanationClassNames": [
    "string"
  ],
  "explanationNumTopClasses": 1,
  "includePredictionStatus": false,
  "includeProbabilities": true,
  "includeProbabilitiesClasses": [],
  "intakeSettings": {
    "type": "localFile"
  },
  "maxExplanations": 0,
  "modelId": "string",
  "modelPackageId": "string",
  "monitoringBatchPrefix": "string",
  "numConcurrent": 1,
  "outputSettings": {
    "type": "localFile"
  },
  "passthroughColumns": [
    "string"
  ],
  "passthroughColumnsSet": "all",
  "pinnedModelId": "string",
  "predictionInstance": {
    "apiKey": "string",
    "datarobotKey": "string",
    "hostName": "string",
    "sslEnabled": true
  },
  "predictionWarningEnabled": true,
  "redactedFields": [
    "string"
  ],
  "skipDriftTracking": false,
  "thresholdHigh": 0,
  "thresholdLow": 0,
  "timeseriesSettings": {
    "forecastPoint": "2019-08-24T14:15:22Z",
    "relaxKnownInAdvanceFeaturesCheck": false,
    "type": "forecast"
  }
}

Properties

Name Type Required Restrictions Description
abortOnError boolean true none Should this job abort if too many errors are encountered
chunkSize any false none Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

oneOf

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous integer false none none

continued

Name Type Required Restrictions Description
columnNamesRemapping any false none Remap (rename or remove columns from) the output from this job

oneOf

Name Type Required Restrictions Description
» anonymous object false none Provide a dictionary with key/value pairs to remap (deprecated)

xor

Name Type Required Restrictions Description
» anonymous [BatchPredictionJobRemapping] false none Provide a list of items to remap

continued

Name Type Required Restrictions Description
csvSettings BatchPredictionJobCSVSettings true none The CSV settings used for this job
deploymentId string false none ID of the deployment used by the job to process the predictions dataset
disableRowLevelErrorHandling boolean true none Skip row by row error handling
explanationAlgorithm string false none Which algorithm will be used to calculate prediction explanations
explanationClassNames [string] false none List of class names that will be explained for each row for multiclass. Mutually exclusive with explanationNumTopClasses. If neither is specified, explanationNumTopClasses=1 is assumed
explanationNumTopClasses integer false none Number of top predicted classes for each row that will be explained for multiclass. Mutually exclusive with explanationClassNames. If neither is specified, explanationNumTopClasses=1 is assumed
includePredictionStatus boolean true none Include prediction status column in the output
includeProbabilities boolean true none Include probabilities for all classes
includeProbabilitiesClasses [string] true none Include only probabilities for these specific class names.
intakeSettings any true none The intake option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous DataStageDataStreamer false none Stream CSV data chunks from data stage storage

xor

Name Type Required Restrictions Description
» anonymous DSSDataStreamer false none Stream CSV data chunks from DSS dataset

xor

Name Type Required Restrictions Description
» anonymous CatalogDataStreamer false none Stream CSV data chunks from AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous FileSystemDataStreamer false none none

xor

Name Type Required Restrictions Description
» anonymous HTTPDataStreamer false none Stream CSV data chunks from HTTP

xor

Name Type Required Restrictions Description
» anonymous JDBCDataStreamer false none Stream CSV data chunks from JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileDataStreamer false none Stream CSV data chunks from local file storage

xor

Name Type Required Restrictions Description
» anonymous AzureDataStreamer false none Stream CSV data chunks from Azure

xor

Name Type Required Restrictions Description
» anonymous GCPDataStreamer false none Stream CSV data chunks from Google Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryDataStreamer false none Stream CSV data chunks from BigQuery using GCS

xor

Name Type Required Restrictions Description
» anonymous S3DataStreamer false none Stream CSV data chunks from Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeDataStreamer false none Stream CSV data chunks from Snowflake

xor

Name Type Required Restrictions Description
» anonymous SynapseDataStreamer false none Stream CSV data chunks from Azure Synapse

continued

Name Type Required Restrictions Description
maxExplanations integer true none Number of explanations requested. Will be ordered by strength.
modelId string false none ID of the leaderboard model used by the job to process the predictions dataset
modelPackageId string false none ID of the model package from the registry used by the job to process the predictions dataset
monitoringBatchPrefix string¦null false none Name of the batch to create with this job
numConcurrent integer false none Number of simultaneous requests to run against the prediction instance
outputSettings any true none The output option configured for this job

oneOf

Name Type Required Restrictions Description
» anonymous FileSystemOutputAdaptor false none none

xor

Name Type Required Restrictions Description
» anonymous HttpOutputAdaptor false none Save CSV data chunks to HTTP data endpoint

xor

Name Type Required Restrictions Description
» anonymous JdbcOutputAdaptor false none Save CSV data chunks via JDBC

xor

Name Type Required Restrictions Description
» anonymous LocalFileOutputAdaptor false none Save CSV data chunks to local file storage

xor

Name Type Required Restrictions Description
» anonymous TableauOutputAdaptor false none Save CSV data chunks to local file storage as .hyper file

xor

Name Type Required Restrictions Description
» anonymous AzureOutputAdaptor false none Save CSV data chunks to Azure Blob Storage

xor

Name Type Required Restrictions Description
» anonymous GCPOutputAdaptor false none Save CSV data chunks to Google Storage

xor

Name Type Required Restrictions Description
» anonymous BigQueryOutputAdaptor false none Save CSV data chunks to Google BigQuery in bulk

xor

Name Type Required Restrictions Description
» anonymous S3OutputAdaptor false none Save CSV data chunks to Amazon Cloud Storage S3

xor

Name Type Required Restrictions Description
» anonymous SnowflakeOutputAdaptor false none Save CSV data chunks to Snowflake in bulk

xor

Name Type Required Restrictions Description
» anonymous SynapseOutputAdaptor false none Save CSV data chunks to Azure Synapse in bulk

continued

Name Type Required Restrictions Description
passthroughColumns [string] false none Pass through columns from the original dataset
passthroughColumnsSet string false none Pass through all columns from the original dataset
pinnedModelId string false none Specify a model ID used for scoring
predictionInstance BatchPredictionJobPredictionInstance false none Override the default prediction instance from the deployment when scoring this job.
predictionWarningEnabled boolean¦null false none Enable prediction warnings.
redactedFields [string] true none A list of qualified field names from intakeSettings and/or outputSettings that were redacted due to permissions and sharing settings. For example: intakeSettings.dataStoreId
skipDriftTracking boolean true none Skip drift tracking for this job.
thresholdHigh number false none Compute explanations for predictions above this threshold
thresholdLow number false none Compute explanations for predictions below this threshold
timeseriesSettings any false none Time Series settings, included if this job is a Time Series job.

oneOf

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsForecast false none none

xor

Name Type Required Restrictions Description
» anonymous BatchPredictionJobTimeSeriesSettingsHistorical false none none

Enumerated Values

Property Value
anonymous auto
anonymous fixed
anonymous dynamic
explanationAlgorithm shap
explanationAlgorithm xemp
passthroughColumnsSet all

BatchPredictionJobTimeSeriesSettingsForecast

{
  "forecastPoint": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": false,
  "type": "forecast"
}

Properties

Name Type Required Restrictions Description
forecastPoint string(date-time) false none Used for forecast predictions in order to override the inferred forecast point from the dataset.
relaxKnownInAdvanceFeaturesCheck boolean false none If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type string true none Forecast mode makes predictions using the forecastPoint or rows in the dataset without a target.

Enumerated Values

Property Value
type forecast

BatchPredictionJobTimeSeriesSettingsForecastWithPolicy

{
  "forecastPointPolicy": {
    "configuration": {
      "offset": "string"
    },
    "type": "jobRunTimeBased"
  },
  "relaxKnownInAdvanceFeaturesCheck": false,
  "type": "forecast"
}

Properties

Name Type Required Restrictions Description
forecastPointPolicy JobRunTimeBasedForecastPointPolicy true none Forecast point policy
relaxKnownInAdvanceFeaturesCheck boolean false none If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type string true none Forecast mode makes predictions using the forecastPoint or rows in the dataset without a target.

Enumerated Values

Property Value
type forecast

BatchPredictionJobTimeSeriesSettingsHistorical

{
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": false,
  "type": "historical"
}

Properties

Name Type Required Restrictions Description
predictionsEndDate string(date-time) false none Used for historical predictions to override the date up to which predictions should be calculated. By default, the value is inferred automatically from the dataset.
predictionsStartDate string(date-time) false none Used for historical predictions to override the date from which predictions should be calculated. By default, the value is inferred automatically from the dataset.
relaxKnownInAdvanceFeaturesCheck boolean false none If activated, missing values in the known in advance features are allowed in the forecast window at prediction time. If omitted or false, missing values are not allowed.
type string true none Historical mode enables bulk predictions, calculating predictions for all possible forecast points and forecast distances in the dataset within the predictionsStartDate/predictionsEndDate range.

Enumerated Values

Property Value
type historical

BatchPredictionJobUpdate

{
  "aborted": "2019-08-24T14:15:22Z",
  "completed": "2019-08-24T14:15:22Z",
  "failedRows": 0,
  "hidden": true,
  "jobIntakeSize": 0,
  "jobOutputSize": 0,
  "logs": [
    "string"
  ],
  "scoredRows": 0,
  "skippedRows": 0,
  "started": "2019-08-24T14:15:22Z",
  "status": "INITIALIZING"
}

Properties

Name Type Required Restrictions Description
aborted string(date-time)¦null false none Time when job abortion happened
completed string(date-time)¦null false none Time when job completed scoring
failedRows integer false none Number of rows that have failed scoring
hidden boolean false none Hides or unhides the job from the job list
jobIntakeSize integer¦null false none Number of bytes in the intake dataset for this job
jobOutputSize integer¦null false none Number of bytes in the output dataset for this job
logs [string] false none The job log.
scoredRows integer false none Number of rows that have been used in prediction computation
skippedRows integer false none Number of rows that have been skipped during scoring. May be non-zero only for time series predictions, when the provided dataset contains more historical rows than required.
started string(date-time)¦null false none Time when job scoring began
status string false none The current job status

Enumerated Values

Property Value
status INITIALIZING
status RUNNING
status COMPLETED
status ABORTED
status FAILED

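As a sketch (the PATCH route on an individual job is an assumption for illustration; only the body schema is defined on this page), hiding a finished job from the job list might look like:

# Hypothetical request; verify the route for your version
curl -X PATCH http://10.97.68.125/api/v2/batchPredictions/{predictionJobId}/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{"hidden": true}'
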
BigQueryDataStreamer

{
  "bucket": "string",
  "credentialId": "string",
  "dataset": "string",
  "table": "string",
  "type": "bigquery"
}

Properties

Name Type Required Restrictions Description
bucket string true none The name of the GCS bucket for data export
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the GCP credentials

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
dataset string true none The name of the specified BigQuery dataset to read input data from
table string true none The name of the specified BigQuery table to read input data from
type string true none Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
type bigquery

BigQueryIntake

{
  "bucket": "string",
  "credentialId": "string",
  "dataset": "string",
  "table": "string",
  "type": "bigquery"
}

Properties

Name Type Required Restrictions Description
bucket string true none The name of the GCS bucket for data export
credentialId string¦null false none The ID of the GCP credentials
dataset string true none The name of the specified BigQuery dataset to read input data from
table string true none The name of the specified BigQuery table to read input data from
type string true none Type name for this intake type

Enumerated Values

Property Value
type bigquery

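Filled in, a BigQuery intake might look like the following (all names and IDs are placeholders):

"intakeSettings": {
  "type": "bigquery",
  "dataset": "analytics",
  "table": "rows_to_score",
  "bucket": "my-gcs-staging-bucket",
  "credentialId": "{credentialId}"
}
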
BigQueryOutput

{
  "bucket": "string",
  "credentialId": "string",
  "dataset": "string",
  "table": "string",
  "type": "bigquery"
}

Properties

Name Type Required Restrictions Description
bucket string true none The name of the GCS bucket for data loading
credentialId string¦null false none The ID of the GCP credentials
dataset string true none The name of the specified BigQuery dataset to write data back
table string true none The name of the specified BigQuery table to write data back
type string true none Type name for this output type

Enumerated Values

Property Value
type bigquery

BigQueryOutputAdaptor

{
  "bucket": "string",
  "credentialId": "string",
  "dataset": "string",
  "table": "string",
  "type": "bigquery"
}

Properties

Name Type Required Restrictions Description
bucket string true none The name of the GCS bucket for data loading
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the GCP credentials

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
dataset string true none The name of the specified BigQuery dataset to write data back
table string true none The name of the specified BigQuery table to write data back
type string true none Type name for this output type

Enumerated Values

Property Value
anonymous [redacted]
type bigquery

Catalog

{
  "datasetId": "string",
  "datasetVersionId": "string",
  "type": "dataset"
}

Properties

Name Type Required Restrictions Description
datasetId string true none The ID of the AI catalog dataset
datasetVersionId string false none The ID of the AI catalog dataset version
type string true none Type name for this intake type

Enumerated Values

Property Value
type dataset

CatalogDataStreamer

{
  "datasetId": "string",
  "datasetVersionId": "string",
  "type": "dataset"
}

Properties

Name Type Required Restrictions Description
datasetId any true none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false none The ID of the AI catalog dataset

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
datasetVersionId string false none The ID of the AI catalog dataset version
type string true none Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
type dataset

CreatePredictionDatasetResponse

{
  "datasetId": "string"
}

Properties

Name Type Required Restrictions Description
datasetId string true none The ID of the newly created prediction dataset.

CreatePredictionFromDataset

{
  "actualValueColumn": "string",
  "datasetId": "string",
  "explanationAlgorithm": "shap",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "includeFdwCounts": false,
  "includePredictionIntervals": true,
  "maxExplanations": 1,
  "modelId": "string",
  "predictionIntervalsSize": 1,
  "predictionThreshold": 1,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false none For time series projects only. The actual value column name, valid for prediction files if the project is unsupervised and the dataset is treated as a bulk predictions dataset. This value is optional.
datasetId string true none The dataset to compute predictions for - must have previously been uploaded.
explanationAlgorithm string false none If set to shap, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations).
forecastPoint string(date-time) false none For time series projects only. The time in the dataset relative to which predictions are generated. This value is optional. If not specified the default value is the value in the row with the latest specified timestamp. Specifying this value for a project that is not a time series project will result in an error.
includeFdwCounts boolean false none For time series projects with partial history only. Indicates whether feature derivation window counts (featureDerivationWindowCounts) will be part of the response.
includePredictionIntervals boolean false none Specifies whether prediction intervals should be calculated for this request. Defaults to True if predictionIntervalsSize is specified, otherwise defaults to False.
maxExplanations integer false none Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. In the case of 'shap': If not set, explanations are returned for all features. If the number of features is greater than the 'maxExplanations', the sum of remaining values will also be returned as 'shapRemainingTotal'. Defaults to null for datasets narrower than 100 columns, defaults to 100 for datasets wider than 100 columns. Cannot be set if 'explanationAlgorithm' is omitted.
modelId string true none The model to make predictions on.
predictionIntervalsSize integer false none Represents the percentile to use for the size of the prediction intervals. Defaults to 80 if includePredictionIntervals is True.
predictionThreshold number false none Threshold used for binary classification in predictions. Accepts values from 0.0 to 1.0. If not specified, the model's default prediction threshold will be used.
predictionsEndDate string(date-time) false none The end date for bulk predictions, exclusive. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false none The start date for bulk predictions. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.

Enumerated Values

Property Value
explanationAlgorithm shap

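A minimal request body for this schema (the route that accepts it is not shown on this page; IDs are placeholders), requesting SHAP explanations:

{
  "datasetId": "{datasetId}",
  "modelId": "{modelId}",
  "explanationAlgorithm": "shap",
  "maxExplanations": 5
}
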
CreateTrainingPrediction

{
  "dataSubset": "all",
  "explanationAlgorithm": "string",
  "maxExplanations": 1,
  "modelId": "string"
}

Properties

Name Type Required Restrictions Description
dataSubset string true none Subset of data predicted on: The value "all" returns predictions for all rows in the dataset including data used for training, validation, holdout and any rows discarded. This is not available for large datasets or projects created with Date/Time partitioning. The value "validationAndHoldout" returns predictions for the rows used to calculate the validation score and the holdout score. Not available for large projects or Date/Time projects for models trained into the validation set. The value "holdout" returns predictions for the rows used to calculate the holdout score. Not available for projects created without a holdout or for models trained into holdout for large datasets or created with Date/Time partitioning. The value "allBacktests" returns predictions for the rows used to calculate the backtesting scores for Date/Time projects. The value "validation" returns predictions for the rows used to calculate the validation score.
explanationAlgorithm string false none If set to "shap", the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations)
maxExplanations integer false none Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. In the case of "shap": If not set, explanations are returned for all features. If the number of features is greater than the "maxExplanations", the sum of remaining values will also be returned as "shapRemainingTotal". Defaults to null for datasets narrower than 100 columns, defaults to 100 for datasets wider than 100 columns. Cannot be set if "explanationAlgorithm" is omitted.
modelId string true none The model to make predictions on

Enumerated Values

Property Value
dataSubset all
dataSubset validationAndHoldout
dataSubset holdout
dataSubset allBacktests
dataSubset validation
dataSubset crossValidation

CredentialId

{
  "catalogVersionId": "string",
  "credentialId": "string",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
catalogVersionId string false none The ID of the latest version of the catalog entry.
credentialId string true none The ID of the set of credentials to use instead of user and password. Note that with this change, username and password will become optional.
url string false none The link to retrieve more detailed information about the entity that uses this catalog dataset.

DSS

{
  "datasetId": "string",
  "partition": "validation",
  "projectId": "string",
  "type": "dss"
}

Properties

Name Type Required Restrictions Description
datasetId string true none The ID of the dataset
partition string false none Partition used to predict
projectId string true none The ID of the project
type string true none Type name for this intake type

Enumerated Values

Property Value
partition validation
partition holdout
partition None
type dss

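For example, scoring the holdout partition of a project dataset (IDs are placeholders):

"intakeSettings": {
  "type": "dss",
  "projectId": "{projectId}",
  "datasetId": "{datasetId}",
  "partition": "holdout"
}
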
DSSDataStreamer

{
  "datasetId": "string",
  "partition": "validation",
  "projectId": "string",
  "type": "dss"
}

Properties

Name Type Required Restrictions Description
datasetId any true none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false none The ID of the dataset

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
partition string false none Partition used to predict
projectId any true none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false none The ID of the project

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
type string true none Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
partition validation
partition holdout
partition None
anonymous [redacted]
type dss

DataQualityWarningsRecord

{
  "hasKiaMissingValuesInForecastWindow": true,
  "insufficientRowsForEvaluatingModels": true,
  "singleClassActualValueColumn": true
}

Properties

Name Type Required Restrictions Description
hasKiaMissingValuesInForecastWindow boolean false none If true, known-in-advance features in this dataset have missing values in the forecast window. Absence of the known-in-advance values can negatively impact prediction quality. Only applies for time series projects.
insufficientRowsForEvaluatingModels boolean false none If true, the dataset has a target column present indicating it can be used to evaluate model performance but too few rows to be trustworthy in so doing. If false, either it has no target column at all or it has sufficient rows for model evaluation. Only applies for regression, binary classification, multiclass classification projects and time series unsupervised projects.
singleClassActualValueColumn boolean false none If true, actual value column has only one class and such insights as ROC curve can not be calculated. Only applies for binary classification projects or unsupervised projects.

DataStageDataStreamer

{
  "dataStageId": "string",
  "type": "dataStage"
}

Properties

Name Type Required Restrictions Description
dataStageId string true none The ID of the data stage
type string true none Type name for this intake type

Enumerated Values

Property Value
type dataStage

DataStageIntake

{
  "dataStageId": "string",
  "type": "dataStage"
}

Properties

Name Type Required Restrictions Description
dataStageId string true none The ID of the data stage
type string true none Type name for this intake type

Enumerated Values

Property Value
type dataStage

FileSystemDataStreamer

{
  "path": "string",
  "type": "filesystem"
}

Properties

Name Type Required Restrictions Description
path string true none Path to data on host filesystem
type string true none Type name for this intake type

Enumerated Values

Property Value
type filesystem

FileSystemIntake

{
  "path": "string",
  "type": "filesystem"
}

Properties

Name Type Required Restrictions Description
path string true none Path to data on host filesystem
type string true none Type name for this intake type

Enumerated Values

Property Value
type filesystem

FileSystemOutput

{
  "path": "string",
  "type": "filesystem"
}

Properties

Name Type Required Restrictions Description
path string true none Path to results on host filesystem
type string true none Type name for this output type

Enumerated Values

Property Value
type filesystem

FileSystemOutputAdaptor

{
  "path": "string",
  "type": "filesystem"
}

Properties

Name Type Required Restrictions Description
path string true none Path to results on host filesystem
type string true none Type name for this output type

Enumerated Values

Property Value
type filesystem

GCPDataStreamer

{
  "credentialId": "string",
  "format": "csv",
  "type": "gcp",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
format string false none Type of input file format
type string true none Type name for this intake type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format csv
format parquet
type gcp

GCPIntake

{
  "credentialId": "string",
  "format": "csv",
  "type": "gcp",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false none Use the specified credential to access the url
format string false none Type of input file format
type string true none Type name for this intake type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
format csv
format parquet
type gcp

GCPOutput

{
  "credentialId": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "type": "gcp",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false none Use the specified credential to access the url
format string false none Type of output file format
partitionColumns [string] false none For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash, "/").
type string true none Type name for this output type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
format csv
format parquet
type gcp

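Illustrating the partitionColumns note above (bucket and column names are placeholders): because the url ends with a slash, results are written as a partitioned Parquet directory:

"outputSettings": {
  "type": "gcp",
  "url": "gs://my-bucket/scored/",
  "format": "parquet",
  "partitionColumns": ["region", "scoring_date"],
  "credentialId": "{credentialId}"
}
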
GCPOutputAdaptor

{
  "credentialId": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "type": "gcp",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
format string false none Type of output file format
partitionColumns [string] false none For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash, "/").
type string true none Type name for this output type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format csv
format parquet
type gcp

HTTPDataStreamer

{
  "type": "http",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
type string true none Type name for this intake type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
type http

HTTPIntake

{
  "type": "http",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
type string true none Type name for this intake type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
type http

HTTPOutput

{
  "headers": {},
  "method": "POST",
  "type": "http",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
headers object false none Extra headers to send with the request
method string true none Method to use when saving the CSV file
type string true none Type name for this output type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
method POST
method PUT
type http

HttpOutputAdaptor

{
  "headers": {},
  "method": "POST",
  "type": "http",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
headers object false none Extra headers to send with the request
method string true none Method to use when saving the CSV file
type string true none Type name for this output type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
method POST
method PUT
type http

JDBCDataStreamer

{
  "catalog": "string",
  "credentialId": "string",
  "dataStoreId": "string",
  "fetchSize": 1,
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "jdbc"
}

Properties

Name Type Required Restrictions Description
catalog string false none The name of the specified database catalog to read input data from.
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the credential holding information about a user with read access to the JDBC data source.

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
dataStoreId any true none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false none ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
fetchSize integer false none A user-specified fetch size. Changing it can be used to balance throughput and memory usage. Deprecated and ignored since v2.21.
query string false none A self-supplied SELECT statement for the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available which will be substituted for timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false none The name of the specified database schema to read input data from.
table string false none The name of the specified database table to read input data from.
type string true none Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
anonymous [redacted]
type jdbc

JDBCIntake

{
  "catalog": "string",
  "credentialId": "string",
  "dataStoreId": "string",
  "fetchSize": 1,
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "jdbc"
}

Properties

Name Type Required Restrictions Description
catalog string false none The name of the specified database catalog to read input data from.
credentialId string¦null false none The ID of the credential holding information about a user with read access to the JDBC data source.
dataStoreId string true none ID of the data store to connect to
fetchSize integer false none A user-specified fetch size. Changing it can be used to balance throughput and memory usage. Deprecated and ignored since v2.21.
query string false none A self-supplied SELECT statement for the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available which will be substituted for timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false none The name of the specified database schema to read input data from.
table string false none The name of the specified database table to read input data from.
type string true none Type name for this intake type

Enumerated Values

Property Value
type jdbc
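
To show how the query field and its template variables are typically combined, here is a sketch against the POST /api/v2/batchPredictions/ endpoint documented elsewhere in this reference. The data store ID, credential ID, and table name are placeholders, and {{ last_scheduled_run_time }} is substituted only when the job is executed from a job definition.

# Hypothetical example: score only rows changed since the last scheduled run.
# IDs, schema, and table names are placeholders.
curl -X POST http://10.97.68.125/api/v2/batchPredictions/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{
        "deploymentId": "{deploymentId}",
        "intakeSettings": {
          "type": "jdbc",
          "dataStoreId": "{dataStoreId}",
          "credentialId": "{credentialId}",
          "query": "SELECT * FROM scoring.customers WHERE updated_at > {{ last_scheduled_run_time }}"
        },
        "outputSettings": {"type": "localFile"}
      }'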

JDBCOutput

{
  "catalog": "string",
  "commitInterval": 600,
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "schema": "string",
  "statementType": "createTable",
  "table": "string",
  "type": "jdbc",
  "updateColumns": [
    "string"
  ],
  "whereColumns": [
    "string"
  ]
}

Properties

Name Type Required Restrictions Description
catalog string false none The name of the specified database catalog to write output data to.
commitInterval integer false none Defines the time interval, in seconds, between each commit to the JDBC source. If set to 0, the batch prediction operation will write the entire job before committing.
createTableIfNotExists boolean false none Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId string¦null false none The ID of the credential holding information about a user with write access to the JDBC data source.
dataStoreId string true none ID of the data store to connect to
schema string false none The name of the specified database schema to write the results to.
statementType string true none The statement type to use when writing the results. Deprecation warning: use of create_table is now discouraged. Use one of the other options along with the parameter createTableIfNotExists set to true.
table string true none The name of the specified database table to write the results to. If this job is executed with a job definition, template variables are available which will be substituted for timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
type string true none Type name for this output type
updateColumns [string] false none The column names to be updated if statementType is set to either update or upsert.
whereColumns [string] false none The column names to be used in the where clause if statementType is set to update or upsert.

Enumerated Values

Property Value
statementType createTable
statementType create_table
statementType insert
statementType insertUpdate
statementType insert_update
statementType update
type jdbc
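
Following the deprecation guidance above, here is a sketch of a JDBCOutput block that uses insert together with createTableIfNotExists instead of create_table (assuming the POST /api/v2/batchPredictions/ endpoint documented elsewhere in this reference; IDs, schema, and table names are placeholders).

# Hypothetical example: insert results, creating the target table on first run.
curl -X POST http://10.97.68.125/api/v2/batchPredictions/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{
        "deploymentId": "{deploymentId}",
        "intakeSettings": {"type": "localFile"},
        "outputSettings": {
          "type": "jdbc",
          "dataStoreId": "{dataStoreId}",
          "credentialId": "{credentialId}",
          "statementType": "insert",
          "createTableIfNotExists": true,
          "schema": "public",
          "table": "scored_customers"
        }
      }'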

JdbcOutputAdaptor

{
  "catalog": "string",
  "commitInterval": 600,
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "schema": "string",
  "statementType": "createTable",
  "table": "string",
  "type": "jdbc",
  "updateColumns": [
    "string"
  ],
  "whereColumns": [
    "string"
  ]
}

Properties

Name Type Required Restrictions Description
catalog string false none The name of the specified database catalog to write output data to.
commitInterval integer false none Defines the time interval, in seconds, between each commit to the JDBC source. If set to 0, the batch prediction operation will write the entire job before committing.
createTableIfNotExists boolean false none Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the credential holding information about a user with write access to the JDBC data source.

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
dataStoreId any true none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false none ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
schema string false none The name of the specified database schema to write the results to.
statementType string true none The statement type to use when writing the results. Deprecation warning: use of create_table is now discouraged. Use one of the other options along with the parameter createTableIfNotExists set to true.
table string true none The name of the specified database table to write the results to. If this job is executed with a job definition, template variables are available which will be substituted for timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
type string true none Type name for this output type
updateColumns [string] false none The column names to be updated if statementType is set to either update or upsert.
whereColumns [string] false none The column names to be used in the where clause if statementType is set to update or upsert.

Enumerated Values

Property Value
anonymous [redacted]
anonymous [redacted]
statementType createTable
statementType create_table
statementType insert
statementType insertUpdate
statementType insert_update
statementType update
type jdbc

JobRunTimeBasedForecastPointPolicy

{
  "configuration": {
    "offset": "string"
  },
  "type": "jobRunTimeBased"
}

Properties

Name Type Required Restrictions Description
configuration JobRunTimeBasedForecastPointPolicySettings false none Customization for shifting the forecast point relative to the job run time, if needed.
type string true none Type of the forecast point policy. The forecast point will be based on the scheduled run time of the job, or on the current moment in UTC if the job was launched manually. The run time can be adjusted backwards or forwards.

Enumerated Values

Property Value
type jobRunTimeBased

JobRunTimeBasedForecastPointPolicySettings

{
  "offset": "string"
}

Properties

Name Type Required Restrictions Description
offset string(offset) true none Offset to apply to the scheduled run time of the job, in ISO-8601 format, to obtain a relative forecast point. Example of a positive offset: 'P2DT5H3M'; example of a negative offset: '-P2DT5H4M'
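
As a worked example (values are illustrative): if the job's scheduled run time is 2019-08-24T14:15:22Z, the policy below shifts the forecast point one day back, to 2019-08-23T14:15:22Z.

{
  "configuration": {
    "offset": "-P1D"
  },
  "type": "jobRunTimeBased"
}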

LocalFileDataStreamer

{
  "async": true,
  "multipart": true,
  "type": "local_file"
}

Properties

Name Type Required Restrictions Description
async boolean¦null false none The default behavior (async: true) will still submit the job to the queue and start processing as soon as the upload is started. Setting it to false will postpone submitting the job to the queue until all data has been uploaded. This is helpful if the user is on a bad connection and bottlenecked by the upload speed. Instead of blocking the queue, this will allow others to submit to the queue until the upload has finished.
multipart boolean false none Specify whether the data will be uploaded in multiple parts instead of as a single file
type string true none Type name for this intake type

Enumerated Values

Property Value
type local_file
type localFile

LocalFileIntake

{
  "async": true,
  "multipart": true,
  "type": "local_file"
}

Properties

Name Type Required Restrictions Description
async boolean¦null false none The default behavior (async: true) will still submit the job to the queue and start processing as soon as the upload is started. Setting it to false will postpone submitting the job to the queue until all data has been uploaded. This is helpful if the user is on a bad connection and bottlenecked by the upload speed. Instead of blocking the queue, this will allow others to submit to the queue until the upload has finished.
multipart boolean false none Specify whether the data will be uploaded in multiple parts instead of as a single file
type string true none Type name for this intake type

Enumerated Values

Property Value
type local_file
type localFile
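
A minimal sketch of the async flag described above (assuming the POST /api/v2/batchPredictions/ endpoint documented elsewhere in this reference; the deployment ID is a placeholder): setting async to false defers queueing until the upload has finished, which the description recommends for slow connections.

# Hypothetical example: postpone queueing the job until the upload completes.
curl -X POST http://10.97.68.125/api/v2/batchPredictions/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{
        "deploymentId": "{deploymentId}",
        "intakeSettings": {"type": "localFile", "async": false},
        "outputSettings": {"type": "localFile"}
      }'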

LocalFileOutput

{
  "type": "local_file"
}

Properties

Name Type Required Restrictions Description
type string true none Type name for this output type

Enumerated Values

Property Value
type local_file
type localFile

LocalFileOutputAdaptor

{
  "type": "local_file"
}

Properties

Name Type Required Restrictions Description
type string true none Type name for this output type

Enumerated Values

Property Value
type local_file
type localFile

OAuthCredentials

{
  "credentialType": "oauth",
  "oauthAccessToken": null,
  "oauthClientId": null,
  "oauthClientSecret": null,
  "oauthRefreshToken": "string"
}

Properties

Name Type Required Restrictions Description
credentialType string true none The type of these credentials, 'oauth' here.
oauthAccessToken string¦null false none The oauth access token.
oauthClientId string¦null false none The oauth client ID.
oauthClientSecret string¦null false none The oauth client secret.
oauthRefreshToken string true none The oauth refresh token.

Enumerated Values

Property Value
credentialType oauth

PasswordCredentials

{
  "catalogVersionId": "string",
  "password": "string",
  "url": "string",
  "user": "string"
}

Properties

Name Type Required Restrictions Description
catalogVersionId string false none The ID of the latest version of the catalog entry.
password string true none The password (in cleartext) for database authentication. The password will be encrypted on the server side within the scope of the HTTP request and never saved or stored.
url string false none The link to retrieve more detailed information about the entity that uses this catalog dataset.
user string true none The username for database authentication.

PredictJobDetailsResponse

{
  "id": "string",
  "isBlocked": true,
  "message": "string",
  "modelId": "string",
  "projectId": "string",
  "status": "queue"
}

Properties

Name Type Required Restrictions Description
id string true none The ID of the job
isBlocked boolean true none True if the job is waiting for its dependencies to be resolved first.
message string true none An optional message about the job
modelId string true none The ID of the model
projectId string true none The ID of the project the job belongs to
status string true none The status of the job

Enumerated Values

Property Value
status queue
status inprogress
status error
status ABORTED
status COMPLETED

PredictionArrayObjectValues

{
  "label": "string",
  "threshold": 1,
  "value": 0
}

Properties

Name Type Required Restrictions Description
label any true none For regression problems, this will be the name of the target column, 'Anomaly score', or an ignored field. For classification projects, this will be the name of the class.

oneOf

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous number false none none

continued

Name Type Required Restrictions Description
threshold number false none Threshold used in multilabel classification for this class.
value number true none The predicted probability of the class identified by the label.

PredictionDataSource

{
  "actualValueColumn": "string",
  "credentialData": {
    "credentialType": "basic",
    "password": "string",
    "user": "string"
  },
  "credentialId": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "dataSourceId": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "password": "string",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "useKerberos": false,
  "user": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false none The actual value column name, valid for prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset.
credentialData any false none The credentials to authenticate with the database, to use instead of user/password or credential ID.

oneOf

Name Type Required Restrictions Description
» anonymous BasicCredentials false none none

xor

Name Type Required Restrictions Description
» anonymous S3Credentials false none none

xor

Name Type Required Restrictions Description
» anonymous OAuthCredentials false none none

continued

Name Type Required Restrictions Description
credentialId string false none The credential ID to use for database authentication.
credentials [oneOf] false none A list of credentials for the secondary datasets used in a feature discovery project.

oneOf

Name Type Required Restrictions Description
» anonymous PasswordCredentials false none none

xor

Name Type Required Restrictions Description
» anonymous CredentialId false none none

continued

Name Type Required Restrictions Description
dataSourceId string true none The ID of the data source.
forecastPoint string(date-time) false none For time series projects only. The time in the dataset relative to which predictions are generated. This value is optional. If not specified the default value is the value in the row with the latest specified timestamp. Specifying this value for a project that is not a time series project will result in an error.
password string false none The password (in cleartext) for database authentication. The password will be encrypted on the server side within the scope of the HTTP request and never saved or stored. DEPRECATED: please use credentialId or credentialData instead.
predictionsEndDate string(date-time) false none The end date for bulk predictions, exclusive. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false none The start date for bulk predictions. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.
relaxKnownInAdvanceFeaturesCheck boolean false none For time series projects only. If true, missing values in the known in advance features are allowed in the forecast window at the prediction time. This value is optional. If omitted or false, missing values are not allowed.
secondaryDatasetsConfigId string false none For feature discovery projects only. The ID of the alternative secondary dataset config to use during prediction.
useKerberos boolean false none If true, use kerberos authentication for database authentication. Default is false.
user string false none The username for database authentication. DEPRECATED: please use credentialId or credentialData instead.

PredictionDatasetListControllerResponse

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "catalogId": "string",
      "catalogVersionId": "string",
      "containsTargetValues": true,
      "created": "2019-08-24T14:15:22Z",
      "dataEndDate": "2019-08-24T14:15:22Z",
      "dataQualityWarnings": {
        "hasKiaMissingValuesInForecastWindow": true,
        "insufficientRowsForEvaluatingModels": true,
        "singleClassActualValueColumn": true
      },
      "dataStartDate": "2019-08-24T14:15:22Z",
      "detectedActualValueColumns": [
        {
          "missingCount": 0,
          "name": "string"
        }
      ],
      "forecastPoint": "string",
      "forecastPointRange": [
        "2019-08-24T14:15:22Z"
      ],
      "id": "string",
      "maxForecastDate": "2019-08-24T14:15:22Z",
      "name": "string",
      "numColumns": 0,
      "numRows": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "secondaryDatasetsConfigId": "string"
    }
  ],
  "next": "string",
  "previous": "string"
}

Properties

Name Type Required Restrictions Description
count integer true none The number of items returned on this page.
data [PredictionDatasetRetrieveResponse] true none Each has the same schema as if retrieving the dataset individually from GET /api/v2/projects/{projectId}/predictionDatasets/{datasetId}/
next string¦null true none A URL pointing to the next page (if null, there is no next page).
previous string¦null true none A URL pointing to the previous page (if null, there is no previous page).

PredictionDatasetRetrieveResponse

{
  "actualValueColumn": "string",
  "catalogId": "string",
  "catalogVersionId": "string",
  "containsTargetValues": true,
  "created": "2019-08-24T14:15:22Z",
  "dataEndDate": "2019-08-24T14:15:22Z",
  "dataQualityWarnings": {
    "hasKiaMissingValuesInForecastWindow": true,
    "insufficientRowsForEvaluatingModels": true,
    "singleClassActualValueColumn": true
  },
  "dataStartDate": "2019-08-24T14:15:22Z",
  "detectedActualValueColumns": [
    {
      "missingCount": 0,
      "name": "string"
    }
  ],
  "forecastPoint": "string",
  "forecastPointRange": [
    "2019-08-24T14:15:22Z"
  ],
  "id": "string",
  "maxForecastDate": "2019-08-24T14:15:22Z",
  "name": "string",
  "numColumns": 0,
  "numRows": 0,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "projectId": "string",
  "secondaryDatasetsConfigId": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string¦null false none Optional, only available for unsupervised projects, in case the dataset was uploaded with an actual value column specified. Name of the column which will be used to calculate the classification metrics and insights.
catalogId string¦null true none The ID of the AI catalog entry used to create the prediction dataset, or None if not created from the AI catalog.
catalogVersionId string¦null true none The ID of the AI catalog version used to create the prediction dataset, or None if not created from the AI catalog.
containsTargetValues boolean¦null false none If True, the dataset contains target values and can be used to calculate the classification metrics and insights. Only applies for supervised projects.
created string(date-time) true none The date string of when the dataset was created, of the format YYYY-mm-ddTHH:MM:SS.ssssssZ, like 2016-06-09T11:32:34.170338Z.
dataEndDate string(date-time) false none Only available for time series projects, a date string representing the maximum primary date of the prediction dataset.
dataQualityWarnings DataQualityWarningsRecord true none A JSON object of available warnings about potential problems in this prediction dataset. Empty if no warnings.
dataStartDate string(date-time) false none Only available for time series projects, a date string representing the minimum primary date of the prediction dataset.
detectedActualValueColumns [ActualValueColumnInfo] false none Only available for unsupervised projects, a list of detected actualValueColumnInfo objects which can be used to calculate the classification metrics and insights.
forecastPoint string¦null true none The date string of the forecastPoint of this prediction dataset. Only non-null for time series projects.
forecastPointRange [string] false none Only available for time series projects, the start and end of the range of dates available for use as the forecast point, detected based on the uploaded prediction dataset.
id string true none The ID of this dataset.
maxForecastDate string(date-time) false none Only available for time series projects, a date string representing the maximum forecast date of this prediction dataset.
name string true none The name of the dataset when it was uploaded.
numColumns integer true none The number of columns in this dataset.
numRows integer true none The number of rows in this dataset.
predictionsEndDate string(date-time)¦null true none The date string of the prediction end date of this prediction dataset. Used for bulk predictions. Note that this parameter is for generating historical predictions using the training data. Only non-null for time series projects.
predictionsStartDate string(date-time)¦null true none The date string of the prediction start date of this prediction dataset. Used for bulk predictions. Note that this parameter is for generating historical predictions using the training data. Only non-null for time series projects.
projectId string true none The project ID that owns this dataset.
secondaryDatasetsConfigId string false none Only available for feature discovery projects. ID of the secondary dataset config used by the dataset for the prediction.

PredictionExplanationsMetadataValues

{
  "shapRemainingTotal": 0
}

Properties

Name Type Required Restrictions Description
shapRemainingTotal integer false none Will be present only if explanationAlgorithm = 'shap' and maxExplanations is nonzero. The total of SHAP values for features beyond the maxExplanations. This can be identically 0 in all rows, if maxExplanations is greater than the number of features and thus all features are returned.

PredictionExplanationsObject

{
  "feature": "string",
  "featureValue": 0,
  "label": "string",
  "strength": 0
}

Properties

Name Type Required Restrictions Description
feature string true none The name of the feature contributing to the prediction.
featureValue any true none The value the feature took on for this row. The type corresponds to the feature (bool, int, float, str, etc.).

oneOf

Name Type Required Restrictions Description
» anonymous integer false none none

xor

Name Type Required Restrictions Description
» anonymous boolean false none none

xor

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous number false none none

continued

Name Type Required Restrictions Description
label any true none Describes what output was driven by this prediction explanation. For regression projects, it is the name of the target feature. For classification projects, it is the class whose probability increasing would correspond to a positive strength of this prediction explanation. For predictions made using anomaly detection models, it is the Anomaly Score.

oneOf

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous number false none none

continued

Name Type Required Restrictions Description
strength number¦null false none Algorithm-specific explanation value attributed to feature in this row. If explanationAlgorithm = shap, this is the SHAP value.

PredictionFileUpload

{
  "actualValueColumn": "string",
  "credentials": "string",
  "file": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": "false",
  "secondaryDatasetsConfigId": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false none Actual value column name, valid for prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset.
credentials string false none A list of credentials for the secondary datasets used in a feature discovery project
file string(binary) true none The dataset file to upload for prediction.
forecastPoint string(date-time) false none For time series projects only. The time in the dataset relative to which predictions are generated. If not specified the default value is the value in the row with the latest specified timestamp. Specifying this value for a project that is not a time series project will result in an error.
predictionsEndDate string(date-time) false none Used for time series projects only. The end date for bulk predictions. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false none Used for time series projects only. The start date for bulk predictions. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.
relaxKnownInAdvanceFeaturesCheck string false none A boolean flag. If true, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or false, missing values are not allowed. For time series projects only.
secondaryDatasetsConfigId string false none Optional, for feature discovery projects only. The ID of the alternative secondary dataset config to use during prediction.

Enumerated Values

Property Value
relaxKnownInAdvanceFeaturesCheck false
relaxKnownInAdvanceFeaturesCheck False
relaxKnownInAdvanceFeaturesCheck true
relaxKnownInAdvanceFeaturesCheck True
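
Because PredictionFileUpload is a multipart form payload rather than a JSON body, a request sketch may help. It assumes the prediction dataset file-upload route (POST /api/v2/projects/{projectId}/predictionDatasets/fileUploads/) documented elsewhere in this reference; the project ID, file name, and forecast point are placeholders.

# Hypothetical example: upload a local CSV as a prediction dataset for a
# time series project, pinning the forecast point.
curl -X POST http://10.97.68.125/api/v2/projects/{projectId}/predictionDatasets/fileUploads/ \
  -H 'Authorization: Bearer {access-token}' \
  -F 'file=@to_predict.csv' \
  -F 'forecastPoint=2019-08-24T14:15:22Z'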

PredictionFromCatalogDataset

{
  "actualValueColumn": "string",
  "credentialData": {
    "credentialType": "basic",
    "password": "string",
    "user": "string"
  },
  "credentialId": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "datasetId": "string",
  "datasetVersionId": "string",
  "forecastPoint": "2019-08-24T14:15:22Z",
  "password": "string",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "useKerberos": false,
  "user": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false none Actual value column name, valid for prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset.
credentialData any false none The credentials to authenticate with the database, to be used instead of credential ID.

oneOf

Name Type Required Restrictions Description
» anonymous BasicCredentials false none none

xor

Name Type Required Restrictions Description
» anonymous S3Credentials false none none

xor

Name Type Required Restrictions Description
» anonymous OAuthCredentials false none none

continued

Name Type Required Restrictions Description
credentialId string false none The ID of the set of credentials to authenticate with the database.
credentials [oneOf] false none A list of credentials for the secondary datasets used in a feature discovery project.

oneOf

Name Type Required Restrictions Description
» anonymous PasswordCredentials false none none

xor

Name Type Required Restrictions Description
» anonymous CredentialId false none none

continued

Name Type Required Restrictions Description
datasetId string true none The ID of the dataset entry to use for the prediction dataset.
datasetVersionId string false none The ID of the dataset version to use for the prediction dataset. If not specified, the latest version associated with datasetId is used.
forecastPoint string(date-time) false none For time series projects only. The time in the dataset relative to which predictions are generated. This value is optional. If not specified the default value is the value in the row with the latest specified timestamp. Specifying this value for a project that is not a time series project will result in an error.
password string false none The password (in cleartext) for database authentication. The password will be encrypted on the server side within the scope of the HTTP request and never saved or stored. DEPRECATED: please use credentialId or credentialData instead.
predictionsEndDate string(date-time) false none The end date for bulk predictions, exclusive. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false none The start date for bulk predictions. Used for time series projects only. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.
relaxKnownInAdvanceFeaturesCheck boolean false none For time series projects only. If True, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or False, missing values are not allowed.
secondaryDatasetsConfigId string false none For feature discovery projects only. The ID of the alternative secondary dataset config to use during prediction.
useKerberos boolean false none If true, use kerberos authentication for database authentication. Default is false.
user string false none The username for database authentication. DEPRECATED: please use credentialId or credentialData instead.

PredictionObject

{
  "actualValue": "string",
  "forecastDistance": 0,
  "forecastPoint": "2019-08-24T14:15:22Z",
  "originalFormatTimestamp": "string",
  "positiveProbability": 0,
  "prediction": 0,
  "predictionExplanationMetadata": [
    {
      "shapRemainingTotal": 0
    }
  ],
  "predictionExplanations": [
    {
      "feature": "string",
      "featureValue": 0,
      "label": "string",
      "strength": 0
    }
  ],
  "predictionIntervalLowerBound": 0,
  "predictionIntervalUpperBound": 0,
  "predictionThreshold": 1,
  "predictionValues": [
    {
      "label": "string",
      "threshold": 1,
      "value": 0
    }
  ],
  "rowId": 0,
  "segmentId": "string",
  "seriesId": "string",
  "target": "string",
  "timestamp": "2019-08-24T14:15:22Z"
}

Properties

Name Type Required Restrictions Description
actualValue string¦null false none In the case of an unsupervised time series project with a dataset using predictionsStartDate and predictionsEndDate for bulk predictions and a specified actual value column, the predictions will be a json array in the same format as with a forecast point with one additional element - actualValues. It is the actual value in the row.
forecastDistance integer¦null false none (if time series project) The number of time units this prediction is away from the forecastPoint. The unit of time is determined by the timeUnit of the datetime partition column.
forecastPoint string(date-time)¦null false none (if time series project) The forecastPoint of the predictions. Either provided or inferred.
originalFormatTimestamp string false none The timestamp of this row in the prediction dataset. Unlike the timestamp field, this field will keep the same DateTime formatting as the uploaded prediction dataset. (This column is shown if enabled by your administrator.)
positiveProbability number¦null false none For binary classification, the probability the row belongs to the positive class.
prediction any true none The prediction of the model.

oneOf

Name Type Required Restrictions Description
» anonymous number false none If using a regressor model, will be the numeric value of the target.

xor

Name Type Required Restrictions Description
» anonymous string false none If using a binary or multiclass classifier model, will be the predicted class.

xor

Name Type Required Restrictions Description
» anonymous [string] false none If using a multilabel classifier model, will be a list of predicted classes.

continued

Name Type Required Restrictions Description
predictionExplanationMetadata [PredictionExplanationsMetadataValues] false none Array containing algorithm-specific values. Varies depending on the value of explanationAlgorithm.
predictionExplanations [PredictionExplanationsObject]¦null false none Array contains predictionExplanation objects. The total elements in the array are bounded by maxExplanations and feature count. It will be present only if explanationAlgorithm is not null (prediction explanations were requested).
predictionIntervalLowerBound number false none Present if includePredictionIntervals is True. Indicates a lower bound of the estimate of error based on test data.
predictionIntervalUpperBound number false none Present if includePredictionIntervals is True. Indicates an upper bound of the estimate of error based on test data.
predictionThreshold number false none Threshold used for binary classification in predictions.
predictionValues [PredictionArrayObjectValues] false none A list of predicted values for this row.
rowId integer true none The row in the prediction dataset this prediction corresponds to.
segmentId string false none The ID of the segment value for a segmented project.
seriesId string¦null false none The ID of the series value for a multiseries project. For time series projects that are not multiseries, this will be NaN.
target string¦null false none In the case of a time series project with a dataset using predictionsStartDate and predictionsEndDate for bulk predictions, the predictions will be a json array in the same format as with a forecast point with one additional element - target. It is the target value in the row.
timestamp string(date-time)¦null false none (if time series project) The timestamp of this row in the prediction dataset.

PredictionRetrieveResponse

{
  "actualValueColumn": "string",
  "explanationAlgorithm": "string",
  "featureDerivationWindowCounts": 0,
  "includesPredictionIntervals": true,
  "maxExplanations": 0,
  "positiveClass": "string",
  "predictionIntervalsSize": 0,
  "predictions": [
    {
      "actualValue": "string",
      "forecastDistance": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "originalFormatTimestamp": "string",
      "positiveProbability": 0,
      "prediction": 0,
      "predictionExplanationMetadata": [
        {
          "shapRemainingTotal": 0
        }
      ],
      "predictionExplanations": [
        {
          "feature": "string",
          "featureValue": 0,
          "label": "string",
          "strength": 0
        }
      ],
      "predictionIntervalLowerBound": 0,
      "predictionIntervalUpperBound": 0,
      "predictionThreshold": 1,
      "predictionValues": [
        {
          "label": "string",
          "threshold": 1,
          "value": 0
        }
      ],
      "rowId": 0,
      "segmentId": "string",
      "seriesId": "string",
      "target": "string",
      "timestamp": "2019-08-24T14:15:22Z"
    }
  ],
  "shapBaseValue": 0,
  "shapWarnings": [
    {
      "maxNormalizedMismatch": 0,
      "mismatchRowCount": 0
    }
  ],
  "task": "Regression"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string¦null false none For time series unsupervised projects only. Will be present only if the prediction dataset has an actual value column. The name of the column with actuals that was used to calculate the scores and insights.
explanationAlgorithm string¦null false none The selected algorithm to use for prediction explanations. At present, the only acceptable value is 'shap', which selects the SHapley Additive exPlanations (SHAP) explainer. Defaults to null (no prediction explanations).
featureDerivationWindowCounts integer¦null false none For time series projects with partial history only. Indicates how many points were used during feature derivation in the feature derivation window.
includesPredictionIntervals boolean false none For time series projects only. Indicates if prediction intervals will be part of the response. Defaults to False.
maxExplanations integer¦null false none The maximum number of prediction explanations values to be returned with each row in the predictions json array. Null indicates 'no limit'. Will be present only if explanationAlgorithm was set.
positiveClass any true none For binary classification, the class of the target deemed the positive class. For all other project types this field will be null.

oneOf

Name Type Required Restrictions Description
» anonymous string false none none

xor

Name Type Required Restrictions Description
» anonymous integer false none none

xor

Name Type Required Restrictions Description
» anonymous number false none none

continued

Name Type Required Restrictions Description
predictionIntervalsSize integer¦null false none For time series projects only. Will be present only if includePredictionIntervals is True. Indicates the percentile used for prediction intervals calculation. Defaults to 80.
predictions [PredictionObject] true none The json array of predictions. The predictions in the response will have slightly different formats, depending on the project type.
shapBaseValue number¦null false none Will be present only if explanationAlgorithm = 'shap'. The model's average prediction over the training data. SHAP values are deviations from the base value.
shapWarnings [ShapWarningValues]¦null false none Will be present if explanationAlgorithm was set to shap and there were additivity failures during SHAP values calculation.
task string true none The prediction task.

Enumerated Values

Property Value
task Regression
task Binary
task Multiclass
task Multilabel
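
When explanationAlgorithm is 'shap', the fields above fit together additively. For each row (an illustrative reading of the schema, not a statement from it):

prediction ≈ shapBaseValue + sum(predictionExplanations[i].strength) + shapRemainingTotal

For example, with illustrative numbers shapBaseValue = 0.2, returned strengths summing to 0.25, and shapRemainingTotal = 0.05, the row's prediction would be approximately 0.2 + 0.25 + 0.05 = 0.5. Rows where this additivity check fails are reported through shapWarnings.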

PredictionURLUpload

{
  "actualValueColumn": "string",
  "credentials": [
    {
      "catalogVersionId": "string",
      "password": "string",
      "url": "string",
      "user": "string"
    }
  ],
  "forecastPoint": "2019-08-24T14:15:22Z",
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "relaxKnownInAdvanceFeaturesCheck": true,
  "secondaryDatasetsConfigId": "string",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string false none Actual value column name, valid for prediction files if the project is unsupervised and the dataset is considered a bulk predictions dataset. This value is optional.
credentials [oneOf] false none A list of credentials for the secondary datasets used in a feature discovery project

oneOf

Name Type Required Restrictions Description
» anonymous PasswordCredentials false none none

xor

Name Type Required Restrictions Description
» anonymous CredentialId false none none

continued

Name Type Required Restrictions Description
forecastPoint string(date-time) false none For time series projects only. The time in the dataset relative to which predictions are generated. If not specified the default value is the value in the row with the latest specified timestamp. Specifying this value for a project that is not a time series project will result in an error.
predictionsEndDate string(date-time) false none Used for time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsStartDate, and cannot be provided with the forecastPoint parameter.
predictionsStartDate string(date-time) false none Used for time series projects only. The start date for bulk predictions. Note that this parameter is used for generating historical predictions using the training data, not for future predictions. If not specified, the dataset is not considered as a bulk predictions dataset. This parameter should be provided in conjunction with a predictionsEndDate, and cannot be provided with the forecastPoint parameter.
relaxKnownInAdvanceFeaturesCheck boolean false none For time series projects only. If true, missing values in the known in advance features are allowed in the forecast window at the prediction time. This value is optional. If omitted or false, missing values are not allowed.
secondaryDatasetsConfigId string false none For feature discovery projects only. The ID of the alternative secondary dataset config to use during prediction.
url string(url) true none The URL to download the dataset from.

RetrieveListPredictionMetadataObjectsResponse

{
  "count": 0,
  "data": [
    {
      "actualValueColumn": "string",
      "datasetId": "string",
      "explanationAlgorithm": "string",
      "featureDerivationWindowCounts": 0,
      "forecastPoint": "2019-08-24T14:15:22Z",
      "id": "string",
      "includesPredictionIntervals": true,
      "maxExplanations": 0,
      "modelId": "string",
      "predictionDatasetId": "string",
      "predictionIntervalsSize": 0,
      "predictionThreshold": 0,
      "predictionsEndDate": "2019-08-24T14:15:22Z",
      "predictionsStartDate": "2019-08-24T14:15:22Z",
      "projectId": "string",
      "shapWarnings": {
        "maxNormalizedMismatch": 0,
        "mismatchRowCount": 0
      },
      "url": "string"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com"
}

Properties

Name Type Required Restrictions Description
count integer true none The number of items returned on this page.
data [RetrievePredictionMetadataObject] true none An array of the metadata records.
next string(uri)¦null true none URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true none URL pointing to the previous page (if null, there is no previous page).

RetrievePredictionMetadataObject

{
  "actualValueColumn": "string",
  "datasetId": "string",
  "explanationAlgorithm": "string",
  "featureDerivationWindowCounts": 0,
  "forecastPoint": "2019-08-24T14:15:22Z",
  "id": "string",
  "includesPredictionIntervals": true,
  "maxExplanations": 0,
  "modelId": "string",
  "predictionDatasetId": "string",
  "predictionIntervalsSize": 0,
  "predictionThreshold": 0,
  "predictionsEndDate": "2019-08-24T14:15:22Z",
  "predictionsStartDate": "2019-08-24T14:15:22Z",
  "projectId": "string",
  "shapWarnings": {
    "maxNormalizedMismatch": 0,
    "mismatchRowCount": 0
  },
  "url": "string"
}

Properties

Name Type Required Restrictions Description
actualValueColumn string¦null false none For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights.
datasetId string¦null false none Deprecated alias for predictionDatasetId.
explanationAlgorithm string¦null false none The selected algorithm to use for prediction explanations. At present, the only acceptable value is shap, which selects the SHapley Additive exPlanations (SHAP) explainer. Defaults to null (no prediction explanations).
featureDerivationWindowCounts integer¦null false none For time series projects with partial history only. Indicates how many points were used during feature derivation.
forecastPoint string(date-time)¦null false none For time series projects only. The time in the dataset relative to which predictions were generated.
id string true none The id of the prediction record.
includesPredictionIntervals boolean true none Whether the predictions include prediction intervals.
maxExplanations integer¦null false none The maximum number of prediction explanations values to be returned with each row in the predictions json array. Null indicates no limit. Will be present only if explanationAlgorithm was set.
modelId string true none The model id used for predictions.
predictionDatasetId string¦null false none The ID of the dataset the prediction data comes from. This field is available via the /api/v2/projects/<projectId>/predictionsMetadata/ route and replaces datasetId in the deprecated /api/v2/projects/<projectId>/predictions/ endpoint.
predictionIntervalsSize integer¦null true none For time series projects only. If prediction intervals were computed, what percentile they represent. Will be None if includePredictionIntervals is False.
predictionThreshold number¦null false none Threshold used for binary classification in predictions.
predictionsEndDate string(date-time)¦null false none For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter was used for generating historical predictions using the training data, not for future predictions.
predictionsStartDate string(date-time)¦null false none For time series projects only. The start date for bulk predictions. Note that this parameter was used for generating historical predictions using the training data, not for future predictions.
projectId string true none The project id of the predictions.
shapWarnings ShapWarnings false none Will be present if explanationAlgorithm was set to shap and there were additivity failures during SHAP values calculation.
url string true none The url at which you can download the predictions.

S3Credentials

{
  "awsAccessKeyId": null,
  "awsSecretAccessKey": null,
  "awsSessionToken": null,
  "credentialType": "s3"
}

Properties

Name Type Required Restrictions Description
awsAccessKeyId string¦null false none The S3 AWS access key ID.
awsSecretAccessKey string¦null false none The S3 AWS secret access key.
awsSessionToken string¦null false none The S3 AWS session token.
credentialType string true none The type of these credentials, 's3' here.

Enumerated Values

Property Value
credentialType s3

S3DataStreamer

{
  "credentialId": "string",
  "endpointUrl": "string",
  "format": "csv",
  "type": "s3",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
endpointUrl string(url) false none Endpoint URL for the S3 connection (omit to use the default)
format string false none Type of input file format
type string true none Type name for this intake type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format csv
format parquet
type s3

S3Intake

{
  "credentialId": "string",
  "endpointUrl": "string",
  "format": "csv",
  "type": "s3",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false none Use the specified credential to access the url
endpointUrl string(url) false none Endpoint URL for the S3 connection (omit to use the default)
format string false none Type of input file format
type string true none Type name for this intake type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
format csv
format parquet
type s3

S3Output

{
  "credentialId": "string",
  "endpointUrl": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "serverSideEncryption": {
    "algorithm": "string",
    "customerAlgorithm": "string",
    "customerKey": "string",
    "kmsEncryptionContext": "string",
    "kmsKeyId": "string"
  },
  "type": "s3",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId string¦null false none Use the specified credential to access the url
endpointUrl string(url) false none Endpoint URL for the S3 connection (omit to use the default)
format string false none Type of output file format
partitionColumns [string] false none For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash ("/")).
serverSideEncryption ServerSideEncryption false none Configure Server-Side Encryption for S3 output
type string true none Type name for this output type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
format csv
format parquet
type s3
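
A sketch of an S3Output block with server-side encryption (assuming the POST /api/v2/batchPredictions/ endpoint documented elsewhere in this reference; the bucket path, credential ID, and KMS key are placeholders, and the algorithm value follows standard S3 SSE semantics rather than anything specific to this reference).

# Hypothetical example: write the scored CSV to S3 with SSE-KMS encryption.
curl -X POST http://10.97.68.125/api/v2/batchPredictions/ \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access-token}' \
  -d '{
        "deploymentId": "{deploymentId}",
        "intakeSettings": {"type": "localFile"},
        "outputSettings": {
          "type": "s3",
          "format": "csv",
          "url": "s3://my-bucket/scored/results.csv",
          "credentialId": "{credentialId}",
          "serverSideEncryption": {
            "algorithm": "aws:kms",
            "kmsKeyId": "{kmsKeyId}"
          }
        }
      }'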

S3OutputAdaptor

{
  "credentialId": "string",
  "endpointUrl": "string",
  "format": "csv",
  "partitionColumns": [
    "string"
  ],
  "serverSideEncryption": {
    "algorithm": "string",
    "customerAlgorithm": "string",
    "customerKey": "string",
    "kmsEncryptionContext": "string",
    "kmsKeyId": "string"
  },
  "type": "s3",
  "url": "string"
}

Properties

Name Type Required Restrictions Description
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none Use the specified credential to access the url

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
endpointUrl string(url) false none Endpoint URL for the S3 connection (omit to use the default)
format string false none Type of output file format
partitionColumns [string] false none For Parquet directory-scoring only. The column names of the intake data by which to partition the dataset. Columns are partitioned in the order they are given. At least one value is required if scoring to a directory (meaning the output url ends with a slash ("/")).
serverSideEncryption ServerSideEncryption false none Configure Server-Side Encryption for S3 output
type string true none Type name for this output type
url string(url) true none URL for the CSV file

Enumerated Values

Property Value
anonymous [redacted]
format csv
format parquet
type s3

Schedule

{
  "dayOfMonth": [
    "*"
  ],
  "dayOfWeek": [
    "*"
  ],
  "hour": [
    "*"
  ],
  "minute": [
    "*"
  ],
  "month": [
    "*"
  ]
}

Properties

Name Type Required Restrictions Description
dayOfMonth [anyOf] true none The date(s) of the month that the job will run. Allowed values are either [1 ... 31] or ["*"] for all days of the month. This field is additive with dayOfWeek, meaning the job will run both on the date(s) defined in this field and the day specified by dayOfWeek (for example, dates 1st, 2nd, 3rd, plus every Tuesday). If dayOfMonth is set to ["*"] and dayOfWeek is defined, the scheduler will trigger on every day of the month that matches dayOfWeek (for example, Tuesday the 2nd, 9th, 16th, 23rd, 30th). Invalid dates such as February 31st are ignored.

anyOf

Name Type Required Restrictions Description
» anonymous number false none none

or

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
dayOfWeek [anyOf] true none The day(s) of the week that the job will run. Allowed values are [0 .. 6], where (Sunday=0), or ["*"] for all days of the week. Strings, either 3-letter abbreviations or the full name of the day, can be used interchangeably (e.g., "sunday", "Sunday", "sun", or "Sun" all map to [0]). This field is additive with dayOfMonth, meaning the job will run both on the date specified by dayOfMonth and the day defined in this field.

anyOf

Name Type Required Restrictions Description
» anonymous number false none none

or

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
hour [anyOf] true none The hour(s) of the day that the job will run. Allowed values are either ["*"] meaning every hour of the day or [0 ... 23].

anyOf

Name Type Required Restrictions Description
» anonymous number false none none

or

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
minute [anyOf] true none The minute(s) of the day that the job will run. Allowed values are either ["*"], meaning every minute of the day, or [0 ... 59].

anyOf

Name Type Required Restrictions Description
» anonymous number false none none

or

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
month [anyOf] true none The month(s) of the year that the job will run. Allowed values are either [1 ... 12] or ["*"] for all months of the year. Strings, either 3-letter abbreviations or the full name of the month, can be used interchangeably (e.g., "jan" or "october"). Months that are not compatible with dayOfMonth are ignored, for example {"dayOfMonth": [31], "month":["feb"]}.

anyOf

Name Type Required Restrictions Description
» anonymous number false none none

or

Name Type Required Restrictions Description
» anonymous string false none none
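
To make the additive dayOfMonth/dayOfWeek semantics concrete, the following illustrative Schedule runs at 15:30 on the 1st of every month and additionally on every Monday:

{
  "dayOfMonth": [1],
  "dayOfWeek": [1],
  "hour": [15],
  "minute": [30],
  "month": ["*"]
}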

ScheduledJobResponse

{
  "createdBy": "string",
  "deploymentId": "string",
  "enabled": true,
  "id": "string",
  "integrationTypeId": "string",
  "integrationTypeName": "sql",
  "name": "string",
  "schedule": {
    "dayOfMonth": [
      "*"
    ],
    "dayOfWeek": [
      "*"
    ],
    "hour": [
      "*"
    ],
    "minute": [
      "*"
    ],
    "month": [
      "*"
    ]
  },
  "scheduledJobId": "string",
  "status": {
    "lastFailedRun": "2019-08-24T14:15:22Z",
    "lastSuccessfulRun": "2019-08-24T14:15:22Z",
    "nextRunTime": "2019-08-24T14:15:22Z",
    "queuePosition": 0,
    "running": true
  },
  "typeId": "string",
  "updatedAt": "2019-08-24T14:15:22Z"
}

Properties

Name Type Required Restrictions Description
createdBy string¦null false none User name of the creator
deploymentId string¦null false none ID of the deployment this scheduled job is created from.
enabled boolean true none True if the job is enabled and false if the job is disabled.
id string true none ID of scheduled prediction job
integrationTypeId string¦null false none The specific type of prediction integration.
integrationTypeName string false none The name of the prediction integration type.
name string¦null false none Name of the scheduled job.
schedule Schedule true none Schedule describing when to refresh the dataset; the smallest schedule allowed is daily. Can be null if the job was created without a schedule.
scheduledJobId string true none ID of this scheduled job.
status ScheduledJobStatus true none Object containing status information about the scheduled job.
typeId string true none Job type of the scheduled job
updatedAt string(date-time)¦null false none Time of last modification

Enumerated Values

Property Value
integrationTypeName sql
integrationTypeName tableau
integrationTypeName snowflake
integrationTypeName kdb

ScheduledJobRunStop

{
  "status": {
    "running": true
  }
}

Properties

Name Type Required Restrictions Description
status ScheduledJobStatusElement true none The status of the job that you want to update

ScheduledJobStatus

{
  "lastFailedRun": "2019-08-24T14:15:22Z",
  "lastSuccessfulRun": "2019-08-24T14:15:22Z",
  "nextRunTime": "2019-08-24T14:15:22Z",
  "queuePosition": 0,
  "running": true
}

Properties

Name Type Required Restrictions Description
lastFailedRun string(date-time)¦null false none Date and time of the last failed run.
lastSuccessfulRun string(date-time)¦null false none Date and time of the last successful run.
nextRunTime string(date-time)¦null false none Date and time of the next run.
queuePosition integer¦null false none Position of the job in the queue. The value is 0 if the job is about to run, greater than 0 if the job is currently queued, or null if the job is not currently running.
running boolean true none true or false depending on whether the job is currently running.

ScheduledJobStatusElement

{
  "running": true
}

Properties

Name Type Required Restrictions Description
running boolean true none Indicates whether the job is currently running.

ScheduledJobsListResponse

{
  "count": 0,
  "data": [
    {
      "createdBy": "string",
      "deploymentId": "string",
      "enabled": true,
      "id": "string",
      "integrationTypeId": "string",
      "integrationTypeName": "sql",
      "name": "string",
      "schedule": {
        "dayOfMonth": [
          "*"
        ],
        "dayOfWeek": [
          "*"
        ],
        "hour": [
          "*"
        ],
        "minute": [
          "*"
        ],
        "month": [
          "*"
        ]
      },
      "scheduledJobId": "string",
      "status": {
        "lastFailedRun": "2019-08-24T14:15:22Z",
        "lastSuccessfulRun": "2019-08-24T14:15:22Z",
        "nextRunTime": "2019-08-24T14:15:22Z",
        "queuePosition": 0,
        "running": true
      },
      "typeId": "string",
      "updatedAt": "2019-08-24T14:15:22Z"
    }
  ],
  "next": "http://example.com",
  "previous": "http://example.com",
  "totalCount": 0,
  "updatedAt": "2019-08-24T14:15:22Z",
  "updatedBy": "string"
}

Properties

Name Type Required Restrictions Description
count integer false none Number of items returned on this page.
data [ScheduledJobResponse] true none List of scheduled jobs
next string(uri)¦null true none URL pointing to the next page (if null, there is no next page).
previous string(uri)¦null true none URL pointing to the previous page (if null, there is no previous page).
totalCount integer true none The total number of items across all pages.
updatedAt string(date-time) false none Time of last modification
updatedBy string false none User ID of last modifier

ServerSideEncryption

{
  "algorithm": "string",
  "customerAlgorithm": "string",
  "customerKey": "string",
  "kmsEncryptionContext": "string",
  "kmsKeyId": "string"
}

Properties

Name Type Required Restrictions Description
algorithm string false none The server-side encryption algorithm used when storing this object in Amazon S3 (for example, AES256, aws:kms).
customerAlgorithm string false none Specifies the algorithm to use when encrypting the object (for example, AES256).
customerKey string false none Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in customerAlgorithm and must be sent as a base64-encoded string. Two example configurations follow this table.
kmsEncryptionContext string false none Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a base64-encoded UTF-8 string holding JSON with the encryption context key-value pairs.
kmsKeyId string false none Specifies the ID of the symmetric customer managed key to use for object encryption.
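
As a sketch, two common configurations with placeholder key values: SSE-KMS supplies algorithm and kmsKeyId, while SSE-C supplies customerAlgorithm and a base64-encoded customerKey. These pairings follow standard Amazon S3 server-side encryption semantics and are assumed, not confirmed, to be the combinations accepted here.

{
  "algorithm": "aws:kms",
  "kmsKeyId": "your-kms-key-id"
}

{
  "customerAlgorithm": "AES256",
  "customerKey": "your-base64-encoded-256-bit-key"
}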

ShapWarning

{
  "partitionName": "string",
  "value": {
    "maxNormalizedMismatch": 0,
    "mismatchRowCount": 0
  }
}

Properties

Name Type Required Restrictions Description
partitionName string true none The partition used for the prediction record.
value ShapWarningItems true none The warnings related to this partition.

ShapWarningItems

{
  "maxNormalizedMismatch": 0,
  "mismatchRowCount": 0
}

Properties

Name Type Required Restrictions Description
maxNormalizedMismatch number true none The maximal relative normalized mismatch value.
mismatchRowCount integer true none The count of rows for which additivity check failed.

ShapWarningValues

{
  "maxNormalizedMismatch": 0,
  "mismatchRowCount": 0
}

Properties

Name Type Required Restrictions Description
maxNormalizedMismatch number true none The maximal relative normalized mismatch value.
mismatchRowCount integer true none The count of rows for which additivity check failed.

ShapWarnings

{
  "maxNormalizedMismatch": 0,
  "mismatchRowCount": 0
}

Properties

Name Type Required Restrictions Description
maxNormalizedMismatch number true none The maximal relative normalized mismatch value.
mismatchRowCount integer true none The count of rows for which additivity check failed.

SnowflakeDataStreamer

{
  "catalog": "string",
  "cloudStorageCredentialId": "string",
  "cloudStorageType": "azure",
  "credentialId": "string",
  "dataStoreId": "string",
  "externalStage": "string",
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "snowflake"
}

Properties

Name Type Required Restrictions Description
catalog string false none The name of the specified database catalog to read input data from.
cloudStorageCredentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the credential holding information about a user with read access to the cloud storage.

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
cloudStorageType string false none Type name for cloud storage
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the credential holding information about a user with read access to the Snowflake data source.

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
dataStoreId any true none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false none ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
externalStage string true none External storage
query string false none A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false none The name of the specified database schema to read input data from.
table string false none The name of the specified database table to read input data from.
type string true none Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
cloudStorageType azure
cloudStorageType gcp
cloudStorageType s3
anonymous [redacted]
anonymous [redacted]
type snowflake

SnowflakeIntake

{
  "catalog": "string",
  "cloudStorageCredentialId": "string",
  "cloudStorageType": "azure",
  "credentialId": "string",
  "dataStoreId": "string",
  "externalStage": "string",
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "snowflake"
}

Properties

Name Type Required Restrictions Description
catalog string false none The name of the specified database catalog to read input data from.
cloudStorageCredentialId string¦null false none The ID of the credential holding information about a user with read access to the cloud storage.
cloudStorageType string false none Type name for cloud storage
credentialId string¦null false none The ID of the credential holding information about a user with read access to the Snowflake data source.
dataStoreId string true none ID of the data store to connect to
externalStage string true none External storage
query string false none A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}. See the sketch after the enumerated values below.
schema string false none The name of the specified database schema to read input data from.
table string false none The name of the specified database table to read input data from.
type string true none Type name for this intake type

Enumerated Values

Property Value
cloudStorageType azure
cloudStorageType gcp
cloudStorageType s3
type snowflake
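
A sketch of SnowflakeIntake settings that scores only rows added since the last completed run, using one of the template variables noted above. The IDs, stage, table, and column names are hypothetical placeholders.

{
  "credentialId": "your-credential-id",
  "dataStoreId": "your-data-store-id",
  "externalStage": "your_external_stage",
  "query": "SELECT * FROM SCORING.INPUT WHERE updated_at > '{{ last_completed_run_time }}'",
  "type": "snowflake"
}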

SnowflakeOutput

{
  "catalog": "string",
  "cloudStorageCredentialId": "string",
  "cloudStorageType": "azure",
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "externalStage": "string",
  "schema": "string",
  "statementType": "insert",
  "table": "string",
  "type": "snowflake"
}

Properties

Name Type Required Restrictions Description
catalog string false none The name of the specified database catalog to write output data to.
cloudStorageCredentialId string¦null false none The ID of the credential holding information about a user with write access to the cloud storage.
cloudStorageType string false none Type name for cloud storage
createTableIfNotExists boolean false none Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId string¦null false none The ID of the credential holding information about a user with write access to the Snowflake data source.
dataStoreId string true none ID of the data store to connect to
externalStage string true none External storage
schema string false none The name of the specified database schema to write results to.
statementType string true none The statement type to use when writing the results.
table string true none The name of the specified database table to write results to.
type string true none Type name for this output type

Enumerated Values

Property Value
cloudStorageType azure
cloudStorageType gcp
cloudStorageType s3
statementType insert
statementType create_table
statementType createTable
type snowflake
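
A sketch of SnowflakeOutput settings that inserts results into a table, creating it first if it does not already exist. All identifiers are hypothetical placeholders.

{
  "createTableIfNotExists": true,
  "credentialId": "your-credential-id",
  "dataStoreId": "your-data-store-id",
  "externalStage": "your_external_stage",
  "schema": "SCORED",
  "statementType": "insert",
  "table": "PREDICTIONS",
  "type": "snowflake"
}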

SnowflakeOutputAdaptor

{
  "catalog": "string",
  "cloudStorageCredentialId": "string",
  "cloudStorageType": "azure",
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "externalStage": "string",
  "schema": "string",
  "statementType": "insert",
  "table": "string",
  "type": "snowflake"
}

Properties

Name Type Required Restrictions Description
catalog string false none The name of the specified database catalog to write output data to.
cloudStorageCredentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the credential holding information about a user with write access to the cloud storage.

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
cloudStorageType string false none Type name for cloud storage
createTableIfNotExists boolean false none Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the credential holding information about a user with write access to the Snowflake data source.

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
dataStoreId any true none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false none ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
externalStage string true none External storage
schema string false none The name of the specified database schema to write results to.
statementType string true none The statement type to use when writing the results.
table string true none The name of the specified database table to write results to.
type string true none Type name for this output type

Enumerated Values

Property Value
anonymous [redacted]
cloudStorageType azure
cloudStorageType gcp
cloudStorageType s3
anonymous [redacted]
anonymous [redacted]
statementType insert
statementType create_table
statementType createTable
type snowflake

SynapseDataStreamer

{
  "cloudStorageCredentialId": "string",
  "credentialId": "string",
  "dataStoreId": "string",
  "externalDataSource": "string",
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "synapse"
}

Properties

Name Type Required Restrictions Description
cloudStorageCredentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the Azure credential holding information about a user with read access to the cloud storage.

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
credentialId any false none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string¦null false none The ID of the credential holding information about a user with read access to the JDBC data source.

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
dataStoreId any true none Either the populated value of the field or [redacted] due to permission settings

oneOf

Name Type Required Restrictions Description
» anonymous string false none ID of the data store to connect to

xor

Name Type Required Restrictions Description
» anonymous string false none none

continued

Name Type Required Restrictions Description
externalDataSource string true none External data source name
query string false none A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}
schema string false none The name of the specified database schema to read input data from.
table string false none The name of the specified database table to read input data from.
type string true none Type name for this intake type

Enumerated Values

Property Value
anonymous [redacted]
anonymous [redacted]
anonymous [redacted]
type synapse

SynapseIntake

{
  "cloudStorageCredentialId": "string",
  "credentialId": "string",
  "dataStoreId": "string",
  "externalDataSource": "string",
  "query": "string",
  "schema": "string",
  "table": "string",
  "type": "synapse"
}

Properties

Name Type Required Restrictions Description
cloudStorageCredentialId string¦null false none The ID of the Azure credential holding information about a user with read access to the cloud storage.
credentialId string¦null false none The ID of the credential holding information about a user with read access to the JDBC data source.
dataStoreId string true none ID of the data store to connect to
externalDataSource string true none External data source name
query string false none A self-supplied SELECT statement of the dataset you wish to score. Helpful for supplying a more fine-grained selection of data than is achievable through the "table" and/or "schema" parameters alone. If this job is executed with a job definition, template variables are available and will be substituted with timestamps: {{ current_run_timestamp }}, {{ last_completed_run_time }}, {{ last_scheduled_run_time }}, {{ next_scheduled_run_time }}, {{ current_run_time }}. See the sketch after the enumerated values below.
schema string false none The name of the specified database schema to read input data from.
table string false none The name of the specified database table to read input data from.
type string true none Type name for this intake type

Enumerated Values

Property Value
type synapse
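
SynapseIntake mirrors SnowflakeIntake, but references an external data source rather than an external stage, and the cloud storage credential is an Azure credential. A minimal sketch with hypothetical placeholders:

{
  "dataStoreId": "your-data-store-id",
  "externalDataSource": "your_external_data_source",
  "query": "SELECT * FROM scoring.input WHERE updated_at > '{{ last_completed_run_time }}'",
  "type": "synapse"
}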

SynapseOutput

{
  "cloudStorageCredentialId": "string",
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",
  "externalDataSource": "string",
  "schema": "string",
  "statementType": "insert",
  "table": "string",
  "type": "synapse"
}

Properties

Name Type Required Restrictions Description
cloudStorageCredentialId string¦null false none The ID of the credential holding information about a user with write access to the cloud storage.
createTableIfNotExists boolean false none Attempt to create the table first if no existing one is detected, before writing data with the strategy defined in the statementType parameter.
credentialId string¦null false none The ID of the credential holding information about a user with write access to the JDBC data source.
dataStoreId string true none ID of the data store to connect to
externalDataSource string true none External data source name
schema string false none The name of the specified database schema to write results to.
statementType string true none The statement type to use when writing the results.
table string true none The name of the specified database table to write results to.
type string true none Type name for this output type

Enumerated Values

Property Value
statementType insert
statementType create_table
statementType createTable
type synapse

SynapseOutputAdaptor

{
  "cloudStorageCredentialId": "string",
  "createTableIfNotExists": false,
  "credentialId": "string",
  "dataStoreId": "string",