Projects¶
This page outlines the operations, endpoints, parameters, and example requests and responses for Projects.
GET /api/v2/calendarCountryCodes/¶
Retrieve the list of country codes for which preloaded calendar generation can be requested.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/calendarCountryCodes/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
offset | query | integer | false | Number of results to skip. |
limit | query | integer | false | At most this many results are returned. The default may change without notice. |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"code": "string",
"name": "string"
}
],
"next": "string",
"previous": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Request for the list of allowed country codes that have the generated preloaded calendars. | PreloadedCalendarListResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/calendars/¶
List all the calendars which the user has access to.
Code samples¶
# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/calendars/?offset=0&limit=0" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | query | string | false | Optional, if provided will filter returned calendars to those being used in the specified project. |
offset | query | integer | true | Optional (default: 0), this many results will be skipped. |
limit | query | integer | true | Optional (default: 0), at most this many results will be returned. If 0, all results will be returned. |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"created": "2019-08-24T14:15:22Z",
"datetimeFormat": "%m/%d/%Y",
"earliestEvent": "2019-08-24T14:15:22Z",
"id": "string",
"latestEvent": "2019-08-24T14:15:22Z",
"multiseriesIdColumns": [
"string"
],
"name": "string",
"numEventTypes": 0,
"numEvents": 0,
"projectId": [
"string"
],
"role": "ADMIN",
"source": "string"
}
],
"next": "string",
"previous": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | A list of Calendar objects. | CalendarListResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/calendars/fileUpload/¶
Create a calendar from a file in a csv or xlsx format. The calendar file specifies the dates or events in a dataset such that DataRobot automatically derives and creates special features based on the calendar events (e.g., time until the next event, labeling the most recent event).
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/calendars/fileUpload/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"file": "string",
"multiseriesIdColumns": "string",
"name": "string"
}
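Because this endpoint accepts a csv or xlsx file, the request is typically sent as a file upload rather than inline JSON. The sketch below is a hedged illustration using curl's multipart form support; the local filename events.csv and the calendar name are placeholders, and the field names mirror the CalendarFileUpload schema above, so confirm the expected encoding against that schema before relying on it.
# Hypothetical example: upload a local CSV as a new calendar (multipart form)
curl -X POST https://app.datarobot.com/api/v2/calendars/fileUpload/ \
-H "Authorization: Bearer {access-token}" \
-F "file=@events.csv" \
-F "name=Holiday calendar"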
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | CalendarFileUpload | false | none |
Example responses¶
202 Response
{}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Request for calendar generation was submitted. See Location header. | Empty |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | | A url that can be polled to check the status. |
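The Location header points at a status resource that can be polled until the calendar job resolves. A minimal shell sketch follows; the /api/v2/status/{statusId}/ path and the STATUS_URL variable name are illustrative assumptions, so use the exact URL returned in the Location header.
# Hypothetical polling sketch: request the status URL returned in the Location header
STATUS_URL="https://app.datarobot.com/api/v2/status/{statusId}/"
curl -X GET "$STATUS_URL" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"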
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/calendars/fromCountryCode/¶
Initialize generation of preloaded calendars. Preloaded calendars are available only for time series projects. Preloaded calendars do not support multiseries calendars.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/calendars/fromCountryCode/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"countryCode": "AR",
"endDate": "2019-08-24T14:15:22Z",
"startDate": "2019-08-24T14:15:22Z"
}
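As a hedged example, the same body can be submitted with curl as shown below; the country code must be one of the codes returned by GET /api/v2/calendarCountryCodes/, and the date range values are placeholders.
# Hypothetical example: generate a preloaded calendar for a country over a given date range
curl -X POST https://app.datarobot.com/api/v2/calendars/fromCountryCode/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"countryCode": "AR", "startDate": "2019-01-01T00:00:00Z", "endDate": "2019-12-31T00:00:00Z"}'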
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | PreloadedCalendar | false | none |
Example responses¶
202 Response
{}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Request for calendar generation was submitted. See Location header. | Empty |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | | A url that can be polled to check the status. |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/calendars/fromDataset/¶
Create a calendar from a dataset.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/calendars/fromDataset/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"datasetId": "string",
"datasetVersionId": "string",
"deleteOnError": true,
"multiseriesIdColumns": [
"string"
],
"name": "string"
}
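A hedged curl sketch of this request follows; the dataset ID is a placeholder, and datasetVersionId, multiseriesIdColumns, and deleteOnError are omitted on the assumption that they are optional.
# Hypothetical example: build a calendar from a catalog dataset
curl -X POST https://app.datarobot.com/api/v2/calendars/fromDataset/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"datasetId": "5f3f1c0e9b2a4c001a000000", "name": "Events calendar"}'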
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | CalendarFromDataset | false | none |
Example responses¶
202 Response
{
"statusId": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Successfully created a calendar from the dataset. | CreatedCalendarDatasetResponse |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | | A url that can be polled to check the status. |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/calendars/{calendarId}/¶
Delete a calendar. This can only be done if all projects and deployments using the calendar have been deleted.
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/calendars/{calendarId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
calendarId | path | string | true | The ID of this calendar. |
Example responses¶
204 Response
{}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | Calendar successfully deleted. | Empty |
404 | Not Found | Invalid calendarId provided, or user does not have permissions to delete calendar. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/calendars/{calendarId}/¶
Retrieve all the information about a calendar, such as the total number of event dates, the earliest calendar event date, and the IDs of projects currently using this calendar.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/calendars/{calendarId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
calendarId | path | string | true | The ID of this calendar. |
Example responses¶
200 Response
{
"created": "2019-08-24T14:15:22Z",
"datetimeFormat": "%m/%d/%Y",
"earliestEvent": "2019-08-24T14:15:22Z",
"id": "string",
"latestEvent": "2019-08-24T14:15:22Z",
"multiseriesIdColumns": [
"string"
],
"name": "string",
"numEventTypes": 0,
"numEvents": 0,
"projectId": [
"string"
],
"role": "ADMIN",
"source": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Request for a Calendar object was successful. | CalendarRecord |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/calendars/{calendarId}/¶
Update a calendar's name
Code samples¶
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/calendars/{calendarId}/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"name": "string"
}
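For example, a hedged rename request using the schema above; the calendar ID and new name are placeholders.
# Hypothetical example: rename a calendar
curl -X PATCH https://app.datarobot.com/api/v2/calendars/{calendarId}/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"name": "Retail holidays 2020"}'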
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
calendarId | path | string | true | The ID of this calendar. |
body | body | CalendarNameUpdate | false | none |
Example responses¶
200 Response
{}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Calendar name successfully updated. | Empty |
404 | Not Found | Invalid calendarId provided, or user does not have permissions to update calendar. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/calendars/{calendarId}/accessControl/¶
Get a list of users who have access to this calendar and their roles on the calendar.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/calendars/{calendarId}/accessControl/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
username | query | string | false | Optional, only return the access control information for a user with this username. Should not be specified if userId is specified. |
userId | query | string | false | Optional, only return the access control information for a user with this user ID. Should not be specified if username is specified. |
offset | query | integer | false | Optional (default: 0), this many results will be skipped. |
limit | query | integer | false | Optional (default: 0), at most this many results will be returned. If 0, all results will be returned. |
calendarId | path | string | true | The ID of this calendar. |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"canShare": true,
"role": "ADMIN",
"userId": "string",
"username": "string"
}
],
"next": "string",
"previous": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Request for the list of users who have access to this calendar and their roles on the calendar was successful. | CalendarAccessControlListResponse |
400 | Bad Request | Both username and userId were specified. | None |
404 | Not Found | Entity not found. Either the calendar does not exist or the user does not have permissions to view the calendar. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/calendars/{calendarId}/accessControl/¶
Update the access control for this calendar. See the entity sharing documentation for more information.
Code samples¶
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/calendars/{calendarId}/accessControl/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"users": [
{
"role": "ADMIN",
"username": "string"
}
]
}
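As a hedged illustration, the request below grants a single user the ADMIN role, mirroring the example body above; the username is a placeholder.
# Hypothetical example: grant a user the ADMIN role on this calendar
curl -X PATCH https://app.datarobot.com/api/v2/calendars/{calendarId}/accessControl/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"users": [{"username": "colleague@example.com", "role": "ADMIN"}]}'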
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
calendarId | path | string | true | The ID of this calendar. |
body | body | CalendarAccessControlUpdate | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Request to update the access control for this calendar was successful. | None |
404 | Not Found | Invalid calendarId provided, or user has no access whatsoever on the specified calendar. | None |
422 | Unprocessable Entity | Invalid username provided to modify access for the specified calendar. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/deletedProjects/¶
Retrieve a list of soft-deleted projects matching search criteria
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/deletedProjects/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
searchFor | query | string | false | Project or dataset name to filter by |
creator | query | string | false | Creator ID to filter projects by |
organization | query | string | false | ID of the organization that projects should belong to. A project belongs to an organization if the user who created the project is part of that organization. If there are no users in the organization, then no projects will match the query. |
deletedBefore | query | string(date-time) | false | ISO-8601 formatted date projects were deleted before |
deletedAfter | query | string(date-time) | false | ISO-8601 formatted date projects were deleted after |
projectId | query | string | false | Project ID to search |
limit | query | integer | false | At most this many results are returned. |
offset | query | integer | false | This many results will be skipped. |
orderBy | query | string | false | Field to order deleted projects by. |
Enumerated Values¶
Parameter | Value |
---|---|
orderBy | [projectId , projectName , datasetName , deletedOn , deletedBy , creator , -projectId , -projectName , -datasetName , -deletedOn , -deletedBy , -creator ] |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"createdBy": {
"email": "string",
"id": "string"
},
"deletedBy": {
"email": "string",
"id": "string"
},
"deletionTime": "2019-08-24T14:15:22Z",
"fileName": "string",
"id": "string",
"organization": {
"id": "string",
"name": "string"
},
"projectName": "Untitled Project",
"scheduledForDeletion": true
}
],
"next": "http://example.com",
"previous": "http://example.com"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | List of soft-deleted projects | DeletedProjectListResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/deletedProjects/{projectId}/¶
Recover (undelete) soft-deleted project
Code samples¶
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/deletedProjects/{projectId}/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"action": "undelete"
}
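Putting the pieces together, a hedged example of recovering one project; the project ID is a placeholder.
# Hypothetical example: recover a soft-deleted project
curl -X PATCH https://app.datarobot.com/api/v2/deletedProjects/{projectId}/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"action": "undelete"}'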
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
body | body | ProjectRecover | false | none |
Example responses¶
200 Response
{
"message": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Recovery operation result description | ProjectRecoverResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/deletedProjectsCount/¶
Get the current number of deleted projects matching the search criteria. The value is limited by the DELETED_PROJECTS_BATCH_LIMIT system setting: the actual number of deleted projects can be greater than the limit, but counting stops once the limit is reached.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/deletedProjectsCount/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
searchFor | query | string | false | Project or dataset name to filter by |
creator | query | string | false | Creator ID to filter projects by |
organization | query | string | false | ID of the organization that projects should belong to. A project belongs to an organization if the user who created the project is part of that organization. If there are no users in the organization, then no projects will match the query. |
deletedBefore | query | string(date-time) | false | ISO-8601 formatted date projects were deleted before |
deletedAfter | query | string(date-time) | false | ISO-8601 formatted date projects were deleted after |
projectId | query | string | false | Project ID to search |
limit | query | integer | false | Count deleted projects until specified value reached. |
Example responses¶
200 Response
{
"deletedProjectsCount": 0,
"projectCountLimit": 0,
"valueExceedsLimit": true
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Soft-deleted projects amount, current counting limit value and boolean flag to notify if an actual amount of soft-deleted projects in the system exceeds the limit value. | DeletedProjectCountResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/hdfsProjects/¶
Create a project from an HDFS file via the WebHDFS API. Specify the file using a URL and, optionally, a port and user/password credentials. For example, {"url": "hdfs://<ip>/path/to/file.csv", "port": "50070"}.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/hdfsProjects/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"password": "string",
"port": 0,
"projectName": "string",
"url": "http://example.com",
"user": "string"
}
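Following the example in the description, a hedged request might look like the following; the HDFS host, path, and port are placeholders, and the user/password fields are omitted on the assumption that the cluster allows unauthenticated WebHDFS access.
# Hypothetical example: create a project from a file reachable over WebHDFS
curl -X POST https://app.datarobot.com/api/v2/hdfsProjects/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"url": "hdfs://10.0.0.1/path/to/file.csv", "port": 50070, "projectName": "HDFS project"}'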
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | HdfsProjectCreate | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projectCleanupJobs/¶
Get async status of the project permadelete job
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projectCleanupJobs/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Example responses¶
200 Response
{
"jobs": [
{
"created": "2019-08-24T14:15:22Z",
"data": [
{
"message": "string",
"projectId": "string",
"status": "ABORTED"
}
],
"message": "string",
"status": "ABORTED",
"statusId": "string"
}
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Permadelete Job Status with details per project | ProjectNukeJobListStatus |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projectCleanupJobs/¶
Add a list of projects to permadelete and return the async status.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projectCleanupJobs/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"creator": "string",
"deletedAfter": "2019-08-24T14:15:22Z",
"deletedBefore": "2019-08-24T14:15:22Z",
"limit": 1000,
"offset": 0,
"organization": "string",
"projectIds": [
"string"
],
"searchFor": "string"
}
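A hedged sketch of submitting a permadelete job for two specific projects follows; the project IDs are placeholders, and the filter fields from the schema above are omitted.
# Hypothetical example: permanently delete two soft-deleted projects by ID
curl -X POST https://app.datarobot.com/api/v2/projectCleanupJobs/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"projectIds": ["5f3f1c0e9b2a4c001a000001", "5f3f1c0e9b2a4c001a000002"]}'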
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | ProjectNuke | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Location URL to check permadelete status per project | None |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | | A url that can be polled to check the status. |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/projectCleanupJobs/{statusId}/¶
Stop permadelete job, if possible
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/projectCleanupJobs/{statusId}/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
statusId | path | string | true | The ID of the status object. |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projectCleanupJobs/{statusId}/¶
Get async status of the project permadelete job
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projectCleanupJobs/{statusId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
statusId | path | string | true | The ID of the status object. |
Example responses¶
200 Response
{
"created": "2019-08-24T14:15:22Z",
"data": [
{
"message": "string",
"projectId": "string",
"status": "ABORTED"
}
],
"message": "string",
"status": "ABORTED",
"statusId": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Permadelete Job Status with details per project | ProjectNukeJobStatus |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projectCleanupJobs/{statusId}/download/¶
Get a file containing a per-project report of permanent deletion.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projectCleanupJobs/{statusId}/download/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
statusId | path | string | true | The ID of the status object. |
Example responses¶
200 Response
{
"created": "2019-08-24T14:15:22Z",
"data": [
{
"message": "string",
"projectId": "string",
"status": "ABORTED"
}
],
"message": "string",
"status": "ABORTED",
"statusId": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | JSON-formatted project permadeletion report. | ProjectNukeJobStatus |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
200 | Content-Disposition | string | Contains an auto generated filename for this download ('attachment;filename="project_permadeletion_ |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projectCleanupJobs/{statusId}/summary/¶
Get the number of projects whose deletion finished in a particular state.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projectCleanupJobs/{statusId}/summary/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
statusId | path | string | true | The ID of the status object. |
Example responses¶
200 Response
{
"jobId": "string",
"summary": {
"aborted": 0,
"completed": 0,
"error": 0,
"expired": 0
}
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Project permanent deletion job status to occurrence count | ProjectNukeJobStatusSummary |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projectClones/¶
Create a clone of an existing project.
The resulting project will begin initial exploratory data analysis and will shortly be ready for setting the target of the new project.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projectClones/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"copyOptions": false,
"projectId": "string",
"projectName": "string"
}
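As a hedged example, the request below clones a project into a new one with a chosen name; the project ID is a placeholder and copyOptions is left at its default.
# Hypothetical example: clone an existing project under a new name
curl -X POST https://app.datarobot.com/api/v2/projectClones/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"projectId": "5f3f1c0e9b2a4c001a000000", "projectName": "Cloned project"}'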
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | ProjectClone | false | none |
Example responses¶
200 Response
{
"pid": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Project cloning has successfully started. See the Location header. | ProjectCreateResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/¶
List all available projects.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectName | query | string | false | If provided, filters returned projects to those with matching names. |
projectId | query | any | false | If provided, filters returned projects to those with matching project IDs. |
orderBy | query | string | false | If provided, orders the results by this field. |
featureDiscovery | query | string | false | If provided, returns only feature discovery projects. |
offset | query | integer | false | This many results will be skipped. |
limit | query | integer | false | At most this many results are returned. |
Enumerated Values¶
Parameter | Value |
---|---|
orderBy | [projectName , -projectName ] |
featureDiscovery | [false , False , true , True ] |
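For instance, a hedged request combining several of the query parameters above; the project name filter is a placeholder, and the URL is quoted so the shell does not interpret the ampersands.
# Hypothetical example: list projects whose name matches a filter, ordered by name
curl -X GET "https://app.datarobot.com/api/v2/projects/?projectName=churn&orderBy=projectName&limit=10" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"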
Example responses¶
200 Response
[
{
"advancedOptions": {
"allowedPairwiseInteractionGroups": [
[
"string"
]
],
"blendBestModels": true,
"blueprintThreshold": 0,
"considerBlendersInRecommendation": true,
"defaultMonotonicDecreasingFeaturelistId": "string",
"defaultMonotonicIncreasingFeaturelistId": "string",
"downsampledMajorityRows": 0,
"downsampledMinorityRows": 0,
"eventsCount": "string",
"exposure": "string",
"majorityDownsamplingRate": 0,
"minSecondaryValidationModelCount": true,
"offset": [
"string"
],
"onlyIncludeMonotonicBlueprints": false,
"prepareModelForDeployment": true,
"responseCap": true,
"runLeakageRemovedFeatureList": true,
"scoringCodeOnly": true,
"seed": "string",
"shapOnlyMode": true,
"smartDownsampled": true,
"weights": "string"
},
"autopilotClusterList": [
2
],
"autopilotMode": 0,
"created": "2019-08-24T14:15:22Z",
"featureEngineeringPredictionPoint": "string",
"fileName": "string",
"holdoutUnlocked": true,
"id": "string",
"maxClusters": 2,
"maxTrainPct": 0,
"maxTrainRows": 0,
"metric": "string",
"minClusters": 2,
"partition": {
"cvHoldoutLevel": "string",
"cvMethod": "random",
"datetimeCol": "string",
"datetimePartitionColumn": "string",
"holdoutLevel": "string",
"holdoutPct": 0,
"partitionKeyCols": [
"string"
],
"reps": 0,
"trainingLevel": "string",
"useTimeSeries": true,
"userPartitionCol": "string",
"validationLevel": "string",
"validationPct": 0,
"validationType": "CV"
},
"positiveClass": 0,
"projectName": "string",
"stage": "string",
"target": "string",
"targetType": "Binary",
"unsupervisedMode": true,
"unsupervisedType": "anomaly",
"useFeatureDiscovery": true
}
]
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The list of projects | Inline |
Response Schema¶
Status Code 200
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
anonymous | [ProjectDetailsResponse] | false | none | |
» advancedOptions | ProjectAdvancedOptionsResponse | true | The advanced options used for the project. | |
»» allowedPairwiseInteractionGroups | [array] | false | For GAM models - specify groups of columns for which pairwise interactions will be allowed. E.g. if set to [["A", "B", "C"], ["C", "D"]] then GAM models will allow interactions between columns AxB, BxC, AxC, CxD. All others (AxD, BxD) will not be considered. If not specified - all possible interactions will be considered by model. | |
»» blendBestModels | boolean | true | blend best models during Autopilot run [DEPRECATED] | |
»» blueprintThreshold | integer¦null | true | an upper bound on running time (in hours), such that models exceeding the bound will be excluded in subsequent autopilot runs | |
»» considerBlendersInRecommendation | boolean | false | Include blenders when selecting a model to prepare for deployment in an Autopilot Run.[DEPRECATED] | |
»» defaultMonotonicDecreasingFeaturelistId | string¦null | true | null or str, the ID of the featurelist specifying a set of features with a monotonically decreasing relationship to the target. All blueprints generated in the project use this as their default monotonic constraint, but it can be overridden at model submission time. | |
»» defaultMonotonicIncreasingFeaturelistId | string¦null | true | null or str, the ID of the featurelist specifying a set of features with a monotonically increasing relationship to the target. All blueprints generated in the project use this as their default monotonic constraint, but it can be overridden at model submission time. | |
»» downsampledMajorityRows | integer¦null | true | the total number of the majority rows available for modeling, or null for projects without smart downsampling | |
»» downsampledMinorityRows | integer¦null | true | the total number of the minority rows available for modeling, or null for projects without smart downsampling | |
»» eventsCount | string¦null | false | the name of the event count column, if specified, otherwise null. | |
»» exposure | string¦null | false | the name of the exposure column, if specified. | |
»» majorityDownsamplingRate | number¦null | true | the percentage between 0 and 100 of the majority rows that are kept, or null for projects without smart downsampling | |
»» minSecondaryValidationModelCount | boolean | false | Compute "All backtest" scores (datetime models) or cross validation scores for the specified number of highest ranking models on the Leaderboard, if over the Autopilot default. | |
»» offset | [string]¦null | false | the list of names of the offset columns, if specified, otherwise null. | |
»» onlyIncludeMonotonicBlueprints | boolean | true | whether the project only includes blueprints support enforcing monotonic constraints | |
»» prepareModelForDeployment | boolean¦null | true | Prepare model for deployment during Autopilot run. The preparation includes creating reduced feature list models, retraining best model on higher sample size, computing insights and assigning "RECOMMENDED FOR DEPLOYMENT" label. | |
»» responseCap | any | true | defaults to False, if specified used to cap the maximum response of a model |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | boolean | false | none |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | number | false | maximum: 1 minimum: 0.5 | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» runLeakageRemovedFeatureList | boolean | false | Run Autopilot on Leakage Removed feature list (if exists). | |
»» scoringCodeOnly | boolean | true | Keep only models that can be converted to scorable java code during Autopilot run. | |
»» seed | string¦null | true | defaults to null, the random seed to be used if specified | |
»» shapOnlyMode | boolean¦null | true | Keep only models that support SHAP values during Autopilot run. Use SHAP-based insights wherever possible. For pre SHAP-only mode projects this is always null. | |
»» smartDownsampled | boolean | true | whether the project uses smart downsampling to throw away excess rows of the majority class. Smart downsampled projects express all sample percents in terms of percent of minority rows (as opposed to percent of all rows). | |
»» weights | string¦null | true | the name of the weight column, if specified, otherwise null. | |
» autopilotClusterList | [integer]¦null | false | maxItems: 10 | Optional. A list of integers where each value will be used as the number of clusters in Autopilot model(s) for unsupervised clustering projects. Cannot be specified unless unsupervisedMode is true and unsupervisedType is set to 'clustering'. |
» autopilotMode | integer | true | The current autopilot mode, 0 for full autopilot, 2 for manual mode, 3 for quick mode, 4 for comprehensive mode | |
» created | string(date-time) | true | The time of project creation. | |
» featureEngineeringPredictionPoint | string¦null | false | The date column to be used as the prediction point for time-based feature engineering. | |
» fileName | string | true | The name of the dataset used to create the project. | |
» holdoutUnlocked | boolean | true | whether the holdout has been unlocked | |
» id | string | true | The ID of a project. | |
» maxClusters | integer¦null | false | maximum: 100 minimum: 2 | Only valid when unsupervisedMode is True and unsupervisedType is 'clustering'. The maximum number of clusters allowed when training clustering models. If specified, it cannot exceed the number of rows in a project's dataset divided by 50 and must be greater than or equal to minClusters. If unsupervisedMode is True and unsupervisedType is 'clustering', then defaults to the number of rows in the project's dataset divided by 50, or 100 if that number is greater than 100. |
» maxTrainPct | number | true | the maximum percentage of the dataset that can be used to successfully train a model without going into the validation data. | |
» maxTrainRows | integer | true | the maximum number of rows of the dataset that can be used to successfully train a model without going into the validation data | |
» metric | string | true | the metric used to select the best-performing models. | |
» minClusters | integer¦null | false | maximum: 100 minimum: 2 | Only valid when unsupervisedMode is True and unsupervisedType is 'clustering'. The minimum number of clusters allowed when training clustering models. If specified, it cannot exceed the number of rows in a project's dataset divided by 50 and must be less than or equal to maxClusters. If unsupervisedMode is True and unsupervisedType is 'clustering', then defaults to 2. |
» partition | ProjectPartitionResponse | true | The partition object of a project indicates the settings used for partitioning. Depending on the partitioning selected, many of the options will be null. Note that for projects whose cvMethod is "datetime", full specification of the partitioning method can be found at GET /api/v2/projects/{projectId}/datetimePartitioning/. | |
»» cvHoldoutLevel | any | true | if a user partition column was used with cross validation, the value assigned to the holdout set |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | number | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | integer | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» cvMethod | string | true | the partitioning method used. Note that "date" partitioning is an old partitioning method no longer supported for new projects, as of API version v2.0. | |
»» datetimeCol | string¦null | true | if a date partition column was used, the name of the column. Note that datetimeCol applies to an old partitioning method no longer supported for new projects, as of API version v2.0. | |
»» datetimePartitionColumn | string | false | if a datetime partition column was used, the name of the column | |
»» holdoutLevel | any | true | if a user partition column was used with train-validation-holdout split, the value assigned to the holdout set |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | number | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | integer | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» holdoutPct | number | true | the percentage of the dataset reserved for the holdout set | |
»» partitionKeyCols | [string]¦null | true | An array containing a single string - the name of the group partition column | |
»» reps | number¦null | true | if cross validation was used, the number of folds to use | |
»» trainingLevel | any | true | if a user partition column was used with train-validation-holdout split, the value assigned to the training set |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | number | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | integer | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» useTimeSeries | boolean¦null | true | A boolean value indicating whether a time series project was created as opposed to a regular project using datetime partitioning. | |
»» userPartitionCol | string¦null | true | if a user partition column was used, the name of the column | |
»» validationLevel | any | true | if a user partition column was used with train-validation-holdout split, the value assigned to the validation set |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | number | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»»» anonymous | integer | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» validationPct | number¦null | true | if train-validation-holdout split was used, the percentage of the dataset used for the validation set | |
»» validationType | string | true | either CV for cross-validation or TVH for train-validation-holdout split | |
» positiveClass | number¦null | true | if the project uses binary classification, the class designated to be the positive class. Otherwise, null. | |
» projectName | string | true | The name of a project. | |
» stage | string | true | the stage of the project - if modeling, then the target is successfully set, and modeling or predictions can proceed. | |
» target | string | true | the target of the project, null if project is unsupervised. | |
» targetType | string¦null | true | The target type of the project. | |
» unsupervisedMode | boolean | true | indicates whether a project is unsupervised. | |
» unsupervisedType | string¦null | false | Only valid when unsupervisedMode is True. The type of unsupervised project, anomaly or clustering. If unsupervisedMode, defaults to 'anomaly'. | |
» useFeatureDiscovery | boolean | true | A boolean value indicating whether a feature discovery project was created as opposed to a regular project. |
Enumerated Values¶
Property | Value |
---|---|
cvMethod | [random , user , stratified , group , datetime ] |
validationType | [CV , TVH ] |
targetType | [Binary , Regression , Multiclass , minInflated , Multilabel , TextGeneration ] |
unsupervisedType | [anomaly , clustering ] |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/¶
Create a new project.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"credentialData": {
"credentialType": "basic",
"password": "string",
"user": "string"
},
"credentialId": "string",
"dataSourceId": "string",
"datasetId": "string",
"datasetVersionId": "string",
"password": "string",
"projectName": "string",
"recipeId": "string",
"url": "string",
"useKerberos": true,
"user": "string"
}
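Only a subset of the body fields is typically needed for any one ingest source. A hedged sketch creating a project from a catalog dataset follows; the dataset ID and project name are placeholders, and the credential and URL fields from the schema are omitted.
# Hypothetical example: create a project from a catalog dataset
curl -X POST https://app.datarobot.com/api/v2/projects/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"datasetId": "5f3f1c0e9b2a4c001a000000", "projectName": "Churn model"}'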
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | ProjectCreate | false | none |
Example responses¶
202 Response
{
"pid": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Creation has successfully started. See the Location header. | ProjectCreateResponse |
403 | Forbidden | User does not have permission to use specified dataset item for project. | None |
404 | Not Found | The dataset item with the given ID or version ID is not found. | None |
422 | Unprocessable Entity | Ingest not yet completed. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/projects/{projectId}/¶
Delete a project
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/projects/{projectId}/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | The project has been successfully deleted. | None |
409 | Conflict | The project is in use and cannot be deleted. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/¶
Look up a particular project
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
Example responses¶
200 Response
{
"advancedOptions": {
"allowedPairwiseInteractionGroups": [
[
"string"
]
],
"blendBestModels": true,
"blueprintThreshold": 0,
"considerBlendersInRecommendation": true,
"defaultMonotonicDecreasingFeaturelistId": "string",
"defaultMonotonicIncreasingFeaturelistId": "string",
"downsampledMajorityRows": 0,
"downsampledMinorityRows": 0,
"eventsCount": "string",
"exposure": "string",
"majorityDownsamplingRate": 0,
"minSecondaryValidationModelCount": true,
"offset": [
"string"
],
"onlyIncludeMonotonicBlueprints": false,
"prepareModelForDeployment": true,
"responseCap": true,
"runLeakageRemovedFeatureList": true,
"scoringCodeOnly": true,
"seed": "string",
"shapOnlyMode": true,
"smartDownsampled": true,
"weights": "string"
},
"autopilotClusterList": [
2
],
"autopilotMode": "0",
"catalogId": "string",
"catalogVersionId": "string",
"created": "2019-08-24T14:15:22Z",
"externalTimeSeriesBaselineDatasetMetadata": {
"datasetId": "string",
"datasetName": "string"
},
"featureEngineeringPredictionPoint": "string",
"fileName": "string",
"holdoutUnlocked": true,
"id": "string",
"isScoringAvailableForModelsTrainedIntoValidationHoldout": true,
"maxClusters": 2,
"maxTrainPct": 0,
"maxTrainRows": 0,
"metric": "string",
"minClusters": 2,
"partition": {
"cvHoldoutLevel": "string",
"cvMethod": "random",
"datetimeCol": "string",
"datetimePartitionColumn": "string",
"holdoutLevel": "string",
"holdoutPct": 0,
"partitionKeyCols": [
"string"
],
"reps": 0,
"trainingLevel": "string",
"useTimeSeries": true,
"userPartitionCol": "string",
"validationLevel": "string",
"validationPct": 0,
"validationType": "CV"
},
"positiveClass": "string",
"primaryLocationColumn": "string",
"projectName": "string",
"queryGeneratorId": "string",
"quickrun": true,
"relationshipsConfigurationId": "string",
"segmentation": {
"parentProjectId": "string",
"segment": "string",
"segmentationTaskId": "string"
},
"stage": "modeling",
"target": "string",
"targetType": "Binary",
"unsupervisedMode": true,
"unsupervisedType": "anomaly",
"useFeatureDiscovery": true,
"useGpu": true
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The project. | ProjectRetrieveResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/projects/{projectId}/¶
Change the project name, worker count, or unlock the holdout. If any of the optional JSON arguments are not provided, that aspect of the project will not be altered.
Code samples¶
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"gpuWorkerCount": 0,
"holdoutUnlocked": "True",
"projectDescription": "string",
"projectName": "string",
"workerCount": 0
}
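A hedged example updating just the name and worker count, leaving the other fields untouched per the behavior described above; the values shown are placeholders.
# Hypothetical example: rename the project and set the worker count
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"projectName": "Churn model v2", "workerCount": 4}'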
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
body | body | ProjectUpdate | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The project was successfully updated | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/accessControl/¶
Get a list of users who have access to this project and their roles on the project.
Code samples¶
# You can also use wget
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/accessControl/?offset=0&limit=0" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
offset | query | integer | true | This many results will be skipped |
limit | query | integer | true | At most this many results are returned |
username | query | string | false | Optional, only return the access control information for a user with this username. |
userId | query | string | false | Optional, only return the access control information for a user with this user ID. |
projectId | path | string | true | The project ID |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"canShare": true,
"role": "string",
"userId": "string",
"username": "string"
}
],
"next": "string",
"previous": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The project's access control list. | SharingListResponse |
404 | Not Found | Either the project does not exist or the user does not have permissions to view the project. | None |
422 | Unprocessable Entity | Both username and userId were specified | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/projects/{projectId}/accessControl/¶
Set roles for users on this project.
Code samples¶
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/accessControl/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"data": [
{
"role": "ADMIN",
"username": "string"
}
],
"includeFeatureDiscoveryEntities": false,
"sendNotification": true
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | SharingUpdateOrRemove | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | Roles updated successfully. | None |
409 | Conflict | The request would leave the project without an owner. | None |
422 | Unprocessable Entity | One of the users in the request does not exist, or the request is otherwise invalid | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/projects/{projectId}/aim/¶
Start the data modeling process.
Code samples¶
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/aim/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"accuracyOptimizedMb": true,
"aggregationType": "total",
"allowPartialHistoryTimeSeriesPredictions": true,
"allowedPairwiseInteractionGroups": [
[
"string",
"string"
]
],
"allowedPairwiseInteractionGroupsFilename": "string",
"autopilotClusterList": [
2
],
"autopilotDataSamplingMethod": "random",
"autopilotDataSelectionMethod": "duration",
"autopilotWithFeatureDiscovery": true,
"backtests": [
{
"gapDuration": "string",
"index": 0,
"primaryTrainingEndDate": "2019-08-24T14:15:22Z",
"primaryTrainingStartDate": "2019-08-24T14:15:22Z",
"validationDuration": "string",
"validationEndDate": "2019-08-24T14:15:22Z",
"validationStartDate": "2019-08-24T14:15:22Z"
}
],
"biasMitigationFeatureName": "string",
"biasMitigationTechnique": "preprocessingReweighing",
"blendBestModels": true,
"blueprintThreshold": 1,
"calendarId": "string",
"chunkDefinitionId": "string",
"classMappingAggregationSettings": {
"aggregationClassName": "string",
"excludedFromAggregation": [],
"maxUnaggregatedClassValues": 1000,
"minClassSupport": 1
},
"considerBlendersInRecommendation": true,
"credentials": [
{
"catalogVersionId": "string",
"password": "string",
"url": "string",
"user": "string"
}
],
"crossSeriesGroupByColumns": [
"string"
],
"cvHoldoutLevel": "string",
"cvMethod": "random",
"dateRemoval": true,
"datetimePartitionColumn": "string",
"datetimePartitioningId": "string",
"defaultToAPriori": true,
"defaultToDoNotDerive": true,
"defaultToKnownInAdvance": true,
"differencingMethod": "auto",
"disableHoldout": false,
"eventsCount": "string",
"exponentiallyWeightedMovingAlpha": 1,
"exposure": "string",
"externalPredictions": [
"string"
],
"externalTimeSeriesBaselineDatasetId": "string",
"externalTimeSeriesBaselineDatasetName": "string",
"fairnessMetricsSet": "proportionalParity",
"fairnessThreshold": 1,
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDiscoverySupervisedFeatureReduction": true,
"featureEngineeringPredictionPoint": "string",
"featureSettings": [
{
"aPriori": true,
"doNotDerive": true,
"featureName": "string",
"knownInAdvance": true
}
],
"featurelistId": "string",
"forecastWindowEnd": 0,
"forecastWindowStart": 0,
"gapDuration": "string",
"holdoutDuration": "string",
"holdoutEndDate": "2019-08-24T14:15:22Z",
"holdoutLevel": "string",
"holdoutPct": 98,
"holdoutStartDate": "2019-08-24T14:15:22Z",
"includeBiasMitigationFeatureAsPredictorVariable": true,
"incrementalLearningEarlyStoppingRounds": 0,
"incrementalLearningOnBestModel": true,
"incrementalLearningOnlyMode": true,
"isHoldoutModified": true,
"majorityDownsamplingRate": 0,
"metric": "string",
"minSecondaryValidationModelCount": 10,
"mode": "0",
"modelSplits": 5,
"monotonicDecreasingFeaturelistId": "string",
"monotonicIncreasingFeaturelistId": "string",
"multiseriesIdColumns": [
"string"
],
"numberOfBacktests": 0,
"offset": [
"string"
],
"onlyIncludeMonotonicBlueprints": false,
"partitionKeyCols": [
"string"
],
"periodicities": [
{
"timeSteps": 0,
"timeUnit": "MILLISECOND"
}
],
"positiveClass": "string",
"preferableTargetValue": "string",
"prepareModelForDeployment": true,
"primaryLocationColumn": "string",
"protectedFeatures": [
"string"
],
"quantileLevel": 0,
"quickrun": true,
"rateTopPctThreshold": 100,
"relationshipsConfigurationId": "string",
"reps": 2,
"responseCap": 0.5,
"runLeakageRemovedFeatureList": true,
"sampleStepPct": 0,
"scoringCodeOnly": true,
"seed": 999999999,
"segmentationTaskId": "string",
"shapOnlyMode": true,
"smartDownsampled": true,
"stopWords": [
"string"
],
"target": "string",
"targetType": "Binary",
"trainingLevel": "string",
"treatAsExponential": "auto",
"unsupervisedMode": false,
"unsupervisedType": "anomaly",
"useCrossSeriesFeatures": true,
"useGpu": true,
"useProjectSettings": true,
"useSupervisedFeatureReduction": true,
"useTimeSeries": false,
"userPartitionCol": "string",
"validationDuration": "string",
"validationLevel": "string",
"validationPct": 99,
"validationType": "CV",
"weights": "string",
"windowsBasisUnit": "MILLISECOND"
}
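Most of the fields above are optional; in the simplest case, a supervised project only needs a target. The sketch below is a hedged minimal request in which the target column name readmitted is a placeholder, not a requirement of the schema.
# Hypothetical minimal example: set the target and start modeling with default settings
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/aim/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"target": "readmitted"}'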
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
body | body | Aim | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Autopilot has successfully started. See the Location header. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/autopilot/¶
Pause or unpause the autopilot for a project.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/autopilot/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"command": "start"
}
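A hedged example issuing the command from the body schema above; the project ID is a placeholder, and the accepted command values are defined by the Autopilot schema.
# Hypothetical example: send an autopilot control command to a project
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/autopilot/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"command": "start"}'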
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
body | body | Autopilot | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Request received | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/autopilots/¶
Start autopilot on provided featurelist.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/autopilots/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"autopilotClusterList": [
2
],
"blendBestModels": true,
"considerBlendersInRecommendation": true,
"featurelistId": "string",
"mode": "auto",
"prepareModelForDeployment": true,
"runLeakageRemovedFeatureList": true,
"scoringCodeOnly": true,
"useGpu": true
}
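As a hedged illustration, the request below starts Autopilot on a specific featurelist; the featurelist ID is a placeholder and the mode value echoes the example body above.
# Hypothetical example: start Autopilot on a chosen featurelist
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/autopilots/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"featurelistId": "5f3f1c0e9b2a4c001a000003", "mode": "auto"}'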
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
body | body | AutopilotStart | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | Successfully started | None |
422 | Unprocessable Entity | Autopilot on this featurelist has already completed or is already in progress. This status code is also returned if target was not selected for specified project. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/batchTypeTransformFeatures/¶
Create multiple new features by changing the type of existing features.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/batchTypeTransformFeatures/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"parentNames": [
"string"
],
"prefix": "string",
"suffix": "string",
"variableType": "text"
}
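For example, a hedged request converting two existing columns to text and creating new features with a suffix; the column names and suffix are placeholders.
# Hypothetical example: create text-typed copies of two existing features
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/batchTypeTransformFeatures/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data '{"parentNames": ["comments", "notes"], "suffix": "_text", "variableType": "text"}'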
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project to create the feature in. |
body | body | BatchFeatureTransform | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Creation has successfully started. See the Location header. | None |
422 | Unprocessable Entity | Unable to process the request | None |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
200 | Location | string | | A url that can be polled to check the status. |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/batchTypeTransformFeaturesResult/{jobId}/¶
Retrieve the result of a batch variable type transformation.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/batchTypeTransformFeaturesResult/{jobId}/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"parentNames": [
"string"
],
"prefix": "string",
"suffix": "string",
"variableType": "text"
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project containing transformed features. |
jobId | path | integer | true | ID of the batch variable type transformation job. |
body | body | BatchFeatureTransform | false | none |
Example responses¶
200 Response
{
"failures": {},
"newFeatureNames": [
"string"
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Names of successfully created features. | BatchFeatureTransformRetrieveResponse |
404 | Not Found | Could not find specified transformation report | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/calendarEvents/¶
List available calendar events for the project.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/calendarEvents/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
seriesId | query | string | false | The name of the series to retrieve calendar events for. If specified, retrieves only events specific to that series plus events common to all series. |
startDate | query | string(date-time) | false | The start of the date range to return, inclusive. If not specified, start date for the first calendar event will be used. |
endDate | query | string(date-time) | false | The end of the date range to return, exclusive. If not specified, end date capturing the last calendar event will be used. |
offset | query | integer | false | Optional (default: 0), this many results will be skipped. |
limit | query | integer | false | Optional (default: 1000), at most this many results will be returned. |
projectId | path | string | true | The project ID |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"date": "2019-08-24T14:15:22Z",
"name": "string",
"seriesId": "string"
}
],
"next": "string",
"previous": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | A list of calendar events. | CalendarEventsResponseQuery |
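The optional query parameters can be combined. For example, the following sketch (series name, dates, and limit are placeholders) lists events for a single series within a date range.
# Sketch: list events for one series within a date range; values are placeholders.
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/calendarEvents/?seriesId=store_1&startDate=2019-01-01T00:00:00Z&endDate=2019-12-31T00:00:00Z&limit=100" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"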
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/crossSeriesProperties/¶
Validate columns for potential use as the group-by column for cross-series functionality.
The group-by column is an optional setting that indicates how to further split series into related groups. For example, if each series represents sales of an individual product, the group-by column could be the product category, e.g., "clothing" or "sports equipment".
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/crossSeriesProperties/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"crossSeriesGroupByColumns": [
"string"
],
"datetimePartitionColumn": "string",
"multiseriesIdColumn": "string",
"userDefinedSegmentIdColumn": "string"
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | CrossSeriesGroupByColumnValidatePayload | false | none |
Example responses¶
202 Response
{
"message": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Cross-series group-by column validation job was successfully submitted. See Location header. | CrossSeriesGroupByColumnValidateResponse |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | A url that can be polled to check the status. |
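A brief sketch of submitting a candidate group-by column for validation (column names are placeholders); once the validation job finishes, eligible columns can be read back via GET /api/v2/projects/{projectId}/multiseriesIds/{multiseriesId}/crossSeriesProperties/, documented later on this page.
# Sketch: validate a candidate cross-series group-by column and print the status URL.
curl -s -D - -o /dev/null -X POST \
"https://app.datarobot.com/api/v2/projects/{projectId}/crossSeriesProperties/" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{"datetimePartitionColumn": "date", "multiseriesIdColumn": "store_id", "crossSeriesGroupByColumns": ["product_category"]}' \
| awk 'tolower($1) == "location:" {print $2}'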
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/discardedFeatures/¶
Get features which were discarded during the feature reduction process.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/discardedFeatures/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
search | query | string | false | Case insensitive search against discarded feature names. |
projectId | path | string | true | The project ID |
Example responses¶
200 Response
{
"count": 0,
"features": [
"string"
],
"remainingRestoreLimit": 0,
"totalRestoreLimit": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Discarded features. | DiscardedFeaturesResponse |
422 | Unprocessable Entity | Unable to process the request | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/externalTimeSeriesBaselineDataValidationJobs/¶
This route validates whether a provided catalog version ID can be used as a baseline for calculating metrics. This functionality is available only for time series projects. For a baseline dataset to be valid, the number of unique date and multiseries_id column rows must match the number of unique date and multiseries_id column rows in the uploaded training dataset. This functionality is limited to one forecast distance. Additionally, the catalog must be a snapshot.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/externalTimeSeriesBaselineDataValidationJobs/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"backtests": [
{
"validationEndDate": "2019-08-24T14:15:22Z",
"validationStartDate": "2019-08-24T14:15:22Z"
}
],
"catalogVersionId": "string",
"datetimePartitionColumn": "string",
"forecastWindowEnd": 0,
"forecastWindowStart": 0,
"holdoutEndDate": "2019-08-24T14:15:22Z",
"holdoutStartDate": "2019-08-24T14:15:22Z",
"multiseriesIdColumns": [
"string"
],
"target": "string"
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | ExternalTSBaselinePayload | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Validate baseline data that is provided in the form of a catalog version ID. We will confirm that the dataset contains the proper date, target column, and multiseries ID column. If the provided dataset meets the criteria, the job will be successful. | None |
403 | Forbidden | User does not have access to this functionality. | None |
422 | Unprocessable Entity | Unable to process the external time series baseline validation job. | None |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | A url that can be polled to check the status. |
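As a sketch, the request below submits the validation job from a prepared JSON payload (baseline_validation.json is a placeholder file containing a body like the one above) and prints the status URL from the Location header. After the job finishes, the outcome can be read with the GET route described next using the baselineValidationJobId.
# Sketch: submit the baseline validation payload from a local file and print the status URL.
curl -s -D - -o /dev/null -X POST \
"https://app.datarobot.com/api/v2/projects/{projectId}/externalTimeSeriesBaselineDataValidationJobs/" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
--data @baseline_validation.json \
| awk 'tolower($1) == "location:" {print $2}'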
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/externalTimeSeriesBaselineDataValidationJobs/{baselineValidationJobId}/¶
Retrieve the results of the validation job triggered via POST /api/v2/projects/{projectId}/externalTimeSeriesBaselineDataValidationJobs/ to confirm whether the external baseline dataset is valid.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/externalTimeSeriesBaselineDataValidationJobs/{baselineValidationJobId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project to retrieve the validation job information from. |
baselineValidationJobId | path | string | true | The id for the validation job |
Example responses¶
200 Response
{
"backtests": [
{
"validationEndDate": "2019-08-24T14:15:22Z",
"validationStartDate": "2019-08-24T14:15:22Z"
}
],
"baselineValidationJobId": "string",
"catalogVersionId": "string",
"datetimePartitionColumn": "string",
"forecastWindowEnd": 0,
"forecastWindowStart": 0,
"holdoutEndDate": "2019-08-24T14:15:22Z",
"holdoutStartDate": "2019-08-24T14:15:22Z",
"isExternalBaselineDatasetValid": true,
"message": "string",
"multiseriesIdColumns": [
"string"
],
"projectId": "string",
"target": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | ExternalTSBaselineResponse |
403 | Forbidden | User does not have access to this functionality. | None |
404 | Not Found | External time series validation job not found. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/featureDiscoveryDatasetDownload/¶
Download the project dataset with features added by feature discovery
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/featureDiscoveryDatasetDownload/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
datasetId | query | string | false | The ID of the dataset to use for the prediction. |
projectId | path | string | true | The project ID |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Project dataset file. | None |
404 | Not Found | Data is not found. | None |
422 | Unprocessable Entity | Unable to process the request. | None |
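Because the response is a file, it is usually saved directly to disk; a sketch (the output filename is arbitrary):
# Sketch: save the feature discovery dataset to a local file.
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/featureDiscoveryDatasetDownload/" \
-H "Authorization: Bearer {access-token}" \
-o feature_discovery_dataset.csv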
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/featureDiscoveryLogs/¶
Retrieve the feature discovery log content and log length for a feature discovery project. This route is only supported for feature discovery projects that have finished partitioning
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/featureDiscoveryLogs/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
offset | query | integer | false | Number of results to skip. |
limit | query | integer | false | At most this many results are returned. The default may change without notice. |
projectId | path | string | true | The project ID |
Example responses¶
200 Response
{
"count": 0,
"featureDiscoveryLog": [
"string"
],
"next": "http://example.com",
"previous": "http://example.com",
"totalLogLines": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Feature discovery log data. | FeatureDiscoveryLogListResponse |
422 | Unprocessable Entity | Unable to process the request. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/featureDiscoveryLogs/download/¶
Retrieve a text file containing the feature discovery log. This route is only supported for feature discovery projects that have finished partitioning.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/featureDiscoveryLogs/download/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Feature discovery log file. | None |
422 | Unprocessable Entity | Unable to process the request. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/featureDiscoveryRecipeSQLs/download/¶
Download feature discovery SQL recipe for a project
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/featureDiscoveryRecipeSQLs/download/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
modelId | query | string | false | Model ID to export recipe for |
statusOnly | query | string | false | Return status only for availability check |
asText | query | string | false | Determines whether to download the file or just return text. |
projectId | path | string | true | The project ID |
Enumerated Values¶
Parameter | Value |
---|---|
statusOnly | [false , False , true , True ] |
asText | [false , False , true , True ] |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Project feature discovery SQL recipe file. | None |
400 | Bad Request | Unable to process the request | None |
404 | Not Found | Data not found | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/featureDiscoveryRecipeSqlExports/¶
Generate feature discovery SQL recipe for a project
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/featureDiscoveryRecipeSqlExports/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"modelId": "string"
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | FeatureDiscoveryRecipeSQLsExport | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Creation has successfully started. See the Location header. | None |
404 | Not Found | Data not found | None |
422 | Unprocessable Entity | Unable to process the request | None |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | A url that can be polled to check the status. |
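A sketch of the typical sequence: request recipe generation for a model, wait for the job referenced by the Location header to finish, then download the recipe via GET /api/v2/projects/{projectId}/featureDiscoveryRecipeSQLs/download/ documented above. The modelId value is a placeholder.
# Sketch: generate the SQL recipe for a model.
curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/featureDiscoveryRecipeSqlExports/" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{"modelId": "{modelId}"}'

# After the generation job completes, download the recipe as text.
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/featureDiscoveryRecipeSQLs/download/?modelId={modelId}&asText=true" \
-H "Authorization: Bearer {access-token}"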
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/featureHistograms/{featureName}/¶
Get histogram chart data for a specific feature, which can be used to build histogram charts. Plot data returned is based on raw data that is calculated during initial project creation and updated after the project's target variable has been selected. The number of bins in the histogram is no greater than the requested limit.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/featureHistograms/{featureName}/?binLimit=60 \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
binLimit | query | integer | true | maximum number of bins in the returned plot |
key | query | string | false | The name of the key (from the top 50 keys) for which to retrieve the plot. Only required for Summarized Categorical features. |
projectId | path | string | true | The ID of the project |
featureName | path | string | true | The name of the feature. Note: DataRobot renames some features, so the feature name may not be the one from your original data. You can use GET /api/v2/projects/{projectId}/features/ to list the features and check the name. Note to users with non-ASCII feature names: the feature name should be UTF-8-encoded (before URL-quoting). |
Example responses¶
200 Response
{
"plot": [
{
"count": 0,
"label": "string",
"target": 0
}
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The feature histogram chart data | FeatureHistogramResponse |
404 | Not Found | A histogram is unavailable for this feature because the data contains unsupported feature types (e.g., image, location). | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/featureLineages/{featureLineageId}/¶
Retrieve single Feature Discovery feature lineage.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/featureLineages/{featureLineageId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project to retrieve a lineage from. |
featureLineageId | path | string | true | id of a feature lineage object to return. You can access the id with ModelingFeatureRetrieveController. |
Example responses¶
200 Response
{
"steps": [
{
"arguments": {},
"catalogId": "string",
"catalogVersionId": "string",
"description": "string",
"groupBy": [
"string"
],
"id": 0,
"isTimeAware": true,
"joinInfo": {
"joinType": "left, right",
"leftTable": {
"columns": [
"string"
],
"datasteps": [
1
]
},
"rightTable": {
"columns": [
"string"
],
"datasteps": [
1
]
}
},
"name": "string",
"parents": [
0
],
"stepType": "data",
"timeInfo": {
"duration": {
"duration": 0,
"timeUnit": "string"
},
"latest": {
"duration": 0,
"timeUnit": "string"
}
}
}
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | FeatureLineageResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/featurelists/¶
List all featurelists for a project.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/featurelists/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
sortBy | query | string | false | Property to sort featurelists by in the response |
searchFor | query | string | false | Limit results by specific featurelists. Performs a substring search for the term you provide in featurelist names. |
projectId | path | string | true | The project ID |
Enumerated Values¶
Parameter | Value |
---|---|
sortBy | [name , description , features , numModels , created , isUserCreated , -name , -description , -features , -numModels , -created , -isUserCreated ] |
Example responses¶
200 Response
[
{
"created": "string",
"description": "string",
"features": [
"string"
],
"id": "string",
"isUserCreated": true,
"name": "string",
"numModels": 0,
"projectId": "string"
}
]
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The list of featurelists | Inline |
Response Schema¶
Status Code 200
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
anonymous | [FeaturelistResponse] | false | none | |
» created | string | true | A timestamp string specifying when the featurelist was created. | |
» description | string¦null | false | User-friendly description of the featurelist, which can be updated by users. | |
» features | [string] | true | Names of features included in the featurelist. | |
» id | string | true | Featurelist ID. | |
» isUserCreated | boolean | true | Whether the featurelist was created manually by a user or by DataRobot automation. | |
» name | string | true | the name of the featurelist | |
» numModels | integer | true | The number of models that currently use this featurelist. A model is considered to use a featurelist if it is used to train the model or as a monotonic constraint featurelist, or if the model is a blender with at least one component model using the featurelist. | |
» projectId | string | true | Project ID the featurelist belongs to. |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/featurelists/¶
Create a new featurelist from list of feature names.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/featurelists/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"features": [
"string"
],
"name": "string",
"skipDatetimePartitionColumn": false
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | CreateFeaturelist | false | none |
Example responses¶
201 Response
{
"created": "string",
"description": "string",
"features": [
"string"
],
"id": "string",
"isUserCreated": true,
"name": "string",
"numModels": 0,
"projectId": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | The newly created featurelist in the same format as GET /api/v2/projects/{projectId}/featurelists/{featurelistId}/. | FeaturelistResponse |
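A minimal sketch of the request body in use; the featurelist name and feature names are placeholders and must match existing features in the project.
# Sketch: create a featurelist from two existing features; names are placeholders.
curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/featurelists/" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{"name": "my-shortlist", "features": ["feature_a", "feature_b"]}'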
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/projects/{projectId}/featurelists/{featurelistId}/¶
Delete a specified featurelist.
All models using a featurelist, whether as the training featurelist or as a monotonic constraint featurelist, will also be deleted when the deletion is executed and any queued or running jobs using it will be cancelled. Similarly, predictions made on these models will also be deleted. All the entities that are to be deleted with a featurelist are described as "dependencies" of it. When deleting a featurelist with dependencies, users must pass an additional query parameter deleteDependencies
to confirm they want to delete the featurelist and all its dependencies. Without that option, only featurelists with no dependencies may be successfully deleted.
Featurelists configured into the project as a default featurelist or as a default monotonic constraint featurelist cannot be deleted.
Featurelists used in a model deployment cannot be deleted until the model deployment is deleted.
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/projects/{projectId}/featurelists/{featurelistId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
dryRun | query | string | false | Preview the deletion results without actually deleting the featurelist. |
deleteDependencies | query | string | false | Automatically delete all dependencies of a featurelist. If false (default), will only delete the featurelist if it has no dependencies. The value of deleteDependencies will not be used if dryRun is true. If a featurelist has dependencies, deleteDependencies must be true for the request to succeed. |
projectId | path | string | true | The project ID. |
featurelistId | path | string | true | The featurelist ID. |
Enumerated Values¶
Parameter | Value |
---|---|
dryRun | [false , False , true , True ] |
deleteDependencies | [false , False , true , True ] |
Example responses¶
200 Response
{
"canDelete": "false",
"deletionBlockedReason": "string",
"dryRun": "false",
"numAffectedJobs": 0,
"numAffectedModels": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | FeaturelistDestroyResponse |
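A common pattern, shown here as a sketch, is to preview the deletion with dryRun first and then delete the featurelist together with its dependencies.
# Sketch: preview the impact of deleting a featurelist without deleting anything.
curl -X DELETE "https://app.datarobot.com/api/v2/projects/{projectId}/featurelists/{featurelistId}/?dryRun=true" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"

# Then delete the featurelist along with any dependent models and jobs.
curl -X DELETE "https://app.datarobot.com/api/v2/projects/{projectId}/featurelists/{featurelistId}/?deleteDependencies=true" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"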
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/featurelists/{featurelistId}/¶
Retrieve a single known feature list.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/featurelists/{featurelistId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
featurelistId | path | string | true | The featurelist ID. |
Example responses¶
200 Response
{
"created": "string",
"description": "string",
"features": [
"string"
],
"id": "string",
"isUserCreated": true,
"name": "string",
"numModels": 0,
"projectId": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Retrieve a single known feature list. | FeaturelistResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/projects/{projectId}/featurelists/{featurelistId}/¶
Update an existing featurelist by ID.
Code samples¶
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/featurelists/{featurelistId}/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"description": "string",
"name": "string"
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
featurelistId | path | string | true | The featurelist ID. |
body | body | UpdateFeaturelist | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | The featurelist was successfully updated. | None |
422 | Unprocessable Entity | Update failed due to an invalid payload. This may be because the name is identical to an existing featurelist name. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/features/¶
List the features from a project with descriptive information.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/features/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
sortBy | query | string | false | Property to sort features by in the response |
searchFor | query | string | false | Limit results by specific features. Performs a substring search for the term you provide in feature names. |
featurelistId | query | string | false | Filter features by a specific featurelist ID. |
forSegmentedAnalysis | query | string | false | When True, features returned will be filtered to those usable for segmented analysis. |
projectId | path | string | true | The project ID. |
Enumerated Values¶
Parameter | Value |
---|---|
sortBy | [name , id , importance , featureType , uniqueCount , naCount , mean , stdDev , median , min , max , -name , -id , -importance , -featureType , -uniqueCount , -naCount , -mean , -stdDev , -median , -min , -max ] |
forSegmentedAnalysis | [false , False , true , True ] |
Example responses¶
200 Response
[
{
"dataQualities": "ISSUES_FOUND",
"dateFormat": "string",
"featureLineageId": "string",
"featureType": "Boolean",
"id": 0,
"importance": 0,
"isRestoredAfterReduction": true,
"isZeroInflated": true,
"keySummary": {
"key": "string",
"summary": {
"dataQualities": "ISSUES_FOUND",
"max": 0,
"mean": 0,
"median": 0,
"min": 0,
"pctRows": 0,
"stdDev": 0
}
},
"language": "string",
"lowInformation": true,
"max": "string",
"mean": "string",
"median": "string",
"min": "string",
"multilabelInsights": {
"multilabelInsightsKey": "string"
},
"naCount": 0,
"name": "string",
"parentFeatureNames": [
"string"
],
"projectId": "string",
"stdDev": "string",
"targetLeakage": "FALSE",
"targetLeakageReason": "string",
"timeSeriesEligibilityReason": "string",
"timeSeriesEligible": true,
"timeStep": 0,
"timeUnit": "string",
"uniqueCount": 0
}
]
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The list of features | Inline |
Response Schema¶
Status Code 200
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
anonymous | [ProjectFeatureResponse] | false | none | |
» dataQualities | string | false | Data Quality Status | |
» dateFormat | string¦null | true | the date format string for how this feature was interpreted (or null if not a date feature). If not null, it will be compatible with https://docs.python.org/2/library/time.html#time.strftime . | |
» featureLineageId | string¦null | true | id of a lineage for automatically generated features. | |
» featureType | string | true | Feature type. | |
» id | integer | true | the feature ID. (Note: Throughout the API, features are specified using their names, not this ID.) | |
» importance | number¦null | true | numeric measure of the strength of relationship between the feature and target (independent of any model or other features) | |
» isRestoredAfterReduction | boolean | false | Whether feature is restored after feature reduction | |
» isZeroInflated | boolean¦null | false | Whether feature has an excessive number of zeros | |
» keySummary | any | false | Per key summaries for Summarized Categorical or Multicategorical columns |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | FeatureKeySummaryResponseValidatorSummarizedCategorical | false | For Summarized Categorical columns, this will contain statistics for the top 50 keys (truncated to 103 characters) | |
»»» key | string | true | Name of the key. | |
»»» summary | FeatureKeySummaryDetailsResponseValidatorSummarizedCategorical | true | Statistics of the key. | |
»»»» dataQualities | string | true | The indicator of data quality assessment of the feature. | |
»»»» max | number | true | Maximum value of the key. | |
»»»» mean | number | true | Mean value of the key. | |
»»»» median | number | true | Median value of the key. | |
»»»» min | number | true | Minimum value of the key. | |
»»»» pctRows | number | true | Percentage occurrence of key in the EDA sample of the feature. | |
»»»» stdDev | number | true | Standard deviation of the key. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | [FeatureKeySummaryResponseValidatorMultilabel] | false | For Multicategorical columns, this will contain statistics for the top classes | |
»»» key | string | true | Name of the key. | |
»»» summary | FeatureKeySummaryDetailsResponseValidatorMultilabel | true | Statistics of the key. | |
»»»» max | number | true | Maximum value of the key. | |
»»»» mean | number | true | Mean value of the key. | |
»»»» median | number | true | Median value of the key. | |
»»»» min | number | true | Minimum value of the key. | |
»»»» pctRows | number | true | Percentage occurrence of key in the EDA sample of the feature. | |
»»»» stdDev | number | true | Standard deviation of the key. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» language | string | false | Feature's detected language. | |
» lowInformation | boolean | true | whether feature has too few values to be informative | |
» max | any | true | maximum value of the EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | string | false | maximum value of the EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | number | false | maximum value of the EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» mean | any | true | arithmetic mean of the EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | string | false | arithmetic mean of the EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | number | false | arithmetic mean of the EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» median | any | true | median of the EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | string | false | median of the EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | number | false | median of the EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» min | any | true | minimum value of the EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | string | false | minimum value of the EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | number | false | minimum value of the EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» multilabelInsights | MultilabelInsightsResponse | false | Multilabel project specific information | |
»» multilabelInsightsKey | string | true | Key for multilabel insights, unique per project, feature, and EDA stage. The response will contain the key for the most recent, finished EDA stage. | |
» naCount | integer | true | number of missing values | |
» name | string | true | feature name | |
» parentFeatureNames | [string] | false | an array of string feature names indicating which features in the input data were used to create this feature if the feature is a transformation. | |
» projectId | string | true | the ID of the project the feature belongs to | |
» stdDev | any | true | standard deviation of EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | string | false | standard deviation of EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
»» anonymous | number | false | standard deviation of EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» targetLeakage | string | true | the detected level of risk for target leakage, if any. 'SKIPPED_DETECTION' indicates leakage detection was not run on the feature, 'FALSE' indicates no leakage, 'MODERATE_RISK' indicates a moderate risk of target leakage, and 'HIGH_RISK' indicates a high risk of target leakage. | |
» targetLeakageReason | string | true | descriptive sentence explaining the reason for target leakage. | |
» timeSeriesEligibilityReason | string | true | why the feature is ineligible for time series projects, or 'suitable' if it is eligible. | |
» timeSeriesEligible | boolean | true | whether this feature can be used as a datetime partitioning feature for time series projects. Only sufficiently regular date features can be selected as the datetime feature for time series projects. Always false for non-date features. Date features that cannot be used in datetime partitioning for a time series project may be eligible for an OTV project, which has less stringent requirements. | |
» timeStep | integer¦null | true | The minimum time step that can be used to specify time series windows. The units for this value are the timeUnit. When specifying windows for time series projects, all windows must have durations that are integer multiples of this number. Only present for date features that are eligible for time series projects and null otherwise. | |
» timeUnit | string¦null | true | the unit for the interval between values of this feature, e.g. DAY, MONTH, HOUR. When specifying windows for time series projects, the windows are expressed in terms of this unit. Only present for date features eligible for time series projects, and null otherwise. | |
» uniqueCount | integer | false | number of unique values |
Enumerated Values¶
Property | Value |
---|---|
dataQualities | [ISSUES_FOUND , NOT_ANALYZED , NO_ISSUES_FOUND ] |
featureType | [Boolean , Categorical , Currency , Date , Date Duration , Document , Image , Interaction , Length , Location , Multicategorical , Numeric , Percentage , Summarized Categorical , Text , Time ] |
dataQualities | [ISSUES_FOUND , NOT_ANALYZED , NO_ISSUES_FOUND ] |
targetLeakage | [FALSE , HIGH_RISK , MODERATE_RISK , SKIPPED_DETECTION ] |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/features/metrics/¶
List the appropriate metrics if a feature were chosen as the target. The metrics listed include both weighted and unweighted metrics; which are appropriate will depend on whether a weights column is used.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/features/metrics/?featureName=string \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
featureName | query | string | true | The name of the feature to check |
projectId | path | string | true | The project ID. |
Example responses¶
200 Response
{
"availableMetrics": [
"string"
],
"featureName": "string",
"metricDetails": [
{
"ascending": true,
"metricName": "string",
"supportsBinary": true,
"supportsMulticlass": true,
"supportsRegression": true,
"supportsTimeseries": true
}
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The feature's metrics | FeatureMetricsResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/features/{featureName}/¶
Retrieve the specified feature with descriptive information. Descriptive information for features also includes summary statistics as of v2.8. These are returned via the fields max, min, mean, median, and stdDev. These fields are formatted according to the original feature type of the feature. For example, the format will be numeric if your feature is numeric, in feet and inches if your feature is length type, in currency if your feature is currency type, in time format if your feature is time type, or in ISO date format if your feature is a date type. Numbers will be rounded so that they have at most two non-zero decimal digits. For projects created prior to v2.8, these descriptive statistics will not be available. Also, some features, like categorical and text features, may not have summary statistics.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/features/{featureName}/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The ID of the project |
featureName | path | string | true | The name of the feature. Note: DataRobot renames some features, so the feature name may not be the one from your original data. You can use GET /api/v2/projects/{projectId}/features/ to list the features and check the name. Note to users with non-ASCII feature names: the feature name should be UTF-8-encoded (before URL-quoting). |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The feature information | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/features/{featureName}/multiseriesProperties/¶
Time series projects require that each timestamp have at most one row corresponding to it. However, multiple series of data can be handled within a single project by designating a multiseries ID column that assigns each row to a particular series. See the multiseries docs on time series projects for more information.
Note that detection will have to be triggered via POST /api/v2/projects/{projectId}/multiseriesProperties/ in order for multiseries id columns to appear here. The route will return successfully with an empty array of detected columns if detection hasn't run yet, or hasn't found any valid columns.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/features/{featureName}/multiseriesProperties/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID to retrieve multiseries properties from. |
featureName | path | string | true | The feature to be used as the datetime partition column. |
Example responses¶
200 Response
{
"datetimePartitionColumn": "string",
"detectedMultiseriesIdColumns": [
{
"multiseriesIdColumns": [
"string"
],
"timeStep": 0,
"timeUnit": "MILLISECOND"
}
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Request to retrieve the potential multiseries ID columns was successful. | MultiseriesRetrieveResponse |
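As a sketch of the full workflow (column names are placeholders): trigger detection with POST /api/v2/projects/{projectId}/multiseriesProperties/, wait for the detection job to finish, then read the detected multiseries ID columns back for the candidate partition column.
# Sketch: run multiseries detection for a candidate datetime partition column.
curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/multiseriesProperties/" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{"datetimePartitionColumn": "date", "multiseriesIdColumns": ["store_id"]}'

# Once detection has finished, retrieve the results ("date" is the placeholder feature name).
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/features/date/multiseriesProperties/" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"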
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/jobs/¶
List the project's jobs.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/jobs/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
status | query | string | false | If provided, only jobs with the same status will be included in the results; otherwise, queued and inprogress jobs (but not errored jobs) will be returned. |
projectId | path | string | true | The project ID. |
Enumerated Values¶
Parameter | Value |
---|---|
status | [queue , inprogress , error ] |
Example responses¶
200 Response
{
"count": 0,
"jobs": [
{
"id": "string",
"isBlocked": true,
"jobType": "model",
"message": "string",
"modelId": "string",
"projectId": "string",
"status": "queue",
"url": "string"
}
],
"next": "string",
"previous": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The project's jobs | JobListResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/projects/{projectId}/jobs/{jobId}/¶
Cancel a pending job.
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/projects/{projectId}/jobs/{jobId}/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
jobId | path | string | true | The job ID |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | The job has been canceled. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/jobs/{jobId}/¶
Retrieve details for a job that has been started but has not yet completed.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/jobs/{jobId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
jobId | path | string | true | The job ID |
Example responses¶
200 Response
{
"id": "string",
"isBlocked": true,
"jobType": "model",
"message": "string",
"modelId": "string",
"projectId": "string",
"status": "queue",
"url": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The job details | JobDetailsResponse |
303 | See Other | The requested job has already finished. See the Location header for the job details. | None |
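Per the responses above, a 200 means the job has not finished yet and a 303 means it has. A simple polling sketch follows; the sleep interval is arbitrary, and errored jobs should additionally be checked via the status field in the 200 response body.
# Sketch: poll a job until it reports 303 (finished).
while true; do
  CODE=$(curl -s -o /dev/null -w '%{http_code}' \
  -H "Authorization: Bearer {access-token}" \
  "https://app.datarobot.com/api/v2/projects/{projectId}/jobs/{jobId}/")
  if [ "$CODE" = "303" ]; then break; fi
  sleep 5
done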
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/modelingFeaturelists/¶
List all modeling featurelists from the project requested by ID.
This route will only become available after the target and partitioning options have been set for a project.
Modeling featurelists are featurelists of modeling features, and are the correct featurelists to use when creating models or restarting the autopilot. In a time series project, these will differ from those returned from GET /api/v2/projects/{projectId}/featurelists/ while in other projects these will be identical. See the documentation on the distinction between input and modeling data in time series projects for more information.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/modelingFeaturelists/?offset=0&limit=0 \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
sortBy | query | string | false | Property to sort featurelists by in the response |
searchFor | query | string | false | Limit results by specific featurelists. Performs a substring search for the term you provide in featurelist names. |
offset | query | integer | true | This many results will be skipped. |
limit | query | integer | true | At most this many results are returned. If 0, all results. |
projectId | path | string | true | The project ID |
Enumerated Values¶
Parameter | Value |
---|---|
sortBy | [name , description , features , numModels , created , isUserCreated , -name , -description , -features , -numModels , -created , -isUserCreated ] |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"created": "string",
"description": "string",
"features": [
"string"
],
"id": "string",
"isUserCreated": true,
"name": "string",
"numModels": 0,
"projectId": "string"
}
],
"next": "http://example.com",
"previous": "http://example.com",
"totalCount": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | List of requested project modeling featurelists. | FeaturelistListResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/modelingFeaturelists/¶
Create new modeling featurelist from list of feature names. Only time series projects differentiate between modeling and input featurelists. On other projects, this route will behave the same as POST /api/v2/projects/{projectId}/featurelists/. On time series projects, this can be used after the target has been set in order to create a new featurelist on the modeling features, although the previously mentioned route for creating featurelists will be disabled. On time series projects, only modeling features may be passed to this route to create a featurelist.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/modelingFeaturelists/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"features": [
"string"
],
"name": "string",
"skipDatetimePartitionColumn": false
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | CreateFeaturelist | false | none |
Example responses¶
200 Response
{
"created": "string",
"description": "string",
"features": [
"string"
],
"id": "string",
"isUserCreated": true,
"name": "string",
"numModels": 0,
"projectId": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The newly created featurelist in the same format as GET /api/v2/projects/{projectId}/modelingFeaturelists/{featurelistId}/. | FeaturelistResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/projects/{projectId}/modelingFeaturelists/{featurelistId}/¶
Delete a specified modeling featurelist.
All models using a featurelist, whether as the training featurelist or as a monotonic constraint featurelist, will also be deleted when the deletion is executed and any queued or running jobs using it will be cancelled. Similarly, predictions made on these models will also be deleted. All the entities that are to be deleted with a featurelist are described as "dependencies" of it. When deleting a featurelist with dependencies, users must pass an additional query parameter deleteDependencies
to confirm they want to delete the featurelist and all its dependencies. Without that option, only featurelists with no dependencies may be successfully deleted.
Featurelists configured into the project as a default featurelist or as a default monotonic constraint featurelist cannot be deleted.
Featurelists used in a model deployment cannot be deleted until the model deployment is deleted.
Modeling featurelists are featurelists of modeling features, and are the appropriate featurelists to use when creating models or restarting the autopilot. In a time series project, these will be distinct from those returned from GET /api/v2/projects/{projectId}/featurelists/ while in other projects these will be identical.
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/projects/{projectId}/modelingFeaturelists/{featurelistId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
dryRun | query | string | false | Preview the deletion results without actually deleting the featurelist. |
deleteDependencies | query | string | false | Automatically delete all dependencies of a featurelist. If false (default), will only delete the featurelist if it has no dependencies. The value of deleteDependencies will not be used if dryRun is true. If a featurelist has dependencies, deleteDependencies must be true for the request to succeed. |
projectId | path | string | true | The project ID. |
featurelistId | path | string | true | The featurelist ID. |
Enumerated Values¶
Parameter | Value |
---|---|
dryRun | [false , False , true , True ] |
deleteDependencies | [false , False , true , True ] |
Example responses¶
200 Response
{
"canDelete": "false",
"deletionBlockedReason": "string",
"dryRun": "false",
"numAffectedJobs": 0,
"numAffectedModels": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | FeaturelistDestroyResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/modelingFeaturelists/{featurelistId}/¶
Retrieve a single modeling featurelist by ID. When reporting the number of models that "use" a featurelist, a model is considered to use a featurelist if it is used to train the model or as a monotonic constraint featurelist, or if the model is a blender with component models that use the featurelist. This route will only become available after the target and partitioning options have been set for a project. Modeling featurelists are featurelists of modeling features, and are the appropriate featurelists to use when creating models or restarting the autopilot. In a time series project, these will be distinct from those returned from GET /api/v2/projects/{projectId}/featurelists/ while in other projects these will be identical.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/modelingFeaturelists/{featurelistId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
featurelistId | path | string | true | The featurelist ID. |
Example responses¶
200 Response
{
"created": "string",
"description": "string",
"features": [
"string"
],
"id": "string",
"isUserCreated": true,
"name": "string",
"numModels": 0,
"projectId": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Modeling featurelist with specified ID. | FeaturelistResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/projects/{projectId}/modelingFeaturelists/{featurelistId}/¶
Update an existing modeling featurelist by ID. In non-time series projects, "modeling featurelists" and "featurelists" routes behave the same, except "modeling featurelists" are only accessible after the project is ready for modeling. In time series projects, "featurelists" contain the input features before feature derivation that are used to derive the time series features, while "modeling featurelists" contain the derived time series features used for modeling.
Code samples¶
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/modelingFeaturelists/{featurelistId}/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"description": "string",
"name": "string"
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
featurelistId | path | string | true | The featurelist ID. |
body | body | UpdateFeaturelist | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | The modeling featurelist was successfully updated. | None |
422 | Unprocessable Entity | Update failed due to an invalid payload. This may be because the name is identical to an existing featurelist name. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/modelingFeatures/¶
List the features from a project that are used for modeling with descriptive information.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/modelingFeatures/?offset=0&limit=0 \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
sortBy | query | string | false | Property to sort features by in the response |
searchFor | query | string | false | Limit results by specific features. Performs a substring search for the term you provide in feature names. |
featurelistId | query | string | false | Filter features by a specific featurelist ID. |
offset | query | integer | true | This many results will be skipped. |
limit | query | integer | true | At most this many results are returned. If 0, all results. |
projectId | path | string | true | The project ID |
Enumerated Values¶
Parameter | Value |
---|---|
sortBy | [name , id , importance , featureType , uniqueCount , naCount , mean , stdDev , median , min , max , -name , -id , -importance , -featureType , -uniqueCount , -naCount , -mean , -stdDev , -median , -min , -max ] |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"dataQualities": "ISSUES_FOUND",
"dateFormat": "string",
"featureLineageId": "string",
"featureType": "Boolean",
"importance": 0,
"isRestoredAfterReduction": true,
"isZeroInflated": true,
"keySummary": {
"key": "string",
"summary": {
"dataQualities": "ISSUES_FOUND",
"max": 0,
"mean": 0,
"median": 0,
"min": 0,
"pctRows": 0,
"stdDev": 0
}
},
"language": "string",
"lowInformation": true,
"max": "string",
"mean": "string",
"median": "string",
"min": "string",
"multilabelInsights": {
"multilabelInsightsKey": "string"
},
"naCount": 0,
"name": "string",
"parentFeatureNames": [
"string"
],
"projectId": "string",
"stdDev": "string",
"targetLeakage": "FALSE",
"targetLeakageReason": "string",
"uniqueCount": 0
}
],
"next": "http://example.com",
"previous": "http://example.com",
"totalCount": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Descriptive information for features. | ModelingFeatureListResponse |
422 | Unprocessable Entity | Unable to process the request | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/modelingFeatures/fromDiscardedFeatures/¶
Restore discarded time series features.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/modelingFeatures/fromDiscardedFeatures/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"featuresToRestore": [
"string"
]
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | ModelingFeaturesCreateFromDiscarded | false | none |
Example responses¶
202 Response
{
"featuresToRestore": [
"string"
],
"warnings": [
"string"
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Creation has successfully started. See the Location header. | ModelingFeaturesCreateFromDiscardedResponse |
404 | Not Found | No discarded time series features information available. | None |
422 | Unprocessable Entity | Unable to process the request. | None |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | A url that can be polled to check the status. |
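Restoration works from the names reported by GET /api/v2/projects/{projectId}/discardedFeatures/, documented earlier on this page. A sketch follows; the feature name shown is a placeholder.
# Sketch: list discarded time series features, then restore one of them.
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/discardedFeatures/" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"

curl -X POST "https://app.datarobot.com/api/v2/projects/{projectId}/modelingFeatures/fromDiscardedFeatures/" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{"featuresToRestore": ["sales (7 day mean)"]}'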
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/modelingFeatures/{featureName}/¶
Retrieve the specified modeling feature with descriptive information.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/modelingFeatures/{featureName}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The ID of the project |
featureName | path | string | true | The name of the feature. Note: DataRobot renames some features, so the feature name may not be the one from your original data. You can use GET /api/v2/projects/{projectId}/features/ to list the features and check the name. Note to users with non-ASCII feature names: the feature name should be UTF-8-encoded (before URL-quoting). |
Example responses¶
200 Response
{
"dataQualities": "ISSUES_FOUND",
"dateFormat": "string",
"featureLineageId": "string",
"featureType": "Boolean",
"importance": 0,
"isRestoredAfterReduction": true,
"isZeroInflated": true,
"keySummary": {
"key": "string",
"summary": {
"dataQualities": "ISSUES_FOUND",
"max": 0,
"mean": 0,
"median": 0,
"min": 0,
"pctRows": 0,
"stdDev": 0
}
},
"language": "string",
"lowInformation": true,
"max": "string",
"mean": "string",
"median": "string",
"min": "string",
"multilabelInsights": {
"multilabelInsightsKey": "string"
},
"naCount": 0,
"name": "string",
"parentFeatureNames": [
"string"
],
"projectId": "string",
"stdDev": "string",
"targetLeakage": "FALSE",
"targetLeakageReason": "string",
"uniqueCount": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Descriptive information for feature. | ModelingFeatureResponse |
404 | Not Found | Feature does not exist. | None |
422 | Unprocessable Entity | Unable to process the request | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/multiseriesIds/{multiseriesId}/crossSeriesProperties/¶
Retrieve eligible cross-series group-by columns.
Note that validation will have to have been triggered via POST /api/v2/projects/{projectId}/crossSeriesProperties/ in order for results to appear here.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/multiseriesIds/{multiseriesId}/crossSeriesProperties/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
crossSeriesGroupByColumns | query | any | false | The names of the columns to retrieve the validation status for. If not specified, all eligible columns will be returned. |
projectId | path | string | true | The project to retrieve cross-series group-by columns for. |
multiseriesId | path | string | true | The name of the column to be used as the multiseries ID column. |
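To check specific candidate columns rather than all eligible ones, pass them via the crossSeriesGroupByColumns query parameter. A hypothetical request (the column name is illustrative):
# Hypothetical example: retrieve validation status for a single candidate group-by column
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/multiseriesIds/{multiseriesId}/crossSeriesProperties/?crossSeriesGroupByColumns=product_category" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"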
Example responses¶
200 Response
{
"crossSeriesGroupByColumns": [
{
"eligibility": "string",
"isEligible": true,
"name": "string"
}
],
"multiseriesId": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Request was successful. | CrossSeriesGroupByColumnRetrieveResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/multiseriesNames/¶
List the individual series names of a multiseries project.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/multiseriesNames/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
offset | query | integer | false | Number of results to skip. |
limit | query | integer | false | At most this many results are returned. The default may change without notice. |
projectId | path | string | true | The project ID |
Example responses¶
200 Response
{
"count": 0,
"data": {
"items": [
"string"
]
},
"next": "string",
"previous": "string",
"totalSeriesCount": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | MultiseriesNamesControllerResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/multiseriesProperties/¶
Analyze relationships between potential partition and multiseries ID columns. Time series projects require that each timestamp have at most one row corresponding to it. However, multiple series of data can be handled within a single project by designating a multiseries ID column that assigns each row to a particular series. See the multiseries section of the time series documentation for more information. A detection job analyzing the relationship between the multiseries ID column and the datetime partition column must be run before the column can be used. If the desired multiseries ID column(s) are known, they can be specified to limit the analysis to only those columns.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/multiseriesProperties/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"datetimePartitionColumn": "string",
"multiseriesIdColumns": [
"string"
]
}
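For instance, a detection request limited to one candidate multiseries ID column might look like this (column names are illustrative):
# Hypothetical example: analyze one candidate multiseries ID column against the datetime partition column
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/multiseriesProperties/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{"datetimePartitionColumn": "date", "multiseriesIdColumns": ["store_id"]}'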
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | MultiseriesPayload | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Request to analyze relationships between potential partition and multiseries ID columns was submitted. See Location header. | None |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | A url that can be polled to check the status. |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/relationshipQualityAssessments/¶
Submit a job to assess the quality of the relationship configuration within a Feature Discovery project.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/relationshipQualityAssessments/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"credentials": [
{
"catalogVersionId": "string",
"credentialId": "string",
"url": "string"
}
],
"datetimePartitionColumn": "string",
"featureEngineeringPredictionPoint": "string",
"relationshipsConfiguration": {
"datasetDefinitions": [
{
"catalogId": "string",
"catalogVersionId": "string",
"featureListId": "string",
"identifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
],
"featureDiscoveryMode": "default",
"featureDiscoverySettings": [
{
"description": "string",
"family": "string",
"name": "string",
"settingType": "string",
"value": true,
"verboseName": "string"
}
],
"id": "string",
"relationships": [
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND"
}
],
"snowflakePushDownCompatible": true
},
"userId": "string"
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | RelationshipQualityAssessmentsCreate | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Relationship quality assessment has successfully started. See the Location header. | None |
422 | Unprocessable Entity | Unable to process the request | None |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | A url that can be polled to check the status. |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/relationshipsConfiguration/¶
Retrieve the relationships configuration for a project.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/relationshipsConfiguration/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
configId | query | string | false | ID of the secondary dataset configuration |
projectId | path | string | true | The project ID |
Example responses¶
200 Response
{
"datasetDefinitions": [
{
"catalogId": "string",
"catalogVersionId": "string",
"dataSource": {
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
},
"dataSources": [
{
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
}
],
"featureListId": "string",
"featureLists": [
"string"
],
"identifier": "string",
"isDeleted": true,
"originalIdentifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
],
"featureDiscoveryMode": "default",
"featureDiscoverySettings": [
{
"description": "string",
"family": "string",
"name": "string",
"settingType": "string",
"value": true,
"verboseName": "string"
}
],
"id": "string",
"relationships": [
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND"
}
],
"snowflakePushDownCompatible": true
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Project relationships configuration. | RelationshipsConfigResponse |
404 | Not Found | Data was not found. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/secondaryDatasetsConfigurations/¶
List all secondary dataset configurations for a project, optionally filtered by feature list ID.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/secondaryDatasetsConfigurations/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
featurelistId | query | string | false | Feature list ID of the model |
modelId | query | string | false | ID of the model |
offset | query | integer | false | This many results will be skipped. |
limit | query | integer | false | At most this many results are returned. |
includeDeleted | query | string | false | Include deleted records. |
projectId | path | string | true | The project ID |
Enumerated Values¶
Parameter | Value |
---|---|
includeDeleted | [false , False , true , True ] |
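A hypothetical listing filtered to a single feature list, excluding soft-deleted records (placeholders follow the same convention as the other samples):
# Hypothetical example: list secondary dataset configurations for one feature list
curl -X GET "https://app.datarobot.com/api/v2/projects/{projectId}/secondaryDatasetsConfigurations/?featurelistId={featurelistId}&includeDeleted=false" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"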
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"config": [
{
"featureEngineeringGraphId": "string",
"secondaryDatasets": [
{
"catalogId": "string",
"catalogVersionId": "string",
"identifier": "string",
"snapshotPolicy": "specified"
}
]
}
],
"created": "2019-08-24T14:15:22Z",
"creatorFullName": "string",
"creatorUserId": "string",
"credentialIds": [
{
"catalogVersionId": "string",
"credentialId": "string",
"url": "string"
}
],
"featurelistId": "string",
"id": "string",
"isDefault": true,
"name": "string",
"projectId": "string",
"projectVersion": "string",
"secondaryDatasets": [
{
"catalogId": "string",
"catalogVersionId": "string",
"identifier": "string",
"requiredFeatures": [
"string"
],
"snapshotPolicy": "specified"
}
]
}
],
"next": "http://example.com",
"previous": "http://example.com"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | List of secondary dataset configurations. | SecondaryDatasetConfigListResponse |
404 | Not Found | Data is not found. | None |
422 | Unprocessable Entity | Unable to process the request | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/secondaryDatasetsConfigurations/¶
Create secondary dataset configurations for a project.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/secondaryDatasetsConfigurations/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"config": [
{
"featureEngineeringGraphId": "string",
"secondaryDatasets": [
{
"catalogId": "string",
"catalogVersionId": "string",
"identifier": "string",
"snapshotPolicy": "specified"
}
]
}
],
"featurelistId": "string",
"modelId": "string",
"name": "string",
"save": true,
"secondaryDatasets": [
{
"catalogId": "string",
"catalogVersionId": "string",
"identifier": "string",
"snapshotPolicy": "specified"
}
]
}
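A hypothetical create request using the secondaryDatasets form of the payload; the name, identifier, and catalog IDs are illustrative, and the SecondaryDatasetCreate schema governs which fields are actually required:
# Hypothetical example: register one secondary dataset configuration for a feature list
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/secondaryDatasetsConfigurations/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{
  "name": "Transactions config",
  "featurelistId": "{featurelistId}",
  "secondaryDatasets": [
    {
      "catalogId": "{catalogId}",
      "catalogVersionId": "{catalogVersionId}",
      "identifier": "transactions",
      "snapshotPolicy": "specified"
    }
  ]
}'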
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | SecondaryDatasetCreate | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Secondary dataset configuration created with allowable type mismatches. | None |
201 | Created | Secondary dataset configuration created with no errors. | None |
204 | No Content | Validation of secondary dataset configuration is successful. | None |
422 | Unprocessable Entity | Validation of secondary dataset configuration failed. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/projects/{projectId}/secondaryDatasetsConfigurations/{secondaryDatasetConfigId}/¶
Soft deletes a secondary dataset configuration.
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/projects/{projectId}/secondaryDatasetsConfigurations/{secondaryDatasetConfigId}/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
secondaryDatasetConfigId | path | string | true | Secondary dataset configuration ID |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | Secondary dataset configuration successfully soft deleted. | None |
404 | Not Found | Data is not found. | None |
409 | Conflict | Dataset has already been deleted. | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/secondaryDatasetsConfigurations/{secondaryDatasetConfigId}/¶
Retrieve secondary dataset configuration by ID.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/secondaryDatasetsConfigurations/{secondaryDatasetConfigId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
secondaryDatasetConfigId | path | string | true | Secondary dataset configuration ID |
Example responses¶
200 Response
{
"config": [
{
"featureEngineeringGraphId": "string",
"secondaryDatasets": [
{
"catalogId": "string",
"catalogVersionId": "string",
"identifier": "string",
"snapshotPolicy": "specified"
}
]
}
],
"created": "2019-08-24T14:15:22Z",
"creatorFullName": "string",
"creatorUserId": "string",
"credentialIds": [
{
"catalogVersionId": "string",
"credentialId": "string",
"url": "string"
}
],
"featurelistId": "string",
"id": "string",
"isDefault": true,
"name": "string",
"projectId": "string",
"projectVersion": "string",
"secondaryDatasets": [
{
"catalogId": "string",
"catalogVersionId": "string",
"identifier": "string",
"requiredFeatures": [
"string"
],
"snapshotPolicy": "specified"
}
]
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Secondary dataset configuration. | ProjectSecondaryDatasetConfigResponse |
404 | Not Found | Data is not found. | None |
422 | Unprocessable Entity | Unable to process the request | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/segmentationTaskJobResults/{segmentationTaskId}/¶
Retrieve the statuses of segmentation task jobs associated with the ID.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/segmentationTaskJobResults/{segmentationTaskId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
segmentationTaskId | path | string | true | The ID of the segmentation task to check the status of. |
Example responses¶
200 Response
{
"completedJobs": [
{
"name": "string",
"segmentationTaskId": "string",
"segmentsCount": 0,
"segmentsEda": [
{
"maxDate": "2019-08-24T14:15:22Z",
"minDate": "2019-08-24T14:15:22Z",
"name": "string",
"numberOfRows": 0,
"sizeInBytes": 0
}
],
"url": "string"
}
],
"failedJobs": [
{
"message": "string",
"name": "string",
"parameters": {}
}
],
"numberOfJobs": 0
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | SegmentationResultsResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/segmentationTasks/¶
List all segmentation tasks created for the project.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/segmentationTasks/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
offset | query | integer | false | This many results will be skipped. |
limit | query | integer | false | At most this many results are returned. |
projectId | path | string | true | The project ID |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"created": "2019-08-24T14:15:22Z",
"data": {
"clusteringModelId": "string",
"clusteringModelName": "string",
"clusteringProjectId": "string",
"datetimePartitionColumn": "string",
"modelPackageId": "string",
"multiseriesIdColumns": [
"string"
],
"userDefinedSegmentIdColumns": [
"string"
]
},
"metadata": {
"useAutomatedSegmentation": true,
"useMultiseriesIdColumns": true,
"useTimeSeries": true
},
"name": "string",
"projectId": "string",
"segmentationTaskId": "string",
"segments": [
"string"
],
"segmentsCount": 0,
"segmentsEda": [
{
"maxDate": "2019-08-24T14:15:22Z",
"minDate": "2019-08-24T14:15:22Z",
"name": "string",
"numberOfRows": 0,
"sizeInBytes": 0
}
],
"type": "string"
}
],
"next": "http://example.com",
"previous": "http://example.com"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | SegmentationTaskListResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/segmentationTasks/¶
Create segmentation tasks for the dataset used in the project.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/segmentationTasks/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"datetimePartitionColumn": "string",
"modelPackageId": "string",
"multiseriesIdColumns": [
"string"
],
"target": "string",
"useAutomatedSegmentation": false,
"useTimeSeries": false,
"userDefinedSegmentIdColumns": [
"string"
]
}
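As an illustration, a request creating time series segments from a user-defined segment ID column might look like the following; the target and column names are invented for the example, and SegmentationTaskCreate defines which fields apply to your setup:
# Hypothetical example: create segmentation tasks from a user-defined segment column
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/segmentationTasks/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{
  "target": "sales",
  "useTimeSeries": true,
  "useAutomatedSegmentation": false,
  "datetimePartitionColumn": "date",
  "multiseriesIdColumns": ["store_id"],
  "userDefinedSegmentIdColumns": ["region"]
}'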
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
body | body | SegmentationTaskCreate | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
202 | Accepted | Job submitted. See Location header. | None |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
202 | Location | string | A url that can be polled to check the status. |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/segmentationTasks/{segmentationTaskId}/¶
Retrieve information about a segmentation task.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/segmentationTasks/{segmentationTaskId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
segmentationTaskId | path | string | true | The ID of the segmentation task. |
Example responses¶
200 Response
{
"created": "2019-08-24T14:15:22Z",
"data": {
"clusteringModelId": "string",
"clusteringModelName": "string",
"clusteringProjectId": "string",
"datetimePartitionColumn": "string",
"modelPackageId": "string",
"multiseriesIdColumns": [
"string"
],
"userDefinedSegmentIdColumns": [
"string"
]
},
"metadata": {
"useAutomatedSegmentation": true,
"useMultiseriesIdColumns": true,
"useTimeSeries": true
},
"name": "string",
"projectId": "string",
"segmentationTaskId": "string",
"segments": [
"string"
],
"segmentsCount": 0,
"segmentsEda": [
{
"maxDate": "2019-08-24T14:15:22Z",
"minDate": "2019-08-24T14:15:22Z",
"name": "string",
"numberOfRows": 0,
"sizeInBytes": 0
}
],
"type": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | SegmentationTaskResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/segmentationTasks/{segmentationTaskId}/mappings/¶
Retrieve the seriesId to segmentId mappings for a segmentation task.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/segmentationTasks/{segmentationTaskId}/mappings/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
offset | query | integer | false | This many results will be skipped. |
limit | query | integer | false | At most this many results are returned. |
projectId | path | string | true | The project ID |
segmentationTaskId | path | string | true | The ID of the segmentation task. |
Example responses¶
200 Response
{
"count": 0,
"data": [
{
"segment": "string",
"seriesId": "string"
}
],
"next": "http://example.com",
"previous": "http://example.com"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | SegmentationTaskSegmentMappingsResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PATCH /api/v2/projects/{projectId}/segments/{segmentId}/¶
The only currently supported operation is segment restart, which removes the existing child segment project and starts a new child project for the given segment. It should only be used for child segments that are stuck during project startup or upload.
Code samples¶
# You can also use wget
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/segments/{segmentId}/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"operation": "restart"
}
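Since restart is the only supported operation, a complete request simply sends that body, for example:
# Example: restart the child project for a stuck segment
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/segments/{segmentId}/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{"operation": "restart"}'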
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID |
segmentId | path | string | true | The name of the segment |
body | body | ProjectSegmentUpdate | false | none |
Example responses¶
200 Response
{
"projectId": "string",
"segmentId": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The segment is updated. | ProjectSegmentUpdateResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/projects/{projectId}/status/¶
Check the status of a project.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/projects/{projectId}/status/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project ID. |
Example responses¶
200 Response
{
"autopilotDone": true,
"stage": "modeling",
"stageDescription": "string"
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | The project status | ProjectStatusResponse |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/projects/{projectId}/typeTransformFeatures/¶
Create a new feature by changing the type of an existing one.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/typeTransformFeatures/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"dateExtraction": "year",
"name": "string",
"parentName": "string",
"replacement": "string",
"variableType": "text"
}
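For example, a transform that re-types an existing column as text could look like this; the column names are illustrative, and the FeatureTransform schema defines the allowed variableType values and which fields (such as dateExtraction or replacement) are required for a given transform:
# Hypothetical example: create a text-typed copy of an existing feature
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/typeTransformFeatures/ \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{"parentName": "product_description", "name": "product_description (text)", "variableType": "text"}'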
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
projectId | path | string | true | The project to create the feature in. |
body | body | FeatureTransform | false | none |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | Creation has successfully started. See the Location header. | None |
422 | Unprocessable Entity | Unable to process the request | None |
Response Headers¶
Status | Header | Type | Format | Description |
---|---|---|---|---|
200 | Location | string | A url that can be polled to check the status. |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
POST /api/v2/relationshipsConfigurations/¶
Create a relationships configuration.
Code samples¶
# You can also use wget
curl -X POST https://app.datarobot.com/api/v2/relationshipsConfigurations/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"datasetDefinitions": [
{
"catalogId": "string",
"catalogVersionId": "string",
"featureListId": "string",
"identifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
],
"featureDiscoveryMode": "default",
"featureDiscoverySettings": [
{
"name": "string",
"value": true
}
],
"relationships": [
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND"
}
]
}
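A minimal sketch of a configuration with one secondary dataset joined on a shared key; all identifiers, keys, and window values are illustrative, and it assumes (check the RelationshipsConfigCreate schema) that omitting dataset1Identifier means the relationship is with the primary dataset:
# Hypothetical example: one secondary dataset joined to the primary dataset on customer_id
curl -X POST https://app.datarobot.com/api/v2/relationshipsConfigurations/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}" \
-d '{
  "datasetDefinitions": [
    {
      "catalogId": "{catalogId}",
      "catalogVersionId": "{catalogVersionId}",
      "identifier": "profiles",
      "snapshotPolicy": "specified"
    }
  ],
  "relationships": [
    {
      "dataset2Identifier": "profiles",
      "dataset1Keys": ["customer_id"],
      "dataset2Keys": ["customer_id"]
    }
  ]
}'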
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
body | body | RelationshipsConfigCreate | false | none |
Example responses¶
201 Response
{
"datasetDefinitions": [
{
"catalogId": "string",
"catalogVersionId": "string",
"dataSource": {
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
},
"dataSources": [
{
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
}
],
"featureListId": "string",
"featureLists": [
"string"
],
"identifier": "string",
"isDeleted": true,
"originalIdentifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
],
"featureDiscoveryMode": "default",
"featureDiscoverySettings": [
{
"description": "string",
"family": "string",
"name": "string",
"settingType": "string",
"value": true,
"verboseName": "string"
}
],
"id": "string",
"relationships": [
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND"
}
],
"snowflakePushDownCompatible": true
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
201 | Created | none | RelationshipsConfigResponse |
422 | Unprocessable Entity | User input fails validation | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
DELETE /api/v2/relationshipsConfigurations/{relationshipsConfigurationId}/¶
Delete a relationships configuration.
Code samples¶
# You can also use wget
curl -X DELETE https://app.datarobot.com/api/v2/relationshipsConfigurations/{relationshipsConfigurationId}/ \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
relationshipsConfigurationId | path | string | true | Id of the relationships configuration to delete |
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
204 | No Content | none | None |
404 | Not Found | Relationships configuration not found | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/relationshipsConfigurations/{relationshipsConfigurationId}/¶
Retrieve a relationships configuration.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/relationshipsConfigurations/{relationshipsConfigurationId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
includeRelationshipQuality | query | string | false | Flag indicating whether to include relationship quality information in the returned result |
relationshipsConfigurationId | path | string | true | ID of the relationships configuration to retrieve |
Enumerated Values¶
Parameter | Value |
---|---|
includeRelationshipQuality | [false , False , true , True ] |
Example responses¶
200 Response
{
"datasetDefinitions": [
{
"catalogId": "string",
"catalogVersionId": "string",
"dataSource": {
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
},
"dataSources": [
{
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
}
],
"featureListId": "string",
"featureLists": [
"string"
],
"identifier": "string",
"isDeleted": true,
"originalIdentifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
],
"featureDiscoveryMode": "default",
"featureDiscoverySettings": [
{
"description": "string",
"family": "string",
"name": "string",
"settingType": "string",
"value": true,
"verboseName": "string"
}
],
"id": "string",
"relationships": [
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND",
"relationshipQuality": {
"detailedReport": [
{
"enrichmentRate": {
"action": "string",
"category": "green",
"message": "string"
},
"enrichmentRateValue": 0,
"featureDerivationWindow": "string",
"mostRecentData": {
"action": "string",
"category": "green",
"message": "string"
},
"overallCategory": "green",
"windowSettings": {
"action": "string",
"category": "green",
"message": "string"
}
}
],
"lastUpdated": "2019-08-24T14:15:22Z",
"problemCount": 0,
"samplingFraction": 0,
"status": "Complete",
"statusId": [
"string"
],
"summaryCategory": "green",
"summaryMessage": "string"
}
}
],
"snowflakePushDownCompatible": true
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | ExtendedRelationshipsConfigRetrieve |
404 | Not Found | Relationships configuration cannot be found | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
PUT /api/v2/relationshipsConfigurations/{relationshipsConfigurationId}/¶
Replace a relationships configuration.
Code samples¶
# You can also use wget
curl -X PUT https://app.datarobot.com/api/v2/relationshipsConfigurations/{relationshipsConfigurationId}/ \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Body parameter¶
{
"datasetDefinitions": [
{
"catalogId": "string",
"catalogVersionId": "string",
"featureListId": "string",
"identifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
],
"featureDiscoveryMode": "default",
"featureDiscoverySettings": [
{
"name": "string",
"value": true
}
],
"relationships": [
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND"
}
]
}
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
relationshipsConfigurationId | path | string | true | ID of the relationships configuration to replace |
body | body | RelationshipsConfigCreate | false | none |
Example responses¶
200 Response
{
"datasetDefinitions": [
{
"catalogId": "string",
"catalogVersionId": "string",
"dataSource": {
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
},
"dataSources": [
{
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
}
],
"featureListId": "string",
"featureLists": [
"string"
],
"identifier": "string",
"isDeleted": true,
"originalIdentifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
],
"featureDiscoveryMode": "default",
"featureDiscoverySettings": [
{
"description": "string",
"family": "string",
"name": "string",
"settingType": "string",
"value": true,
"verboseName": "string"
}
],
"id": "string",
"relationships": [
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND"
}
],
"snowflakePushDownCompatible": true
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | RelationshipsConfigResponse |
422 | Unprocessable Entity | User input fails validation | None |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
GET /api/v2/relationshipsConfigurations/{relationshipsConfigurationId}/projects/{projectId}/¶
Retrieve the relationships configuration with extended information.
Code samples¶
# You can also use wget
curl -X GET https://app.datarobot.com/api/v2/relationshipsConfigurations/{relationshipsConfigurationId}/projects/{projectId}/ \
-H "Accept: application/json" \
-H "Authorization: Bearer {access-token}"
Parameters
Name | In | Type | Required | Description |
---|---|---|---|---|
includeRelationshipQuality | query | string | false | Flag indicating whether to include relationship quality information in the returned result |
projectId | path | string | true | The project ID |
relationshipsConfigurationId | path | string | true | The relationships configuration ID |
Enumerated Values¶
Parameter | Value |
---|---|
includeRelationshipQuality | [false , False , true , True ] |
Example responses¶
200 Response
{
"datasetDefinitions": [
{
"catalogId": "string",
"catalogVersionId": "string",
"dataSource": {
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
},
"dataSources": [
{
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
}
],
"featureListId": "string",
"featureLists": [
"string"
],
"identifier": "string",
"isDeleted": true,
"originalIdentifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
],
"featureDiscoveryMode": "default",
"featureDiscoverySettings": [
{
"description": "string",
"family": "string",
"name": "string",
"settingType": "string",
"value": true,
"verboseName": "string"
}
],
"id": "string",
"relationships": [
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND",
"relationshipQuality": {
"detailedReport": [
{
"enrichmentRate": {
"action": "string",
"category": "green",
"message": "string"
},
"enrichmentRateValue": 0,
"featureDerivationWindow": "string",
"mostRecentData": {
"action": "string",
"category": "green",
"message": "string"
},
"overallCategory": "green",
"windowSettings": {
"action": "string",
"category": "green",
"message": "string"
}
}
],
"lastUpdated": "2019-08-24T14:15:22Z",
"problemCount": 0,
"samplingFraction": 0,
"status": "Complete",
"statusId": [
"string"
],
"summaryCategory": "green",
"summaryMessage": "string"
}
}
],
"snowflakePushDownCompatible": true
}
Responses¶
Status | Meaning | Description | Schema |
---|---|---|---|
200 | OK | none | ExtendedRelationshipsConfigRetrieve |
To perform this operation, you must be authenticated by means of one of the following methods:
BearerAuth
Schemas¶
AccessControl
{
"canShare": true,
"role": "string",
"userId": "string",
"username": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
canShare | boolean | true | Whether the recipient can share the role further. | |
role | string | true | The role of the user on this entity. | |
userId | string | true | The identifier of the user that has access to this entity. | |
username | string | true | The username of the user that has access to the entity. |
Aim
{
"accuracyOptimizedMb": true,
"aggregationType": "total",
"allowPartialHistoryTimeSeriesPredictions": true,
"allowedPairwiseInteractionGroups": [
[
"string",
"string"
]
],
"allowedPairwiseInteractionGroupsFilename": "string",
"autopilotClusterList": [
2
],
"autopilotDataSamplingMethod": "random",
"autopilotDataSelectionMethod": "duration",
"autopilotWithFeatureDiscovery": true,
"backtests": [
{
"gapDuration": "string",
"index": 0,
"primaryTrainingEndDate": "2019-08-24T14:15:22Z",
"primaryTrainingStartDate": "2019-08-24T14:15:22Z",
"validationDuration": "string",
"validationEndDate": "2019-08-24T14:15:22Z",
"validationStartDate": "2019-08-24T14:15:22Z"
}
],
"biasMitigationFeatureName": "string",
"biasMitigationTechnique": "preprocessingReweighing",
"blendBestModels": true,
"blueprintThreshold": 1,
"calendarId": "string",
"chunkDefinitionId": "string",
"classMappingAggregationSettings": {
"aggregationClassName": "string",
"excludedFromAggregation": [],
"maxUnaggregatedClassValues": 1000,
"minClassSupport": 1
},
"considerBlendersInRecommendation": true,
"credentials": [
{
"catalogVersionId": "string",
"password": "string",
"url": "string",
"user": "string"
}
],
"crossSeriesGroupByColumns": [
"string"
],
"cvHoldoutLevel": "string",
"cvMethod": "random",
"dateRemoval": true,
"datetimePartitionColumn": "string",
"datetimePartitioningId": "string",
"defaultToAPriori": true,
"defaultToDoNotDerive": true,
"defaultToKnownInAdvance": true,
"differencingMethod": "auto",
"disableHoldout": false,
"eventsCount": "string",
"exponentiallyWeightedMovingAlpha": 1,
"exposure": "string",
"externalPredictions": [
"string"
],
"externalTimeSeriesBaselineDatasetId": "string",
"externalTimeSeriesBaselineDatasetName": "string",
"fairnessMetricsSet": "proportionalParity",
"fairnessThreshold": 1,
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDiscoverySupervisedFeatureReduction": true,
"featureEngineeringPredictionPoint": "string",
"featureSettings": [
{
"aPriori": true,
"doNotDerive": true,
"featureName": "string",
"knownInAdvance": true
}
],
"featurelistId": "string",
"forecastWindowEnd": 0,
"forecastWindowStart": 0,
"gapDuration": "string",
"holdoutDuration": "string",
"holdoutEndDate": "2019-08-24T14:15:22Z",
"holdoutLevel": "string",
"holdoutPct": 98,
"holdoutStartDate": "2019-08-24T14:15:22Z",
"includeBiasMitigationFeatureAsPredictorVariable": true,
"incrementalLearningEarlyStoppingRounds": 0,
"incrementalLearningOnBestModel": true,
"incrementalLearningOnlyMode": true,
"isHoldoutModified": true,
"majorityDownsamplingRate": 0,
"metric": "string",
"minSecondaryValidationModelCount": 10,
"mode": "0",
"modelSplits": 5,
"monotonicDecreasingFeaturelistId": "string",
"monotonicIncreasingFeaturelistId": "string",
"multiseriesIdColumns": [
"string"
],
"numberOfBacktests": 0,
"offset": [
"string"
],
"onlyIncludeMonotonicBlueprints": false,
"partitionKeyCols": [
"string"
],
"periodicities": [
{
"timeSteps": 0,
"timeUnit": "MILLISECOND"
}
],
"positiveClass": "string",
"preferableTargetValue": "string",
"prepareModelForDeployment": true,
"primaryLocationColumn": "string",
"protectedFeatures": [
"string"
],
"quantileLevel": 0,
"quickrun": true,
"rateTopPctThreshold": 100,
"relationshipsConfigurationId": "string",
"reps": 2,
"responseCap": 0.5,
"runLeakageRemovedFeatureList": true,
"sampleStepPct": 0,
"scoringCodeOnly": true,
"seed": 999999999,
"segmentationTaskId": "string",
"shapOnlyMode": true,
"smartDownsampled": true,
"stopWords": [
"string"
],
"target": "string",
"targetType": "Binary",
"trainingLevel": "string",
"treatAsExponential": "auto",
"unsupervisedMode": false,
"unsupervisedType": "anomaly",
"useCrossSeriesFeatures": true,
"useGpu": true,
"useProjectSettings": true,
"useSupervisedFeatureReduction": true,
"useTimeSeries": false,
"userPartitionCol": "string",
"validationDuration": "string",
"validationLevel": "string",
"validationPct": 99,
"validationType": "CV",
"weights": "string",
"windowsBasisUnit": "MILLISECOND"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
accuracyOptimizedMb | boolean | false | Include additional, longer-running models that will be run by the autopilot and available to run manually. | |
aggregationType | string | false | For multiseries projects only. The aggregation type to apply when creating cross-series features. | |
allowPartialHistoryTimeSeriesPredictions | boolean | false | Specifies whether the time series predictions can use partial historical data. | |
allowedPairwiseInteractionGroups | [array]¦null | false | maxItems: 100 |
For GAM models - specify groups of columns for which pairwise interactions will be allowed. E.g. if set to [['A', 'B', 'C'], ['C', 'D']] then GAM models will allow interactions between columns AxB, BxC, AxC, CxD. All others (AxD, BxD) will not be considered. If not specified, all possible interactions will be considered by the model. |
allowedPairwiseInteractionGroupsFilename | string¦null | false | Filename that was used to upload allowed_pairwise_interaction_groups. Necessary for persistence of UI/UX when you specify that parameter via file. | |
autopilotClusterList | [integer]¦null | false | maxItems: 10 |
A list of integers where each value will be used as the number of clusters in Autopilot model(s) for unsupervised clustering projects. Cannot be specified unless unsupervisedMode is true and unsupervisedType is set to clustering . |
autopilotDataSamplingMethod | string | false | Defines how autopilot will select a subsample from the training dataset in OTV/TS projects. Defaults to 'latest' for 'rowCount' dataSelectionMethod and to 'random' for 'duration'. | |
autopilotDataSelectionMethod | string | true | The Data Selection method to be used by autopilot when creating models for datetime-partitioned datasets. | |
autopilotWithFeatureDiscovery | boolean | false | If true, autopilot will run on a feature list that includes features found via search for interactions. | |
backtests | [Backtest] | false | maxItems: 50 minItems: 1 |
An array specifying the format of the backtests. |
biasMitigationFeatureName | string | false | minLength: 1 minLength: 1 |
The name of the protected feature used to mitigate bias on models. |
biasMitigationTechnique | string | false | Method applied to perform bias mitigation. | |
blendBestModels | boolean | false | Blend best models during Autopilot run. This option is not supported in SHAP-only mode or for multilabel projects. | |
blueprintThreshold | integer¦null | false | maximum: 1440 minimum: 1 |
The runtime (in hours) which if exceeded will exclude a model from autopilot runs. |
calendarId | string | false | The ID of the calendar to be used in this project. | |
chunkDefinitionId | string | false | Chunk definition id for incremental learning using chunking service | |
classMappingAggregationSettings | ClassMappingAggregationSettings | false | Class mapping aggregation settings. | |
considerBlendersInRecommendation | boolean | false | Include blenders when selecting a model to prepare for deployment in an Autopilot Run. This option is not supported in SHAP-only mode or for multilabel projects. | |
credentials | [oneOf] | false | maxItems: 30 |
List of credentials for the secondary datasets used in feature discovery project. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | PasswordCredentials | false | none |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | CredentialId | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
crossSeriesGroupByColumns | [string] | false | maxItems: 1 |
For multiseries projects with cross-series features enabled only. List of columns (currently of length 1). Setting that indicates how to further split series into related groups. For example, if every series is sales of an individual product, the series group-by could be the product category with values like "men's clothing", "sports equipment", etc. |
cvHoldoutLevel | any | false | The value of the partition column indicating a row is part of the holdout set. This level is optional - if not specified or if provided as null , then no holdout will be used in the project. The rest of the levels indicate which cross validation fold each row should fall into. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
cvMethod | string | false | The partitioning method to be applied to the training data. | |
dateRemoval | boolean | false | If true, enable creating additional feature lists without dates (does not apply to time-aware projects). | |
datetimePartitionColumn | string | false | The date column that will be used as a datetime partition column. | |
datetimePartitioningId | string | false | The ID of a datetime partitioning to use for the project. When datetime_partitioning_id is specified, no other datetime partitioning related field is allowed to be specified, as these fields get loaded from the already created partitioning. | |
defaultToAPriori | boolean | false | Renamed to defaultToKnownInAdvance . |
|
defaultToDoNotDerive | boolean | false | For time series projects only. Sets whether all features default to being treated as do-not-derive features, excluding them from feature derivation. Individual features can be set to a value different than the default by using the featureSettings parameter. |
|
defaultToKnownInAdvance | boolean | false | For time series projects only. Sets whether all features default to being treated as known in advance features, which are features that are known into the future. Features marked as known in advance must be specified into the future when making predictions. The default is false: all features are not known in advance. Individual features can be set to a value different than the default using the featureSettings parameter. See the Time Series Overview documentation for more context. |
|
differencingMethod | string | false | For time series projects only. Used to specify which differencing method to apply if the data is stationary. For classification problems simple and seasonal are not allowed. Parameter periodicities must be specified if seasonal is chosen. Defaults to auto . |
|
disableHoldout | boolean | false | Whether to suppress allocating a holdout fold. If disableHoldout is set to true, holdoutStartDate and holdoutDuration must not be set. |
|
eventsCount | string | false | The name of a column specifying events count. The data in this column must be pure numeric and non-negative, without missing values. | |
exponentiallyWeightedMovingAlpha | number | false | maximum: 1 minimum: 0 |
Discount factor (alpha) used for exponentially weighted moving features |
exposure | string | false | The name of a column specifying row exposure. The data in this column must be pure numeric (e.g. not currency, date, length, etc.) and without missing values. | |
externalPredictions | [string] | false | maxItems: 100 minItems: 1 |
List of external prediction columns from the dataset. |
externalTimeSeriesBaselineDatasetId | string | false | Catalog version id for external prediction data that can be used as a baseline to calculate new metrics. | |
externalTimeSeriesBaselineDatasetName | string¦null | false | The name of the time series baseline dataset for the project. | |
fairnessMetricsSet | string | false | Metric to use for calculating fairness. Can be one of proportionalParity , equalParity , predictionBalance , trueFavorableAndUnfavorableRateParity or FavorableAndUnfavorablePredictiveValueParity . Used and required only if Bias & Fairness in AutoML feature is enabled. |
|
fairnessThreshold | number | false | maximum: 1 minimum: 0 |
The threshold value of the fairness metric. The valid range is [0:1]; the default fairness metric value is 0.8. This metric is only applicable if the Bias & Fairness in AutoML feature is enabled. |
featureDerivationWindowEnd | integer | false | maximum: 0 |
For time series projects only. How many timeUnits of the datetimePartitionColumn into the past relative to the forecast point the feature derivation window should end. |
featureDerivationWindowStart | integer | false | maximum: 0 |
For time series projects only. How many timeUnits of the datetimePartitionColumn into the past relative to the forecast point the feature derivation window should begin. |
featureDiscoverySupervisedFeatureReduction | boolean | false | Run supervised feature reduction for feature discovery projects. | |
featureEngineeringPredictionPoint | string¦null | false | The date column to be used as the prediction point for time-based feature engineering. | |
featureSettings | [FeatureSetting] | false | An array specifying per feature settings. Features can be left unspecified. | |
featurelistId | string | false | The ID of a featurelist to use for autopilot. | |
forecastWindowEnd | integer | false | minimum: 0 |
For time series projects only. How many timeUnits of the datetimePartitionColumn into the future relative to the forecast point the forecast window should end. |
forecastWindowStart | integer | false | minimum: 0 |
For time series projects only. How many timeUnits of the datetimePartitionColumn into the future relative to the forecast point the forecast window should start. |
gapDuration | string(duration) | false | The duration of the gap between holdout training and holdout scoring data. For time series projects, defaults to the duration of the gap between the end of the feature derivation window and the beginning of the forecast window. For OTV projects, defaults to a zero duration (P0Y0M0D). | |
holdoutDuration | string(duration) | false | The duration of holdout scoring data. When specifying holdoutDuration , holdoutStartDate must also be specified. This attribute cannot be specified when disableHoldout is true. |
|
holdoutEndDate | string(date-time) | false | The end date of holdout scoring data. When specifying holdoutEndDate , holdoutStartDate must also be specified. This attribute cannot be specified when disableHoldout is true. |
|
holdoutLevel | any | false | The value of the partition column indicating a row is part of the holdout set. This level is optional - if not specified or if provided as null , then no holdout will be used in the project. However, the column must have exactly 2 values in order for this option to be valid |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
holdoutPct | number | false | maximum: 98 minimum: 0 |
The percentage of the dataset to assign to the holdout set |
holdoutStartDate | string(date-time) | false | The start date of holdout scoring data. When specifying holdoutStartDate , one of holdoutEndDate or holdoutDuration must also be specified. This attribute cannot be specified when disableHoldout is true. |
|
includeBiasMitigationFeatureAsPredictorVariable | boolean | false | Specifies whether the mitigation feature will be used as a predictor variable (i.e., treated like other categorical features in the input to train the modeler), in addition to being used for bias mitigation. If false, the mitigation feature will be used only for bias mitigation, and not for training the modeler task. | |
incrementalLearningEarlyStoppingRounds | integer | false | minimum: 0 |
Early stopping rounds for the auto incremental learning service |
incrementalLearningOnBestModel | boolean | false | Automatically run incremental learning on the best model during Autopilot run. | |
incrementalLearningOnlyMode | boolean | false | Keep only models that support incremental learning during Autopilot run. | |
isHoldoutModified | boolean | false | A boolean value indicating whether holdout settings (start/end dates) have been modified by user. | |
majorityDownsamplingRate | number | false | The percentage between 0 and 100 of the majority rows that should be kept. Must be specified only if using smart downsampling. If not specified, a default will be selected based on the dataset distribution. The chosen rate may not cause the majority class to become smaller than the minority class. | |
metric | string | false | The metric to use to select the best models. See /api/v2/projects/(projectId)/features/metrics/ for the metrics that may be valid for a potential target. Note that weighted metrics must be used with a weights column. |
|
minSecondaryValidationModelCount | integer | false | maximum: 10 minimum: 0 |
Compute 'All backtest' scores (datetime models) or cross validation scores for the specified number of highest ranking models on the Leaderboard, if over the Autopilot default. |
mode | string | false | The autopilot mode to use. Either 'quick', 'auto', 'manual' or 'comprehensive' | |
modelSplits | integer | false | maximum: 10 minimum: 1 |
Sets the cap on the number of jobs per model used when building models to control number of jobs in the queue. Higher number of modelSplits will allow for less downsampling leading to the use of more post-processed data. |
monotonicDecreasingFeaturelistId | string¦null | false | The ID of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If null, no such constraints are enforced. When specified, this will set a default for the project that can be overridden at model submission time if desired. | |
monotonicIncreasingFeaturelistId | string¦null | false | The ID of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If null, no such constraints are enforced. When specified, this will set a default for the project that can be overridden at model submission time if desired. | |
multiseriesIdColumns | [string] | false | minItems: 1 |
May be used only with time series projects. An array of the column names identifying the series to which each row of the dataset belongs. Currently only one multiseries ID column is supported. See the multiseries section of the time series documentation for more context. |
numberOfBacktests | integer | false | The number of backtests to use. If omitted, defaults to a positive value selected by the server based on the validation and gap durations. | |
offset | [string] | false | An array of strings with the names of columns specifying row offsets. The data in these columns must be pure numeric (e.g. not currency, date, length, etc.) and without missing values. | |
onlyIncludeMonotonicBlueprints | boolean | true | When true, only blueprints that support enforcing monotonic constraints will be available in the project or selected for autopilot. | |
partitionKeyCols | [string] | false | An array containing a single string - the name of the group partition column. | |
periodicities | [Periodicity] | false | A list of periodicities for time series projects only. For classification problems periodicities are not allowed. If this is provided, parameter 'differencing_method' will default to 'seasonal' if not provided or 'auto'. | |
positiveClass | any | false | A value from the target column to use for the positive class. May only be specified for projects doing binary classification. If not specified, a positive class is selected automatically. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
preferableTargetValue | any | false | A target value that should be treated as a positive outcome for the prediction. For example, if we want to check gender discrimination when granting a loan and our target is named is_bad, then the positive outcome for the prediction would be No, meaning the loan is good, and that is what we treat as the preferable result for the applicant. Used and required only if the Bias & Fairness in AutoML feature is enabled. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
prepareModelForDeployment | boolean | false | Prepare model for deployment during Autopilot run. The preparation includes creating reduced feature list models, retraining best model on higher sample size, computing insights and assigning 'RECOMMENDED FOR DEPLOYMENT' label. | |
primaryLocationColumn | string¦null | false | Primary geospatial location column. | |
protectedFeatures | [string] | false | maxItems: 10 minItems: 1 |
A list of project features to mark as protected for Bias metric calculation and Fairness correction. Used and required only if the Bias & Fairness in AutoML feature is enabled. |
quantileLevel | number | false | maximum: 1 (exclusive) minimum: 0 (exclusive) |
The quantile level between 0.01 and 0.99 for specifying the Quantile metric. |
quickrun | boolean | false | (Deprecated): 'quick' should be used in the mode parameter instead of this parameter. If set to true, autopilot mode will be set to 'quick'. Cannot be set to true when mode is set to 'comprehensive' or 'manual'. |
|
rateTopPctThreshold | number | false | maximum: 100 minimum: 0 |
The percentage threshold between 0.1 and 50 for specifying the Rate@Top% metric. |
relationshipsConfigurationId | string¦null | false | Relationships configuration id to be used for Feature Discovery projects. | |
reps | integer | false | maximum: 999999 minimum: 2 |
The number of cross validation folds to use. |
responseCap | number | false | maximum: 1 minimum: 0.5 |
Used to cap the maximum response of a model |
runLeakageRemovedFeatureList | boolean | false | Run Autopilot on Leakage Removed feature list (if exists). | |
sampleStepPct | number | false | maximum: 100 minimum: 0 (exclusive) |
A float between 0 and 100 indicating the desired percentage of data to sample when training models in comprehensive Autopilot. Note: this is only supported for comprehensive Autopilot, and the specified value may be lowered in order to be compatible with the project's dataset and partition settings. |
scoringCodeOnly | boolean | false | Keep only models that can be converted to scorable java code during Autopilot run. | |
seed | integer | false | maximum: 999999999 minimum: 0 |
A seed to use for randomization. |
segmentationTaskId | string¦null | false | Specifies the SegmentationTask that will be used for dividing the project up into multiple segmented projects. | |
shapOnlyMode | boolean | false | Keep only models that support SHAP values during Autopilot run. Use SHAP-based insights wherever possible. | |
smartDownsampled | boolean | false | Whether to use smart downsampling to throw away excess rows of the majority class. Only applicable to classification and zero-boosted regression projects. | |
stopWords | [string] | false | maxItems: 1000 |
A list of stop words to be used for text blueprints. Note: stop_words=True must be set in the blueprint preprocessing parameters for this list of stop words to actually be used during preprocessing. |
target | string | false | The name of the target feature. | |
targetType | string | false | Used to specify the targetType to use for a project when it is ambiguous, i.e. a numeric target with a few unique values that could be used for either regression or multiclass. | |
trainingLevel | any | false | The value of the partition column indicating a row is part of the training set. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
treatAsExponential | string | false | For time series projects only. Used to specify whether to treat the data as an exponential trend and apply transformations like log transform. For classification problems, the value always is not allowed. |
|
unsupervisedMode | boolean | false | If True, unsupervised project (without target) will be created. target cannot be specified if unsupervisedMode is True. |
|
unsupervisedType | string¦null | false | The type of unsupervised project. Only valid when unsupervisedMode is true. If unsupervisedMode is true and this is not specified, defaults to anomaly . |
|
useCrossSeriesFeatures | boolean | false | Indicates whether to use cross-series features. | |
useGpu | boolean | false | Indicates whether the project should use GPU workers. | |
useProjectSettings | boolean | false | Specifies whether datetime-partitioned project should use project settings (i.e. backtests configuration has been modified by the user). | |
useSupervisedFeatureReduction | boolean | false | When true, during feature generation DataRobot runs a supervised algorithm that identifies those features with predictive impact on the target and builds feature lists using only qualifying features. Setting false can severely impact autopilot duration, especially for datasets with many features. | |
useTimeSeries | boolean | false | A boolean value indicating whether a time series project should be created instead of a regular project which uses datetime partitioning. | |
userPartitionCol | string | false | The name of the column containing the partition assignments. | |
validationDuration | string(duration) | false | The default validation duration for all backtests. If the primary date/time feature in a time series project is irregular, you cannot set a default validation length. Instead, set each duration individually. For an OTV project setting the validation duration will always use regular partitioning. Omitting it will use irregular partitioning if the date/time feature is irregular. | |
validationLevel | any | false | The value of the partition column indicating a row is part of the validation set. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
validationPct | number | false | maximum: 99 minimum: 0 |
The percentage of the dataset to assign to the validation set |
validationType | string | false | The validation method to be used. CV for cross validation or TVH for train-validation-holdout split. | |
weights | string | false | The name of a column specifying row weights. The data in this column must be pure numeric (e.g. not currency, date, length, etc.) and without missing values | |
windowsBasisUnit | string | false | For time series projects only. Indicates which unit is the basis for the feature derivation window and forecast window. Valid options are the detected time unit or ROW . If omitted, the default value is the detected time unit. |
Enumerated Values¶
Property | Value |
---|---|
aggregationType | [total , average ] |
autopilotDataSamplingMethod | [random , latest ] |
autopilotDataSelectionMethod | [duration , rowCount ] |
biasMitigationTechnique | [preprocessingReweighing , postProcessingRejectionOptionBasedClassification ] |
cvMethod | [random , user , stratified , group , datetime ] |
differencingMethod | [auto , none , simple , seasonal ] |
fairnessMetricsSet | [proportionalParity , equalParity , predictionBalance , trueFavorableAndUnfavorableRateParity , favorableAndUnfavorablePredictiveValueParity ] |
mode | [0 , 2 , 4 , 3 , auto , manual , comprehensive , quick ] |
targetType | [Binary , Regression , Multiclass , Multilabel ] |
treatAsExponential | [auto , never , always ] |
unsupervisedType | [anomaly , clustering ] |
validationType | [CV , TVH ] |
windowsBasisUnit | [MILLISECOND , SECOND , MINUTE , HOUR , DAY , WEEK , MONTH , QUARTER , YEAR , ROW ] |
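For orientation, the sketch below combines several of the options documented above into one request body. It is illustrative only: the values are hypothetical, and it assumes the project aim endpoint (PATCH /api/v2/projects/{projectId}/aim/); confirm the exact path against the endpoint documentation on this page before use.
# Illustrative sketch only: hypothetical values; the path below is assumed to be
# the project aim endpoint and should be confirmed against this page's endpoint docs.
curl -X PATCH https://app.datarobot.com/api/v2/projects/{projectId}/aim/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  --data '{
    "target": "is_bad",
    "mode": "quick",
    "metric": "LogLoss",
    "smartDownsampled": true,
    "majorityDownsamplingRate": 70
  }'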
AllowExtra
{}
Parameters submitted by the user to the failed job
Properties¶
None
AssessmentNewFormat
{
"enrichmentRate": {
"action": "string",
"category": "green",
"message": "string"
},
"enrichmentRateValue": 0,
"featureDerivationWindow": "string",
"mostRecentData": {
"action": "string",
"category": "green",
"message": "string"
},
"overallCategory": "green",
"windowSettings": {
"action": "string",
"category": "green",
"message": "string"
}
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
enrichmentRate | Warnings | true | Warning about the enrichment rate | |
enrichmentRateValue | number | true | Percentage of primary table records that can be enriched with a record in this dataset | |
featureDerivationWindow | string¦null | false | Feature derivation window. | |
mostRecentData | Warnings | false | Warning about the most recent data | |
overallCategory | string | true | Class of the relationship quality | |
windowSettings | Warnings | false | Warning about the window settings |
Enumerated Values¶
Property | Value |
---|---|
overallCategory | [green , yellow ] |
Autopilot
{
"command": "start"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
command | string | true | If start , will unpause the autopilot and run queued jobs if workers are available. If stop , will pause the autopilot so no new jobs will be started. |
Enumerated Values¶
Property | Value |
---|---|
command | [start , stop ] |
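As a hedged illustration of the Autopilot command schema above, the request below pauses a running Autopilot; sending start instead resumes it. The path is an assumption (the project autopilot control endpoint) and should be checked against the endpoint documentation on this page.
# Illustrative only: the path is assumed to be the project autopilot control endpoint.
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/autopilot/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  --data '{"command": "stop"}'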
AutopilotStart
{
"autopilotClusterList": [
2
],
"blendBestModels": true,
"considerBlendersInRecommendation": true,
"featurelistId": "string",
"mode": "auto",
"prepareModelForDeployment": true,
"runLeakageRemovedFeatureList": true,
"scoringCodeOnly": true,
"useGpu": true
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
autopilotClusterList | [integer]¦null | false | maxItems: 10 |
Optional. A list of integers where each value will be used as the number of clusters in Autopilot model(s) for unsupervised clustering projects. Cannot be specified unless unsupervisedMode is true and unsupervisedType is set to 'clustering'. |
blendBestModels | boolean | false | Blend best models during Autopilot run. This option is not supported in SHAP-only mode or for multilabel projects. | |
considerBlendersInRecommendation | boolean | false | Include blenders when selecting a model to prepare for deployment in an Autopilot Run. This option is not supported in SHAP-only mode or for multilabel projects. | |
featurelistId | string | true | The ID of a featurelist that should be used for autopilot. | |
mode | string | false | The autopilot mode. | |
prepareModelForDeployment | boolean | false | Prepare model for deployment during Autopilot run. The preparation includes creating reduced feature list models, retraining best model on higher sample size, computing insights and assigning "RECOMMENDED FOR DEPLOYMENT" label. | |
runLeakageRemovedFeatureList | boolean | false | Run Autopilot on Leakage Removed feature list (if exists). | |
scoringCodeOnly | boolean | false | Keep only models that can be converted to scorable java code during Autopilot run. | |
useGpu | boolean | false | Use GPU workers for Autopilot run. |
Enumerated Values¶
Property | Value |
---|---|
mode | [auto , comprehensive , quick ] |
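A minimal sketch of an AutopilotStart request, assuming the endpoint that starts Autopilot on a new featurelist is POST /api/v2/projects/{projectId}/autopilots/ (verify against the endpoint documentation on this page); the featurelist ID is hypothetical.
# Illustrative only: the featurelist ID is hypothetical and the path is assumed.
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/autopilots/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  --data '{
    "featurelistId": "5f3f1e7a8e10f03d1a2b3c4d",
    "mode": "quick",
    "blendBestModels": false
  }'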
AzureServicePrincipalCredentials
{
"azureTenantId": "string",
"clientId": "string",
"clientSecret": "string",
"configId": "string",
"credentialType": "azure_service_principal"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
azureTenantId | string | false | Tenant ID of the Azure AD service principal. | |
clientId | string | false | Client ID of the Azure AD service principal. | |
clientSecret | string | false | Client Secret of the Azure AD service principal. | |
configId | string | false | ID of secure configurations of credentials shared by admin. | |
credentialType | string | true | The type of these credentials, 'azure_service_principal' here. |
Enumerated Values¶
Property | Value |
---|---|
credentialType | azure_service_principal |
Backtest
{
"gapDuration": "string",
"index": 0,
"primaryTrainingEndDate": "2019-08-24T14:15:22Z",
"primaryTrainingStartDate": "2019-08-24T14:15:22Z",
"validationDuration": "string",
"validationEndDate": "2019-08-24T14:15:22Z",
"validationStartDate": "2019-08-24T14:15:22Z"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
gapDuration | string(duration) | false | A duration string representing the duration of the gap between the training and the validation data for this backtest. | |
index | integer | true | The index from zero of the backtest specified by this object. | |
primaryTrainingEndDate | string(date-time) | false | A datetime string representing the end date of the primary training data for this backtest. | |
primaryTrainingStartDate | string(date-time) | false | A datetime string representing the start date of the primary training data for this backtest. | |
validationDuration | string(duration) | false | A duration string representing the duration of the validation data for this backtest. | |
validationEndDate | string(date-time) | false | A datetime string representing the end date of the validation data for this backtest. | |
validationStartDate | string(date-time) | true | A datetime string representing the start date of the validation data for this backtest. |
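The duration fields above are ISO 8601-style duration strings (for example, P0Y1M0D for one month). A hedged example of a single backtest definition with illustrative values, as it might appear nested inside a datetime partitioning specification:
# Illustrative backtest object (values are hypothetical).
cat <<'EOF'
{
  "index": 0,
  "gapDuration": "P0Y0M0D",
  "validationDuration": "P0Y1M0D",
  "primaryTrainingStartDate": "2018-01-01T00:00:00Z",
  "primaryTrainingEndDate": "2019-06-30T00:00:00Z",
  "validationStartDate": "2019-07-01T00:00:00Z"
}
EOF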
Backtests
{
"validationEndDate": "2019-08-24T14:15:22Z",
"validationStartDate": "2019-08-24T14:15:22Z"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
validationEndDate | string(date-time) | true | The end date of the validation scoring data for this backtest. | |
validationStartDate | string(date-time) | true | The start date of the validation scoring data for this backtest. |
BasicCredentials
{
"credentialType": "basic",
"password": "string",
"user": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
credentialType | string | true | The type of these credentials, 'basic' here. | |
password | string | true | The password for database authentication. The password is encrypted at rest and never saved / stored. | |
user | string | true | The username for database authentication. |
Enumerated Values¶
Property | Value |
---|---|
credentialType | basic |
BatchFeatureTransform
{
"parentNames": [
"string"
],
"prefix": "string",
"suffix": "string",
"variableType": "text"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
parentNames | [string] | true | maxItems: 500 minItems: 1 |
List of feature names that will be transformed into a new variable type. |
prefix | string | false | maxLength: 500 |
The string that will preface all feature names. Optional if suffix is present. (One or both are required.) |
suffix | string | false | maxLength: 500 |
The string that will be appended at the end to all feature names. Optional if prefix is present. (One or both are required.) |
variableType | string | true | The type of the new feature. Must be one of text , categorical (Deprecated in version v2.21), numeric , or categoricalInt . |
Enumerated Values¶
Property | Value |
---|---|
variableType | [text , categorical , numeric , categoricalInt ] |
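A hedged sketch of a BatchFeatureTransform request body with hypothetical feature names. The $ENDPOINT placeholder stands in for the project's batch feature transform endpoint (see the corresponding POST operation on this page); the resulting features would carry the num_ prefix.
# Illustrative only: $ENDPOINT is a placeholder for the project's batch feature
# transform endpoint; feature names are hypothetical.
curl -X POST "$ENDPOINT" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  --data '{
    "parentNames": ["purchase_amount", "units_sold"],
    "prefix": "num_",
    "variableType": "numeric"
  }'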
BatchFeatureTransformRetrieveResponse
{
"failures": {},
"newFeatureNames": [
"string"
]
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
failures | object | true | An object keyed by original feature names, the values are strings indicating why the transformation failed. | |
newFeatureNames | [string] | true | List of new feature names. |
CalendarAccessControlListResponse
{
"count": 0,
"data": [
{
"canShare": true,
"role": "ADMIN",
"userId": "string",
"username": "string"
}
],
"next": "string",
"previous": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | true | minimum: 0 |
The number of items returned on this page. |
data | [CalendarUserRoleRecordResponse] | true | Records of users and their roles on the calendar. | |
next | string¦null | true | A URL pointing to the next page (if null , there is no next page). |
|
previous | string¦null | true | A URL pointing to the previous page (if null , there is no previous page). |
CalendarAccessControlUpdate
{
"users": [
{
"role": "ADMIN",
"username": "string"
}
]
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
users | [CalendarUsernameRole] | true | maxItems: 100 |
The list of users and their updated roles used to modify the access for this calendar. |
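A minimal sketch of a CalendarAccessControlUpdate body that grants a colleague read-only access. The username is hypothetical, and $ENDPOINT stands in for the calendar access-control endpoint; confirm the method and path against the endpoint documentation on this page.
# Illustrative only: $ENDPOINT is a placeholder for the calendar access-control
# endpoint; the username is hypothetical. Confirm method and path before use.
curl -X PATCH "$ENDPOINT" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  --data '{
    "users": [
      {"username": "colleague@example.com", "role": "READ_ONLY"}
    ]
  }'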
CalendarEvent
{
"date": "2019-08-24T14:15:22Z",
"name": "string",
"seriesId": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
date | string(date-time) | true | The date of the calendar event. | |
name | string | true | Name of the calendar event. | |
seriesId | string¦null | true | The series ID for the event. If this event does not specify a series ID, then this will be null , indicating that the event applies to all series. |
CalendarEventsResponseQuery
{
"count": 0,
"data": [
{
"date": "2019-08-24T14:15:22Z",
"name": "string",
"seriesId": "string"
}
],
"next": "string",
"previous": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | true | minimum: 0 |
The number of items returned on this page. |
data | [CalendarEvent] | true | maxItems: 1000 |
An array of calendar events |
next | string¦null | true | A URL pointing to the next page (if null , there is no next page). |
|
previous | string¦null | true | A URL pointing to the previous page (if null , there is no previous page). |
CalendarFileUpload
{
"file": "string",
"multiseriesIdColumns": "string",
"name": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
file | string(binary) | true | The calendar file used to create a calendar. The calendar file is expected to meet the following criteria (see the example below): Must be in csv or xlsx format. Must have a header row; the names in the header row can be anything. Must have a single date column, in YYYY-MM-DD format. May optionally have a name column as the second column. May optionally have one series ID column that states which series each event is applicable for; if present, the name of this column must be specified in the multiseriesIdColumns parameter. |
|
multiseriesIdColumns | string | false | An array of multiseries ID column names for the calendar file. Currently only one multiseries ID column is supported. If not specified, the calendar is considered to be single series. | |
name | string | false | The name of the calendar file. If not provided, this will be set to the name of the provided file. |
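A hedged sketch of a calendar file that satisfies the criteria above: a single date column in YYYY-MM-DD format, an optional event name column, and an optional series ID column (whose name would be passed via multiseriesIdColumns). Column names and events are hypothetical.
# Illustrative calendar file; header names and events are hypothetical.
cat > events_calendar.csv <<'EOF'
date,event_name,store_id
2021-11-26,Black Friday,store_1
2021-12-24,Christmas Eve,store_1
2021-12-24,Christmas Eve,store_2
EOF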
CalendarFromDataset
{
"datasetId": "string",
"datasetVersionId": "string",
"deleteOnError": true,
"multiseriesIdColumns": [
"string"
],
"name": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
datasetId | string | true | The ID of the dataset from which to create the calendar. | |
datasetVersionId | string | false | The ID of the dataset version from which to create the calendar. | |
deleteOnError | boolean | false | Whether to delete the calendar file from the Catalog if it is not valid. | |
multiseriesIdColumns | [string] | false | maxItems: 1 |
Optional multiseries id columns for calendar. |
name | string | false | Optional name for catalog. |
CalendarListResponse
{
"count": 0,
"data": [
{
"created": "2019-08-24T14:15:22Z",
"datetimeFormat": "%m/%d/%Y",
"earliestEvent": "2019-08-24T14:15:22Z",
"id": "string",
"latestEvent": "2019-08-24T14:15:22Z",
"multiseriesIdColumns": [
"string"
],
"name": "string",
"numEventTypes": 0,
"numEvents": 0,
"projectId": [
"string"
],
"role": "ADMIN",
"source": "string"
}
],
"next": "string",
"previous": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | true | minimum: 0 |
The number of items returned on this page. |
data | [CalendarRecord] | true | An array of calendars, each in the form described under GET /api/v2/calendars/. | |
next | string¦null | true | A URL pointing to the next page (if null , there is no next page). |
|
previous | string¦null | true | A URL pointing to the previous page (if null , there is no previous page). |
CalendarNameUpdate
{
"name": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
name | string | true | The new name to assign to the calendar. |
CalendarRecord
{
"created": "2019-08-24T14:15:22Z",
"datetimeFormat": "%m/%d/%Y",
"earliestEvent": "2019-08-24T14:15:22Z",
"id": "string",
"latestEvent": "2019-08-24T14:15:22Z",
"multiseriesIdColumns": [
"string"
],
"name": "string",
"numEventTypes": 0,
"numEvents": 0,
"projectId": [
"string"
],
"role": "ADMIN",
"source": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
created | string(date-time) | true | An ISO-8601 string with the time that this calendar was created. | |
datetimeFormat | string | true | The datetime format detected for the uploaded calendar file. | |
earliestEvent | string(date-time) | true | An ISO-8601 date string of the earliest event seen in this calendar. | |
id | string | true | The ID of this calendar. | |
latestEvent | string(date-time) | true | An ISO-8601 date string of the latest event seen in this calendar. | |
multiseriesIdColumns | [string]¦null | true | maxItems: 1 |
An array of multiseries ID column names in this calendar file. Currently only one multiseries ID column is supported. Will be null if this calendar is single-series. |
name | string | true | The name of this calendar. This will be the same as source if no name was specified when the calendar was created. |
|
numEventTypes | integer | true | The number of distinct eventTypes in this calendar. | |
numEvents | integer | true | The number of dates that are marked as having an event in this calendar. | |
projectId | [string] | true | The project IDs of projects currently using this calendar. | |
role | string | true | The role the requesting user has on this calendar. | |
source | string | true | The name of the source file that was used to create this calendar. |
Enumerated Values¶
Property | Value |
---|---|
datetimeFormat | [%m/%d/%Y , %m/%d/%y , %d/%m/%y , %m-%d-%Y , %m-%d-%y , %Y/%m/%d , %Y-%m-%d , %Y-%m-%d %H:%M:%S , %Y/%m/%d %H:%M:%S , %Y.%m.%d %H:%M:%S , %Y-%m-%d %H:%M , %Y/%m/%d %H:%M , %y/%m/%d , %y-%m-%d , %y-%m-%d %H:%M:%S , %y.%m.%d %H:%M:%S , %y/%m/%d %H:%M:%S , %y-%m-%d %H:%M , %y.%m.%d %H:%M , %y/%m/%d %H:%M , %m/%d/%Y %H:%M , %m/%d/%y %H:%M , %d/%m/%Y %H:%M , %d/%m/%y %H:%M , %m-%d-%Y %H:%M , %m-%d-%y %H:%M , %d-%m-%Y %H:%M , %d-%m-%y %H:%M , %m.%d.%Y %H:%M , %m/%d.%y %H:%M , %d.%m.%Y %H:%M , %d.%m.%y %H:%M , %m/%d/%Y %H:%M:%S , %m/%d/%y %H:%M:%S , %m-%d-%Y %H:%M:%S , %m-%d-%y %H:%M:%S , %m.%d.%Y %H:%M:%S , %m.%d.%y %H:%M:%S , %d/%m/%Y %H:%M:%S , %d/%m/%y %H:%M:%S , %Y-%m-%d %H:%M:%S.%f , %y-%m-%d %H:%M:%S.%f , %Y-%m-%dT%H:%M:%S.%fZ , %y-%m-%dT%H:%M:%S.%fZ , %Y-%m-%dT%H:%M:%S.%f , %y-%m-%dT%H:%M:%S.%f , %Y-%m-%dT%H:%M:%S , %y-%m-%dT%H:%M:%S , %Y-%m-%dT%H:%M:%SZ , %y-%m-%dT%H:%M:%SZ , %Y.%m.%d %H:%M:%S.%f , %y.%m.%d %H:%M:%S.%f , %Y.%m.%dT%H:%M:%S.%fZ , %y.%m.%dT%H:%M:%S.%fZ , %Y.%m.%dT%H:%M:%S.%f , %y.%m.%dT%H:%M:%S.%f , %Y.%m.%dT%H:%M:%S , %y.%m.%dT%H:%M:%S , %Y.%m.%dT%H:%M:%SZ , %y.%m.%dT%H:%M:%SZ , %Y%m%d , %m %d %Y %H %M %S , %m %d %y %H %M %S , %H:%M , %M:%S , %H:%M:%S , %Y %m %d %H %M %S , %y %m %d %H %M %S , %Y %m %d , %y %m %d , %d/%m/%Y , %Y-%d-%m , %y-%d-%m , %Y/%d/%m %H:%M:%S.%f , %Y/%d/%m %H:%M:%S.%fZ , %Y/%m/%d %H:%M:%S.%f , %Y/%m/%d %H:%M:%S.%fZ , %y/%d/%m %H:%M:%S.%f , %y/%d/%m %H:%M:%S.%fZ , %y/%m/%d %H:%M:%S.%f , %y/%m/%d %H:%M:%S.%fZ , %m.%d.%Y , %m.%d.%y , %d.%m.%y , %d.%m.%Y , %Y.%m.%d , %Y.%d.%m , %y.%m.%d , %y.%d.%m , %Y-%m-%d %I:%M:%S %p , %Y/%m/%d %I:%M:%S %p , %Y.%m.%d %I:%M:%S %p , %Y-%m-%d %I:%M %p , %Y/%m/%d %I:%M %p , %y-%m-%d %I:%M:%S %p , %y.%m.%d %I:%M:%S %p , %y/%m/%d %I:%M:%S %p , %y-%m-%d %I:%M %p , %y.%m.%d %I:%M %p , %y/%m/%d %I:%M %p , %m/%d/%Y %I:%M %p , %m/%d/%y %I:%M %p , %d/%m/%Y %I:%M %p , %d/%m/%y %I:%M %p , %m-%d-%Y %I:%M %p , %m-%d-%y %I:%M %p , %d-%m-%Y %I:%M %p , %d-%m-%y %I:%M %p , %m.%d.%Y %I:%M %p , %m/%d.%y %I:%M %p , %d.%m.%Y %I:%M %p , %d.%m.%y %I:%M %p , %m/%d/%Y %I:%M:%S %p , %m/%d/%y %I:%M:%S %p , %m-%d-%Y %I:%M:%S %p , %m-%d-%y %I:%M:%S %p , %m.%d.%Y %I:%M:%S %p , %m.%d.%y %I:%M:%S %p , %d/%m/%Y %I:%M:%S %p , %d/%m/%y %I:%M:%S %p , %Y-%m-%d %I:%M:%S.%f %p , %y-%m-%d %I:%M:%S.%f %p , %Y-%m-%dT%I:%M:%S.%fZ %p , %y-%m-%dT%I:%M:%S.%fZ %p , %Y-%m-%dT%I:%M:%S.%f %p , %y-%m-%dT%I:%M:%S.%f %p , %Y-%m-%dT%I:%M:%S %p , %y-%m-%dT%I:%M:%S %p , %Y-%m-%dT%I:%M:%SZ %p , %y-%m-%dT%I:%M:%SZ %p , %Y.%m.%d %I:%M:%S.%f %p , %y.%m.%d %I:%M:%S.%f %p , %Y.%m.%dT%I:%M:%S.%fZ %p , %y.%m.%dT%I:%M:%S.%fZ %p , %Y.%m.%dT%I:%M:%S.%f %p , %y.%m.%dT%I:%M:%S.%f %p , %Y.%m.%dT%I:%M:%S %p , %y.%m.%dT%I:%M:%S %p , %Y.%m.%dT%I:%M:%SZ %p , %y.%m.%dT%I:%M:%SZ %p , %m %d %Y %I %M %S %p , %m %d %y %I %M %S %p , %I:%M %p , %I:%M:%S %p , %Y %m %d %I %M %S %p , %y %m %d %I %M %S %p , %Y/%d/%m %I:%M:%S.%f %p , %Y/%d/%m %I:%M:%S.%fZ %p , %Y/%m/%d %I:%M:%S.%f %p , %Y/%m/%d %I:%M:%S.%fZ %p , %y/%d/%m %I:%M:%S.%f %p , %y/%d/%m %I:%M:%S.%fZ %p , %y/%m/%d %I:%M:%S.%f %p , %y/%m/%d %I:%M:%S.%fZ %p ] |
role | [ADMIN , CONSUMER , DATA_SCIENTIST , EDITOR , OBSERVER , OWNER , READ_ONLY , READ_WRITE , USER ] |
CalendarUserRoleRecordResponse
{
"canShare": true,
"role": "ADMIN",
"userId": "string",
"username": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
canShare | boolean | true | Whether this user can share this calendar with other users. | |
role | string | true | The role of the user on this calendar. | |
userId | string | true | The ID of the user. | |
username | string | true | The username of a user with access to this calendar. |
Enumerated Values¶
Property | Value |
---|---|
role | [ADMIN , CONSUMER , DATA_SCIENTIST , EDITOR , OBSERVER , OWNER , READ_ONLY , READ_WRITE , USER ] |
CalendarUsernameRole
{
"role": "ADMIN",
"username": "string"
}
Each item in users
refers to the username record and its newly assigned role.
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
role | string¦null | true | The new role to assign to the specified user. | |
username | string | true | The username of the user to modify access for. |
Enumerated Values¶
Property | Value |
---|---|
role | [ADMIN , CONSUMER , DATA_SCIENTIST , EDITOR , OBSERVER , OWNER , READ_ONLY , READ_WRITE , USER ] |
CatalogPasswordCredentials
{
"catalogVersionId": "string",
"password": "string",
"url": "string",
"user": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
catalogVersionId | string | false | Identifier of the catalog version | |
password | string | true | The password (in cleartext) for database authentication. The password will be encrypted on the server side as part of the HTTP request and never saved or stored. | |
url | string | false | URL that is subject to credentials. | |
user | string | true | The username for database authentication. |
ClassMappingAggregationSettings
{
"aggregationClassName": "string",
"excludedFromAggregation": [],
"maxUnaggregatedClassValues": 1000,
"minClassSupport": 1
}
Class mapping aggregation settings.
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
aggregationClassName | string | false | The name of the class that will be assigned to all rows with aggregated classes. Should not match any value in excludedFromAggregation; otherwise there would be two classes with the same name and no way to distinguish between them. This option is only available for multiclass projects. By default 'DR_RARE_TARGET_VALUES' is used. | |
excludedFromAggregation | [string] | false | List of target values that should be guaranteed to be kept as is, regardless of other settings. | |
maxUnaggregatedClassValues | integer | false | maximum: 1000 minimum: 3 |
The maximum number of unique labels before aggregation kicks in. Should be at least len(excludedFromAggregation) + 1 for multiclass and at least len(excludedFromAggregation) for multilabel. |
minClassSupport | integer | false | Minimum number of instances necessary for each target value in the dataset. All values with fewer instances than this value will be aggregated (see the example below). |
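A hedged illustration of aggregation settings with concrete values: here any target value observed fewer than 50 times, and not listed in excludedFromAggregation, would be folded into the OTHER class, and aggregation only kicks in once more than 100 unique labels are present. Values and class names are hypothetical.
# Illustrative class mapping aggregation settings (hypothetical values).
cat <<'EOF'
{
  "aggregationClassName": "OTHER",
  "excludedFromAggregation": ["fraud", "chargeback"],
  "maxUnaggregatedClassValues": 100,
  "minClassSupport": 50
}
EOF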
CreateFeaturelist
{
"features": [
"string"
],
"name": "string",
"skipDatetimePartitionColumn": false
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
features | [string] | true | minItems: 1 |
List of features for new featurelist. |
name | string | true | maxLength: 100 |
New featurelist name. |
skipDatetimePartitionColumn | boolean | false | If true, the featurelist will not contain the datetime partition column. |
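A minimal sketch of a CreateFeaturelist request with hypothetical feature names, assuming the project featurelists endpoint (POST /api/v2/projects/{projectId}/featurelists/); confirm the path against the endpoint documentation on this page.
# Illustrative only: feature names are hypothetical; the path is assumed to be
# the project featurelists endpoint.
curl -X POST https://app.datarobot.com/api/v2/projects/{projectId}/featurelists/ \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  --data '{
    "name": "reduced-numeric-features",
    "features": ["purchase_amount", "units_sold", "customer_age"]
  }'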
CreatedCalendarDatasetResponse
{
"statusId": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
statusId | string | true | ID that can be used with GET /api/v2/status/ to poll for the testing job's status. |
CredentialId
{
"catalogVersionId": "string",
"credentialId": "string",
"url": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
catalogVersionId | string | false | The ID of the latest version of the catalog entry. | |
credentialId | string | true | The ID of the set of credentials to use instead of user and password. Note that with this change, username and password will become optional. | |
url | string | false | The link to retrieve more detailed information about the entity that uses this catalog dataset. |
CrossSeriesGroupByColumnRetrieveResponse
{
"crossSeriesGroupByColumns": [
{
"eligibility": "string",
"isEligible": true,
"name": "string"
}
],
"multiseriesId": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
crossSeriesGroupByColumns | [CrossSeriesGroupByColumnsListItem] | true | A list of columns with information about each column's eligibility as a cross-series group-by column. | |
multiseriesId | string | true | The name of the multiseries ID column. |
CrossSeriesGroupByColumnValidatePayload
{
"crossSeriesGroupByColumns": [
"string"
],
"datetimePartitionColumn": "string",
"multiseriesIdColumn": "string",
"userDefinedSegmentIdColumn": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
crossSeriesGroupByColumns | [string] | false | If specified, these columns will be validated for usage as the group-by column for creating cross-series features. If not present, then all columns from the dataset will be validated and only the eligible ones returned. To be valid, a column should be categorical or numerical (but not float), not be the series ID or equivalent to the series ID, not split any series, and not consist of only one value. | |
datetimePartitionColumn | string | true | The name of the column that will be used as the datetime partitioning column. | |
multiseriesIdColumn | string | true | The name of the column that will be used as the multiseries ID column for this project. | |
userDefinedSegmentIdColumn | string | false | The name of the column that will be used as the user defined segment ID column for this project. |
CrossSeriesGroupByColumnValidateResponse
{
"message": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
message | string | true | An extended message about the result. For example, if a job is submitted that is a duplicate of a job that has already been added to the queue, the message will mention that no new job was created. |
CrossSeriesGroupByColumnsListItem
{
"eligibility": "string",
"isEligible": true,
"name": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
eligibility | string | true | Information about the column's eligibility. If the column is not eligible, this will include the reason why. | |
isEligible | boolean | true | Indicates whether this column can be used as a group-by column. | |
name | string | true | The name of the column. |
DataSource
{
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
}
Data source details for a JDBC dataset
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
catalog | string¦null | false | Catalog name of the data source. | |
dataSourceId | string | false | ID of the data source. | |
dataStoreId | string | true | ID of the data store. | |
dataStoreName | string | true | Name of the data store. | |
dbtable | string | true | Table name of the data source. | |
schema | string¦null | true | Schema name of the data source. | |
url | string | true | URL of the data store. |
DatabricksAccessTokenCredentials
{
"credentialType": "databricks_access_token_account",
"databricksAccessToken": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
credentialType | string | true | The type of these credentials, 'databricks_access_token_account' here. | |
databricksAccessToken | string | true | minLength: 1 |
Databricks personal access token. |
Enumerated Values¶
Property | Value |
---|---|
credentialType | databricks_access_token_account |
DatabricksServicePrincipalCredentials
{
"clientId": "string",
"clientSecret": "string",
"configId": "string",
"credentialType": "databricks_service_principal_account"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
clientId | string | false | minLength: 1 |
Client ID for Databricks service principal. |
clientSecret | string | false | minLength: 1 |
Client secret for Databricks service principal. |
configId | string | false | The ID of the saved shared credentials. If specified, cannot include clientId and clientSecret. | |
credentialType | string | true | The type of these credentials, 'databricks_service_principal_account' here. |
Enumerated Values¶
Property | Value |
---|---|
credentialType | databricks_service_principal_account |
DatasetDefinition
{
"catalogId": "string",
"catalogVersionId": "string",
"featureListId": "string",
"identifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
catalogId | string | true | ID of the catalog item. | |
catalogVersionId | string | true | ID of the catalog item version. | |
featureListId | string¦null | false | ID of the feature list. This decides which columns in the dataset are used for feature generation. | |
identifier | string | true | maxLength: 20 minLength: 1 |
Short name of the dataset (used directly as part of the generated feature names). |
primaryTemporalKey | string¦null | false | Name of the column indicating time of record creation. | |
snapshotPolicy | string | false | Policy for using dataset snapshots when creating a project or making predictions. Must be one of the following values: 'specified': Use specific snapshot specified by catalogVersionId. 'latest': Use latest snapshot from the same catalog item. 'dynamic': Get data from the source (only applicable for JDBC datasets). |
Enumerated Values¶
Property | Value |
---|---|
snapshotPolicy | [specified , latest , dynamic ] |
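A hedged example of a DatasetDefinition as it might appear inside a relationships configuration: a secondary dataset joined under the short name transactions, with its purchase_date column marking record creation time and the latest snapshot of the catalog item in use. All IDs and names are hypothetical.
# Illustrative dataset definition fragment (all IDs and names are hypothetical).
cat <<'EOF'
{
  "identifier": "transactions",
  "catalogId": "5f3f1e7a8e10f03d1a2b3c4d",
  "catalogVersionId": "5f3f1e7a8e10f03d1a2b3c4e",
  "primaryTemporalKey": "purchase_date",
  "snapshotPolicy": "latest"
}
EOF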
DatasetDefinitionResponse
{
"catalogId": "string",
"catalogVersionId": "string",
"dataSource": {
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
},
"dataSources": [
{
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
}
],
"featureListId": "string",
"featureLists": [
"string"
],
"identifier": "string",
"isDeleted": true,
"originalIdentifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
catalogId | string¦null | true | ID of the catalog item. | |
catalogVersionId | string | true | ID of the catalog item version. | |
dataSource | DataSource | false | Data source details for a JDBC dataset | |
dataSources | [DataSource]¦null | false | Data source details for a JDBC dataset | |
featureListId | string | true | ID of the feature list. This decides which columns in the dataset are used for feature generation. | |
featureLists | [string] | false | List of available feature list ids for the dataset | |
identifier | string | true | maxLength: 20 minLength: 1 |
Short name of the dataset (used directly as part of the generated feature names). |
isDeleted | boolean¦null | false | Is this dataset deleted? | |
originalIdentifier | string¦null | false | maxLength: 20 minLength: 1 |
Original identifier of the dataset if it was updated to resolve name conflicts. |
primaryTemporalKey | string¦null | false | Name of the column indicating time of record creation. | |
snapshotPolicy | string | true | Policy for using dataset snapshots when creating a project or making predictions. Must be one of the following values: 'specified': Use specific snapshot specified by catalogVersionId. 'latest': Use latest snapshot from the same catalog item. 'dynamic': Get data from the source (only applicable for JDBC datasets). |
Enumerated Values¶
Property | Value |
---|---|
snapshotPolicy | [specified , latest , dynamic ] |
DatasetsCredential
{
"catalogVersionId": "string",
"credentialId": "string",
"url": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
catalogVersionId | string | true | ID of the catalog version | |
credentialId | string | true | ID of the credential store to be used for the given catalog version | |
url | string¦null | false | The URL of the datasource |
DeletedProjectCountResponse
{
"deletedProjectsCount": 0,
"projectCountLimit": 0,
"valueExceedsLimit": true
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
deletedProjectsCount | integer | true | minimum: 0 |
Number of soft-deleted projects. The value is limited by projectCountLimit |
projectCountLimit | integer | true | minimum: 0 |
The limit applied when counting deleted projects; counting stops above this limit |
valueExceedsLimit | boolean | true | Whether the actual number of soft-deleted projects exceeds the counting limit |
DeletedProjectListResponse
{
"count": 0,
"data": [
{
"createdBy": {
"email": "string",
"id": "string"
},
"deletedBy": {
"email": "string",
"id": "string"
},
"deletionTime": "2019-08-24T14:15:22Z",
"fileName": "string",
"id": "string",
"organization": {
"id": "string",
"name": "string"
},
"projectName": "Untitled Project",
"scheduledForDeletion": true
}
],
"next": "http://example.com",
"previous": "http://example.com"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | false | Number of items returned on this page. | |
data | [DeletedProjectResponse] | true | List of deleted projects | |
next | string(uri)¦null | true | URL pointing to the next page (if null, there is no next page). | |
previous | string(uri)¦null | true | URL pointing to the previous page (if null, there is no previous page). |
DeletedProjectOrganization
{
"id": "string",
"name": "string"
}
The organization the project belongs to
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
id | string | true | ID of the organization the project belongs to | |
name | string | true | Name of the organization the project belongs to |
DeletedProjectResponse
{
"createdBy": {
"email": "string",
"id": "string"
},
"deletedBy": {
"email": "string",
"id": "string"
},
"deletionTime": "2019-08-24T14:15:22Z",
"fileName": "string",
"id": "string",
"organization": {
"id": "string",
"name": "string"
},
"projectName": "Untitled Project",
"scheduledForDeletion": true
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
createdBy | DeletedProjectUser | true | The user who created the project | |
deletedBy | DeletedProjectUser | true | The user who deleted the project | |
deletionTime | string(date-time)¦null | true | ISO-8601 formatted date when project was deleted | |
fileName | string¦null | true | The name of the file uploaded for the project dataset | |
id | string | true | The ID of the project | |
organization | DeletedProjectOrganization | true | The organization the project belongs to | |
projectName | string | true | The name of the project | |
scheduledForDeletion | boolean | true | Whether project permanent deletion has already been scheduled |
DeletedProjectUser
{
"email": "string",
"id": "string"
}
The user who created the project
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
email | string | true | Email of the user | |
id | string | true | ID of the user |
DiscardedFeaturesResponse
{
"count": 0,
"features": [
"string"
],
"remainingRestoreLimit": 0,
"totalRestoreLimit": 0
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | true | minimum: 0 |
Discarded features count. |
features | [string] | true | Discarded features. | |
remainingRestoreLimit | integer | true | minimum: 0 |
The remaining available number of the features which can be restored in this project. |
totalRestoreLimit | integer | true | minimum: 0 |
The total limit indicating how many features can be restored in this project. |
Empty
{}
Properties¶
None
ExtendedRelationship
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND",
"relationshipQuality": {
"detailedReport": [
{
"enrichmentRate": {
"action": "string",
"category": "green",
"message": "string"
},
"enrichmentRateValue": 0,
"featureDerivationWindow": "string",
"mostRecentData": {
"action": "string",
"category": "green",
"message": "string"
},
"overallCategory": "green",
"windowSettings": {
"action": "string",
"category": "green",
"message": "string"
}
}
],
"lastUpdated": "2019-08-24T14:15:22Z",
"problemCount": 0,
"samplingFraction": 0,
"status": "Complete",
"statusId": [
"string"
],
"summaryCategory": "green",
"summaryMessage": "string"
}
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
dataset1Identifier | string¦null | false | maxLength: 20 minLength: 1 |
Identifier of the first dataset in the relationship. If this is not provided, it represents the primary dataset. |
dataset1Keys | [string] | true | maxItems: 10 minItems: 1 |
Column(s) in the first dataset that are used to join to the second dataset. |
dataset2Identifier | string | true | maxLength: 20 minLength: 1 |
Identifier of the second dataset in the relationship. |
dataset2Keys | [string] | true | maxItems: 10 minItems: 1 |
Column(s) in the second dataset that are used to join to the first dataset. |
featureDerivationWindowEnd | integer | false | maximum: 0 |
How many featureDerivationWindowUnits of each dataset's primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should end. Will be a non-positive integer, if present. If present, time-aware joins will be used. Only applicable when table1Identifier is not provided. |
featureDerivationWindowStart | integer | false | maximum: 0 (exclusive) |
How many featureDerivationWindowUnits of each dataset's primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should begin. Will be a negative integer, if present. If present, time-aware joins will be used. Only applicable when table1Identifier is not provided. |
featureDerivationWindowTimeUnit | string | false | Time unit of the feature derivation window. Supported values are MILLISECOND, SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR. If present, time-aware joins will be used. Only applicable when table1Identifier is not provided. | |
featureDerivationWindows | [FeatureDerivationWindow] | false | maxItems: 3 |
List of feature derivation window definitions that will be used. |
predictionPointRounding | integer | false | maximum: 30 minimum: 0 (exclusive) |
Closest value of predictionPointRoundingTimeUnit to round the prediction point into the past when applying the feature derivation window. Will be a positive integer, if present. Only applicable when table1Identifier is not provided. |
predictionPointRoundingTimeUnit | string | false | Time unit of the prediction point rounding. Supported values are MILLISECOND, SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR. Only applicable when table1Identifier is not provided. | |
relationshipQuality | any | false | Summary of the relationship quality information |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | RelationshipQualitySummaryNewFormat | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | RelationshipQualitySummary | false | none |
Enumerated Values¶
Property | Value |
---|---|
featureDerivationWindowTimeUnit | [MILLISECOND , SECOND , MINUTE , HOUR , DAY , WEEK , MONTH , QUARTER , YEAR ] |
predictionPointRoundingTimeUnit | [MILLISECOND , SECOND , MINUTE , HOUR , DAY , WEEK , MONTH , QUARTER , YEAR ] |
ExtendedRelationshipsConfigRetrieve
{
"datasetDefinitions": [
{
"catalogId": "string",
"catalogVersionId": "string",
"dataSource": {
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
},
"dataSources": [
{
"catalog": "string",
"dataSourceId": "string",
"dataStoreId": "string",
"dataStoreName": "string",
"dbtable": "string",
"schema": "string",
"url": "string"
}
],
"featureListId": "string",
"featureLists": [
"string"
],
"identifier": "string",
"isDeleted": true,
"originalIdentifier": "string",
"primaryTemporalKey": "string",
"snapshotPolicy": "specified"
}
],
"featureDiscoveryMode": "default",
"featureDiscoverySettings": [
{
"description": "string",
"family": "string",
"name": "string",
"settingType": "string",
"value": true,
"verboseName": "string"
}
],
"id": "string",
"relationships": [
{
"dataset1Identifier": "string",
"dataset1Keys": [
"string"
],
"dataset2Identifier": "string",
"dataset2Keys": [
"string"
],
"featureDerivationWindowEnd": 0,
"featureDerivationWindowStart": 0,
"featureDerivationWindowTimeUnit": "MILLISECOND",
"featureDerivationWindows": [
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
],
"predictionPointRounding": 0,
"predictionPointRoundingTimeUnit": "MILLISECOND",
"relationshipQuality": {
"detailedReport": [
{
"enrichmentRate": {
"action": "string",
"category": "green",
"message": "string"
},
"enrichmentRateValue": 0,
"featureDerivationWindow": "string",
"mostRecentData": {
"action": "string",
"category": "green",
"message": "string"
},
"overallCategory": "green",
"windowSettings": {
"action": "string",
"category": "green",
"message": "string"
}
}
],
"lastUpdated": "2019-08-24T14:15:22Z",
"problemCount": 0,
"samplingFraction": 0,
"status": "Complete",
"statusId": [
"string"
],
"summaryCategory": "green",
"summaryMessage": "string"
}
}
],
"snowflakePushDownCompatible": true
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
datasetDefinitions | [DatasetDefinitionResponse] | true | List of dataset definitions. | |
featureDiscoveryMode | string¦null | false | Mode of feature discovery. Supported values are 'default' and 'manual'. | |
featureDiscoverySettings | [FeatureDiscoverySettingResponse]¦null | false | List of feature discovery settings used to customize the feature discovery process. | |
id | string | false | ID of relationships configuration. | |
relationships | [ExtendedRelationship] | true | maxItems: 100 minItems: 1 |
A list of relationships with quality assessment information |
snowflakePushDownCompatible | boolean¦null | false | Is this configuration compatible with pushdown computation on Snowflake? |
Enumerated Values¶
Property | Value |
---|---|
featureDiscoveryMode | [default , manual ] |
ExternalTSBaselineMetadata
{
"datasetId": "string",
"datasetName": "string"
}
The id of the catalog item that is being used as the external baseline data
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
datasetId | string¦null | true | Catalog version id for external prediction data that can be used as a baseline to calculate new metrics. | |
datasetName | string¦null | true | The name of the timeseries baseline dataset for the project |
ExternalTSBaselinePayload
{
"backtests": [
{
"validationEndDate": "2019-08-24T14:15:22Z",
"validationStartDate": "2019-08-24T14:15:22Z"
}
],
"catalogVersionId": "string",
"datetimePartitionColumn": "string",
"forecastWindowEnd": 0,
"forecastWindowStart": 0,
"holdoutEndDate": "2019-08-24T14:15:22Z",
"holdoutStartDate": "2019-08-24T14:15:22Z",
"multiseriesIdColumns": [
"string"
],
"target": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
backtests | [Backtests] | false | maxItems: 20 minItems: 1 |
An array of the configured backtests. |
catalogVersionId | string | true | The version id of the external baseline data item in the AI catalog. | |
datetimePartitionColumn | string | true | The date column that will be used as the datetime partition column for the specified project. | |
forecastWindowEnd | integer | true | minimum: 0 |
For time series projects only. How many timeUnits of the datetimePartitionColumn into the future relative to the forecast point the forecast window should end. |
forecastWindowStart | integer | true | minimum: 0 |
For time series projects only. How many timeUnits of the datetimePartitionColumn into the future relative to the forecast point the forecast window should start. |
holdoutEndDate | string(date-time) | false | The end date of holdout scoring data. | |
holdoutStartDate | string(date-time) | false | The start date of holdout scoring data. | |
multiseriesIdColumns | [string] | false | maxItems: 1 minItems: 1 |
An array of column names identifying the multiseries ID column(s) to use to identify series within the data. Must match the multiseries ID column(s) for the specified project. Currently, only one multiseries ID column may be specified. |
target | string | true | The selected target of the specified project. |
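A hedged sketch of an ExternalTSBaselinePayload for validating an external baseline against a weekly forecasting project. $ENDPOINT stands in for the external baseline validation endpoint documented on this page, and all values are hypothetical.
# Illustrative only: $ENDPOINT is a placeholder for the external time series
# baseline validation endpoint; values are hypothetical.
curl -X POST "$ENDPOINT" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access-token}" \
  --data '{
    "catalogVersionId": "5f3f1e7a8e10f03d1a2b3c4f",
    "datetimePartitionColumn": "week_start",
    "target": "sales",
    "forecastWindowStart": 1,
    "forecastWindowEnd": 4,
    "holdoutStartDate": "2020-10-01T00:00:00Z",
    "holdoutEndDate": "2020-12-31T00:00:00Z",
    "multiseriesIdColumns": ["store_id"]
  }'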
ExternalTSBaselineResponse
{
"backtests": [
{
"validationEndDate": "2019-08-24T14:15:22Z",
"validationStartDate": "2019-08-24T14:15:22Z"
}
],
"baselineValidationJobId": "string",
"catalogVersionId": "string",
"datetimePartitionColumn": "string",
"forecastWindowEnd": 0,
"forecastWindowStart": 0,
"holdoutEndDate": "2019-08-24T14:15:22Z",
"holdoutStartDate": "2019-08-24T14:15:22Z",
"isExternalBaselineDatasetValid": true,
"message": "string",
"multiseriesIdColumns": [
"string"
],
"projectId": "string",
"target": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
backtests | [Backtests] | false | maxItems: 20 minItems: 1 |
An array of the configured backtests. |
baselineValidationJobId | string | true | The id of the validation job. | |
catalogVersionId | string | true | The version id of the external baseline data item in the AI catalog. | |
datetimePartitionColumn | string | true | The date column that will be used as the datetime partition column for the specified project. | |
forecastWindowEnd | integer | false | minimum: 0 |
For time series projects only. How many timeUnits of the datetimePartitionColumn into the future relative to the forecast point the forecast window should end. |
forecastWindowStart | integer | false | minimum: 0 |
For time series projects only. How many timeUnits of the datetimePartitionColumn into the future relative to the forecast point the forecast window should start. |
holdoutEndDate | string(date-time) | false | The end date of holdout scoring data. | |
holdoutStartDate | string(date-time) | false | The start date of holdout scoring data. | |
isExternalBaselineDatasetValid | boolean | true | Indicates whether the external dataset has passed the validation check. | |
message | string¦null | true | A message providing more detail on the validation result. | |
multiseriesIdColumns | [string] | false | maxItems: 1 minItems: 1 |
An array of column names identifying the multiseries ID column(s) to use to identify series within the data. Must match the multiseries ID column(s) for the specified project. Currently, only one multiseries ID column may be specified. |
projectId | string | true | The project id of the external baseline data item. | |
target | string | true | The selected target of the specified project. |
FeatureDerivationWindow
{
"end": 0,
"start": 0,
"unit": "MILLISECOND"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
end | integer | true | maximum: 0 |
How many featureDerivationWindowUnits of each dataset's primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should end. Will be a non-positive integer, if present. If present, time-aware joins will be used. Only applicable when table1Identifier is not provided. |
start | integer | true | maximum: 0 (exclusive) |
How many featureDerivationWindowUnits of each dataset's primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should begin. Will be a negative integer, if present. If present, time-aware joins will be used. Only applicable when table1Identifier is not provided. |
unit | string | true | Time unit of the feature derivation window. Supported values are MILLISECOND, SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR. If present, time-aware joins will be used. Only applicable when table1Identifier is not provided. |
Enumerated Values¶
Property | Value |
---|---|
unit | [MILLISECOND , SECOND , MINUTE , HOUR , DAY , WEEK , MONTH , QUARTER , YEAR ] |
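A hedged example of a feature derivation window covering the 14 days up to each prediction point: start is negative (how far into the past derivation begins) and end is non-positive (where it stops, here at the prediction point itself).
# Illustrative feature derivation window: derive features over the 14 days
# ending at the prediction point.
cat <<'EOF'
{
  "start": -14,
  "end": 0,
  "unit": "DAY"
}
EOF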
FeatureDiscoveryLogListResponse
{
"count": 0,
"featureDiscoveryLog": [
"string"
],
"next": "http://example.com",
"previous": "http://example.com",
"totalLogLines": 0
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | true | Number of items returned on this page | |
featureDiscoveryLog | [string] | true | List of lines retrieved from the feature discovery log | |
next | string(uri)¦null | true | URL pointing to the next page (if null, there is no next page) | |
previous | string(uri)¦null | true | URL pointing to the previous page (if null, there is no previous page) | |
totalLogLines | integer | true | Total number of lines in the feature discovery log. |
FeatureDiscoveryRecipeSQLsExport
{
"modelId": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
modelId | string | false | Model ID to export recipe for |
FeatureDiscoverySetting
{
"name": "string",
"value": true
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
name | string | true | maxLength: 100 |
Name of this feature discovery setting |
value | boolean | true | Value of this feature discovery setting |
FeatureDiscoverySettingResponse
{
"description": "string",
"family": "string",
"name": "string",
"settingType": "string",
"value": true,
"verboseName": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
description | string | true | Description of this feature discovery setting | |
family | string | true | Family of this feature discovery setting | |
name | string | true | maxLength: 100 |
Name of this feature discovery setting |
settingType | string | true | Type of this feature discovery setting | |
value | boolean | true | Value of this feature discovery setting | |
verboseName | string | true | Human readable name of this feature discovery setting |
FeatureHistogramPlotResponse
{
"count": 0,
"label": "string",
"target": 0
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | number | true | Number of values in the bin (or weights, if the project is weighted). | |
label | string | true | bin start for numerical/uncapped, or string value for categorical. The bin ==Missing== is created for rows that did not have the feature. |
|
target | number¦null | true | Average value of the target feature values for the bin. For regression projects, it will be null if the feature was deemed low-informative, the project target has not been selected yet, or AIM processing has not finished yet. You can use the GET /api/v2/projects/{projectId}/features/ endpoint to find out more about low-informative features. For binary classification, the same conditions apply, but the value should be treated as the ratio of positives in the bin to the bin's total size (count ). For multiclass projects the value is always null. |
FeatureHistogramResponse
{
"plot": [
{
"count": 0,
"label": "string",
"target": 0
}
]
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
plot | [FeatureHistogramPlotResponse] | true | plot data based on feature values. |
FeatureKeySummaryDetailsResponseValidatorMultilabel
{
"max": 0,
"mean": 0,
"median": 0,
"min": 0,
"pctRows": 0,
"stdDev": 0
}
Statistics of the key.
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
max | number | true | Maximum value of the key. | |
mean | number | true | Mean value of the key. | |
median | number | true | Median value of the key. | |
min | number | true | Minimum value of the key. | |
pctRows | number | true | Percentage occurrence of key in the EDA sample of the feature. | |
stdDev | number | true | Standard deviation of the key. |
FeatureKeySummaryDetailsResponseValidatorSummarizedCategorical
{
"dataQualities": "ISSUES_FOUND",
"max": 0,
"mean": 0,
"median": 0,
"min": 0,
"pctRows": 0,
"stdDev": 0
}
Statistics of the key.
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
dataQualities | string | true | The indicator of data quality assessment of the feature. | |
max | number | true | Maximum value of the key. | |
mean | number | true | Mean value of the key. | |
median | number | true | Median value of the key. | |
min | number | true | Minimum value of the key. | |
pctRows | number | true | Percentage occurrence of key in the EDA sample of the feature. | |
stdDev | number | true | Standard deviation of the key. |
Enumerated Values¶
Property | Value |
---|---|
dataQualities | [ISSUES_FOUND , NOT_ANALYZED , NO_ISSUES_FOUND ] |
FeatureKeySummaryResponseValidatorMultilabel
{
"key": "string",
"summary": {
"max": 0,
"mean": 0,
"median": 0,
"min": 0,
"pctRows": 0,
"stdDev": 0
}
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
key | string | true | Name of the key. | |
summary | FeatureKeySummaryDetailsResponseValidatorMultilabel | true | Statistics of the key. |
FeatureKeySummaryResponseValidatorSummarizedCategorical
{
"key": "string",
"summary": {
"dataQualities": "ISSUES_FOUND",
"max": 0,
"mean": 0,
"median": 0,
"min": 0,
"pctRows": 0,
"stdDev": 0
}
}
For Summarized Categorical columns, this will contain statistics for the top 50 keys (truncated to 103 characters).
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
key | string | true | Name of the key. | |
summary | FeatureKeySummaryDetailsResponseValidatorSummarizedCategorical | true | Statistics of the key. |
FeatureLineageJoin
{
"joinType": "left, right",
"leftTable": {
"columns": [
"string"
],
"datasteps": [
1
]
},
"rightTable": {
"columns": [
"string"
],
"datasteps": [
1
]
}
}
join step details.
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
joinType | string | true | Kind of SQL JOIN applied. | |
leftTable | FeatureLineageJoinTable | true | Information about a dataset which was considered left in a join. | |
rightTable | FeatureLineageJoinTable | true | Information about a dataset which was considered right in a join. |
Enumerated Values¶
Property | Value |
---|---|
joinType | left, right |
FeatureLineageJoinTable
{
"columns": [
"string"
],
"datasteps": [
1
]
}
Information about a dataset which was considered left in a join.
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
columns | [string] | true | minItems: 1 |
List of columns which datasets were joined by. |
datasteps | [integer] | true | List of data step ids which brought the columns into the current step's dataset. |
FeatureLineageResponse
{
"steps": [
{
"arguments": {},
"catalogId": "string",
"catalogVersionId": "string",
"description": "string",
"groupBy": [
"string"
],
"id": 0,
"isTimeAware": true,
"joinInfo": {
"joinType": "left, right",
"leftTable": {
"columns": [
"string"
],
"datasteps": [
1
]
},
"rightTable": {
"columns": [
"string"
],
"datasteps": [
1
]
}
},
"name": "string",
"parents": [
0
],
"stepType": "data",
"timeInfo": {
"duration": {
"duration": 0,
"timeUnit": "string"
},
"latest": {
"duration": 0,
"timeUnit": "string"
}
}
}
]
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
steps | [FeatureLineageStep] | true | List of steps which were applied to build the feature. |
FeatureLineageStep
{
"arguments": {},
"catalogId": "string",
"catalogVersionId": "string",
"description": "string",
"groupBy": [
"string"
],
"id": 0,
"isTimeAware": true,
"joinInfo": {
"joinType": "left, right",
"leftTable": {
"columns": [
"string"
],
"datasteps": [
1
]
},
"rightTable": {
"columns": [
"string"
],
"datasteps": [
1
]
}
},
"name": "string",
"parents": [
0
],
"stepType": "data",
"timeInfo": {
"duration": {
"duration": 0,
"timeUnit": "string"
},
"latest": {
"duration": 0,
"timeUnit": "string"
}
}
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
arguments | object | false | Generic key-value pairs describing additional parameters of the action step. | |
catalogId | string | false | id of the catalog for a data step. | |
catalogVersionId | string | false | id of the catalog version for a data step. | |
description | string | false | Description of the step. | |
groupBy | [string] | false | List of columns which this action step aggregated. | |
id | integer | true | minimum: 0 |
Step id starting with 0. |
isTimeAware | boolean | false | Indicates whether the step is time-aware. Mandatory only for action and join steps. An action step provides additional information about the feature derivation window in the timeInfo field. |
|
joinInfo | FeatureLineageJoin | false | join step details. | |
name | string | false | Name of the step. | |
parents | [integer] | true | ids of the steps which use this step's output as their input. |
|
stepType | string | true | One of four step types: data - source features; action - data aggregation or transformation; join - SQL JOIN; generatedData - final feature. There is always one generatedData step and at least one data step. | |
timeInfo | FeatureLineageTimeInfo | false | Description of a feature derivation window which was applied to this action step. |
Enumerated Values¶
Property | Value |
---|---|
stepType | [data , action , join , generatedData ] |
FeatureLineageTimeInfo
{
"duration": {
"duration": 0,
"timeUnit": "string"
},
"latest": {
"duration": 0,
"timeUnit": "string"
}
}
Description of a feature derivation window which was applied to this action step.
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
duration | TimeDelta | true | Length of the feature derivation window applied. | |
latest | TimeDelta | true | End of the feature derivation window applied. |
FeatureMetricDetailsResponse
{
"ascending": true,
"metricName": "string",
"supportsBinary": true,
"supportsMulticlass": true,
"supportsRegression": true,
"supportsTimeseries": true
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
ascending | boolean | true | Should the metric be sorted in ascending order. | |
metricName | string | true | Name of the metric. | |
supportsBinary | boolean | true | This metric is valid for binary classification. | |
supportsMulticlass | boolean | true | This metric is valid for multiclass classification. | |
supportsRegression | boolean | true | This metric is valid for regression. | |
supportsTimeseries | boolean | true | This metric is valid for time series. |
FeatureMetricsResponse
{
"availableMetrics": [
"string"
],
"featureName": "string",
"metricDetails": [
{
"ascending": true,
"metricName": "string",
"supportsBinary": true,
"supportsMulticlass": true,
"supportsRegression": true,
"supportsTimeseries": true
}
]
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
availableMetrics | [string] | true | an array of strings representing the appropriate metrics. If the feature cannot be selected as the target, then this array will be empty. | |
featureName | string | true | the name of the feature that was looked up | |
metricDetails | [FeatureMetricDetailsResponse] | true | the list of metricDetails objects. |
FeatureSetting
{
"aPriori": true,
"doNotDerive": true,
"featureName": "string",
"knownInAdvance": true
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
aPriori | boolean | false | Renamed to knownInAdvance . |
|
doNotDerive | boolean | false | For time series projects only. Sets whether the feature is do-not-derive, i.e., is excluded from feature derivation. If not specified, the feature uses the value from the defaultToDoNotDerive flag. |
|
featureName | string | true | The name of the feature being specified. | |
knownInAdvance | boolean | false | For time series projects only. Sets whether the feature is known in advance, i.e., values for future dates are known at prediction time. If not specified, the feature uses the value from the defaultToKnownInAdvance flag. |
FeatureTransform
{
"dateExtraction": "year",
"name": "string",
"parentName": "string",
"replacement": "string",
"variableType": "text"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
dateExtraction | string | false | The value to extract from the date column, one of: [year|yearDay|month|monthDay|week|weekDay] . Required for transformation of a date column; otherwise must not be provided. |
|
name | string | true | The name of the new feature. Must not be the same as any existing feature for this project. Must not contain the '/' character. | |
parentName | string | true | The name of the parent feature. | |
replacement | any | false | The replacement in case of a failed transformation. |
anyOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string¦null | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | boolean¦null | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number¦null | false | none |
or
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | integer¦null | false | none |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
variableType | string | true | The type of the new feature. Must be one of text , categorical (Deprecated in version v2.21), numeric , or categoricalInt . See the description of this method for more information. |
Enumerated Values¶
Property | Value |
---|---|
dateExtraction | [year , yearDay , month , monthDay , week , weekDay ] |
variableType | [text , categorical , numeric , categoricalInt ] |
FeaturelistDestroyResponse
{
"canDelete": "false",
"deletionBlockedReason": "string",
"dryRun": "false",
"numAffectedJobs": 0,
"numAffectedModels": 0
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
canDelete | string | true | Whether the featurelist can be deleted. | |
deletionBlockedReason | string | true | If the featurelist can't be deleted, this explains why. | |
dryRun | string | true | Whether this was a dry-run or the featurelist was actually deleted. | |
numAffectedJobs | integer | true | Number of incomplete jobs using this featurelist. | |
numAffectedModels | integer | true | Number of models using this featurelist. |
Enumerated Values¶
Property | Value |
---|---|
canDelete | [false , False , true , True ] |
dryRun | [false , False , true , True ] |
FeaturelistListResponse
{
"count": 0,
"data": [
{
"created": "string",
"description": "string",
"features": [
"string"
],
"id": "string",
"isUserCreated": true,
"name": "string",
"numModels": 0,
"projectId": "string"
}
],
"next": "http://example.com",
"previous": "http://example.com",
"totalCount": 0
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | false | Number of items returned on this page. | |
data | [FeaturelistResponse] | true | An array of modeling features. | |
next | string(uri)¦null | true | URL pointing to the next page (if null, there is no next page). | |
previous | string(uri)¦null | true | URL pointing to the previous page (if null, there is no previous page). | |
totalCount | integer | true | The total number of items across all pages. |
FeaturelistResponse
{
"created": "string",
"description": "string",
"features": [
"string"
],
"id": "string",
"isUserCreated": true,
"name": "string",
"numModels": 0,
"projectId": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
created | string | true | A timestamp string specifying when the featurelist was created. |
|
description | string¦null | false | User-friendly description of the featurelist, which can be updated by users. | |
features | [string] | true | Names of features included in the featurelist. | |
id | string | true | Featurelist ID. | |
isUserCreated | boolean | true | Whether the featurelist was created manually by a user or by DataRobot automation. | |
name | string | true | The name of the featurelist. | |
numModels | integer | true | The number of models that currently use this featurelist. A model is considered to use a featurelist if it is used to train the model or as a monotonic constraint featurelist, or if the model is a blender with at least one component model using the featurelist. | |
projectId | string | true | Project ID the featurelist belongs to. |
FormattedSummary
{
"enrichmentRate": {
"action": "string",
"category": "green",
"message": "string"
},
"mostRecentData": {
"action": "string",
"category": "green",
"message": "string"
},
"windowSettings": {
"action": "string",
"category": "green",
"message": "string"
}
}
Relationship quality assessment report associated with the relationship
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
enrichmentRate | Warnings | true | Warning about the enrichment rate | |
mostRecentData | Warnings | false | Warning about the most recent data | |
windowSettings | Warnings | false | Warning about the window settings |
GCPKey
{
"authProviderX509CertUrl": "http://example.com",
"authUri": "http://example.com",
"clientEmail": "string",
"clientId": "string",
"clientX509CertUrl": "http://example.com",
"privateKey": "string",
"privateKeyId": "string",
"projectId": "string",
"tokenUri": "http://example.com",
"type": "service_account"
}
The Google Cloud Platform (GCP) key. Output is the downloaded JSON resulting from creating a service account User Managed Key (in the IAM & admin > Service accounts section of GCP). Required if googleConfigId/configId is not specified. Cannot include this parameter if googleConfigId/configId is specified.
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
authProviderX509CertUrl | string(uri) | false | Auth provider X509 certificate URL. | |
authUri | string(uri) | false | Auth URI. | |
clientEmail | string | false | Client email address. | |
clientId | string | false | Client ID. | |
clientX509CertUrl | string(uri) | false | Client X509 certificate URL. | |
privateKey | string | false | Private key. | |
privateKeyId | string | false | Private key ID | |
projectId | string | false | Project ID. | |
tokenUri | string(uri) | false | Token URI. | |
type | string | true | GCP account type. |
Enumerated Values¶
Property | Value |
---|---|
type | service_account |
GoogleServiceAccountCredentials
{
"configId": "string",
"credentialType": "gcp",
"gcpKey": {
"authProviderX509CertUrl": "http://example.com",
"authUri": "http://example.com",
"clientEmail": "string",
"clientId": "string",
"clientX509CertUrl": "http://example.com",
"privateKey": "string",
"privateKeyId": "string",
"projectId": "string",
"tokenUri": "http://example.com",
"type": "service_account"
},
"googleConfigId": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
configId | string | false | ID of secure configurations shared by an admin. Alternative to googleConfigId (deprecated). If specified, cannot include gcpKey. | |
credentialType | string | true | The type of these credentials, 'gcp' here. | |
gcpKey | GCPKey | false | The Google Cloud Platform (GCP) key. Output is the downloaded JSON resulting from creating a service account User Managed Key (in the IAM & admin > Service accounts section of GCP). Required if googleConfigId/configId is not specified. Cannot include this parameter if googleConfigId/configId is specified. | |
googleConfigId | string | false | ID of secure configurations shared by an admin. This is deprecated; please use configId instead. If specified, cannot include gcpKey. |
Enumerated Values¶
Property | Value |
---|---|
credentialType | gcp |
HdfsProjectCreate
{
"password": "string",
"port": 0,
"projectName": "string",
"url": "http://example.com",
"user": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
password | string | false | Password for authenticating to HDFS using Kerberos. The password will be encrypted on the server side in scope of HTTP request and never saved or stored. | |
port | integer | false | Port of the WebHDFS Namenode server. If not specified, defaults to HDFS default port 50070. | |
projectName | string | false | Name of the project to be created. If not specified, project name will be based on the file name. | |
url | string(uri) | true | URL of the WebHDFS resource. Represent the file using the hdfs:// protocol marker (for example, hdfs:///tmp/somedataset.csv ). |
|
user | string | false | Username for authenticating to HDFS using Kerberos |
JobDetailsResponse
{
"id": "string",
"isBlocked": true,
"jobType": "model",
"message": "string",
"modelId": "string",
"projectId": "string",
"status": "queue",
"url": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
id | string | true | The job ID. | |
isBlocked | boolean | true | True if the job is waiting for its dependencies to be resolved first. | |
jobType | string | true | The job type. | |
message | string | true | Error message in case of failure. | |
modelId | string | true | The model this job is associated with. | |
projectId | string | true | The project the job belongs to. | |
status | string | true | The job status. | |
url | string | true | A URL that can be used to request details about the job. |
Enumerated Values¶
Property | Value |
---|---|
jobType | [model , predict , trainingPredictions , featureImpact , featureEffects , shapImpact , anomalyAssessment , shapExplanations , shapMatrix , reasonCodesInitialization , reasonCodes , predictionExplanations , predictionExplanationsInitialization , primeDownloadValidation , ruleFitDownloadValidation , primeRulesets , primeModel , modelExport , usageData , modelXRay , accuracyOverTime , seriesAccuracy , validateRatingTable , generateComplianceDocumentation , automatedDocumentation , eda , pipeline , calculatePredictionIntervals , calculatePredictionIntervalBoundUsingOnlineConformal , batchVarTypeTransform , computeImageActivationMaps , computeImageAugmentations , computeImageEmbeddings , computeDocumentTextExtractionSamples , externalDatasetInsights , startDatetimePartitioning , runSegmentationTasks , piiDetection , computeBiasAndFairness , sensitivityTesting , clusterInsights , scoringCodeBundleExport , onnxExport , scoringCodeSegmentedModeling , insights , distributionPredictionModel , batchScoringAvailableForecastPoints , notebooksScheduling , uncategorized ] |
status | [queue , inprogress , error , ABORTED , COMPLETED ] |
JobListResponse
{
"count": 0,
"jobs": [
{
"id": "string",
"isBlocked": true,
"jobType": "model",
"message": "string",
"modelId": "string",
"projectId": "string",
"status": "queue",
"url": "string"
}
],
"next": "string",
"previous": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | true | The number of jobs returned. | |
jobs | [JobDetailsResponse] | true | A JSON array of jobs. | |
next | string¦null | true | URL pointing to the next page (if null, there is no next page). | |
previous | string¦null | true | URL pointing to the previous page (if null, there is no previous page). |
ModelingFeatureListResponse
{
"count": 0,
"data": [
{
"dataQualities": "ISSUES_FOUND",
"dateFormat": "string",
"featureLineageId": "string",
"featureType": "Boolean",
"importance": 0,
"isRestoredAfterReduction": true,
"isZeroInflated": true,
"keySummary": {
"key": "string",
"summary": {
"dataQualities": "ISSUES_FOUND",
"max": 0,
"mean": 0,
"median": 0,
"min": 0,
"pctRows": 0,
"stdDev": 0
}
},
"language": "string",
"lowInformation": true,
"max": "string",
"mean": "string",
"median": "string",
"min": "string",
"multilabelInsights": {
"multilabelInsightsKey": "string"
},
"naCount": 0,
"name": "string",
"parentFeatureNames": [
"string"
],
"projectId": "string",
"stdDev": "string",
"targetLeakage": "FALSE",
"targetLeakageReason": "string",
"uniqueCount": 0
}
],
"next": "http://example.com",
"previous": "http://example.com",
"totalCount": 0
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | false | Number of items returned on this page. | |
data | [ModelingFeatureResponse] | true | Modeling features data. | |
next | string(uri)¦null | true | URL pointing to the next page (if null, there is no next page). | |
previous | string(uri)¦null | true | URL pointing to the previous page (if null, there is no previous page). | |
totalCount | integer | true | The total number of items across all pages. |
ModelingFeatureResponse
{
"dataQualities": "ISSUES_FOUND",
"dateFormat": "string",
"featureLineageId": "string",
"featureType": "Boolean",
"importance": 0,
"isRestoredAfterReduction": true,
"isZeroInflated": true,
"keySummary": {
"key": "string",
"summary": {
"dataQualities": "ISSUES_FOUND",
"max": 0,
"mean": 0,
"median": 0,
"min": 0,
"pctRows": 0,
"stdDev": 0
}
},
"language": "string",
"lowInformation": true,
"max": "string",
"mean": "string",
"median": "string",
"min": "string",
"multilabelInsights": {
"multilabelInsightsKey": "string"
},
"naCount": 0,
"name": "string",
"parentFeatureNames": [
"string"
],
"projectId": "string",
"stdDev": "string",
"targetLeakage": "FALSE",
"targetLeakageReason": "string",
"uniqueCount": 0
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
dataQualities | string | false | Data Quality Status | |
dateFormat | string¦null | true | the date format string for how this feature was interpreted (or null if not a date feature). If not null, it will be compatible with https://docs.python.org/2/library/time.html#time.strftime . | |
featureLineageId | string¦null | true | id of a lineage for automatically generated features. | |
featureType | string | true | Feature type. | |
importance | number¦null | true | numeric measure of the strength of relationship between the feature and target (independent of any model or other features) | |
isRestoredAfterReduction | boolean | false | Whether feature is restored after feature reduction | |
isZeroInflated | boolean¦null | false | Whether feature has an excessive number of zeros | |
keySummary | any | false | Per key summaries for Summarized Categorical or Multicategorical columns |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | FeatureKeySummaryResponseValidatorSummarizedCategorical | false | For Summarized Categorical columns, this will contain statistics for the top 50 keys (truncated to 103 characters) |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | [FeatureKeySummaryResponseValidatorMultilabel] | false | For Multicategorical columns, this will contain statistics for the top classes |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
language | string | false | Feature's detected language. | |
lowInformation | boolean | true | whether feature has too few values to be informative | |
max | any | true | maximum value of the EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | maximum value of the EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | maximum value of the EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
mean | any | true | arithmetic mean of the EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | arithmetic mean of the EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | arithmetic mean of the EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
median | any | true | median of the EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | median of the EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | median of the EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
min | any | true | minimum value of the EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | minimum value of the EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | minimum value of the EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
multilabelInsights | MultilabelInsightsResponse | false | Multilabel project specific information | |
naCount | integer | true | number of missing values | |
name | string | true | feature name | |
parentFeatureNames | [string] | false | an array of string feature names indicating which features in the input data were used to create this feature if the feature is a transformation. | |
projectId | string | true | the ID of the project the feature belongs to | |
stdDev | any | true | standard deviation of EDA sample of the feature. |
oneOf
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | string | false | standard deviation of EDA sample of the feature. |
xor
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
» anonymous | number | false | standard deviation of EDA sample of the feature. |
continued
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
targetLeakage | string | true | the detected level of risk for target leakage, if any. 'SKIPPED_DETECTION' indicates leakage detection was not run on the feature, 'FALSE' indicates no leakage, 'MODERATE_RISK' indicates a moderate risk of target leakage, and 'HIGH_RISK' indicates a high risk of target leakage. | |
targetLeakageReason | string | true | descriptive sentence explaining the reason for target leakage. | |
uniqueCount | integer | false | number of unique values |
Enumerated Values¶
Property | Value |
---|---|
dataQualities | [ISSUES_FOUND , NOT_ANALYZED , NO_ISSUES_FOUND ] |
featureType | [Boolean , Categorical , Currency , Date , Date Duration , Document , Image , Interaction , Length , Location , Multicategorical , Numeric , Percentage , Summarized Categorical , Text , Time ] |
targetLeakage | [FALSE , HIGH_RISK , MODERATE_RISK , SKIPPED_DETECTION ] |
ModelingFeaturesCreateFromDiscarded
{
"featuresToRestore": [
"string"
]
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
featuresToRestore | [string] | true | minItems: 1 |
Discarded features to restore. |
ModelingFeaturesCreateFromDiscardedResponse
{
"featuresToRestore": [
"string"
],
"warnings": [
"string"
]
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
featuresToRestore | [string] | true | Features to add back to the project. | |
warnings | [string] | true | Warnings about features which cannot be restored. |
MultilabelInsightsResponse
{
"multilabelInsightsKey": "string"
}
Multilabel project specific information
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
multilabelInsightsKey | string | true | Key for multilabel insights, unique per project, feature, and EDA stage. The response will contain the key for the most recent, finished EDA stage. |
MultiseriesIdColumnsRecord
{
"multiseriesIdColumns": [
"string"
],
"timeStep": 0,
"timeUnit": "MILLISECOND"
}
Detected multiseries ID columns along with timeStep and timeUnit information
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
multiseriesIdColumns | [string] | true | minItems: 1 |
A list of one or more names of columns that contain the individual series identifiers if the dataset consists of multiple time series. |
timeStep | integer | true | The detected time step. | |
timeUnit | string | true | The detected time unit (e.g. DAY, HOUR). |
Enumerated Values¶
Property | Value |
---|---|
timeUnit | [MILLISECOND , SECOND , MINUTE , HOUR , DAY , WEEK , MONTH , QUARTER , YEAR , ROW ] |
MultiseriesNamesControllerDataRecord
{
"items": [
"string"
]
}
Data fields of the multi series names
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
items | [string] | true | List of series names |
MultiseriesNamesControllerResponse
{
"count": 0,
"data": {
"items": [
"string"
]
},
"next": "string",
"previous": "string",
"totalSeriesCount": 0
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
count | integer | true | Total number of series items in the response | |
data | MultiseriesNamesControllerDataRecord | true | Data fields of the multi series names | |
next | string¦null | true | A URL pointing to the next page (if null , there is no next page). |
|
previous | string¦null | true | A URL pointing to the previous page (if null , there is no previous page). |
|
totalSeriesCount | integer | true | Total number of series items |
MultiseriesPayload
{
"datetimePartitionColumn": "string",
"multiseriesIdColumns": [
"string"
]
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
datetimePartitionColumn | string | true | The date column on which detection and validation will be performed. | |
multiseriesIdColumns | [string] | false | minItems: 1 |
List of one or more names of potential multiseries id columns. If not provided, all numerical and categorical columns are used. |
MultiseriesRetrieveResponse
{
"datetimePartitionColumn": "string",
"detectedMultiseriesIdColumns": [
{
"multiseriesIdColumns": [
"string"
],
"timeStep": 0,
"timeUnit": "MILLISECOND"
}
]
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
datetimePartitionColumn | string | true | The datetime partition column name. | |
detectedMultiseriesIdColumns | [MultiseriesIdColumnsRecord] | true | A list of detected multiseries ID columns along with timeStep and timeUnit information. Note that if no eligible columns have been detected, this list will be empty. |
OAuthCredentials
{
"credentialType": "oauth",
"oauthAccessToken": null,
"oauthClientId": null,
"oauthClientSecret": null,
"oauthRefreshToken": "string"
}
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
credentialType | string | true | The type of these credentials, 'oauth' here. | |
oauthAccessToken | string¦null | false | The oauth access token. | |
oauthClientId | string¦null | false | The oauth client ID. | |
oauthClientSecret | string¦null | false | The oauth client secret. | |
oauthRefreshToken | string | true | The oauth refresh token. |
Enumerated Values¶
Property | Value |
---|---|
credentialType | oauth |
Partition
{
"cvHoldoutLevel": "string",
"cvMethod": "random",
"datetimeCol": "string",
"datetimePartitionColumn": "string",
"holdoutLevel": "string",
"holdoutPct": 0,
"partitionKeyCols": [
"string"
],
"reps": 0,
"trainingLevel": "string",
"useTimeSeries": true,
"userPartitionCol": "string",
"validationLevel": "string",
"validationPct": 0,
"validationType": "CV"
}
The partition object of a project indicates the settings used for partitioning. Depending on the partitioning selected, many of the options will be null.
Properties¶
Name | Type | Required | Restrictions | Description |
---|---|---|---|---|
cvHoldoutLevel | string¦null | true | If a user partition column was used with cross validation, the value assigned to the holdout set | |
cvMethod | string | true | The partitioning method used. Note that "date" partitioning is an old partitioning method no longer supported for new projects, as of API version v2.0. | |
datetimeCol | string¦null | true | If a date partition column was used, the name of the column. Note that datetimeCol applies to an old partitioning method no longer supported for new projects, as of API version v2.0. | |
datetimePartitionColumn | string¦null | false | If a datetime partition column was used, the name of the column. | |
holdoutLevel | string¦null | true | If a user partition column was used with train-validation-holdout split, the value assigned to the holdout set. | |
holdoutPct | number¦null | true | The percentage of the dataset reserved for the holdout set. | |
partitionKeyCols | [string]¦null | true | An array containing a single string: the name of the group partition column. | |
reps | integer¦null | true | If cross validation was used, the number of folds to use. | |
trainingLevel | string¦null | true | If a user partition column was used with train-validation-holdout split, the value assigned to the training set. | |
useTimeSeries | boolean¦null | false | Indicates whether a time series project was created as opposed to a regular project using datetime partitioning. | |
userPartitionCol | string¦null | true | If a user partition column was used, the name of the column. | |
validationLevel | string¦null | true | If a user partition column was used with train-validation-holdout split, the value assigned to the validation set. | |
validationPct | number¦null | true | If train-validation-holdout split was used, the percentage of the dataset used for the validation set. | |
validationType | string | true | The type of validation used. Either CV (cross validation) or TVH (train-validation-holdout split). |
Enumerated Values¶
Property | Value |
---|---|
cvMethod | [random , stratified , datetime , user , group , date ] |
validationType | [CV , TVH ] |
PasswordCredentials
{
"catalogVersionId": "string",
"password": "string",
"url": "string",
"user": "string"
}