# Prediction output options

> Prediction output options - Configure batch prediction destinations (output) with the Job
> Definitions UI or the Batch Prediction API.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-06T18:17:09.614035+00:00` (UTC).

## Primary page

- [Prediction output options](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html): Full documentation for this topic (HTML).

## Sections on this page

- [Local file streaming](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#local-file-streaming): In-page section heading.
- [HTTP write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#http-write): In-page section heading.
- [JDBC write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#jdbc-write): In-page section heading.
- [Statement types](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#statement-types): In-page section heading.
- [Allowed source IP addresses](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#allowed-source-ip-addresses): In-page section heading.
- [SAP Datasphere write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#sap-datasphere-write): In-page section heading.
- [Databricks write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#databricks-write): In-page section heading.
- [Trino write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#trino-write): In-page section heading.
- [Azure Blob Storage write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#azure-blob-storage-write): In-page section heading.
- [Google Cloud Storage write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#google-cloud-storage-write): In-page section heading.
- [Amazon S3 write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#amazon-s3-write): In-page section heading.
- [BigQuery write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#bigquery-write): In-page section heading.
- [Snowflake write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#snowflake-write): In-page section heading.
- [Azure Synapse write](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-options.html#azure-synapse-write): In-page section heading.

## Related documentation

- [Developer documentation](https://docs.datarobot.com/en/docs/api/index.html): Linked from this page.
- [API reference](https://docs.datarobot.com/en/docs/api/reference/index.html): Linked from this page.
- [Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html): Linked from this page.
- [Predictions > Job Definitions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#set-up-prediction-destinations): Linked from this page.
- [output format](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-format.html): Linked from this page.
- [this sample use case](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/pred-examples.html#end-to-end-scoring-of-csv-files-from-local-files): Linked from this page.
- [external data sources](https://docs.datarobot.com/en/docs/classic-ui/data/connect-data/data-conn.html#add-data-sources): Linked from this page.
- [securely stored credentials](https://docs.datarobot.com/en/docs/platform/acct-settings/stored-creds.html): Linked from this page.
- [Allowed source IP addresses](https://docs.datarobot.com/en/docs/reference/data-ref/allowed-ips.html): Linked from this page.
- [SAP Datasphere connection documentation](https://docs.datarobot.com/en/docs/reference/data-ref/data-sources/wb-sap.html): Linked from this page.
- [Databricks connector](https://docs.datarobot.com/en/docs/reference/data-ref/data-sources/wb-databricks.html): Linked from this page.
- [Trino connector](https://docs.datarobot.com/en/docs/reference/data-ref/data-sources/dc-trino.html): Linked from this page.

## Documentation content

You can configure a prediction destination using the [Predictions > Job Definitions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#set-up-prediction-destinations) tab or the [Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html). This topic describes both the UI and API output options.

> [!NOTE] Note
> For a complete list of supported output options, see the [data sources supported for batch predictions](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html#data-sources-supported-for-batch-predictions).

| Output option | Description |
| --- | --- |
| Local file streaming | Stream scored data through a URL endpoint for immediate download when the job moves to a running state. |
| HTTP write | Stream scored data to an absolute URL for writing. This option can write data to pre-signed URLs for Amazon S3, Azure, and Google Cloud Platform. |
| Database connections |  |
| JDBC write | Write prediction results back to a JDBC data source with data destination details supplied through a job definition or the Batch Prediction API. |
| SAP Datasphere write | Write prediction results back to a SAP Datasphere data source with data destination details supplied through a job definition or the Batch Prediction API. |
| Trino write | Write prediction results back to a Trino database with data destination details supplied through a job definition or the Batch Prediction API. |
| Cloud storage connections |  |
| Azure Blob Storage write | Write scored data to Azure Blob Storage with a DataRobot credential consisting of an Azure Connection String. |
| Google Cloud Storage write | Write scored data to Google Cloud Storage with a DataRobot credential consisting of a JSON-formatted account key. |
| Amazon S3 write | Write scored data to public or private S3 buckets with a DataRobot credential consisting of an access key (ID and key) and, optionally, a session token. |
| Data warehouse connections |  |
| BigQuery write | Write prediction results to BigQuery with data destination details supplied through a job definition or the Batch Prediction API. |
| Snowflake write | Write prediction results to Snowflake with data destination details supplied through a job definition or the Batch Prediction API. |
| Azure Synapse write | Write prediction results to Synapse with data destination details supplied through a job definition or the Batch Prediction API. |

If you are using a custom [CSV format](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html#csv-format), any output option dealing with CSV will adhere to that format. The columns that appear in the output are documented in the section on [output format](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-format.html).

## Local file streaming

If your job is configured with local file streaming as the output option, you can start downloading the scored data as soon as the job moves to a `RUNNING` state. In the example job data JSON below, the URL needed to make the local file streaming request is available in the `download` key of the `links` object:

```json
{
  "elapsedTimeSec": 97,
  "failedRows": 0,
  "jobIntakeSize": 1150602342,
  "jobOutputSize": 107791140,
  "jobSpec": {
    "deploymentId": "5dc1a6a9865d6c004dd881ef",
    "maxExplanations": 0,
    "numConcurrent": 4,
    "passthroughColumns": null,
    "passthroughColumnsSet": null,
    "predictionWarningEnabled": null,
    "thresholdHigh": null,
    "thresholdLow": null
  },
  "links": {
    "download": "https://app.datarobot.com/api/v2/batchPredictions/5dc45e583c36a100e45276da/download/",
    "self": "https://app.datarobot.com/api/v2/batchPredictions/5dc45e583c36a100e45276da/"
  },
  "logs": [
    "Job created by user@example.org from 203.0.113.42 at 2019-11-07 18:11:36.870000",
    "Job started processing at 2019-11-07 18:11:49.781000",
    "Job done processing at 2019-11-07 18:13:14.533000"
  ],
  "percentageCompleted": 0.0,
  "scoredRows": 3000000,
  "status": "COMPLETED",
  "statusDetails": "Job done processing at 2019-11-07 18:13:14.533000"
}
```

If you download faster than DataRobot can ingest and score your data, the download may appear sluggish because DataRobot streams the scored data as soon as it arrives (in chunks).

Refer to [this sample use case](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/pred-examples.html#end-to-end-scoring-of-csv-files-from-local-files) for a complete example.
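
The flow above can be sketched in Python. The helper below is a hypothetical illustration: it pulls the `download` URL out of job data like the JSON example above once the job has started processing, and the actual streaming request (e.g., with the `requests` library) is indicated only as a comment.

```python
# Hypothetical sketch: extract the streaming download URL from batch
# prediction job data. The job JSON mirrors the example above; the
# helper name and status handling are assumptions, not the official API.

def get_download_url(job_data: dict) -> str:
    """Return the download URL once the job is RUNNING or COMPLETED."""
    if job_data["status"] not in ("RUNNING", "COMPLETED"):
        raise RuntimeError(f"Job not ready: {job_data['status']}")
    return job_data["links"]["download"]

job_data = {
    "status": "COMPLETED",
    "links": {
        "download": "https://app.datarobot.com/api/v2/batchPredictions/5dc45e583c36a100e45276da/download/",
        "self": "https://app.datarobot.com/api/v2/batchPredictions/5dc45e583c36a100e45276da/",
    },
}

url = get_download_url(job_data)
# With the `requests` library you would then stream the scored CSV in chunks:
#   resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, stream=True)
#   for chunk in resp.iter_content(chunk_size=8192):
#       out_file.write(chunk)
```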

## HTTP write

You can point Batch Predictions at a regular URL, and DataRobot streams the data:

| Parameter | Example | Description |
| --- | --- | --- |
| type | http | Use HTTP for output. |
| url | https://example.com/datasets/scored.csv | An absolute URL that designates where the file is written. |

The URL can optionally contain a username and password such as: `https://username:password@example.com/datasets/scored.csv`.

The `http` adapter can write data to pre-signed URLs for [S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html), [Azure](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview), or [GCP](https://cloud.google.com/storage/docs/access-control/signed-urls).
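
For example, an `outputSettings` payload for an HTTP write might be assembled as below. This is a hypothetical sketch: the helper name and validation are assumptions, while the `type` and `url` fields follow the table above and the `deploymentId` is the placeholder example used elsewhere on this page.

```python
# Hypothetical sketch of an HTTP-write outputSettings payload,
# e.g. targeting a pre-signed cloud storage URL.

def http_output_settings(url: str) -> dict:
    # The table above requires an absolute URL designating the target file.
    if not url.startswith(("http://", "https://")):
        raise ValueError("url must be an absolute HTTP(S) URL")
    return {"type": "http", "url": url}

job = {
    "deploymentId": "5dc1a6a9865d6c004dd881ef",
    "outputSettings": http_output_settings(
        "https://example.com/datasets/scored.csv"
    ),
}
# POST a payload like this to /api/v2/batchPredictions/ to create the job.
```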

## JDBC write

DataRobot supports writing prediction results back to a JDBC data source. For this, the Batch Prediction API integrates with [external data sources](https://docs.datarobot.com/en/docs/classic-ui/data/connect-data/data-conn.html#add-data-sources) using [securely stored credentials](https://docs.datarobot.com/en/docs/platform/acct-settings/stored-creds.html).

Supply data destination details using the [Predictions > Job Definitions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#set-up-prediction-destinations) tab or the [Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html) (`outputSettings`) as described in the table below.

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | jdbc | Use a JDBC data store as output. |
| Data connection parameters |  |  |  |
| + Select connection | dataStoreId | 5e4bc5b35e6e763beb9db14a | The external data source ID. |
| Enter credentials | credentialId | 5e4bc5555e6e763beb9db147 | (Optional) The ID of a stored credential. Refer to storing credentials securely. |
| Schemas | schema | public | (Optional) The name of the schema where scored data will be written. |
| Tables | table | scoring_data | The name of the database table where scored data will be written. |
| Database | catalog | output_data | (Optional) The name of the specified database catalog to write output data to. |
| Write strategy options |  |  |  |
| Write strategy | statementType | update | The statement type: insert, update, or insertUpdate. |
| Create table if it does not exist (for Insert or Insert + Update) | create_table_if_not_exists | true | (Optional) If no existing table is detected, attempt to create it before writing data with the strategy defined in the statementType parameter. |
| Row identifier (for Update or Insert + Update) | updateColumns | ['index'] | (Optional) A list of strings containing the column names to update when statementType is set to update or insertUpdate. |
| Row identifier (for Update or Insert + Update) | where_columns | ['refId'] | (Optional) A list of strings containing the column names used to match rows (the row identifier) when statementType is set to update or insertUpdate. |
| Advanced options |  |  |  |
| Commit interval | commitInterval | 600 | (Optional) Defines a time interval, in seconds, between commits to the target database. If set to 0, the batch prediction operation will write the entire job before committing. Default: 600 |

> [!NOTE] Note
> If your target database doesn't support the column naming conventions of DataRobot's [output format](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-format.html), you can use [Column Name Remapping](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/output-format.html#column-name-remapping) to re-write the output column names to a format your target database supports (e.g., remove spaces from the name).
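
As a concrete illustration of the parameters above, the sketch below assembles a JDBC `outputSettings` payload in Python. The helper function and its validation are hypothetical assumptions; the parameter names and example IDs come from the table above.

```python
# Hypothetical sketch of a JDBC outputSettings payload for an
# insertUpdate write strategy. Parameter names follow the table above.

def jdbc_output_settings(data_store_id, credential_id, table,
                         statement_type="insert", schema=None,
                         update_columns=None, where_columns=None):
    allowed = {"insert", "update", "insertUpdate"}
    if statement_type not in allowed:
        raise ValueError(f"statementType must be one of {allowed}")
    settings = {
        "type": "jdbc",
        "dataStoreId": data_store_id,
        "credentialId": credential_id,
        "table": table,
        "statementType": statement_type,
    }
    if schema:
        settings["schema"] = schema
    if statement_type in ("update", "insertUpdate"):
        # Row identifier: which rows to match, and which columns to rewrite.
        settings["where_columns"] = where_columns or []
        settings["updateColumns"] = update_columns or []
    return settings

settings = jdbc_output_settings(
    "5e4bc5b35e6e763beb9db14a", "5e4bc5555e6e763beb9db147",
    table="scoring_data", schema="public",
    statement_type="insertUpdate",
    update_columns=["index"], where_columns=["refId"],
)
```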

### Statement types

When dealing with Write strategy options, you can use the following statement types to write data, depending on the situation:

| Statement type | Description |
| --- | --- |
| insert | Scored data rows are inserted in the target database as a new entry. Suitable for writing to an empty table. |
| update | Scored data entries in the target database matching the row identifier of a result row are updated with the new result (columns identified in updateColumns). Suitable for writing to an existing table. |
| insertUpdate | Entries in the target database matching the row identifier of a result row (where_columns) are updated with the new result (update queries). All other result rows are inserted as new entries (insert queries). |
| createTable (deprecated) | DataRobot no longer recommends createTable; use one of the other statement types with create_table_if_not_exists set to true instead. If used, scored data rows are saved to a new table using insert queries. The table must not exist before writing. |

### Allowed source IP addresses

Any connection initiated from DataRobot originates from one of a set of allowed IP addresses. See the full list at [Allowed source IP addresses](https://docs.datarobot.com/en/docs/reference/data-ref/allowed-ips.html).

## SAP Datasphere write

> [!NOTE] Premium
> Support for SAP Datasphere is off by default. Contact your DataRobot representative or administrator for information on enabling the feature.
> 
> Feature flag(s): Enable SAP Datasphere Connector, Enable SAP Datasphere Batch Predictions Integration

To use SAP Datasphere, supply data destination details using the [Predictions > Job Definitions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#set-up-prediction-destinations) tab or the [Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html) (`outputSettings`) as described in the table below.

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | datasphere | Use a SAP Datasphere database for output. |
| Data connection parameters |  |  |  |
| + Select connection | dataStoreId | 5e4bc5b35e6e763beb9db14a | The ID of an external data source. In the UI, select a data connection or click add a new data connection. Refer to the SAP Datasphere connection documentation. |
| Enter credentials | credentialId | 5e4bc5555e6e763beb9db147 | The ID of a stored credential for Datasphere. Refer to storing credentials securely. |
|  | catalog | / | The name of the database catalog containing the table to write to. |
| Schemas | schema | public | The name of the database schema containing the table to write to. |
| Tables | table | scoring_data | The name of the database table containing data to write to. In the UI, select a table or click Create a table. |

## Databricks write

To use the [Databricks connector](https://docs.datarobot.com/en/docs/reference/data-ref/data-sources/wb-databricks.html) for output, supply data destination details using the [Predictions > Job Definitions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#set-up-prediction-destinations) tab or the [Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html) (`outputSettings`) as described in the table below.

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | databricks | Use a Databricks database for output. |
| Data connection parameters |  |  |  |
| + Select connection | dataStoreId | 5e4bc5b35e6e763beb9db14a | The ID of an external data source. In the UI, select a data connection or click add a new data connection. |
| + Add credentials | credentialId | 5e96092ef7e8773ddbdbabed | The ID of stored credentials for the external Databricks database connection. |
| Catalog | catalog | default | (Optional) The Databricks database catalog containing the destination table. |
| Schema | schema | public | The Databricks schema containing the destination table. |
| Table | table | kickcars_predictions | The Databricks table in which to write output data. |

## Trino write

To use the [Trino connector](https://docs.datarobot.com/en/docs/reference/data-ref/data-sources/dc-trino.html) for output, supply data destination details using the [Predictions > Job Definitions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#set-up-prediction-destinations) tab or the [Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html) (`outputSettings`):

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | trino | Use a Trino database for output. |
| Data connection parameters |  |  |  |
| + Select connection | dataStoreId | 5e4bc5b35e6e763beb9db14a | The ID of an external data source. In the UI, select a data connection or click add a new data connection. |
| + Credentials | credentialId | 5e96092ef7e8773ddbdbabed | The credentials to use for the external Trino database connection. |
| Catalog | catalog | starburst_catalog | The Trino database catalog to store the output table. |
| Schema | schema | analytics | The Trino schema to store the output table. |
| Table | table | prediction_results | The Trino table in which to write output data. |
| Advanced options |  |  |  |
| Chunk size | chunkSize | 500000 | An explicit numeric chunk size in bytes. Must be a positive integer no greater than 1000000 (1MB). Named strategies (auto, dynamic, fixed) are not supported and will cause the job to fail. See the note below. |

> [!WARNING] Trino chunk size requirement
> Trino enforces a default [query.max-length](https://trino.io/docs/current/admin/properties-query-management.html#query-max-length) of 1MB (1,000,000 bytes). Because DataRobot generates SQL `INSERT` statements for each chunk of rows sent to Trino, the `chunkSize` parameter must be set to an explicit numeric value and must not exceed 1,000,000 bytes. Using named chunk strategies ( `auto`, `dynamic`, or `fixed`) or a value larger than `1000000` will cause the batch prediction job to fail.

> [!WARNING] Trino column name case requirement
> Use lowercase only for column names in the dataset used to train a project. Trino sanitizes column names automatically (unquoted identifiers are lowercased), so mixed-case or uppercase column names can cause column inconsistency errors when reading from Trino for batch scoring. This applies even when creating tables with quoted column names—Trino still stores them as lowercase. For more information, see [trinodb/trino#17](https://github.com/trinodb/trino/issues/17).
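
The chunk-size rule above can be enforced before submitting a job. The sketch below is a hypothetical validation helper (its name and structure are assumptions); the field names and example IDs in the payload come from the Trino table above, and the 1,000,000-byte ceiling matches Trino's default `query.max-length`.

```python
# Hypothetical sketch enforcing the Trino chunk-size requirement:
# chunkSize must be an explicit positive integer no larger than
# 1,000,000 bytes; named strategies like "auto" cause job failure.

TRINO_MAX_CHUNK = 1_000_000  # bytes (Trino's default query.max-length)

def trino_chunk_size(value) -> int:
    if not isinstance(value, int) or isinstance(value, bool):
        # Rejects named strategies such as "auto", "dynamic", or "fixed".
        raise ValueError("chunkSize must be an explicit integer for Trino")
    if not 0 < value <= TRINO_MAX_CHUNK:
        raise ValueError(f"chunkSize must be in 1..{TRINO_MAX_CHUNK}")
    return value

settings = {
    "type": "trino",
    "dataStoreId": "5e4bc5b35e6e763beb9db14a",
    "credentialId": "5e96092ef7e8773ddbdbabed",
    "catalog": "starburst_catalog",
    "schema": "analytics",
    "table": "prediction_results",
    "chunkSize": trino_chunk_size(500_000),
}
```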

## Azure Blob Storage write

Azure Blob Storage is an option for writing large files. To save a dataset to Azure Blob Storage, you must set up a credential with DataRobot consisting of an Azure Connection String.

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | azure | Use Azure Blob Storage for output. |
| URL | url | https://myaccount.blob.core.windows.net/datasets/scored.csv | An absolute URL for the file to be written. |
| Format | format | csv | (Optional) Select CSV (csv) or Parquet (parquet). Default value: CSV |
| + Add credentials | credentialId | 5e4bc5555e6e763beb488dba | Required if explicit access credentials for this URL are necessary; optional otherwise. In the UI, enable the + Add credentials field by selecting This URL requires credentials. Refer to storing credentials securely. |

Azure credentials are encrypted and only decrypted when used to set up the client for communication with Azure when writing.

## Google Cloud Storage write

DataRobot supports the Google Cloud Storage adapter. To save a dataset to Google Cloud Storage, you must set up a credential with DataRobot consisting of a JSON-formatted account key.

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | gcp | Use Google Cloud Storage for output. |
| URL | url | gcs://bucket-name/datasets/scored.csv | An absolute URL designating where the file is written. |
| Format | format | csv | (Optional) Select CSV (csv) or Parquet (parquet). Default value: CSV |
| + Add credentials | credentialId | 5e4bc5555e6e763beb488dba | Required if explicit access credentials for this URL are necessary; optional otherwise. Refer to storing credentials securely. |

GCP credentials are encrypted and are only decrypted when used to set up the client for communication with GCP when writing.

## Amazon S3 write

DataRobot can save scored data to both public and private buckets. To write to S3, you must set up a credential with DataRobot consisting of an access key (ID and key) and optionally a session token.

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | s3 | Use S3 for output. |
| URL | url | s3://bucket-name/results/scored.csv | An absolute URL for the file to be written. DataRobot only supports directory scoring when scoring from cloud to cloud. Provide a directory in S3 (or another cloud provider) for the input and a directory ending with / for the output. Using this configuration, all files in the input directory are scored and the results are written to the output directory with the original filenames. When a single file is specified for both the input and the output, the file is overwritten each time the job runs. If you do not wish to overwrite the file, specify a filename template such as s3://bucket-name/results/scored_{{ current_run_time }}.csv. You can review template variable definitions in the documentation. |
| Format | format | csv | (Optional) Select CSV (csv) or Parquet (parquet). Default value: CSV |
| + Add credentials | credentialId | 5e4bc5555e6e763beb9db147 | In the UI, enable the + Add credentials field by selecting This URL requires credentials. Required if explicit access credentials for this URL are required. Refer to storing credentials securely. |
| Advanced options |  |  |  |
| Endpoint URL | endpointUrl | https://s3.us-east-1.amazonaws.com | (Optional) Override the endpoint used to connect to S3, for example, to use an API gateway or another S3-compatible storage service. |

AWS credentials are encrypted and only decrypted when used to set up the client for communication with AWS when writing.

> [!NOTE] Note
> If running a Private AI Cloud within AWS, you can provide implicit credentials for your application instances using an IAM Instance Profile to access your S3 buckets without supplying explicit credentials in the job data. For more information, see the AWS article, [Create an IAM Instance Profile](https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-iam-instance-profile.html).
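
A common pattern with the S3 parameters above is to template the output filename so repeated runs do not overwrite the same object. The helper below is a hypothetical sketch (its name and validation are assumptions); the field names, template variable, and example credential ID come from the S3 table above.

```python
# Hypothetical sketch of an S3 outputSettings payload using a filename
# template so each scheduled run writes a distinct object.

def s3_output_settings(url, credential_id=None, fmt="csv",
                       endpoint_url=None):
    if not url.startswith("s3://"):
        raise ValueError("url must be an s3:// URL")
    settings = {"type": "s3", "url": url, "format": fmt}
    if credential_id:
        # Omit for implicit credentials, e.g. an IAM Instance Profile.
        settings["credentialId"] = credential_id
    if endpoint_url:
        # e.g. an API gateway or another S3-compatible storage service.
        settings["endpointUrl"] = endpoint_url
    return settings

settings = s3_output_settings(
    "s3://bucket-name/results/scored_{{ current_run_time }}.csv",
    credential_id="5e4bc5555e6e763beb9db147",
)
```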

## BigQuery write

To use BigQuery, supply data destination details using the [Predictions > Job Definitions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#set-up-prediction-destinations) tab or the [Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html) (`outputSettings`) as described in the table below.

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | bigquery | Write output to Google Cloud Storage, then use a batch load job to ingest the data from GCS into a BigQuery table. |
| Dataset | dataset | my_dataset | The BigQuery dataset to use. |
| Table | table | my_table | The BigQuery table from the dataset to use for output. |
| Bucket name | bucket | my-bucket-in-gcs | The GCS bucket where data files are stored to be loaded into a BigQuery table. |
| + Add credentials | credentialId | 5e4bc5555e6e763beb488dba | Required if explicit access credentials for this bucket are necessary (otherwise optional). In the UI, enable the + Add credentials field by selecting This connection requires credentials. Refer to storing credentials securely. |

> [!NOTE] BigQuery output write strategy
> The write strategy for BigQuery output is `insert`. First, the output adapter checks if a BigQuery table exists. If a table exists, the data is inserted. If a table doesn't exist, a table is created and then the data is inserted.

Refer to the [example section](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/pred-examples.html#end-to-end-scoring-with-bigquery) for a complete API example.

## Snowflake write

To use Snowflake, supply data destination details using the [Predictions > Job Definitions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#set-up-prediction-destinations) tab or the [Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html) (`outputSettings`) as described in the table below.

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | snowflake | Adapter type. |
| Data connection parameters |  |  |  |
| + Select connection | dataStoreId | 5e4bc5b35e6e763beb9db14a | ID of Snowflake data source. |
| Enter credentials | credentialId | 5e4bc5555e6e763beb9db147 | (Optional) The ID of a stored credential for Snowflake. |
| Tables | table | RESULTS | Name of the Snowflake table to store results. |
| Schemas | schema | PUBLIC | (Optional) The name of the schema containing the table where results are written. |
| Database | catalog | OUTPUT | (Optional) The name of the specified database catalog to write output data to. |
| Use external stage options |  |  |  |
| Cloud storage type | cloudStorageType | s3 | (Optional) The type of cloud storage backend used in the Snowflake external stage: s3, azure, or gcp. Default: s3. In the UI, select Use external stage to enable the Cloud storage type field. |
| External stage | externalStage | my_s3_stage | Snowflake external stage. In the UI, select Use external stage to enable the External stage field. |
| Endpoint URL (for S3 only) | endpointUrl | https://www.example.com/datasets/ | (Optional) Override the endpoint used to connect to S3, for example, to use an API gateway or another S3-compatible storage service. In the UI, for the S3 option in Cloud storage type click Show advanced options to reveal the Endpoint URL field. |
| + Add credentials | cloudStorageCredentialId | 6e4bc5541e6e763beb9db15c | (Optional) ID of stored credentials for a storage backend (S3/Azure/GCS) used in Snowflake stage. In the UI, enable the + Add credentials field by selecting This URL requires credentials. |
| Write strategy options (for fallback JDBC connection) |  |  |  |
| Write strategy | statementType | insert | If you're using a Snowflake external stage, the statementType is insert. In the UI, you have two configuration options: if you haven't configured an external stage, the connection defaults to JDBC and you can select Insert or Update (selecting Update lets you provide a Row identifier); if you selected Use external stage, the Insert option is required. |
| Create table if it does not exist (for Insert) | create_table_if_not_exists | true | (Optional) If no existing table is detected, attempt to create one. |
| Advanced options |  |  |  |
| Commit interval | commitInterval | 600 | (Optional) Defines a time interval, in seconds, between commits to the target database. If set to 0, the batch prediction operation will write the entire job before committing. Default: 600 |
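
The external-stage rule in the table above (an external stage requires the insert write strategy) can be captured in a small payload builder. This is a hypothetical sketch: the helper name and validation are assumptions, while the field names and example IDs come from the Snowflake table above.

```python
# Hypothetical sketch of a Snowflake outputSettings payload. With an
# external stage configured, statementType must be "insert"; without
# one, the connection falls back to JDBC.

def snowflake_output_settings(data_store_id, table, credential_id=None,
                              external_stage=None, cloud_storage_type="s3",
                              cloud_storage_credential_id=None,
                              statement_type="insert"):
    if external_stage and statement_type != "insert":
        raise ValueError("an external stage requires statementType 'insert'")
    settings = {
        "type": "snowflake",
        "dataStoreId": data_store_id,
        "table": table,
        "statementType": statement_type,
    }
    if credential_id:
        settings["credentialId"] = credential_id
    if external_stage:
        settings["externalStage"] = external_stage
        settings["cloudStorageType"] = cloud_storage_type
        if cloud_storage_credential_id:
            settings["cloudStorageCredentialId"] = cloud_storage_credential_id
    return settings

settings = snowflake_output_settings(
    "5e4bc5b35e6e763beb9db14a", table="RESULTS",
    credential_id="5e4bc5555e6e763beb9db147",
    external_stage="my_s3_stage",
    cloud_storage_credential_id="6e4bc5541e6e763beb9db15c",
)
```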

Refer to the [example section](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/pred-examples.html#end-to-end-scoring-with-snowflake) for a complete API example.

## Azure Synapse write

To use Azure Synapse, supply data destination details using the [Predictions > Job Definitions](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#set-up-prediction-destinations) tab or the [Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html) (`outputSettings`) as described in the table below.

| UI field | Parameter | Example | Description |
| --- | --- | --- | --- |
| Destination type | type | synapse | Adapter type. |
| Data connection parameters |  |  |  |
| + Select connection | dataStoreId | 5e4bc5b35e6e763beb9db14a | ID of Synapse data source. |
| Enter credentials | credentialId | 5e4bc5555e6e763beb9db147 | (Optional) The ID of a stored credential for Synapse. |
| Tables | table | RESULTS | The name of the Synapse table to store results in. |
| Schemas | schema | dbo | (Optional) Name of the schema containing the table where results are written. |
| Use external stage options |  |  |  |
| External data source | externalDatasource | my_data_source | Name of the identifier created in Synapse for the external data source. |
| + Add credentials | cloudStorageCredentialId | 6e4bc5541e6e763beb9db15c | (Optional) ID of a stored credential for Azure Blob storage. |
| Write strategy options (for fallback JDBC connection) |  |  |  |
| Write strategy | statementType | insert | If you're using a Synapse external stage, the statementType is insert. In the UI, you have two configuration options: if you haven't configured an external stage, the connection defaults to JDBC and you can select Insert, Update, or Insert + Update (selecting Update or Insert + Update lets you provide a Row identifier); if you selected Use external stage, the Insert option is required. |
| Create table if it does not exist (for Insert or Insert + Update) | create_table_if_not_exists | true | (Optional) If no existing table is detected, attempt to create it before writing data with the strategy defined in the statementType parameter. |
| Advanced options |  |  |  |
| Commit interval | commitInterval | 600 | (Optional) Defines a time interval, in seconds, between commits to the target database. If set to 0, the batch prediction operation will write the entire job before committing. Default: 600 |

Refer to the [example section](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/pred-examples.html#end-to-end-scoring-with-synapse) for a complete API example.
