# Make batch predictions with Azure Blob storage

> Make batch predictions with Azure Blob storage - Use the DataRobot Python Client package to set up a
> batch prediction job that reads an input file for scoring from Azure Blob storage and then writes
> the results back to Azure.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.293883+00:00` (UTC).

## Primary page

- [Make batch predictions with Azure Blob storage](https://docs.datarobot.com/en/docs/api/dev-learning/python/py-code-examples/prediction-examples/azure-pred.html): Full documentation for this topic (HTML).

## Sections on this page

- [Requirements](https://docs.datarobot.com/en/docs/api/dev-learning/python/py-code-examples/prediction-examples/azure-pred.html#takeaways): In-page section heading.
- [Create stored credentials](https://docs.datarobot.com/en/docs/api/dev-learning/python/py-code-examples/prediction-examples/azure-pred.html#create-stored-credentials): In-page section heading.
- [Run the prediction job](https://docs.datarobot.com/en/docs/api/dev-learning/python/py-code-examples/prediction-examples/azure-pred.html#run-the-prediction-job): In-page section heading.
- [Documentation](https://docs.datarobot.com/en/docs/api/dev-learning/python/py-code-examples/prediction-examples/azure-pred.html#documentation): In-page section heading.

## Related documentation

- [Developer documentation](https://docs.datarobot.com/en/docs/api/index.html): Linked from this page.
- [Developer learning](https://docs.datarobot.com/en/docs/api/dev-learning/index.html): Linked from this page.
- [Python API client user guide](https://docs.datarobot.com/en/docs/api/dev-learning/python/index.html): Linked from this page.
- [Python code examples](https://docs.datarobot.com/en/docs/api/dev-learning/python/py-code-examples/index.html): Linked from this page.
- [Prediction code examples](https://docs.datarobot.com/en/docs/api/dev-learning/python/py-code-examples/prediction-examples/index.html): Linked from this page.
- [A DataRobot deployment](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/deploy-methods/index.html): Linked from this page.
- [Deployments > Predictions > Prediction API](https://docs.datarobot.com/en/docs/classic-ui/predictions/realtime/code-py.html): Linked from this page.
- [Prediction API overview](https://docs.datarobot.com/en/docs/api/reference/predapi/index.html): Linked from this page.
- [DataRobot Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html): Linked from this page.

## Documentation content

# Make batch predictions with Azure Blob storage

The DataRobot Batch Prediction API allows you to take in large datasets and score them against deployed models running on a prediction server. The API also provides flexible options for the intake and output of these files.

In this tutorial, you will learn how to use the DataRobot Python Client package (which calls the Batch Prediction API) to set up a batch prediction job. The job reads an input file for scoring from Azure Blob storage and then writes the results back to Azure. This approach also works for Azure Data Lake Storage Gen2 accounts because the underlying storage is the same.

## Requirements

To use the code in this tutorial, make sure you have the following:

- Python 2.7 or 3.4+
- The DataRobot Python package, version 2.21.0 or later (available from PyPI or conda)
- A DataRobot deployment
- An Azure storage account
- An Azure storage container
- A scoring dataset in the storage container to use with your DataRobot deployment
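
As a quick sanity check before running the examples, you can verify the interpreter version and confirm the `datarobot` package is importable. This is a minimal sketch; it does not verify the package version against 2.21.0:

```python
import importlib.util
import sys

# The tutorial requires Python 2.7 or 3.4+; this sketch assumes Python 3.
assert sys.version_info >= (3, 4), "Python 3.4 or later is required"

# Look up the datarobot package without importing it.
has_datarobot = importlib.util.find_spec("datarobot") is not None
print("datarobot package installed:", has_datarobot)
```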

## Create stored credentials

Running batch prediction jobs requires the appropriate credentials to read and write to Azure Blob storage. You must provide the name of the Azure storage account and an access key.

1. To retrieve these credentials, select the **Access keys** menu in the Azure portal.
2. Click **Show keys** to retrieve an access key. You can use either of the keys shown (key1 or key2).
3. Use the following code to create a new credential object within DataRobot that the batch prediction job can use to connect to your Azure storage account.

    ```python
    AZURE_STORAGE_ACCOUNT = "YOUR AZURE STORAGE ACCOUNT NAME"
    AZURE_STORAGE_ACCESS_KEY = "AZURE STORAGE ACCOUNT ACCESS KEY"
    DR_CREDENTIAL_NAME = "Azure_{}".format(AZURE_STORAGE_ACCOUNT)

    # Create Azure-specific credentials.
    # You can also copy the connection string, which is found below the access key in Azure.
    credential = dr.Credential.create_azure(
        name=DR_CREDENTIAL_NAME,
        azure_connection_string="DefaultEndpointsProtocol=https;AccountName={};AccountKey={};".format(
            AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_ACCESS_KEY
        )
    )

    # Use this code to look up the ID of the credential object created.
    credential_id = None
    for cred in dr.Credential.list():
        if cred.name == DR_CREDENTIAL_NAME:
            credential_id = cred.credential_id
            break

    print(credential_id)
    ```
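
The connection string passed to `dr.Credential.create_azure` follows Azure's standard format. A small helper (hypothetical, not part of the DataRobot client) makes that format explicit:

```python
def azure_connection_string(account_name, access_key):
    """Build an Azure storage connection string from an account name and access key."""
    return (
        "DefaultEndpointsProtocol=https;"
        "AccountName={};AccountKey={};".format(account_name, access_key)
    )

# Example with placeholder values
print(azure_connection_string("mystorageaccount", "MY_ACCESS_KEY"))
```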

## Run the prediction job

With a credential object created, you can now configure the batch prediction job as shown in the code sample below:

- Set `intake_settings` and `output_settings` to the `azure` type.
- For both `intake_settings` and `output_settings`, set `url` to the files in Blob storage that you want to read from and write to (the output file does not need to exist in advance).
- Provide the ID of the credential object that was created above.
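
The `url` values follow Azure's standard blob addressing scheme, `https://<account>.blob.core.windows.net/<container>/<blob>`. A hypothetical helper illustrates the format used for both intake and output:

```python
def blob_url(account_name, container, blob_name):
    """Build the HTTPS URL for a blob in Azure Blob storage."""
    return "https://{}.blob.core.windows.net/{}/{}".format(
        account_name, container, blob_name
    )

# Example with placeholder values
print(blob_url("mystorageaccount", "mycontainer", "scoring_input.csv"))
```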

The code sample creates and runs the batch prediction job. Once finished, it provides the status of the job. This code also demonstrates how to configure the job to return both Prediction Explanations and passthrough columns for the scoring data.

> [!NOTE]
> You can find the deployment ID in the sample code output of the [Deployments > Predictions > Prediction API](https://docs.datarobot.com/en/docs/classic-ui/predictions/realtime/code-py.html) tab (with Interface set to "API Client").
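
If you prefer to look up the deployment ID programmatically, `dr.Deployment.list()` returns deployment objects that expose `id` and `label` attributes. The filtering logic can be sketched with a hypothetical helper, shown here with stand-in objects rather than a live client:

```python
from collections import namedtuple

def find_deployment_id(deployments, label):
    """Return the id of the first deployment whose label matches, or None."""
    for deployment in deployments:
        if deployment.label == label:
            return deployment.id
    return None

# Stand-ins for the objects returned by dr.Deployment.list()
Deployment = namedtuple("Deployment", ["id", "label"])
deployments = [
    Deployment(id="5f3e9c1b2a", label="Churn model"),
    Deployment(id="60a1d4e7f8", label="Fraud model"),
]
print(find_deployment_id(deployments, "Fraud model"))
```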

```python
DEPLOYMENT_ID = 'YOUR DEPLOYMENT ID'
AZURE_STORAGE_ACCOUNT = "YOUR AZURE STORAGE ACCOUNT NAME"
AZURE_STORAGE_CONTAINER = "YOUR AZURE STORAGE ACCOUNT CONTAINER"
AZURE_INPUT_SCORING_FILE = "YOUR INPUT SCORING FILE NAME"
AZURE_OUTPUT_RESULTS_FILE = "YOUR OUTPUT RESULTS FILE NAME"

# Set up our batch prediction job
# Input: Azure Blob Storage
# Output: Azure Blob Storage

job = dr.BatchPredictionJob.score(
   deployment=DEPLOYMENT_ID,
   intake_settings={
       'type': 'azure',
       'url': "https://{}.blob.core.windows.net/{}/{}".format(AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_CONTAINER,AZURE_INPUT_SCORING_FILE),
       "credential_id": credential_id
   },
   output_settings={
       'type': 'azure',
       'url': "https://{}.blob.core.windows.net/{}/{}".format(AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_CONTAINER,AZURE_OUTPUT_RESULTS_FILE),
       "credential_id": credential_id
   },
   # Optional: request up to five Prediction Explanations per row
   max_explanations=5,

   # Optional: return these scoring-data columns alongside the predictions
   passthrough_columns=['column1', 'column2']
)

job.wait_for_completion()
job.get_status()
```

When the job completes successfully, you should see the output file in your Azure Blob storage container.

## Documentation

- [Prediction API overview](https://docs.datarobot.com/en/docs/api/reference/predapi/index.html)
- [DataRobot Batch Prediction API](https://docs.datarobot.com/en/docs/api/reference/batch-prediction-api/index.html)
