# Use Scoring Code with Azure ML

> Use Scoring Code with Azure ML - Import Scoring Code models to Azure ML to make prediction requests
> using Azure.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.550018+00:00` (UTC).

## Primary page

- [Use Scoring Code with Azure ML](https://docs.datarobot.com/en/docs/classic-ui/integrations/azure/sc-azureml.html): Full documentation for this topic (HTML).

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [Integrations](https://docs.datarobot.com/en/docs/classic-ui/integrations/index.html): Linked from this page.
- [Azure](https://docs.datarobot.com/en/docs/classic-ui/integrations/azure/index.html): Linked from this page.
- [Leaderboard](https://docs.datarobot.com/en/docs/classic-ui/predictions/port-pred/scoring-code/sc-download-leaderboard.html): Linked from this page.
- [deployment](https://docs.datarobot.com/en/docs/classic-ui/predictions/port-pred/scoring-code/sc-download-deployment.html): Linked from this page.

## Documentation content

# Use Scoring Code with Azure ML

You must complete the following before importing Scoring Code models to Azure ML:

- Install the Azure CLI client so you can manage Azure services from the terminal.
- Install the Azure Machine Learning CLI extension.

To import a Scoring Code model to Azure ML:

1. Log in to Azure with the login command:

    ```sh
    az login
    ```
2. If you have not yet created a resource group, you can create one using this command:

    ```sh
    az group create --location <location> --name <name> [--subscription] [--tags]
    ```

    For example:

    ```sh
    az group create --location westus2 --name myresourcegroup
    ```
3. If you do not have an existing container registry that you want to use for storing custom Docker images, you must create one. If you want to use a DataRobot Docker image instead of building your own, you do not need to create a container registry; instead, skip ahead to step 6. Create a container registry with the following command:

    ```sh
    az acr create --name <name> --resource-group <resource-group> --sku {Basic | Classic | Premium | Standard}
        [--admin-enabled {false | true}] [--default-action {Allow | Deny}] [--location]
        [--subscription] [--tags] [--workspace]
    ```

    For example:

    ```sh
    az acr create --name mycontainerregistry --resource-group myresourcegroup --sku Basic
    ```
4. Set up admin access using the following command:

    ```sh
    az acr update --name <name> --admin-enabled {false | true}
    ```

    For example:

    ```sh
    az acr update --name mycontainerregistry --admin-enabled true
    ```

    Then print the registry credentials:

    ```sh
    az acr credential show --name <name>
    ```

    For example:

    ```sh
    az acr credential show --name mycontainerregistry
    ```

    This returns:

    ```json
    {
      "passwords": [
        {
          "name": "password",
          "value": "<password>"
        },
        {
          "name": "password2",
          "value": "<password>"
        }
      ],
      "username": "mycontainerregistry"
    }
    ```
5. Upload a custom Docker image that runs Java:

    ```sh
    az acr build --registry <registry> [--auth-mode {Default | None}] [--build-arg] [--file] [--image]
        [--no-format] [--no-logs] [--no-push] [--no-wait] [--platform] [--resource-group]
        [--secret-build-arg] [--subscription] [--target] [--timeout] [<source-location>]
    ```

    For example:

    ```sh
    az acr build --registry mycontainerregistry --image myimage:1 --resource-group myresourcegroup --file Dockerfile .
    ```

    The following is an example of a custom Docker image. Refer to the Microsoft documentation to read more about building an image.

    ```dockerfile
    FROM ubuntu:16.04

    ARG CONDA_VERSION=4.5.12
    ARG PYTHON_VERSION=3.6

    ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
    ENV PATH /opt/miniconda/bin:$PATH

    RUN apt-get update --fix-missing && \
        apt-get install -y wget bzip2 && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*

    RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-${CONDA_VERSION}-Linux-x86_64.sh -O ~/miniconda.sh && \
        /bin/bash ~/miniconda.sh -b -p /opt/miniconda && \
        rm ~/miniconda.sh && \
        /opt/miniconda/bin/conda clean -tipsy

    RUN conda install -y conda=${CONDA_VERSION} python=${PYTHON_VERSION} && \
        conda clean -aqy && \
        rm -rf /opt/miniconda/pkgs && \
        find / -type d -name __pycache__ -prune -exec rm -rf {} \;

    RUN apt-get update && \
        apt-get upgrade -y && \
        apt-get install software-properties-common -y && \
        add-apt-repository ppa:openjdk-r/ppa -y && \
        apt-get update -q && \
        apt-get install -y openjdk-11-jdk && \
        apt-get clean
    ```
6. If you have not already created a workspace, use the following command to create one. Otherwise, skip to step 7.

    ```sh
    az ml workspace create --workspace-name <name> [--application-insights] [--container-registry]
        [--exist-ok] [--friendly-name] [--keyvault] [--location] [--resource-group] [--sku]
        [--storage-account] [--yes]
    ```

    For example:

    ```sh
    az ml workspace create --workspace-name myworkspace --resource-group myresourcegroup
    ```
7. Register your Scoring Code model to the Azure model storage.

    > **Note**: Make sure you have exported your Scoring Code JAR file from DataRobot before proceeding. You can download the JAR file from the [Leaderboard](https://docs.datarobot.com/en/docs/classic-ui/predictions/port-pred/scoring-code/sc-download-leaderboard.html) or from a [deployment](https://docs.datarobot.com/en/docs/classic-ui/predictions/port-pred/scoring-code/sc-download-deployment.html).

    ```sh
    az ml model register --name <name> [--asset-path] [--cc] [--description] [--experiment-name]
        [--gb] [--gc] [--model-framework] [--model-framework-version] [--model-path] [--output-metadata-file]
        [--path] [--property] [--resource-group] [--run-id]
        [--run-metadata-file] [--sample-input-dataset-id] [--sample-output-dataset-id] [--tag]
        [--workspace-name] [-v]
    ```

    For example, to register a model named `codegenmodel`:

    ```sh
    az ml model register --name codegenmodel --model-path 5cd071deef881f011a334c2f.jar --resource-group myresourcegroup --workspace-name myworkspace
    ```
8. Prepare two configs and a Python entry script that will execute the prediction. Below are some examples of configs with a Python entry script.
9. Create a new prediction endpoint:

    ```sh
    az ml model deploy --name <name> [--ae] [--ai] [--ar] [--as] [--at] [--autoscale-max-replicas]
        [--autoscale-min-replicas] [--base-image] [--base-image-registry] [--cc] [--cf]
        [--collect-model-data] [--compute-target] [--compute-type] [--cuda-version] [--dc]
        [--description] [--dn] [--ds] [--ed] [--eg] [--entry-script] [--environment-name]
        [--environment-version] [--failure-threshold] [--gb] [--gc] [--ic] [--id] [--kp]
        [--ks] [--lo] [--max-request-wait-time] [--model] [--model-metadata-file] [--namespace]
        [--no-wait] [--nr] [--overwrite] [--path] [--period-seconds] [--pi] [--po] [--property]
        [--replica-max-concurrent-requests] [--resource-group] [--rt] [--sc] [--scoring-timeout-ms]
        [--sd] [--se] [--sk] [--sp] [--st] [--tag] [--timeout-seconds] [--token-auth-enabled]
        [--workspace-name] [-v]
    ```

    For example, to create a new endpoint with the name `myservice`:

    ```sh
    az ml model deploy --name myservice --model codegenmodel:1 --compute-target akscomputetarget --ic inferenceconfig.json --dc deploymentconfig.json --resource-group myresourcegroup --workspace-name myworkspace
    ```
10. Get a token to make prediction requests:

    ```sh
    az ml service get-keys --name <name> [--path] [--resource-group] [--workspace-name] [-v]
    ```

    For example:

    ```sh
    az ml service get-keys --name myservice --resource-group myresourcegroup --workspace-name myworkspace
    ```

    This command returns a JSON response:

    ```json
    {
        "primaryKey": "<key>",
        "secondaryKey": "<key>"
    }
    ```
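
The two config files referenced in steps 8 and 9 (`inferenceconfig.json` and `deploymentconfig.json`) are not reproduced in this extract. The sketches below show one plausible shape for them under the Azure ML CLI v1 JSON schemas; the entry script name, conda file, compute sizing, and autoscale settings are illustrative assumptions, not values from the DataRobot documentation.

`inferenceconfig.json`:

```json
{
    "entryScript": "score.py",
    "runtime": "python",
    "condaFile": "myenv.yml",
    "sourceDirectory": "."
}
```

`deploymentconfig.json` (for an AKS compute target):

```json
{
    "computeType": "aks",
    "containerResourceRequirements": {
        "cpu": 1,
        "memoryInGB": 2
    },
    "autoScaler": {
        "autoscaleEnabled": true,
        "minReplicas": 1,
        "maxReplicas": 2
    }
}
```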
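
The Python entry script from step 8 is also missing from this extract. Below is a minimal sketch of what such a script could look like, assuming the registered JAR is the only `.jar` file under `AZUREML_MODEL_DIR` and that the Scoring Code JAR exposes a `csv` batch-scoring subcommand; the file names and request body shape (`{"csv": "..."}`) are illustrative assumptions.

```python
"""Hypothetical Azure ML entry script (score.py) for a Scoring Code JAR."""
import glob
import json
import os
import subprocess
import tempfile


def build_scoring_command(jar_path, input_csv, output_csv):
    # Assumed Scoring Code CLI shape: java -jar model.jar csv --input=... --output=...
    return [
        "java", "-jar", jar_path, "csv",
        "--input=" + input_csv,
        "--output=" + output_csv,
    ]


def init():
    """Azure ML calls init() once when the container starts; locate the JAR."""
    global jar_path
    model_dir = os.getenv("AZUREML_MODEL_DIR", ".")
    jar_path = glob.glob(os.path.join(model_dir, "*.jar"))[0]


def run(raw_data):
    """Azure ML calls run() per request; raw_data is the JSON body as a string."""
    payload = json.loads(raw_data)  # e.g. {"csv": "feature1,feature2\n1,2\n"}
    with tempfile.TemporaryDirectory() as tmp:
        input_csv = os.path.join(tmp, "input.csv")
        output_csv = os.path.join(tmp, "output.csv")
        with open(input_csv, "w") as f:
            f.write(payload["csv"])
        # Score the CSV with the JAR and return its output to the caller.
        subprocess.run(build_scoring_command(jar_path, input_csv, output_csv), check=True)
        with open(output_csv) as f:
            return {"predictions": f.read()}
```

Keeping the scoring itself in a subprocess means the container only needs Java on the `PATH` (as in the Docker image from step 5) plus a standard Python runtime for the entry script.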

You can now make prediction requests using Azure.
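
With the service deployed and a key from step 10, a prediction request is a plain HTTPS POST. The sketch below uses only the Python standard library; the scoring URI and the `{"data": [...]}` body shape are placeholders to adapt to your own service and entry script, not part of the documented workflow.

```python
import json
import urllib.request


def build_request(scoring_uri, key, rows):
    """Build an authenticated POST request for the deployed service."""
    body = json.dumps({"data": rows}).encode("utf-8")
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Key-based auth: pass the primaryKey from `az ml service get-keys`.
            "Authorization": "Bearer " + key,
        },
    )


def predict(scoring_uri, key, rows):
    """Send the request and parse the JSON response."""
    with urllib.request.urlopen(build_request(scoring_uri, key, rows)) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example with placeholder URI and key:
# predict("https://<your-endpoint>/score", "<primaryKey>", [[5.1, 3.5, 1.4, 0.2]])
```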
