
Portable Prediction Server

The Portable Prediction Server (PPS) is a DataRobot execution environment for DataRobot model packages (.mlpkg files), distributed as a self-contained Docker image. After you configure the Portable Prediction Server, you can run single-model or multi-model portable real-time predictions and portable batch prediction jobs.

CPU considerations

DataRobot strongly recommends using an Intel CPU to run the Portable Prediction Server. Using non-Intel CPUs can result in prediction inconsistencies, especially in deep learning models like those built with TensorFlow or Keras. This includes ARM architecture processors (e.g., AArch32 and AArch64).

The general configuration steps are:

  • Download the model package.
  • Download the PPS Docker image.
  • Load the PPS image to Docker.
  • Copy the Docker snippet DataRobot provides to run the Portable Prediction Server in your Docker container.

Important

If you want to configure the Portable Prediction Server for a model through a deployment, you must first add an external prediction environment and deploy that model to an external environment.

Download the model package

You can download a PPS model package for a deployed DataRobot model running on an external prediction environment. In addition, with the correct MLOps permissions, you can download a model package from the Leaderboard. You can then run prediction jobs with a portable prediction server outside of DataRobot.

When you download a model package from a deployment, the Portable Prediction Server monitors your model for performance and tracks prediction statistics; however, you must ensure that your deployment supports model package downloads. The deployment must have a DataRobot build environment and an external prediction environment, which you can verify using the Governance Lens in the deployment inventory.

What if a deployment doesn't have an external prediction environment?

If the deployed model you want to run in the Portable Prediction Server isn't associated with an external prediction environment, you can do either of the following:

  • Create a new deployment with an external prediction environment.
  • If you have the correct permissions, download the model package from the Leaderboard.

If you access a deployment that doesn't support model package download, you can quickly navigate to the Leaderboard from the deployment:

  1. Click the Model name (on the Overview tab) to open the model package in the Model Registry.
  2. In the Model Registry, click the Model Name (on the Package Info tab) to open the model on the Leaderboard.
  3. On the Leaderboard, download the Portable Prediction Server model package from the Predict > Portable Predictions tab.

When you download the model package from the Leaderboard, the Portable Prediction Server won't monitor your model for performance or track prediction statistics.

On the Deployments tab (the deployment inventory), open a deployment with both a DataRobot build environment and an external prediction environment, and then navigate to the Predictions > Portable Predictions tab:

  1. Portable Prediction Server: Helps you configure a REST API-based prediction server as a Docker image.
  2. Portable Prediction Server Usage: Links to the Developer Tools tab, where you obtain the Portable Prediction Server Docker image.
  3. Download model package (.mlpkg): Downloads the model package for your deployed model. Alternatively, you can download the model package from the Leaderboard.
  4. Docker snippet: After you download your model package, use this snippet to launch the Portable Prediction Server for the model with monitoring enabled. Before launching, specify your API key, local filenames, paths, and monitoring options.
  5. Copy to clipboard: Copies the Docker snippet to your clipboard so that you can paste it on the command line.

In the Predictions > Portable Predictions tab, click Download model package. The download appears in the downloads bar when complete.

After downloading the model package, click Copy to clipboard and save the code snippet for later. You need this code to launch the Portable Prediction Server for the downloaded model package.

Availability information

The ability to download a model package from the Leaderboard depends on the MLOps configuration for your organization.

If you have built a model with AutoML and want to download its model package for use with the Portable Prediction Server, navigate to the model on the Leaderboard and select the Predict > Portable Predictions tab.

Note

When downloaded from the Leaderboard, the Portable Prediction Server won't monitor your model for performance or track prediction statistics.

Click Download .mlpkg. After downloading the model package, click Copy to clipboard and save the code snippet for later. You need this code to launch the Portable Prediction Server for the downloaded model package.

Configure the Portable Prediction Server

To deploy the model package you downloaded to the Portable Prediction Server, you must first download the PPS Docker image and then load that image to Docker.

Obtain the PPS Docker image

Navigate to the Developer Tools tab to download the Portable Prediction Server Docker image. Depending on your DataRobot environment and version, options for accessing the latest image may differ, as described in the table below.

  • Self-Managed AI Platform (v6.3 or older): Contact your DataRobot representative; the image is provided upon request.
  • Self-Managed AI Platform (v7.0 or later): Download the image from Developer Tools and install as described below. If the image isn't available, contact your DataRobot representative.
  • Managed AI Platform (Jan 2021 and later): Download the image from Developer Tools and install as described below.

Load the image to Docker

Warning

DataRobot is working to reduce image size; however, the compressed Docker image can exceed 6GB (Docker-loaded image layers can exceed 14GB). Consider these sizes when downloading and importing PPS images.

Before proceeding, make sure you have downloaded the image from Developer Tools. The download is a gzipped tar archive that Docker can load directly.

After the download completes and you verify the file checksum, use docker load to load the image. You don't need to uncompress the downloaded file first, because Docker natively supports loading images from gzipped tar archives.
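
A minimal sketch of the checksum step, assuming DataRobot publishes a SHA-256 digest alongside the archive (check the download page for the actual checksum format); the filename and digest below are placeholders:

```shell
# Placeholders -- substitute your downloaded version and the published digest.
echo "<published sha256 digest>  datarobot-portable-prediction-api-<version>.tar.gz" > pps.sha256
sha256sum -c pps.sha256   # prints "OK" for the file when the digest matches
```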

Copy the command below, replace <version>, and run the command to load the PPS image to Docker:

docker load < datarobot-portable-prediction-api-<version>.tar.gz

Note

If the PPS file isn't located in the current directory, you need to provide a local, absolute filepath to the tar file (for example, /path/to/datarobot-portable-prediction-api-<version>.tar.gz).

After running the docker load command for your PPS file, you should see output similar to the following:

docker load < datarobot-portable-prediction-api-9.0.0-r4582.tar.gz
33204bfe17ee: Loading layer [==================================================>]  214.1MB/214.1MB
62c077c42637: Loading layer [==================================================>]  3.584kB/3.584kB
54475c7b6aee: Loading layer [==================================================>]  30.21kB/30.21kB
0f91625c248c: Loading layer [==================================================>]  3.072kB/3.072kB
21c5127d921b: Loading layer [==================================================>]  27.05MB/27.05MB
91feb2d07e73: Loading layer [==================================================>]  421.4kB/421.4kB
12ca493d22d9: Loading layer [==================================================>]  41.61MB/41.61MB
ffb6e915efe7: Loading layer [==================================================>]  26.55MB/26.55MB
83e2c4ee6761: Loading layer [==================================================>]  5.632kB/5.632kB
109bf21d51e0: Loading layer [==================================================>]  3.093MB/3.093MB
d5ebeca35cd2: Loading layer [==================================================>]  646.6MB/646.6MB
f72ea73370ce: Loading layer [==================================================>]  1.108GB/1.108GB
4ecb5fe1d7c7: Loading layer [==================================================>]  1.844GB/1.844GB
d5d87d53ea21: Loading layer [==================================================>]  71.79MB/71.79MB
34e5df35e3cf: Loading layer [==================================================>]  187.3MB/187.3MB
38ccf3dd09eb: Loading layer [==================================================>]  995.5MB/995.5MB
fc5583d56a81: Loading layer [==================================================>]  3.584kB/3.584kB
c51face886fc: Loading layer [==================================================>]    402MB/402MB
c6017c1b6604: Loading layer [==================================================>]  1.465GB/1.465GB
7a879d3cd431: Loading layer [==================================================>]  166.6MB/166.6MB
8c2f17f7a166: Loading layer [==================================================>]  188.7MB/188.7MB
059189864c15: Loading layer [==================================================>]  115.9MB/115.9MB
991f5ac99c29: Loading layer [==================================================>]  3.072kB/3.072kB
f6bbaa29a1c6: Loading layer [==================================================>]   2.56kB/2.56kB
4a0a241b3aab: Loading layer [==================================================>]  415.7kB/415.7kB
3d509cf1aa18: Loading layer [==================================================>]  5.632kB/5.632kB
a611f162b44f: Loading layer [==================================================>]  1.701MB/1.701MB
0135aa7d76a0: Loading layer [==================================================>]  6.766MB/6.766MB
fe5890c6ddfc: Loading layer [==================================================>]  4.096kB/4.096kB
d2f4df5f0344: Loading layer [==================================================>]  5.875GB/5.875GB
1a1a6aa8556e: Loading layer [==================================================>]  10.24kB/10.24kB
77fcb6e243d1: Loading layer [==================================================>]  12.97MB/12.97MB
7749d3ff03bb: Loading layer [==================================================>]  4.096kB/4.096kB
29de05e7fdb3: Loading layer [==================================================>]  3.072kB/3.072kB
2579aba98176: Loading layer [==================================================>]  4.698MB/4.698MB
5f3d150f5680: Loading layer [==================================================>]  4.699MB/4.699MB
1f63989f2175: Loading layer [==================================================>]  3.798GB/3.798GB
3e722f5814f1: Loading layer [==================================================>]  182.3kB/182.3kB
b248981a0c7e: Loading layer [==================================================>]  3.072kB/3.072kB
b104fa769b35: Loading layer [==================================================>]  4.096kB/4.096kB
Loaded image: datarobot/datarobot-portable-prediction-api:9.0.0-r4582

Once the docker load command completes with the Loaded image message, verify that the image is available. Copy the command below and run it to list the images in Docker:

docker images

In this example, you can see the datarobot/datarobot-portable-prediction-api image loaded in the previous step:

docker images
REPOSITORY                                    TAG           IMAGE ID       CREATED        SIZE
datarobot/datarobot-portable-prediction-api   9.0.0-r4582   df38ea008767   29 hours ago   17GB

Tip

(Optional) To save disk space, you can delete the compressed image archive datarobot-portable-prediction-api-<version>.tar.gz after your Docker image loads successfully.

Launch the PPS with the code snippet

After you've downloaded the model package and configured the Docker PPS image, you can use the associated docker run code snippet to launch the Portable Prediction Server with the downloaded model package.

In the example code snippet below, from a deployed model, configure the following options:

docker run \
-p 8080:8080 \
-v <local path to model package>/:/opt/ml/model/ \
-e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \
-e PREDICTION_API_MONITORING_ENABLED="True" \
-e MLOPS_DEPLOYMENT_ID="6387928ebc3a099085be32b7" \
-e MONITORING_AGENT="True" \
-e MONITORING_AGENT_DATAROBOT_APP_URL="https://app.datarobot.com" \
-e MONITORING_AGENT_DATAROBOT_APP_TOKEN="<your api token>" \
datarobot-portable-prediction-api
  • -v <local path to model package>/:/opt/ml/model/ \: Provide the local, absolute file path to the location of the model package you downloaded. The -v (or --volume) option bind mounts a volume, adding the contents of your local model package directory (at <local path to model package>) to your Docker container's /opt/ml/model volume.

  • -e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \: Provide the file name of the model package mounted to the /opt/ml/model/ volume. This sets the PREDICTION_API_MODEL_REPOSITORY_PATH environment variable, indicating where the PPS can find the model package.

  • -e MONITORING_AGENT_DATAROBOT_APP_TOKEN="<your api token>" \: Provide your API token from the DataRobot Developer Tools for monitoring purposes. This sets the MONITORING_AGENT_DATAROBOT_APP_TOKEN environment variable, where the PPS can find your API key.

  • datarobot-portable-prediction-api: Replace this line with the image name and version of the PPS image you're using. For example, datarobot/datarobot-portable-prediction-api:<version>.
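
For illustration, a filled-in sketch of the deployed-model snippet: the local path, package filename, and image tag below are hypothetical examples, and the API token is read from an environment variable rather than hard-coded:

```shell
# Hypothetical values throughout -- substitute your own path, package name,
# deployment ID, token, and image tag before running.
export DATAROBOT_API_TOKEN="<your api token>"
docker run \
  -p 8080:8080 \
  -v /home/user/models/:/opt/ml/model/ \
  -e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/my_model.mlpkg" \
  -e PREDICTION_API_MONITORING_ENABLED="True" \
  -e MLOPS_DEPLOYMENT_ID="6387928ebc3a099085be32b7" \
  -e MONITORING_AGENT="True" \
  -e MONITORING_AGENT_DATAROBOT_APP_URL="https://app.datarobot.com" \
  -e MONITORING_AGENT_DATAROBOT_APP_TOKEN="$DATAROBOT_API_TOKEN" \
  datarobot/datarobot-portable-prediction-api:9.0.0-r4582
```

Reading the token from the environment keeps it out of your shell history and any scripts you commit.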

In the example code snippet below, for a Leaderboard model, configure the following options:

docker run \
-p 8080:8080 \
-v <local path to model package>/:/opt/ml/model/ \
-e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \
datarobot-portable-prediction-api
  • -v <local path to model package>/:/opt/ml/model/ \: Provide the local, absolute file path to the directory containing the model package you downloaded. The -v (or --volume) option bind mounts a volume, adding the contents of your local model package directory (at <local path to model package>) to your Docker container's /opt/ml/model volume.

  • -e PREDICTION_API_MODEL_REPOSITORY_PATH="/opt/ml/model/<model package file name>" \: Provide the file name of the model package mounted to the /opt/ml/model/ volume. This sets the PREDICTION_API_MODEL_REPOSITORY_PATH environment variable, indicating where the PPS can find the model package.

  • datarobot-portable-prediction-api: Replace this line with the image name and version of the PPS image you're using. For example, datarobot/datarobot-portable-prediction-api:<version>.

Use docker tag to name and tag an image

Alternatively, you can keep datarobot-portable-prediction-api as the last line of the snippet if you use docker tag to rename the image to datarobot-portable-prediction-api and tag it as latest.

In this example, Docker renames the image and replaces the 9.0.0-r4582 tag with the latest tag:

docker tag datarobot/datarobot-portable-prediction-api:9.0.0-r4582 datarobot-portable-prediction-api:latest

To verify the new tag and name, you can use the docker images command again:

docker images
REPOSITORY                                    TAG           IMAGE ID       CREATED        SIZE
datarobot/datarobot-portable-prediction-api   9.0.0-r4582   df38ea008767   29 hours ago   17GB
datarobot-portable-prediction-api             latest        df38ea008767   29 hours ago   17GB

After completing the setup, you can use the Docker snippet to run single-model or multi-model portable real-time predictions, or run portable batch predictions. See also the additional examples of prediction jobs using the PPS. The PPS can run disconnected from the main DataRobot installation environment. Once started, the image serves an HTTP API on port 8080.
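
As a sketch of scoring against that API from Python, assuming a single-model PPS is listening on localhost:8080 and exposes a /predictions endpoint that accepts a CSV request body (verify the endpoint path against the prediction examples for your PPS version):

```python
import json
import urllib.request

# Assumption: single-model PPS on localhost:8080 with a /predictions endpoint;
# adjust the host, port, and path to match your container.
PPS_URL = "http://localhost:8080/predictions"

def score_csv(path, url=PPS_URL):
    """POST a CSV file to the PPS and return the parsed JSON response."""
    with open(path, "rb") as f:
        body = f.read()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "text/csv"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage, with the container running:
#   predictions = score_csv("scoring_data.csv")
```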

Run the PPS for FIPS-enabled model packages

If you configure your DataRobot cluster with ENABLE_FIPS_140_2_MODE: true (in the config.yaml file at the cluster level), that cluster builds .mlpkg files that require you to launch the PPS with ENABLE_FIPS_140_2_MODE: true. For this reason, you can't host FIPS-enabled models and standard models in the same PPS instance.

To run the PPS with support for FIPS-enabled models, you can include the following argument in the docker run command:

-e ENABLE_FIPS_140_2_MODE="true"

The full command for PPS container startup would look like the following example:

docker run \
-td \
-p 8080:8080 \
-e PYTHON3_SERVICES="true" \
-e ENABLE_FIPS_140_2_MODE="true" \
-v <local path to model package>/:/opt/ml/model \
--name portable_predictions_server \
--rm datarobot/datarobot-portable-prediction-api:<version>
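
Once the container starts, you can check that it's up and serving; the /ping health endpoint below is an assumption, so adjust the path if your PPS version exposes a different one:

```shell
docker ps --filter "name=portable_predictions_server"   # container should show as Up
curl -s http://localhost:8080/ping                      # basic reachability check
```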

Updated February 16, 2024