
Standalone Prediction Server

Availability information

The model transfer capability is not available for Managed AI Cloud deployments.

While you can export models to either a Standalone Prediction Server or HDFS for batch scoring, importing models and the Predictions Admin screens are available only if the cluster is deployed with a Standalone Prediction Server.

Transferring a model to a Standalone Prediction Server increases robustness by avoiding both contention and unplanned interactions between the prediction server and the model development server. An unplanned interaction can occur, for example, because users can delete models on the model development server as part of their normal workflow; you typically would not want to allow this for models in a production environment. Use the model transfer capability when you want to create an isolated, stable environment for your prediction system.

Using the model transfer feature

Exporting a model from the Downloads tab creates a binary file (with the suffix .drx) that includes metadata. When you import the file to the standalone predictions cluster, DataRobot stores the model for future predictions. The Standalone Prediction Server, which has access to the imported model files, runs the DataRobot application to make predictions with the model. To use this capability:

  • For export, you must have access to the project; that is, you must have the Owner, User, or Observer role.
  • For import, you must have prediction admin permissions to the Standalone Prediction Server (target machine).

Note that the import file size cannot exceed 5GB, and you cannot export user or open-source models (or blender models that contain them).
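You can check the 5GB limit locally before attempting an upload. The following is a minimal sketch; the helper name and file paths are illustrative, not part of DataRobot:

```shell
# Hypothetical pre-flight check: refuse to upload a .drx file that exceeds
# the 5 GB import limit. Helper name and paths are illustrative.
check_drx_size() {
  file="$1"
  max_bytes=$((5 * 1024 * 1024 * 1024))   # 5 GB import cap
  size=$(wc -c < "$file" | tr -d ' ')
  if [ "$size" -gt "$max_bytes" ]; then
    echo "too large: $size bytes exceeds the 5 GB limit"
    return 1
  fi
  echo "ok: $size bytes"
}

# Demonstrate on a small placeholder file:
printf 'stub' > /tmp/model.drx
check_drx_size /tmp/model.drx    # → ok: 4 bytes
```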

Exporting models

To export a model:

  1. Expand the model you want to export and navigate to the Predict > Downloads tab.

  2. Under the Model Export option, click the Download button.

    DataRobot begins generating the .drx file, which can take several minutes. Progress is displayed in the queue on the right side of the screen.

  3. When generation completes, DataRobot automatically begins the download. If the download fails to start, follow the link that replaces the Download button.

    When the download completes, the .drx file appears in your browser's downloads bar. Then, you can begin the import process.

Importing models

Clusters with Standalone Prediction Servers may come with a full DataRobot web application, depending on your DataRobot configuration. The sections below describe how to import and manage models with the user interface of the DataRobot application. For information on how to do this in installations where the full DataRobot application is not available, see Importing models using the API.

To use the .drx file created from the export process, import it into DataRobot on your prediction instance. To import:

  1. Select Manage Predictions from the Account Settings dropdown menu.

  2. Click Import Model to bring up an import dialog.

  3. Select the .drx file for the model that you wish to import by browsing or dragging.

    When the import completes, the model appears on the Manage Predictions page.

  4. Click the imported model in the list to expand it. The entry displays a snippet of the model's source code. You can now use this code to make predictions in a standalone predictions environment. Click Show full snippet to display the complete code.

Understanding model listing and code

The following image illustrates a usage example of an expanded model on the Manage Predictions page:

The table below describes elements of the page; you can search on any element:

Element Description
Model name (1) The name of the model. This is either the name from the Leaderboard or, if you edited the display name, the modified name. Text beneath reports the original project name (from the export) and the ID of the person who imported the model.
Prediction API URL (2) The configured prediction endpoint. This destination was set during installation. Contact your system administrator if changes are required.
Notes (3) Any notes you added to the model's record through the Manage Predictions page. These notes are for display purposes only and not reflected in the model code.
Import ID (4) A unique ID, different from the model ID, required to make predictions against this model. If you try to make predictions without the correct ID, you receive an error.
Target (5) The target variable selected for the project.
Featurelist Name (6) The feature list used when building the project.
Dataset (7) The dataset used to train the exported model.
Model (8) The name of the model as described in (1).
IMPORT_ID A unique ID, as reported in the code snippet. This is the same ID as described in (4).

Listing imported models

To view all models imported to the prediction cluster, click Manage Predictions in the Account Settings dropdown menu (requires admin privileges). The page displays up to 50 models (additional models are available on subsequent pages), ordered with the most recently changed first. The list shows the project name, model name, and the model creator's username. Expand a model to display the usage example, the model's origin project, training set, creator, and deployment timestamp.

Searching imported models

When your cluster has many imported models, use the Search feature to quickly find models without scrolling through the list.

Enter a string in the Search bar to match on:

  • User name (model importer) first name and/or last name
  • Project ID
  • Model name (as displayed in the Leaderboard) or the edited model name
  • Unique model ID (for example, 57b70cf5c80891295500c6e8)
  • Project name
  • Feature list name
  • Dataset name
  • Target value

Editing model information

You can edit the imported model's listing information. Click the pencil icon to the right of the model name to edit the model display name or add notes that are visible in the list:

Deleting imported models

Each model in the imported models list has a delete button to remove it from the prediction cluster. Be certain to deactivate any imported model before deleting it from the cluster.

Importing models using the API

To import models directly to your Standalone Prediction Servers using the API:

$ curl -F file=@/path/to/model.drx https://standalone-example.datarobot.com/predApi/v1.0/importedModels/
{
    "import_id": "21f31ffe9f20207f054d04a62d8ad15f8e69431c",
    "message": "OK",
    "predict_url": "https://standalone-example.datarobot.com/predApi/v1.0/21f31ffe9f20207f054d04a62d8ad15f8e69431c/predict"
}

For multiple-instance standalone prediction environments configured to use the local file system for storing the *.drx files, you must perform the API import on each Standalone Prediction Server individually.
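For such multi-instance setups, the per-server imports can be scripted. The following is a sketch only: the hostnames are made-up examples, and the echo prefix keeps it runnable without live servers (remove it to actually execute the imports).

```shell
# Sketch: repeat the API import on every Standalone Prediction Server when
# each node stores *.drx files on its local file system. Hostnames are
# hypothetical; echo prints each command instead of executing it.
SERVERS="sps-01.example.com sps-02.example.com"
for host in $SERVERS; do
  echo curl -F file=@/path/to/model.drx "https://${host}/predApi/v1.0/importedModels/"
done
```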

Note

See the section on deploying models for information on how to use the import_id found in the response JSON.
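Since the import_id is needed later, it is worth capturing when the import runs. The following sketch extracts it from the response JSON with sed; RESPONSE here is the sample payload from above, whereas in practice you would capture curl's output (for example, RESPONSE=$(curl -F file=@/path/to/model.drx https://standalone-example.datarobot.com/predApi/v1.0/importedModels/)):

```shell
# Sketch: extract import_id from the import response with sed.
# RESPONSE is the sample payload shown above.
RESPONSE='{"import_id": "21f31ffe9f20207f054d04a62d8ad15f8e69431c", "message": "OK"}'
IMPORT_ID=$(printf '%s' "$RESPONSE" | sed -n 's/.*"import_id": *"\([^"]*\)".*/\1/p')
echo "$IMPORT_ID"    # → 21f31ffe9f20207f054d04a62d8ad15f8e69431c
```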

Deleting imported models using the API

To delete a previously imported model:

$ curl -X DELETE https://standalone-example.datarobot.com/predApi/v1.0/importedModels/<importId>
{
    "message": "OK"
}

Other actions on imported models using the API

DataRobot clusters with just Standalone Prediction Servers are lightweight and isolated from the rest of the DataRobot system. While this increases robustness and availability, it also has limitations. The following actions are not available using just the Standalone Prediction Server's API:

  • Listing imported models
  • Editing model information
  • Searching imported models

If you have lost the import_id of a previously imported *.drx model file, you can re-import the model to retrieve it. Because the contents of the two files are identical, the re-import overwrites the previously stored *.drx file rather than consuming additional storage.

Deploying models

Use the code in the snippet to make predictions with the imported model on your Standalone Prediction Server. Note that each deployed model has a unique key, the IMPORT_ID. This ID, visible in the usage example, is required when making predictions with the model.

Once you have the code snippet available in the usage example, you can make a POST request to the Standalone Prediction Server. To do that, send the endpoint a CSV file or JSON-encoded data with the same columns as those in the dataset used for building the model.

The JSON response you receive has the same number of records as the input data, but just a single column for each record. That single column contains the prediction result for the corresponding row of input data.
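A prediction request along these lines can be sketched as follows. The IMPORT_ID and hostname come from the import example above, scoring.csv is a hypothetical input file with the same columns as the model's training dataset, and the echo prefix keeps the sketch runnable without a live server (drop it to actually score data):

```shell
# Sketch of a prediction POST. IMPORT_ID and the hostname are taken from
# the import example above; scoring.csv is a hypothetical input file.
IMPORT_ID="21f31ffe9f20207f054d04a62d8ad15f8e69431c"
PRED_URL="https://standalone-example.datarobot.com/predApi/v1.0/${IMPORT_ID}/predict"

# echo prints the command instead of executing it:
echo curl -X POST -H "Content-Type: text/csv" --data-binary @scoring.csv "$PRED_URL"
```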


Updated October 27, 2021