Deploy a custom model in a DataRobot Environment¶
Custom inference models allow you to bring your pre-trained models into DataRobot. To deploy a custom model to a DataRobot prediction environment, you can create a custom model in the Custom Model Workshop. Then, you can prepare, test, and register that model, and deploy it to a centralized deployment hub where you can monitor, manage, and govern it alongside your deployed DataRobot models. DataRobot supports custom models built in various programming languages, including Python, R, and Java.
To create and deploy a custom model in DataRobot, follow the workflow outlined below:
```mermaid
graph TB
  A[Create a custom model] --> B{Use a custom model environment?}
  B --> |Yes| C[Create a custom model environment]
  B --> |No| D[Prepare the custom model]
  C --> D
  D --> E{Test locally?}
  E --> |No| H[Test the custom model in DataRobot]
  E --> |Yes| F[Install the DataRobot Model Runner]
  F --> G[Test the custom model locally]
  G --> H
  H --> I[Register the custom model]
  I --> J[Deploy the custom model]
```
Create a custom model¶
Custom inference models are user-created, pre-trained models (made up of a collection of files) uploaded to DataRobot via the Custom Model Workshop.
You can assemble custom inference models in either of the following ways:
- Create a custom model without providing the model requirements and `start_server.sh` file on the Assemble tab. This type of custom model must use a drop-in environment. Drop-in environments contain the requirements and `start_server.sh` file used by the model; they are provided by DataRobot in the Custom Model Workshop. You can also create your own drop-in custom environment.

- Create a custom model with the model requirements and `start_server.sh` file on the Assemble tab. This type of custom model can be paired with a custom or drop-in environment.
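As a sketch of what a Python custom model's files might contain, the model folder can include a `custom.py` defining hooks that DataRobot calls to load the artifact and score data. The hook names below (`load_model`, `score`) follow the custom-model interface; the artifact filename (`model.pkl`), the dummy feature data, and the regression output column are illustrative assumptions:

```python
# custom.py -- illustrative sketch of DataRobot custom model hooks.
# The artifact name ("model.pkl") and the regression output format
# shown here are assumptions for this example.
import os
import pickle

import pandas as pd


def load_model(code_dir):
    """Called once at startup to load the pre-trained artifact."""
    with open(os.path.join(code_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)


def score(data, model, **kwargs):
    """Called per prediction request with a DataFrame of rows.

    For a regression model, the returned DataFrame holds a single
    "Predictions" column, one value per input row.
    """
    preds = model.predict(data)
    return pd.DataFrame({"Predictions": preds})
```

Because the hooks are plain functions, they can be exercised directly with a stand-in model object before any packaging or upload.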
(Optional) Create a custom model environment¶
If you decide to use a custom environment or a custom drop-in environment, you must create that environment in the Custom Model Workshop. You can reuse these environments for other custom models.
You can assemble custom model environments in either of the following ways:
- Create a custom drop-in environment with the model requirements and `start_server.sh` file for the model. DataRobot provides several default drop-in environments in the Custom Model Workshop.

- Create a custom environment without the model requirements and `start_server.sh` file. Instead, you must provide the requirements and a `start_server.sh` file in the model folder for the custom model you intend to use with this environment.
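As a rough sketch, a drop-in environment's `start_server.sh` often does little more than launch the DataRobot Model Runner's prediction server; the code directory and port below are illustrative assumptions, and the exact entry point depends on the environment template you start from:

```shell
#!/bin/sh
# start_server.sh -- minimal sketch for a drop-in environment.
# The code directory (/opt/code) and port (8080) are illustrative;
# check the environment template you base your image on.
exec drum server --code-dir /opt/code --address 0.0.0.0:8080
```

Alongside this script, the environment folder typically carries a Dockerfile for the base image and a `requirements.txt` listing the packages the model needs.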
Prepare the custom model¶
Before adding custom models and environments to DataRobot, you must prepare and structure the files required to run them successfully. The tools and templates necessary to prepare custom models are hosted in the Custom Model GitHub repository (log in to GitHub before following the link). Once you verify the model's files and folder structure, you can proceed to test the model.
(Optional) Test locally¶
The DataRobot Model Runner (DRUM) is a tool you can use to work locally with Python, R, and Java custom models. It can verify that a custom model can run and make predictions before you add it to DataRobot. However, this testing is only for development purposes, and DataRobot recommends that you use the Custom Model Workshop to test any model you intend to deploy.
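As a sketch of the local workflow with DRUM, assuming a regression model assembled in `./model` and sample rows in `test.csv` (both paths are illustrative):

```shell
# Install the DataRobot Model Runner (a Python package).
pip install datarobot-drum

# Batch-score the test data to verify the model loads and predicts.
drum score --code-dir ./model --input test.csv \
    --output predictions.csv --target-type regression

# Alternatively, serve the model locally and send it REST requests,
# the same way DataRobot will once the model is deployed.
drum server --code-dir ./model --address localhost:6789 \
    --target-type regression
```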
Test in DataRobot¶
Testing the custom model in the Custom Model Workshop ensures that the model is functional before deployment. These tests use the model environment to run the model and make predictions with test data.
Note
While you can deploy your custom inference model without testing, DataRobot strongly recommends that you ensure your model passes testing in the Custom Model Workshop before deployment.
Register the custom model¶
After successfully creating and testing a custom inference model in the Custom Model Workshop, you can add it to the Model Registry as a deployment-ready model package.
Deploy the custom model¶
After you register a custom inference model in the Model Registry, you can deploy it. Deployed custom models make predictions using API calls to a dedicated prediction server managed by DataRobot.
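As an illustrative sketch of these last two steps with the DataRobot Python client, the following creates a custom model, uploads its files as a version, and deploys it. The endpoint, token, IDs, target name, and labels are placeholders, method availability depends on your client version, and the code requires live DataRobot credentials to run:

```python
# Illustrative sketch: registering and deploying a custom inference
# model with the DataRobot Python client ("datarobot" package).
# All endpoint/token/ID values below are placeholders.
import datarobot as dr


def register_and_deploy(folder_path, base_environment_id, prediction_server_id):
    """Create a custom model, upload its files, and deploy it."""
    dr.Client(endpoint="https://app.datarobot.com/api/v2",
              token="YOUR_API_TOKEN")

    # Create the custom inference model entry in the Workshop.
    model = dr.CustomInferenceModel.create(
        name="My custom model",
        target_type=dr.TARGET_TYPE.REGRESSION,
        target_name="target",  # assumed target column name
    )

    # Upload the prepared model folder as a new version, paired
    # with a drop-in (or custom) environment.
    version = dr.CustomModelVersion.create_clean(
        custom_model_id=model.id,
        base_environment_id=base_environment_id,
        folder_path=folder_path,
    )

    # Deploy the version to a dedicated prediction server.
    return dr.Deployment.create_from_custom_model_version(
        custom_model_version_id=version.id,
        label="Custom model deployment",
        default_prediction_server_id=prediction_server_id,
    )
```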