
Prepare custom models for deployment

Custom inference models allow you to bring your own pretrained models to DataRobot. By uploading a model artifact, you can create, test, and deploy custom inference models to a centralized deployment hub. DataRobot supports models built with a variety of coding languages, including Python, R, and Java.
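A custom inference model is assembled around a hook file (conventionally `custom.py`) that DataRobot calls to load the artifact and produce predictions. The sketch below is illustrative only: it assumes a regression model, a pickled scikit-learn-style estimator saved as `model.pkl` in the code directory, and pandas available in the model environment.

```python
# custom.py -- minimal sketch of a custom-model hook file.
# Assumptions (not from this page): regression target, a pickled
# estimator named model.pkl, pandas in the environment.
import os
import pickle

import pandas as pd


def load_model(code_dir):
    """Called once at start-up to deserialize the model artifact."""
    with open(os.path.join(code_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)


def score(data: pd.DataFrame, model, **kwargs) -> pd.DataFrame:
    """Receives raw input data; all preprocessing happens here.

    Custom inference models get unprocessed CSV data, so impute
    missing values (and apply any other preprocessing) before
    calling the model.
    """
    data = data.fillna(data.median(numeric_only=True))
    return pd.DataFrame({"Predictions": model.predict(data)})
```

The hook names and output column shown here follow the convention used by the DataRobot Model Runner; check the "Prepare a custom model" topic below for the exact structure your target type requires.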

See the associated feature considerations below for additional information.

| Topic | Describes |
| --- | --- |
| The Custom Model Workshop | How you can bring your own pretrained models into DataRobot as custom inference models and deploy them to a centralized deployment hub. |
| Create a custom model | How to create and assemble custom inference models in the Custom Model Workshop. |
| Integrate a remote repo containing a custom model | How to connect to a remote repository and pull custom model files into the Custom Model Workshop. |
| Prepare a custom model | How to prepare and structure the files required to run custom inference models. |
| Select or create a custom model environment | How to select one of the drop-in environments or create additional custom environments. |
| Test a custom model locally | How to test custom inference models in your local environment using the DataRobot Model Runner (DRUM) tool. |
| Test a custom model in DataRobot | How to test custom inference models in the Custom Model Workshop. |
| Manage custom models | How to delete or share custom models and custom model environments. |
| Register custom models | How to register custom inference models in the Model Registry. |
| Manage custom model packages | How to deploy, share, or archive custom models from the Model Registry. |

Feature considerations

  • The creation of deployments using model images cannot be canceled while in progress.
  • Inference models receive raw CSV data and must handle all preprocessing themselves.

  • Custom inference models have no access to the internet and outside networks.

  • A model's existing training data can only be changed if the model is not actively deployed; this restriction does not apply when assigning training data for the first time. Once assigned, training data cannot be unassigned, only replaced.

  • The target name can only be changed if a model has no training data and has not been deployed.
  • There is a per-user limit on the number of custom model deployments (30), custom environments (30), and custom environment versions (30) you can have.
  • Custom inference model server start-up is limited to 3 minutes.
  • The file size for training data is limited to 10GB.
  • Dependency management only works with packages hosted in a package index; packages cannot be installed directly from URLs.
  • Unpinned Python dependencies are not updated once the dependency image has been built. To update to a newer version, create a new requirements file with version constraints. We recommend always pinning versions.
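Given the pinning recommendation above, a requirements file with exact version constraints might look like the following (the package names and versions are illustrative, not prescribed by DataRobot):

```
# requirements.txt -- pin exact versions so the dependency image is
# reproducible; unpinned packages are never updated after the build.
numpy==1.23.4
pandas==1.5.1
scikit-learn==1.1.3
```

Using `==` for every entry ensures that rebuilding the dependency image later produces the same environment the model was tested against.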

Updated October 26, 2022