

Prepare custom models for deployment

Custom inference models allow you to bring your own pre-trained models to DataRobot. By uploading a model artifact to the Custom Model Workshop, you can create, test, and deploy custom inference models to a centralized deployment hub. DataRobot supports models built with a variety of coding languages, including Python, R, and Java. To upload a model created outside of DataRobot, you need to define two components:

  • Model content: The compiled artifact, source code, and additional supporting files related to the model (a minimal sketch follows this list).

  • Model environment: The Docker image where the model runs. Model environments can be either drop-in or custom, and contain a Dockerfile and any necessary supporting files. DataRobot provides a variety of built-in environments; custom environments are only required to accommodate highly specialized models and use cases.
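
For illustration, a minimal Python model content folder might include a serialized model artifact (for example, model.pkl), an optional requirements file, and a custom.py that defines DRUM-style hooks. The sketch below is an assumption-based example rather than a required layout: it assumes a pickled scikit-learn regression model and uses the load_model and score hooks.

```python
# custom.py -- minimal sketch of DRUM-style inference hooks (illustrative only).
# Assumes a pickled regression model saved as model.pkl in the same folder.
import os
import pickle

import pandas as pd


def load_model(code_dir):
    """Load the serialized model artifact from the custom model's code directory."""
    with open(os.path.join(code_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)


def score(data, model, **kwargs):
    """Score incoming rows.

    DataRobot sends raw data, so any preprocessing must happen here.
    For a regression model, return a DataFrame with a 'Predictions' column.
    """
    predictions = model.predict(data)
    return pd.DataFrame({"Predictions": predictions})
```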

Note

Custom inference models are not custom DataRobot models. They are user-defined models created outside of DataRobot and assembled in the Custom Model Workshop for deployment, monitoring, and governance.

See the associated feature considerations for additional information.

Custom Model Workshop

| Topic | Describes |
|-------|-----------|
| Custom Model Workshop | How you can bring your own pre-trained models into DataRobot as custom inference models and deploy these models to a centralized deployment hub. |
| Create custom models | How to create custom inference models in the Custom Model Workshop. |
| Manage custom model dependencies | How to manage model dependencies from the workshop and update the base drop-in environments to support your model code. |
| Manage custom model resource usage | How to configure the resources a model consumes to facilitate smooth deployment and minimize potential environment errors in production. |
| Add custom model versions | How to create a new version of the model and/or environment after updating the file contents with new package versions, different preprocessing steps, updated hyperparameters, and more. |
| Add training data to a custom model | How to add training data to a custom inference model for deployment. |
| Add files from a remote repo to a custom model | How to connect to a remote repository and pull custom model files into the Custom Model Workshop. |
| Test a custom model in DataRobot | How to test custom inference models in the Custom Model Workshop. |
| Manage custom models | How to delete or share custom models and custom model environments. |
| Register custom models | How to register custom inference models in the Model Registry. |

Custom model assembly

| Topic | Describes |
|-------|-----------|
| Custom model assembly | How to assemble the files required to run custom inference models. |
| Custom model components | How to identify the components required to run custom inference models. |
| Assemble structured custom models | How to use DRUM to assemble and validate structured custom models compatible with DataRobot. |
| Assemble unstructured custom models | How to use DRUM to assemble and validate unstructured custom models compatible with DataRobot. |
| DRUM CLI tool | How to download and install the DataRobot user model (DRUM) tool to work with Python, R, and Java custom models and to quickly test custom models and custom environments locally before uploading them to DataRobot. |
| Test a custom model locally | How to test custom inference models in your local environment using the DataRobot Model Runner (DRUM) tool. |

Custom model environments

| Topic | Describes |
|-------|-----------|
| Custom model environments | How to select a custom model environment from the drop-in environments or create additional custom environments. |
| Drop-in environments | How to select the appropriate DataRobot drop-in environment when creating a custom model. |
| Custom environments | How to assemble, validate, and upload a custom environment. |

Feature considerations

  • The creation of deployments using model images cannot be canceled while in progress.
  • Inference models receive raw CSV data and must handle all preprocessing themselves.
  • A model's existing training data can only be changed if the model is not actively deployed. This restriction is not in place when adding training data for the first time. Also, training data cannot be unassigned; it can only be changed once assigned.
  • The target name can only be changed if a model has no training data and has not been deployed.
  • Each user is limited to 30 custom model deployments, 30 custom environments, and 30 custom environment versions.
  • Custom inference model server start-up is limited to 3 minutes.
  • The file size for training data is limited to 1.5GB.
  • Dependency management only works with packages published to a package index; packages from URLs cannot be installed.
  • Unpinned Python dependencies are not updated once the dependency image has been built. To update to a newer version, create a new requirements file with version constraints (see the example after this list). DataRobot recommends always pinning versions.
  • SaaS AI Platform only: Custom inference models have no access to the internet or outside networks.
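
For reference, a requirements file with pinned versions might look like the following; the package names and versions are illustrative only.

```
# requirements.txt -- example with pinned versions (illustrative)
numpy==1.24.4
pandas==1.5.3
scikit-learn==1.2.2
```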
