

Custom model components

To create and upload a custom model, you need to define two components—the model’s content and an environment where the model’s content will run:

  • The model content is code written in Python or R. To be correctly parsed by DataRobot, the code must meet certain criteria: the model artifact's structure must match the library used by the model, and the code must use the appropriate custom hooks for Python, R, or Java models. (Optional) You can add files that are uploaded and used together with the model's code; for example, a separate file containing a dictionary if your custom model includes text preprocessing.

  • The model environment is defined by a Dockerfile and additional files that allow DataRobot to build an image in which the model runs. DataRobot provides a variety of built-in environments; you only need to build your own environment if you must install Linux packages. For more detailed information, see the section on custom model environments.

At a high level, the steps to define a custom model with these components include:

  1. Define and test model content locally (i.e., on your computer).

  2. (Optional) Create a container environment where the model will run.

  3. Upload the model content and environment (if applicable) into DataRobot.

Model content

To define a custom model, create a local folder containing the files described below.

Tip

To ensure your assembled custom model folder has the correct contents, you can find examples of these files in the DataRobot model template repository on GitHub.

File: Model artifact file or custom.py/custom.R file
Required: Yes
Description: Provide a model artifact, a custom code file, or both.
  • Model artifact: a serialized model artifact with a file extension corresponding to the chosen environment language.
  • Custom code: custom capabilities implemented with hooks (functions) that enable DataRobot to run the code and integrate it with other capabilities.

File: model-metadata.yaml
Required: Only when the custom model outputs non-numeric data; if not provided, a default schema is used.
Description: A file describing the model's metadata, including input/output data requirements. You can supply a schema that is then used to validate the model when building and training a blueprint. A schema lets you specify whether a custom model supports or outputs:
  • Certain data types
  • Missing values
  • Sparse data
  • A certain number of columns

File: requirements.txt
Required: No
Description: A list of Python or R packages to add to the base environment. This list pre-installs the Python or R packages that the custom model uses but that are not part of the base environment (see the examples below).

File: Additional files
Required: No
Description: Other files used by the model (for example, a file that defines helper functions used inside custom.py).

For a Python requirements.txt, provide a list of packages with their versions (one package per row). For example:

numpy>=1.16.0, <1.19.0
pandas==1.1.0
scikit-learn==0.23.1
lightgbm==3.0.0
gensim==3.8.3
sagemaker-scikit-learn-extension==1.1.0

For an R requirements.txt, provide a list of packages without versions (one package per row). For example:

dplyr
stats

Model code

To define a custom model using DataRobot's framework, include a model artifact corresponding to the chosen environment language, custom code in a custom.py (for Python models) or custom.R (for R models) file, or both. If you provide only custom code (without a model artifact), you must use the load_model hook. The following hooks can be used in your custom code:

Hook (Function) Unstructured/Structured Purpose
init() Both Initialize the model run by loading model libraries and reading model files. This hook is executed only once at the beginning of a run.
load_model() Both Load all supported and trained objects from multiple artifacts, or load a trained object stored in an artifact with a format not natively supported by DataRobot. This hook is executed only once at the beginning of a run.
read_input_data() Structured Customize how the model reads data; for example, with encoding and missing value handling.
transform() Structured Define the logic used by custom transformers and estimators to generate transformed data.
score() Structured Define the logic used by custom estimators to generate predictions.
score_unstructured() Unstructured Define the output of a custom estimator and return predictions on input data. Do not use this hook for transform models.
post_process() Structured Define the post processing steps applied to the model's predictions.

Note

These hooks are executed in the order listed.

For more information on defining a custom model's code, see the hooks for structured custom models or unstructured custom models.
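
For illustration, a minimal custom.py sketch for a structured Python regression model might look like the following. This is only a sketch: the artifact name model.pkl, the use of pickle, and the assumption that the artifact is an object with a predict() method are choices made for this example, not requirements; the model template repository on GitHub contains complete, tested examples.

import os
import pickle

import pandas as pd


def load_model(code_dir):
    # Executed once at the start of a run; return the deserialized model object.
    # "model.pkl" is an example artifact name used only for this sketch.
    with open(os.path.join(code_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)


def score(data, model, **kwargs):
    # "data" is a pandas DataFrame of input rows and "model" is the object
    # returned by load_model(). For a regression model, return a DataFrame
    # containing the predictions.
    return pd.DataFrame({"Predictions": model.predict(data)})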

Model metadata

To define metadata, create a model-metadata.yaml file and put it in the top level of the model directory. The file specifies additional information about a custom model.
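
For example, a minimal model-metadata.yaml for an inference model might contain only a name, the model type, and the target type. The values below are illustrative; the full set of supported fields, including the schema used to declare input/output data requirements, is described in the model metadata documentation.

name: My custom regression model
type: inference
targetType: regression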

Model environment

There are multiple options for defining the environment where a custom model runs. You can:

  • Choose from a variety of drop-in environments.

  • Modify a drop-in environment to include missing Python or R packages by specifying them in the model's requirements.txt file. If provided, the requirements.txt file must be uploaded together with the custom.py or custom.R file in the model content. If the model content contains subfolders, the requirements.txt file must be placed in the top-level folder.

  • Build a custom environment if you need to install Linux packages.

    When creating a custom model with a custom environment, the environment used must be compatible with the model contents, as it defines the model's runtime environment. To ensure you follow the compatibility guidelines:

    • Use or modify the custom environment templates that are compatible with your custom models.

    • Reference the guidelines for building your own environment. DataRobot recommends using an environment template rather than building your own environment, except for specific use cases; for example, if you don't want to use DRUM but want to implement your own prediction server. A minimal Dockerfile sketch follows this list.
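
As an illustration, a custom environment Dockerfile often starts from the Dockerfile of the drop-in environment template you are modifying and adds the Linux packages the model needs. The base image placeholder and the libgomp1 package below are assumptions for this sketch only, not recommendations.

# Replace the placeholder with the base image used by the environment template
# you are modifying.
FROM <base-image-from-environment-template>
USER root
# Install the extra Linux packages the model needs (libgomp1 is only an example).
RUN apt-get update \
    && apt-get install -y --no-install-recommends libgomp1 \
    && rm -rf /var/lib/apt/lists/*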

