
Custom model components

To create and upload a custom model, you need to define two components: the model content and the environment where that content runs.

  • The model content is code written in Python or R. To be correctly parsed by DataRobot, the code must meet certain criteria: the model artifact's structure should match the library used by the model, and the code should use the appropriate custom hooks for Python, R, and Java models. Optionally, you can add files that are uploaded and used together with the model's code (for example, a separate file with a dictionary if your custom model performs text preprocessing).

  • The model environment is defined using a Dockerfile and additional files that allow DataRobot to build an image where the model runs. DataRobot provides a variety of built-in environments; you only need to build your own environment when you need to install Linux packages. For more detailed information, see the section on custom model environments.

At a high level, the steps to define a custom model with these components include:

  1. Define and test model content locally (i.e., on your computer); see the example after this list.

  2. (Optional) Create a container environment where the model will run.

  3. Upload the model content and environment (if applicable) into DataRobot.
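
For step 1, a common way to test model content locally is with DRUM, the DataRobot user model runner. The command below is a sketch that assumes the datarobot-drum package is installed, the assembled model folder is ./my_custom_model, test_data.csv is a local scoring dataset, and the model is a regression model; adjust the paths and target type to match your model:

drum score --code-dir ./my_custom_model --input ./test_data.csv --target-type regression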

Model content

To define a custom model, create a local folder containing the files listed in the table below (detailed descriptions follow the table).

Tip

To ensure your assembled custom model folder has the correct contents, you can find examples of these files in the DataRobot model template repository on GitHub.

| File | Description | Required |
|------|-------------|----------|
| Model artifact file and/or custom.py/custom.R file | Provide a model artifact (a serialized model with a file extension corresponding to the chosen environment language), a custom code file (capabilities implemented with hooks, or functions, that enable DataRobot to run the code and integrate it with other capabilities), or both. | Yes |
| model-metadata.yaml | A file describing the model's metadata, including input/output data requirements and runtime parameters. You can supply a schema used to validate the model when building and training a blueprint; the schema specifies whether the custom model supports or outputs certain data types, missing values, sparse data, or a certain number of columns. | Only when the custom model outputs non-numeric data; if not provided, a default schema is used. |
| requirements.txt | A list of Python or R packages to add to the base environment, pre-installing packages that the custom model uses but that are not part of the base environment. | No |
| Additional files | Other files used by the model (for example, a file that defines helper functions used inside custom.py). | No |

In requirements.txt for Python models, list packages with their versions (one package per row). For example:

numpy>=1.16.0, <1.19.0
pandas==1.1.0
scikit-learn==0.23.1
lightgbm==3.0.0
gensim==3.8.3
sagemaker-scikit-learn-extension==1.1.0

For R models, list packages without versions (one package per row). For example:

dplyr
stats
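
Assembled, a custom model folder that combines the files described above might look like the following layout (model.pkl and helpers.py are illustrative names):

my_custom_model/
    custom.py             # hooks such as load_model and score
    model.pkl             # serialized model artifact
    model-metadata.yaml   # metadata, validation schema, and runtime parameters
    requirements.txt      # extra packages to add to the base environment
    helpers.py            # additional file used by custom.py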

Model code

To define a custom model using DataRobot's framework, include a model artifact corresponding to the chosen environment language, custom code in a custom.py (for Python models) or custom.R (for R models) file, or both. If you provide only custom code (without a model artifact), you must use the load_model hook. A hook is a function called by the custom model framework at a specific point in the custom model lifecycle. The following hooks can be used in your custom code:

| Hook (function) | Unstructured/Structured | Purpose |
|-----------------|-------------------------|---------|
| init() | Both | Initialize the model run by loading model libraries and reading model files. This hook is executed only once, at the beginning of a run. |
| load_model() | Both | Load all supported and trained objects from multiple artifacts, or load a trained object stored in an artifact format not natively supported by DataRobot. This hook is executed only once, at the beginning of a run. |
| read_input_data() | Structured | Customize how the model reads data; for example, encoding and missing value handling. |
| transform() | Structured | Define the logic used by custom transformers and estimators to generate transformed data. |
| score() | Structured | Define the logic used by custom estimators to generate predictions. |
| score_unstructured() | Unstructured | Define the output of a custom estimator and return predictions on input data. Do not use this hook for transform models. |
| post_process() | Structured | Define the post-processing steps applied to the model's predictions. |

Custom model hook execution order

These hooks are executed in the order listed, as each hook represents a step in the custom model lifecycle.

For more information on defining a custom model's code, see the hooks for structured custom models or unstructured custom models.
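
As an illustration, the following is a minimal custom.py sketch for a Python regression model. It assumes the artifact is a hypothetical pickled scikit-learn pipeline named model.pkl that DataRobot cannot load automatically, so load_model deserializes it explicitly; the hook signatures and the Predictions output column follow the examples in the DataRobot model template repository:

import os
import pickle

import pandas as pd


def load_model(code_dir):
    # Executed once at the beginning of a run; code_dir is the folder containing the model content.
    with open(os.path.join(code_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)


def score(data, model, **kwargs):
    # data is a pandas DataFrame of rows to score; model is the object returned by load_model.
    # For a regression model, return a DataFrame with a single Predictions column.
    return pd.DataFrame({"Predictions": model.predict(data)})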

Model metadata

To define a custom model's metadata and input validation schema, create a model-metadata.yaml file and add it to the top level of the model directory (the model content folder). The file specifies additional information about a custom model, including runtime parameters through runtimeParameterDefinitions.
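
For example, a minimal model-metadata.yaml for an inference model might look like the sketch below. The field names follow the public model templates, but EXTERNAL_API_TOKEN is a hypothetical runtime parameter and the schema entries are illustrative; confirm the exact fields against the templates for your DataRobot version:

name: my-custom-model
type: inference
targetType: regression
runtimeParameterDefinitions:
  - fieldName: EXTERNAL_API_TOKEN
    type: credential
    description: Hypothetical credential read by custom.py at runtime.
typeSchema:
  input_requirements:
    - field: data_types
      condition: IN
      value:
        - NUM
    - field: contains_missing
      condition: EQUALS
      value: FORBIDDEN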

Model environment

There are multiple options for defining the environment where a custom model runs. You can:

  • Choose from a variety of drop-in environments.

  • Modify a drop-in environment to include missing Python or R packages by specifying the packages in the model's requirements.txt file. If provided, the requirements.txt file must be uploaded together with the custom.py or custom.R file in the model content. If the model content contains subfolders, the requirements.txt file must be placed in the top-level folder.

  • Build a custom environment if you need to install Linux packages.

    When creating a custom model with a custom environment, the environment used must be compatible with the model contents, as it defines the model's runtime environment. To ensure you follow the compatibility guidelines:

    • Use or modify the custom environment templates that are compatible with your custom models.

    • Reference the guidelines for building your own environment. DataRobot recommends using an environment template rather than building your own environment, except for specific use cases; for example, when you don't want to use DRUM but want to implement your own prediction server.

Note

By default, when creating a model version, if the selected execution environment does not change, the version of that execution environment persists from the previous custom model version, even if a newer environment version is available. For more information on how to ensure the custom model version uses the latest version of the execution environment, see Trigger base execution environment update.

Trigger base execution environment update

To override the default behavior for execution environment version selection, where the execution environment version persists between custom model versions even when a newer environment version is available, temporarily change the Base Environment setting: create a new custom model version using a different Base Environment, then create another custom model version, switching back to the intended Base Environment. After this change, the latest version of the custom model uses the latest version of the execution environment.

