Assemble structured custom models

DataRobot provides built-in support for a variety of libraries to create models that use conventional target types. If your model is based on one of these libraries, DataRobot expects your model artifact to have a matching file extension:

Python
Library                         File extension   Example
Scikit-learn                    *.pkl            sklearn-regressor.pkl
XGBoost                         *.pkl            xgboost-regressor.pkl
PyTorch                         *.pth            torch-regressor.pth
tf.keras (tensorflow>=2.2.1)    *.h5             keras-regressor.h5
ONNX                            *.onnx           onnx-regressor.onnx
pmml                            *.pmml           pmml-regressor.pmml

R
Library                         File extension   Example
Caret                           *.rds            brnn-regressor.rds

Java
Library                         File extension   Example
datarobot-prediction            *.jar            dr-regressor.jar
h2o-genmodel                    *.java           GBM_model_python_1589382591366_1.java (POJO)
h2o-genmodel                    *.zip            GBM_model_python_1589382591366_1.zip (MOJO)
h2o-genmodel-ext-xgboost        *.java           XGBoost_2_AutoML_20201015_144158.java
h2o-genmodel-ext-xgboost        *.zip            XGBoost_2_AutoML_20201015_144158.zip
h2o-ext-mojo-pipeline           *.mojo           ...

Note

  • DRUM supports models with DataRobot-generated Scoring Code and models that implement either the IClassificationPredictor or IRegressionPredictor interface from the DataRobot-prediction library. The model artifact must have a .jar extension.

  • You can define the DRUM_JAVA_XMX environment variable to set JVM maximum heap memory size (-Xmx java parameter): DRUM_JAVA_XMX=512m.

  • If you export an H2O model as a POJO, you cannot rename the file. This limitation doesn't apply to models exported as a MOJO, which can be named freely.

  • The h2o-ext-mojo-pipeline artifact type requires an H2O Driverless AI license.

  • Support for DAI Mojo Pipeline has not been incorporated into tests for the build of datarobot-drum.

If your model doesn't use one of the libraries listed above, you must create an unstructured custom model.

Compare the characteristics and capabilities of the two types of custom models below:

Structured

  Characteristics:
  • Uses a target type known to DataRobot (e.g., regression, binary classification, multiclass, and anomaly detection).
  • Required to conform to a request/response schema.
  • Accepts structured input and output data.

  Capabilities:
  • Full deployment capabilities.
  • Accepts training data after deployment.

Unstructured

  Characteristics:
  • Uses a custom target type, unknown to DataRobot.
  • Not required to conform to a request/response schema.
  • Accepts unstructured input and output data.

  Capabilities:
  • Limited deployment capabilities. Doesn't support data drift and accuracy statistics, challenger models, or humility rules.
  • Doesn't accept training data after deployment.

Structured custom model requirements

If your custom model uses one of the supported libraries, make sure it meets the following requirements:

  • Data sent to a model must be usable for predictions without additional pre-processing.
  • Regression models must return a single floating point per row of prediction data.
  • Binary classification models must return, per row of prediction data, either a single floating point value between 0.0 and 1.0 or two floating point values that sum to 1.0.
    • Single-value output is assumed to be the positive class probability.
    • For two-value output, the first value is assumed to be the negative class probability and the second the positive class probability.
  • There must be a single pkl/pth/h5 file present.

Data format

When working with structured models, DataRobot supports data as files in CSV, sparse, or Apache Arrow format. DataRobot doesn't sanitize missing or abnormal column names (those containing parentheses, slashes, symbols, etc.).

Structured custom model hooks

To define a custom model using DataRobot’s framework, your artifact file should contain hooks (or functions) to define how a model is trained and how it scores new data. DataRobot automatically calls each hook and passes the parameters based on the project and blueprint configuration. However, you have full flexibility to define the logic that runs inside each hook. If necessary, you can include these hooks alongside your model artifacts in your model folder in a file called custom.py for Python models or custom.R for R models.

Include all required custom model code in hooks

Custom model hooks are callbacks passed to the custom model. All code required by the custom model must be in a custom model hook—the custom model can't access any code provided outside a defined custom model hook. In addition, you can't modify the input arguments of these hooks as they are predefined.

Note

Training and inference hooks can be defined in the same file.

The following sections describe each hook, with examples.
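
For orientation, the following is a minimal custom.py sketch that combines several of these hooks in one file; the artifact name and the preprocessing step are illustrative assumptions, not required conventions.

import os

import joblib
import pandas as pd


def load_model(code_dir):
    # Load the trained estimator from the model folder; "model.pkl" is illustrative.
    return joblib.load(os.path.join(code_dir, "model.pkl"))


def transform(data, model):
    # Illustrative preprocessing: replace missing values before scoring.
    return data.fillna(0)


def score(data, model, **kwargs):
    # Regression output: a single numeric column named "Predictions".
    return pd.DataFrame({"Predictions": model.predict(data)})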

Type annotations in hook signatures

The following hook signatures are written with Python 3 type annotations. The Python types match the following R types:

Python type R type Description
DataFrame data.frame A pandas DataFrame or R data.frame.
None NULL Nothing
str character String
Any An R object The deserialized model.
*args, **kwargs ... These are keyword arguments, not types; they serve as placeholders for additional parameters.

init()

The init hook is executed only once at the beginning of the run to allow the model to load libraries and additional files for use in other hooks.

init(**kwargs) -> None

init() input

Input parameter Description
**kwargs Additional keyword arguments. code_dir is the path where the model code is stored.

init() example

Python:

def init(code_dir):
    global g_code_dir
    g_code_dir = code_dir

R:

init <- function(...) {
    library(brnn)
    library(glmnet)
}

init() output

The init() hook does not return anything.


load_model()

The load_model() hook is executed only once at the beginning of the run to load one or more trained objects from multiple artifacts. It is only required when a trained object is stored in an artifact that uses an unsupported format or when multiple artifacts are used. The load_model() hook is not required when there is a single artifact in one of the supported formats:

  • Python: .pkl, .pth, .h5, .joblib
  • Java: .mojo
  • R: .rds
load_model(code_dir: str) -> Any

load_model() input

Input parameter Description
code_dir The path to the directory where the model code and artifacts are stored.

load_model() example

Python:

import os

import joblib

def load_model(code_dir):
    model_path = "model.pkl"
    model = joblib.load(os.path.join(code_dir, model_path))
    return model

R:

load_model <- function(input_dir) {
    readRDS(file.path(input_dir, "model_name.rds"))
}

load_model() output

The load_model() hook returns a trained object (of any type).
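
For example, here is a minimal sketch of a load_model() hook that loads more than one artifact; the file names are illustrative assumptions.

import os

import joblib


def load_model(code_dir):
    # Return both objects together so downstream hooks can use them.
    preprocessor = joblib.load(os.path.join(code_dir, "preprocessor.pkl"))
    estimator = joblib.load(os.path.join(code_dir, "estimator.pkl"))
    return {"preprocessor": preprocessor, "estimator": estimator}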


read_input_data()

The read_input_data hook customizes how the model reads data; for example, with encoding and missing value handling.

read_input_data(input_binary_data: bytes) -> Any

read_input_data() input

Input parameter Description
input_binary_data Data passed through the --input parameter in drum score mode, or a payload submitted to the drum server /predict endpoint.

read_input_data() example

Python:

import io

import pandas as pd

def read_input_data(input_binary_data):
    return pd.read_csv(io.BytesIO(input_binary_data))

R:

# stri_conv() comes from the stringi package; load it in init().
read_input_data <- function(input_binary_data) {
    input_text_data <- stri_conv(input_binary_data, "utf8")
    read.csv(text=gsub("\r", "", input_text_data, fixed=TRUE))
}

read_input_data() output

The read_input_data() hook must return a pandas DataFrame or R data.frame; otherwise, you must write your own score method.
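
For example, here is a sketch of a read_input_data() hook that sets an explicit encoding and custom missing-value markers; the encoding and NA tokens are illustrative assumptions.

import io

import pandas as pd


def read_input_data(input_binary_data):
    return pd.read_csv(
        io.BytesIO(input_binary_data),
        encoding="utf-8",
        na_values=["", "NA", "n/a", "?"],  # treat these tokens as missing
    )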


transform()

The transform() hook defines the output of a custom transform and returns transformed data. This hook can be used in both transformer and estimator tasks:

  • For transformers, this hook applies transformations to the data provided and passes it to downstream tasks.

  • For estimators, this hook applies transformations to the prediction data before making predictions.

transform(data: DataFrame, model: Any) -> DataFrame

transform() input

Input parameter Description
data A pandas DataFrame (Python) or R data.frame containing the data that the custom model should transform. Missing values are indicated with NaN in Python and NA in R, unless otherwise overridden by the read_input_data hook.
model A trained object DataRobot loads from the artifact (typically, a trained transformer) or loaded through the load_model hook.

transform() example

Python:

def transform(data, model):
    data = data.fillna(0)
    return data

R:

transform <- function(data, model) {
    data[is.na(data)] <- 0
    data
}

transform() output

The transform() hook returns a pandas DataFrame or R data.frame with transformed data.


score()

The score() hook defines the output of a custom estimator and returns predictions on input data. Do not use this hook for transform models.

score(data: DataFrame, model: Any, **kwargs: Dict[str, Any]) -> DataFrame

score() input

Input parameter Description
data A pandas DataFrame (Python) or R data.frame containing the data the custom model will score. If the transform hook is used, data will be the transformed data.
model A trained object loaded from the artifact by DataRobot or loaded through the load_model hook.
**kwargs Additional keyword arguments. For a binary classification model, it contains the positive and negative class labels as the following keys:
  • positive_class_label
  • negative_class_label

score() examples

Python:

from typing import Any, Dict

import pandas as pd

def score(data: pd.DataFrame, model: Any, **kwargs: Dict[str, Any]) -> pd.DataFrame:
    predictions = model.predict(data)
    predictions_df = pd.DataFrame(predictions, columns=[kwargs["positive_class_label"]])
    predictions_df[kwargs["negative_class_label"]] = (
        1 - predictions_df[kwargs["positive_class_label"]]
    )
    return predictions_df

R:

score <- function(data, model, ...){
    scores <- predict(model, newdata = data, type = "prob")
    names(scores) <- c('0', '1')
    return(scores)
}

score() output

The score() hook should return a pandas DataFrame (or R data.frame or tibble) of the following format:

  • For regression or anomaly detection projects, the output must have a numeric column named Predictions.

  • For binary or multiclass projects, the output must have one column per class label, with class names used as column names. Each cell must contain the floating-point class probability of the respective class, and the values in each row must sum to 1.0.

Additional output columns

Availability information

Additional output in prediction responses for custom models is off by default. Contact your DataRobot representative or administrator for information on enabling this feature.

Feature flag: Enable Additional Custom Model Output in Prediction Responses

The score() hook can return any number of extra columns, containing data of types string, int, float, bool, or datetime. When additional columns are returned through the score() method, the prediction response is as follows:

  • For a tabular response (CSV), the additional columns are returned as part of the response table or dataframe.
  • For a JSON response, the extraModelOutput key is returned alongside each row. This key is a dictionary containing the values of each additional column in the row (see the sketch below).
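
For illustration, here is a minimal client-side sketch that reads the extra columns from the JSON response of a locally running DRUM server. The server address, form field name, and response layout are assumptions (based on starting the model with drum server --address localhost:6789) and may differ in your environment.

# Hypothetical sketch: submit a CSV to a local drum server and print the
# extraModelOutput values returned alongside each prediction row.
import requests

with open("scoring_data.csv", "rb") as f:
    response = requests.post("http://localhost:6789/predict/", files={"X": f})

# The "predictions" key and per-row layout are assumptions about the JSON response.
for row in response.json().get("predictions", []):
    print(row.get("extraModelOutput", {}))
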
Examples: Return extra columns

The following score hooks for various target types return extra columns (containing random data for illustrative purposes) alongside the prediction data:

Regression

from random import randint, random

import pandas as pd

def score(data, model, **kwargs):
    predictions = []
    int_extra_data = []
    str_extra_data = []

    for index in range(data.shape[0]):
        # Random values stand in for real predictions in this illustration.
        predictions.append(round(random(), 2))

        int_extra_data.append(randint(0, 100))
        str_extra_data.append(f'str-{randint(0, 100)}')

    return pd.DataFrame(
        {
            "Predictions": predictions,
            "Extra Integer": int_extra_data,
            "Extra String": str_extra_data,
        }
    )

Binary classification

from random import randint, random
from typing import Any, Dict

import pandas as pd

def score(data: pd.DataFrame, model: Any, **kwargs: Dict[str, Any]) -> pd.DataFrame:
    positive_label = kwargs["positive_class_label"]
    negative_label = kwargs["negative_class_label"]

    positive_probabilities = []
    negative_probabilities = []
    int_extra_data = []
    str_extra_data = []
    for index in range(data.shape[0]):
        probability = round(random(), 2)

        positive_probabilities.append(probability)
        negative_probabilities.append(1 - probability)

        int_extra_data.append(randint(0, 100))
        str_extra_data.append(f'str-{randint(0, 100)}')

    return pd.DataFrame(
        {
            positive_label: positive_probabilities,
            negative_label: negative_probabilities,
            "Extra Integer": int_extra_data,
            "Extra String": str_extra_data,
        }
    )

Generative AI

import os
from random import randint

import pandas as pd
# RuntimeParameters is provided by the datarobot-drum package.
from datarobot_drum import RuntimeParameters

def score(data: pd.DataFrame, model, **kwargs):
    prompt_column_name = RuntimeParameters.get('PROMPT_COLUMN_NAME')
    target_column_name = os.environ['TARGET_NAME']

    target_data = []
    int_extra_data = []
    str_extra_data = []
    query_rows = data[prompt_column_name].tolist()
    for index, query in enumerate(query_rows):
        target_data.append(f'Answer {index} for query "{query}"')
        int_extra_data.append(randint(0, 100))
        str_extra_data.append(f'str-{randint(0, 100)}')

    return pd.DataFrame(
        {
            target_column_name: target_data,
            "Extra Integer": int_extra_data,
            "Extra String": str_extra_data,
        }
    )

chat()

The chat() hook allows custom models to implement the Bolt-on Governance API to provide access to chat history and streaming response. When using the Bolt-on Governance API with a deployed LLM blueprint, see LLM availability for the recommended values of the model parameter. Alternatively, specify a reserved value, model="datarobot-deployed-llm", to let the LLM blueprint select the relevant model ID automatically when calling the LLM provider's services.

chat(completion_create_params: CompletionCreateParams, model: Any) -> ChatCompletion | Iterator[ChatCompletionChunk]

In Workbench, when adding a deployed LLM that implements the chat function, the playground uses the Bolt-on Governance API as the preferred communication method. Enter the Chat model ID associated with the LLM blueprint to set the model parameter for requests from the playground to the deployed LLM. Alternatively, enter datarobot-deployed-llm to let the LLM blueprint select the relevant model ID automatically when calling the LLM provider's services.

chat() input

Input parameter Description
completion_create_params An object containing all the parameters required to create the chat completion. For more information, review the following types from the OpenAI Python API library: CompletionCreateParams, ChatCompletion, and ChatCompletionChunk.
model The deserialized model loaded by DRUM or by load_model, if supplied.

chat() example

def chat(completion_create_params, model):
    # In this example, load_model() returned an OpenAI client for the backing LLM;
    # forward the chat completion request to it.
    openai_client = model
    return openai_client.chat.completions.create(**completion_create_params)

chat() output

The chat() hook returns a ChatCompletion object if streaming is disabled and Iterator[ChatCompletionChunk] if streaming is enabled. If there are prompt guards configured, the first chunk of the stream contains the prompt guard moderations information (accessible via datarobot_moderations on the chunk). For every response guard configured that can be applied to a chunk (all guards except faithfulness, NeMo, Rouge-1, Agent goal accuracy, and task adherence), each intermediate chunk (except the last chunk) has moderations information for those guards accessible via datarobot_moderations. For the last chunk, all response guards information is accessible via datarobot_moderations.
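
For example, here is a minimal sketch of a client consuming the streamed response of a deployed model that implements chat(), reading the moderation information from each chunk. The deployment URL, API key, and the use of _strict_response_validation=False (so the OpenAI client keeps the extra DataRobot fields) are illustrative assumptions.

from openai import OpenAI

client = OpenAI(
    base_url="https://<your-datarobot-instance>/api/v2/deployments/<deployment_id>/",
    api_key="<your_api_key>",
    _strict_response_validation=False,
)

stream = client.chat.completions.create(
    model="datarobot-deployed-llm",
    messages=[{"role": "user", "content": "What would it take to colonize Mars?"}],
    stream=True,
)

for chunk in stream:
    # Intermediate chunks carry per-chunk guard results; the final chunk carries
    # the complete set of response guard results.
    moderations = getattr(chunk, "datarobot_moderations", None)
    if moderations:
        print(moderations)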

Association ID

As of DRUM v1.16.16, every chat completion response automatically generates and returns an association ID (as datarobot_association_id). The same association ID gets passed to any other configured custom metrics for the deployed LLM.

A custom association ID can be optionally specified for chat requests in place of the auto-generated ID by setting datarobot_association_id in the extra_body field of the chat request. The extra_body field is a standard way to add more parameters to an OpenAI chat request, allowing the chat client to pass model-specific parameters to an LLM.

When making a chat request to a DataRobot-deployed text generation or agentic workflow custom model, values can also be reported for arbitrary custom metrics defined for the deployment by setting datarobot_metrics in the extra_body field. If the field datarobot_association_id is found in extra_body, DataRobot uses that value instead of the automatically generated one. If the field datarobot_metrics is found in extra_body, DataRobot reports a custom metric for all the name:value pairs found inside. A matching custom metric for each name must already be defined for the deployment. If the reported value is a string, the custom metric must be the multiclass type, with the reported value matching one of the classes.

Association ID requirement

The deployed custom model must have an association ID column defined for DataRobot to process custom metrics from chat requests, regardless of whether extra_body is specified. Moderation must be configured for the custom model for the metrics to be processed.

The following example shows how to set the association ID and custom metric values using extra_body:

from openai import OpenAI

openai_client = OpenAI(
    base_url="https://<your-datarobot-instance>/api/v2/deployments/{deployment_id}/",
    api_key="<your_api_key>",
)

extra_body = {
    # These values pass through to the LLM
    "llm_id": "azure-gpt-6",
    # If set here, replaces the auto-generated association ID
    "datarobot_association_id": "my_association_id_0001",
    # DataRobot captures these for custom metrics
    "datarobot_metrics": {
        "field1": 24,
        "field2": "example"
    }
}

completion = openai_client.chat.completions.create(
    model="datarobot-deployed-llm",
    messages=[
        {"role": "system", "content": "Explain your thoughts using at least 100 words."},
        {"role": "user", "content": "What would it take to colonize Mars?"},
    ],
    max_tokens=512,
    extra_body=extra_body
)

print(completion.choices[0].message.content)

Moderations

Moderation guardrails help your organization block prompt injection as well as hateful, toxic, or inappropriate prompts and responses. The moderation library also supports streaming responses. For the chat() hook to return datarobot_moderations, the deployed LLM must be running in an execution environment that has the moderation library installed, and the custom model code directory must contain a moderation_config.yaml file configuring the moderations.

The example below shows what is present in ChatCompletion if streaming = False and in ChatCompletionChunk if streaming = True and moderation is enabled.

datarobot_moderations={
    'Prompt tokens_latency': 0.20357584953308105,
    'Prompts_token_count': 8,
    'ROUGE-1_latency': 0.028343677520751953,
    'Response tokens_latency': 0.0007507801055908203,
    'Responses_rouge_1': 1.0,
    'Responses_token_count': 1,
    'action_promptText': '',
    'action_resultText': '',
    'association_id': '3d7d525b-9e99-42a4-a641-70254e924a76',
    'blocked_promptText': False,
    'blocked_resultText': False,
    'datarobot_confidence_score': 1.0,
    'datarobot_latency': 4.249604940414429,
    'datarobot_token_count': 1,
    'moderated_promptText': 'Now divide the result by 2.',
    'replaced_promptText': False,
    'replaced_resultText': False,
    'reported_promptText': False,
    'reported_resultText': False,
    'unmoderated_resultText': '10'
}

Citations

For the chat() hook to return citations, the deployed LLM must have a vector database associated with it. The chat() hook returns citation-related keys that are accessible to custom models.

For example:

citations=[
    {
        'content': 'ISS science results have Earth-based \napplications, including understanding our \nclimate, contributing to the treatment of \ndisease, improving existing materials, and \ninspiring the future generation of scientists, \nclinicians, technologists, engineers, \nmathematicians, artists, and explorers.\nBENEFITS\nFOR HUMANITY\nDISCOVERY\nEXPLORATION',
        'link': 'Space_Station_Annual_Highlights/iss_2020_highlights.pdf:10',
        'metadata':
        {
            'chunk_id': '953',
            'content': 'ISS science results have Earth-based \napplications, including understanding our \nclimate, contributing to the treatment of \ndisease, improving existing materials, and \ninspiring the future generation of scientists, \nclinicians, technologists, engineers, \nmathematicians, artists, and explorers.\nBENEFITS\nFOR HUMANITY\nDISCOVERY\nEXPLORATION',
            'page': 10,
            'similarity_score': 0.46,
            'source': 'Space_Station_Annual_Highlights/iss_2020_highlights.pdf'
        },
        'vector': None
    },
]

get_supported_llm_models()

DataRobot custom models support the OpenAI "List Models" API. To customize your model's response to this API, implement the get_supported_llm_models() hook in custom.py.

def get_supported_llm_models(model: Any):

get_supported_llm_models() input

Input parameter Description
model Optional. A model ID to compare against.

get_supported_llm_models() example

from typing import Any

from openai.types.model import Model

def get_supported_llm_models(model: Any):
    _ = model
    return [
        Model(
            id="datarobot_llm_id",
            created=1744854432,
            object="model",
            owned_by="tester@datarobot.com",
        )
    ]

You can retrieve the supported models for a custom model using the OpenAI client or the DataRobot REST API:

Python:

from openai import OpenAI

API_KEY = '<datarobot API token>'
CHAT_API_URL = 'https://app.datarobot.com/api/v2/deployments/<id>/'

def list_models():
    openai_client = OpenAI(
        base_url=CHAT_API_URL,
        api_key=API_KEY,
        _strict_response_validation=False
    )
    response = openai_client.models.list()
    print("listing models...")
    print(response.to_dict())

cURL:

$ curl "https://app.datarobot.com/api/v2/deployments/<id>/models" \
-H "Authorization: Bearer <datarobot API token>"

{"data":[{"created":1744854432,"id":"datarobot_llm_id","object":"model","owned_by":"tester@datarobot.com"}],"object":"list"}

In the DRUM repository, you can view a simple text generation model that supports the OpenAI API /chat and /models endpoints through the chat() and get_supported_llm_models() hooks.

get_supported_llm_models() output

If your custom.py does not implement get_supported_llm_models(), the custom model returns a one-item list based on the LLM_ID runtime parameter, if it exists. Custom models exported from Playground blueprints have this parameter already set to the LLM you selected for the blueprint. If neither get_supported_llm_models() nor the LLM_ID runtime parameter is defined, the /models API returns an empty list. Support for /models is available in DRUM 1.16.12 and later.


post_process()

The post_process hook formats the prediction data returned by DataRobot or the score hook when it doesn't match the output format expectations.

post_process(predictions: DataFrame, model: Any) -> DataFrame

post_process() input

Input parameter Description
predictions A pandas DataFrame (Python) or R data.frame containing the scored data produced by DataRobot or the score hook.
model A trained object loaded from the artifact by DataRobot or loaded through the load_model hook.

post_process() example

Python:

def post_process(predictions, model):
    return predictions + 1

R:

post_process <- function(predictions, model) {
    names(predictions) <- c('0', '1')
    predictions
}

post_process() output

The post_process hook returns a pandas DataFrame (or R data.frame or tibble) of the following format:

  • For regression or anomaly detection projects, the output must have a single numeric column named Predictions.

  • For binary or multiclass projects, the output must have one column per class, with class names used as column names. Each cell must contain the probability of the respective class, and each row must sum up to 1.0.