

Assemble structured custom models

DataRobot provides built-in support for a variety of libraries to create models that use conventional target types. If your model is based on one of these libraries, DataRobot expects your model artifact to have a matching file extension:

Python libraries

Library | File extension | Example
Scikit-learn | *.pkl | sklearn-regressor.pkl
XGBoost | *.pkl | xgboost-regressor.pkl
PyTorch | *.pth | torch-regressor.pth
tf.keras (tensorflow>=2.2.1) | *.h5 | keras-regressor.h5
ONNX | *.onnx | onnx-regressor.onnx
pmml | *.pmml | pmml-regressor.pmml

R libraries

Library | File extension | Example
Caret | *.rds | brnn-regressor.rds

Java libraries

Library | File extension | Example
datarobot-prediction | *.jar | dr-regressor.jar
h2o-genmodel | *.java (POJO)
h2o-genmodel | *.zip (MOJO)
h2o-genmodel-ext-xgboost | *.java
h2o-genmodel-ext-xgboost | *.zip
h2o-ext-mojo-pipeline | *.mojo
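
For example, a scikit-learn model can be serialized with pickle so the artifact carries the expected .pkl extension. This is a minimal sketch, assuming scikit-learn is available; the filename follows the example in the table above.

```python
# Sketch: serializing a scikit-learn regressor to a *.pkl artifact.
# The filename is illustrative; DataRobot only requires the .pkl extension.
import pickle

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

# Train a toy regressor on synthetic data.
X, y = make_regression(n_samples=100, n_features=3, random_state=0)
model = LinearRegression().fit(X, y)

# Write the trained model as the single artifact in the model folder.
with open("sklearn-regressor.pkl", "wb") as f:
    pickle.dump(model, f)

# The artifact can be reloaded and used for scoring as-is.
with open("sklearn-regressor.pkl", "rb") as f:
    reloaded = pickle.load(f)
```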


  • DRUM supports models that contain DataRobot-generated Scoring Code and models that implement either the IClassificationPredictor or IRegressionPredictor interface from the datarobot-prediction library. The model artifact must have a .jar extension.

  • You can define the DRUM_JAVA_XMX environment variable to set the JVM maximum heap memory size (the -Xmx java parameter): DRUM_JAVA_XMX=512m.

  • If you export an H2O model as a POJO, you cannot rename the file; however, this limitation doesn't apply to models exported as MOJO, which may be named in any fashion.

  • The h2o-ext-mojo-pipeline library requires an H2O Driverless AI license.

  • Support for the DAI MOJO pipeline has not been incorporated into the tests for the datarobot-drum build.

If your model doesn't use one of the supported libraries listed above, you must create an unstructured custom model.

Compare the characteristics and capabilities of the two types of custom models below:

Structured model

Characteristics:
  • Uses a target type known to DataRobot (e.g., regression, binary classification, multiclass, and anomaly detection).
  • Required to conform to a request/response schema.
  • Accepts structured input and output data.

Capabilities:
  • Full deployment capabilities.
  • Accepts training data after deployment.

Unstructured model

Characteristics:
  • Uses a custom target type, unknown to DataRobot.
  • Not required to conform to a request/response schema.
  • Accepts unstructured input and output data.

Capabilities:
  • Limited deployment capabilities. Doesn't support data drift and accuracy statistics, challenger models, or humility rules.
  • Doesn't accept training data after deployment.

Structured custom model requirements

If your custom model uses one of the supported libraries, make sure it meets the following requirements:

  • Data sent to a model must be usable for predictions without additional pre-processing.
  • Regression models must return a single floating-point value per row of prediction data.
  • Binary classification models must return one floating point value <= 1.0 or two floating point values that sum to 1.0 per row of prediction data.
    • Single-value output is assumed to be the positive class probability.
    • For multi-value, it is assumed that the first value is the negative class probability and the second is the positive class probability.
  • There must be a single pkl/pth/h5 file present.
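
The binary-classification rules above can be expressed as a small validation helper. This is a sketch for illustration only; check_binary_output is a hypothetical function, not part of DRUM or DataRobot.

```python
# Sketch: validate that a binary classifier's output meets the
# structured-model requirements described above (hypothetical helper).
import pandas as pd


def check_binary_output(df: pd.DataFrame) -> None:
    if df.shape[1] == 1:
        # Single column: values are positive-class probabilities in [0, 1].
        assert ((df.iloc[:, 0] >= 0.0) & (df.iloc[:, 0] <= 1.0)).all()
    elif df.shape[1] == 2:
        # Two columns: negative then positive class; each row sums to 1.0.
        assert ((df.sum(axis=1) - 1.0).abs() < 1e-6).all()
    else:
        raise ValueError("expected one or two probability columns")


# Both shapes satisfy the requirements.
check_binary_output(pd.DataFrame({"yes": [0.2, 0.9]}))
check_binary_output(pd.DataFrame({"no": [0.8, 0.1], "yes": [0.2, 0.9]}))
```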

Data format

When working with structured models, DataRobot supports data as files in CSV, sparse, or Arrow format. DataRobot doesn't sanitize missing or abnormal column names (those containing parentheses, slashes, symbols, etc.).
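
As an illustration, prediction data can be written as a CSV whose column names pass through unchanged, even when they contain symbols. The file and column names below are made up for the example.

```python
# Sketch: preparing CSV prediction data for a structured custom model.
# Column names are NOT sanitized, so abnormal names survive a round trip.
import pandas as pd

df = pd.DataFrame(
    {
        "feature_1": [1.0, 2.0],
        "col (with) symbols/": ["a", "b"],  # abnormal name, kept as-is
    }
)

# Write the input file, e.g., for `drum score --input predict.csv`.
df.to_csv("predict.csv", index=False)

# Reading it back shows the column names are unchanged.
roundtrip = pd.read_csv("predict.csv")
```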

Structured custom model hooks

To define a custom model using DataRobot’s framework, your artifact file should contain hooks (or functions) that define how a model is trained and how it scores new data. DataRobot calls each hook automatically and passes parameters based on the project and blueprint settings; however, you have the flexibility to define the logic that runs inside each hook. If necessary, you can include these hooks alongside your model artifacts in your model folder, in a file called custom.py for Python models or custom.R for R models.


Training and inference hooks can be defined in the same file.


Type annotations in hook signatures

The following hook signatures are written with Python 3 type annotations. The Python types match the following R types:

Python type | R type | Description
DataFrame | data.frame | A pandas DataFrame or R data.frame.
None | NULL | Nothing.
str | character | String.
Any | An R object | The deserialized model.
*args, **kwargs | ... | These are keyword arguments, not types; they serve as placeholders for additional parameters.


The init hook is executed only once at the beginning of the run to allow the model to load libraries and additional files for use in other hooks.

init(**kwargs) -> None 

init() input

Input parameter | Description
**kwargs | An additional keyword argument. code_dir provides a link, passed through the --code_dir parameter, to the folder where the model code is stored.



def init(code_dir):
    global g_code_dir
    g_code_dir = code_dir
init <- function(...) {
    # Load libraries or initialize globals for use in other hooks
}

init() output

The init() hook does not return anything.



The load_model() hook is executed only once at the beginning of the run to load one or more trained objects from multiple artifacts. It is only required when a trained object is stored in an artifact that uses an unsupported format, or when multiple artifacts are used. The load_model() hook is not required when there is a single artifact in one of the supported formats:

  • Python: .pkl, .pth, .h5, .joblib
  • Java: .mojo
  • R: .rds
load_model(code_dir: str) -> Any 

load_model() input

Input parameter | Description
code_dir | A link, passed through the --code_dir parameter, to the directory where the model artifact and additional code are provided.



import os

import joblib

def load_model(code_dir):
    model_path = "model.pkl"
    model = joblib.load(os.path.join(code_dir, model_path))
    return model
load_model <- function(input_dir) {
    readRDS(file.path(input_dir, "model_name.rds"))
}

load_model() output

The load_model() hook returns a trained object (the deserialized model).



The read_input_data hook customizes how the model reads data; for example, with encoding and missing value handling.

read_input_data(input_binary_data: bytes) -> Any 


read_input_data() input

Input parameter | Description
input_binary_data | Data passed through the --input parameter in drum score mode, or a payload submitted to the drum server /predict endpoint.


import io

import pandas as pd

def read_input_data(input_binary_data):
    global prediction_value
    prediction_value += 1
    return pd.read_csv(io.BytesIO(input_binary_data))
library(stringi)

read_input_data <- function(input_binary_data) {
    input_text_data <- stri_conv(input_binary_data, "utf8")
    read.csv(text=gsub("\r", "", input_text_data, fixed=TRUE))
}


The read_input_data() hook must return a pandas DataFrame or R data.frame; otherwise, you must write your own score method.
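
For example, a variant of read_input_data that decodes latin-1 CSV data and maps empty strings to missing values might look like the following. The encoding choice is illustrative, not a DataRobot requirement.

```python
# Sketch: a read_input_data hook with explicit encoding handling.
import io

import pandas as pd


def read_input_data(input_binary_data: bytes) -> pd.DataFrame:
    # Decode with an explicit encoding and treat empty strings as missing.
    return pd.read_csv(
        io.BytesIO(input_binary_data),
        encoding="latin-1",
        na_values=[""],
    )
```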


The transform() hook defines the output of a custom transform and returns the transformed data. This hook can be used in both transformer and estimator tasks:

  • For transformers, this hook applies transformations to the data provided and passes it to downstream tasks.

  • For estimators, this hook applies transformations to the prediction data before making predictions.

transform(data: DataFrame, model: Any) -> DataFrame 

transform() input

Input parameter | Description
data | A pandas DataFrame (Python) or R data.frame containing the data that the custom model should transform. Missing values are indicated with NaN in Python and NA in R, unless otherwise overridden by the read_input_data hook.
model | A trained object that DataRobot loads from the artifact (typically, a trained transformer), or loaded through the load_model hook.



def transform(data, model):
    data = data.fillna(0)
    return data
transform <- function(data, model) {
    data[is.na(data)] <- 0
    data
}

transform() output

The transform() hook returns a pandas DataFrame or R data.frame with transformed data.


The score() hook defines the output of a custom estimator and returns predictions on the input data. Do not use this hook for transform models.

score(data: DataFrame, model: Any, **kwargs: Dict[str, Any]) -> DataFrame 

score() input

Input parameter | Description
data | A pandas DataFrame (Python) or R data.frame containing the data that the custom model will score. If the transform hook is used, data is the transformed data.
model | A trained object loaded from the artifact by DataRobot or loaded through the load_model hook.
**kwargs | Additional keyword arguments. For a binary classification model, it contains the positive and negative class labels as the following keys:
  • positive_class_label
  • negative_class_label



def score(data: pd.DataFrame, model: Any, **kwargs: Dict[str, Any]) -> pd.DataFrame:
    predictions = model.predict(data)
    predictions_df = pd.DataFrame(predictions, columns=[kwargs["positive_class_label"]])
    predictions_df[kwargs["negative_class_label"]] = (
        1 - predictions_df[kwargs["positive_class_label"]]
    )
    return predictions_df
score <- function(data, model, ...) {
    scores <- predict(model, newdata = data, type = "prob")
    names(scores) <- c('0', '1')
    return(scores)
}

score() output

The score() hook should return a pandas DataFrame (or R data.frame or tibble) of the following format:

  • For regression or anomaly detection projects, the output must contain a single numeric column named Predictions.

  • For binary or multiclass projects, the output must have one column per class, with the class names used as column names. Each cell must contain the probability of that class, and each row must sum to 1.0.
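
For instance, raw per-class scores can be normalized into output that satisfies these rules. The class names and score values below are hypothetical.

```python
# Sketch: turning raw multiclass scores into a valid score() output,
# where every row sums to 1.0 and columns are named after the classes.
import pandas as pd

# Raw per-class scores from some model (illustrative values).
raw = pd.DataFrame(
    {
        "setosa": [2.0, 0.5],
        "versicolor": [1.0, 0.5],
        "virginica": [1.0, 1.0],
    }
)

# Normalize each row by its sum so probabilities sum to 1.0 per row.
predictions = raw.div(raw.sum(axis=1), axis=0)
```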


The post_process hook formats the prediction data returned by DataRobot or the score hook when it doesn't match the expected output format.

post_process(predictions: DataFrame, model: Any) -> DataFrame 


post_process() input

Input parameter | Description
predictions | A pandas DataFrame (Python) or R data.frame containing the scored data produced by DataRobot or the score hook.
model | A trained object loaded from the artifact by DataRobot or loaded through the load_model hook.


def post_process(predictions, model):
    return predictions + 1
post_process <- function(predictions, model) {
    names(predictions) <- c('0', '1')
    predictions
}


post_process() output

The post_process hook returns a pandas DataFrame (or R data.frame or tibble) of the following format:

  • For regression or anomaly detection projects, the output must contain a single numeric column named Predictions.

  • For binary or multiclass projects, the output must have one column per class, with the class names used as column names. Each cell must contain the probability of that class, and each row must sum to 1.0.

Updated May 3, 2023