
Test custom models locally

Availability information

To access the DataRobot Model Runner tool, contact your DataRobot representative.

DRUM, the DataRobot Model Runner, is a tool that allows you to test Python, R, and Java custom models locally. The test verifies that a custom model can run and make predictions successfully before you upload it to DataRobot. However, this testing is for development purposes only. DataRobot recommends that any custom model you wish to deploy also be tested in the Custom Model Workshop after you upload it.

Before proceeding, reference the guidelines for setting up a custom model or environment folder.
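As a rough illustration, a minimal custom model folder might look like the following. The filenames are placeholders; the exact contents depend on your framework and the guidelines linked above.

custom_model/
├── custom.py      # optional hooks, such as load_model and score
└── model.pkl      # single serialized model artifact (.pkl, .pth, or .h5)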

Note

The DataRobot Model Runner tool supports Python, R, and Java custom models.

Reference the DRUM readme for details about additional functionality, including:

  • Autocompletion
  • Custom hooks
  • Performance tests
  • Running models with a prediction server
  • Running models inside a Docker container
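For example, assuming ~/custom_model/ contains a valid model folder and 10k.csv is an input dataset, the prediction server and Docker modes can be invoked roughly as follows. The address, port, and image name are placeholders, and flag names may vary by DRUM version; confirm the exact options with drum --help.

Start a local prediction server for the model
drum server -m ~/custom_model/ --address localhost:6789

Run the same scoring test inside a Docker image
drum score -m ~/custom_model/ --input 10k.csv --docker <environment-image-name>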

Model requirements

In addition to the required folder contents, DRUM requires the following for your serialized model:

  • Regression models must return a single floating point value per row of prediction data.
  • Binary classification models must return two floating point values per row of prediction data that sum to 1.0. The first value must be the positive class probability and the second the negative class probability (see the sketch after this list).
  • A single serialized artifact (.pkl, .pth, or .h5 file) must be present in the model folder.
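The following custom.py sketch illustrates these output shapes. It is a minimal example, assuming the DRUM custom hooks load_model and score, a scikit-learn-style model saved as model.pkl, and the class labels yes and no used in the scoring example below; adapt the names to your own model folder.

import os
import pickle
import pandas as pd

def load_model(code_dir):
    # Load the single serialized artifact present in the model folder.
    with open(os.path.join(code_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)

def score(data, model, **kwargs):
    # Binary classification: one probability column per class label,
    # with the two values in each row summing to 1.0.
    positive = model.predict_proba(data)[:, 1]
    return pd.DataFrame({"yes": positive, "no": 1.0 - positive})
    # A regression model would instead return a single floating point value per row, e.g.:
    # return pd.DataFrame({"Predictions": model.predict(data)})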

Run tests with DRUM

Use the following commands to execute local tests for your custom model:

List all possible arguments
drum --help

Test a custom binary classification model
drum score -m ~/custom_model/ --input <input-dataset-filename.csv> [--positive-class-label <labelname>] [--negative-class-label <labelname>] [--output <output-filename.csv>] [--verbose]

# Makes batch predictions with a custom binary classification model.
# Use --verbose for more detailed output.
# Optionally, specify an output file with --output; otherwise, predictions are returned on the command line.

Example: Test a custom binary classification model
drum score -m ~/custom_model/ --input 10k.csv --positive-class-label yes --negative-class-label no --output 10k-results.csv --verbose

Test a custom regression model
drum score -m ~/custom_model/ --input <input-dataset-filename.csv> [--output <output-filename.csv>] [--verbose]

# Makes batch predictions with a custom regression model.
# Use --verbose for more detailed output.
# Optionally, specify an output file with --output; otherwise, predictions are returned on the command line.

Example: Test a custom regression model
drum score -m ~/custom_model/ --input fast-iron.csv --verbose

# This example does not specify --output, so the prediction results are returned on the command line.

Updated February 8, 2023