Custom model proxies for external models

Availability information: Self-Managed only

Custom model proxies for external models are only available on the Self-Managed AI Platform and are off by default. Contact your DataRobot representative or administrator for information on enabling this feature.

Feature flags: Enable Proxy Models

As a preview feature, you can now create a custom model that acts as a proxy for an externally hosted model. To create a proxy model:

  1. (Optional) Add runtime parameters to the custom model through the model metadata (model-metadata.yaml).

  2. Add proxy code to the custom model through the custom model file (custom.py).

  3. Create a proxy model in the Custom Model Workshop.

Add proxy code

The custom model you create as a proxy for an external model should contain custom code in the custom.py file to connect the proxy model with the externally hosted model; this code is the proxy code. See the custom model assembly documentation for more information on writing custom model code.

The proxy code in the custom.py file should do the following:

  • Import the necessary modules and, optionally, the runtime parameters from model-metadata.yaml.

  • Connect the custom model to the external model over HTTPS or the network protocol required by your external model.

  • Request predictions and convert prediction data as necessary.
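The connect-and-request steps above can be sketched with the Python standard library. Everything here (the URL shape, bearer-token authentication, and JSON payload) is illustrative; your external model's API defines its own contract:

```python
import json
import urllib.request

def build_prediction_request(url, api_key, records):
    # Build (but do not send) an HTTPS request carrying scoring data as JSON.
    # The auth scheme and payload shape are assumptions for this sketch.
    return urllib.request.Request(
        url,
        data=json.dumps({"data": records}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
```

Passing the built request to urllib.request.urlopen would send it; the response would then be parsed and converted into the prediction format your target type requires.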

To simplify the reuse of proxy code, you can add runtime parameters through your model metadata in the model-metadata.yaml file:

model-metadata.yaml
name: runtime-parameter-example
type: inference
targetType: regression
runtimeParameterDefinitions:
- fieldName: endpoint
  type: string
  description: The name of the endpoint.
- fieldName: API_KEY
  type: credential
  description: The HTTP basic credential containing the endpoint's API key in the password field (the username field is ignored).

If you define runtime parameters in the model metadata, you can import them into the custom.py file to use in your proxy code. After importing these parameters, you can assign them to variables in your proxy code. This allows you to create a prediction request to connect to and retrieve prediction data from the external model. The following example outlines the basic structure of a custom.py file:

custom.py
# Import modules required to make a prediction request.
import json
import ssl
import urllib.request
import pandas as pd
# Import SimpleNamespace to create an object to store runtime parameter variables.
from types import SimpleNamespace
# Import RuntimeParameters to use the runtime parameters set in the model metadata.
from datarobot_drum import RuntimeParameters

# Override the default load_model hook to read the runtime parameters.
def load_model(code_dir):
    # Assign runtime parameters to variables.
    api_key = RuntimeParameters.get("API_KEY")["password"]
    endpoint = RuntimeParameters.get("endpoint")

    # Create scoring endpoint URL.
    url = f"https://{endpoint}.example.com/score"

    # Return an object containing the variables necessary to make a prediction request.
    return SimpleNamespace(**locals())

# Override the default score hook to request and convert scoring data
# from the external model.
def score(data, model, **kwargs):
    # "model" is the object returned by load_model.
    # Serialize the scoring data; adjust the payload format to match
    # your endpoint's API.
    payload = data.to_json(orient="records")
    predictions = make_remote_prediction_request(payload, model.url, model.api_key)
    # Convert prediction data as necessary; for a regression model, DRUM
    # expects a DataFrame with a "Predictions" column.
    return pd.DataFrame({"Predictions": predictions})

def make_remote_prediction_request(payload, url, api_key):
    # Connect to the scoring endpoint URL; adjust the headers to match
    # your endpoint's authentication scheme.
    request = urllib.request.Request(
        url,
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    # Request predictions from the external model.
    context = ssl.create_default_context()
    with urllib.request.urlopen(request, context=context) as response:
        return json.loads(response.read())
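For a binary classification target, DRUM expects the score hook to return a DataFrame with one probability column per class label rather than a single Predictions column. A minimal, hypothetical conversion helper, assuming the external model returns positive-class probabilities:

```python
import pandas as pd

def to_binary_prediction_frame(raw_scores, positive_label, negative_label):
    # Map positive-class probabilities from the external model into the
    # two-column DataFrame that DRUM expects for binary classification.
    positive = pd.Series(raw_scores, dtype=float)
    return pd.DataFrame({
        positive_label: positive,
        negative_label: 1.0 - positive,
    })
```

The column names must match the positive and negative class labels configured for the custom model.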

Create a proxy model

To create a custom model as a proxy for an external model, you can add a new proxy model to the Custom Model Workshop. A proxy model contains the proxy code you created to connect with your external model, allowing you to use features like compliance documentation, challenger analysis, and custom model tests with a model running on infrastructure outside of DataRobot.

To add a proxy model through the Custom Model Workshop:

  1. Click Model Registry > Custom Model Workshop.

  2. On the Models tab, click + Add new model.

  3. In the Add Custom Inference Model dialog box, select Proxy, and then add the model information.

    • Model name: Name the custom model.
    • Target type / Target name: Select the target type (binary classification, regression, multiclass classification, anomaly detection, or unstructured) and enter the name of the target feature.
    • Positive class label / Negative class label: These fields display only for binary classification models. Specify the value to use as the positive class label and the value to use as the negative class label. For a multiclass classification model, these fields are replaced by a field to enter or upload the target classes in .csv or .txt format.
  4. Click Show Optional Fields and, if necessary, enter a prediction threshold, the language used to build the model, and a description.

  5. After completing the fields, click Add Custom Model.

  6. On the Assemble tab, under Model Environment on the right, select a model environment from the Base Environment dropdown menu. The model environment is used for testing and deploying the custom model.

    Note

    The Base Environment dropdown menu includes drop-in model environments, if any exist, as well as custom environments that you can create.

  7. Under Model on the left, add proxy model content by dragging and dropping files or browsing. Alternatively, select a remote integrated repository.

    If you click Browse local file, you can also add a Local Folder. The local folder is for dependent files and additional assets required by your model, not the model itself. Even if the model file is included in the folder, DataRobot cannot access it unless the file also exists at the root level. The root-level file can then reference the dependencies in the folder.

    Note

    You must also upload the model requirements and a start_server.sh file to your model's folder unless you are pairing the model with a drop-in environment.

  8. On the Assemble tab, next to Resource settings, click the edit icon to activate the required Network access for the proxy model.

  9. If you defined runtime parameters in the model metadata, you can configure their values on the Assemble tab under Runtime Parameters after you build the environment and create a new model version.

  10. Finally, you can register the custom model to create a proxy model you can use to generate compliance documentation. You can then deploy the proxy model to set up challenger analysis and run custom model tests on the external model.


Updated April 5, 2024