# External Predictions

> External Predictions - Through the **External Predictions** advanced option tab, you can bring
> external model(s) into the DataRobot AutoML environment, view them on the Leaderboard, and run a
> subset of DataRobot's evaluative insights for comparison against DataRobot models.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.596052+00:00` (UTC).

## Primary page

- [External Predictions](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/external-preds.html): Full documentation for this topic (HTML).

## Sections on this page

- [Workflow overview](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/external-preds.html#workflow-overview): In-page section heading.
- [Prepare the dataset](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/external-preds.html#prepare-the-dataset): In-page section heading.
- [Set advanced options](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/external-preds.html#set-advanced-options): In-page section heading.
- [Add an external model](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/external-preds.html#add-an-external-model): In-page section heading.
- [Evaluate the external model](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/external-preds.html#evaluate-the-external-model): In-page section heading.
- [Bias and fairness testing](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/external-preds.html#bias-and-fairness-testing): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [Modeling](https://docs.datarobot.com/en/docs/classic-ui/modeling/index.html): Linked from this page.
- [Build models](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/index.html): Linked from this page.
- [Advanced options](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/index.html): Linked from this page.
- [identify the partition column](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/partitioning.html#configure-model-validation): Linked from this page.
- [Lift Chart](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/lift-chart-classic.html): Linked from this page.
- [Residuals](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/residuals-classic.html): Linked from this page.
- [ROC Curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/index.html): Linked from this page.
- [Profit Curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/profit-curve-classic.html): Linked from this page.
- [Model comparison](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/other/model-compare.html): Linked from this page.
- [Model compliance documentation](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/compliance-classic/compliance-tab.html): Linked from this page.
- [Bias and Fairness > Settings](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/adv-opt/fairness-metrics.html#configure-metrics-and-mitigation-post-autopilot): Linked from this page.
- [Per-Class Bias](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/bias/per-class.html): Linked from this page.
- [Cross-Class Accuracy](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/bias/cross-acc.html): Linked from this page.

## Documentation content

# External Predictions

Through the External Predictions advanced option tab, you can bring external model(s) into the DataRobot AutoML environment, view them on the Leaderboard, and run a subset of DataRobot's evaluative insights for comparison against DataRobot models. This feature:

- Helps you understand, from the prediction values, how a model trained outside of DataRobot compares in accuracy with DataRobot-trained models.
- Applies DataRobot's trust and explainability visualizations to externally trained model(s) for better model understanding, compliance, and fairness results.

## Workflow overview

To bring external models into DataRobot, follow this workflow:

1. Prepare the dataset.
2. Set advanced options.
3. Add an external model.
4. Evaluate the external model.
5. Enable bias testing (binary classification only).

## Prepare the dataset

To set up the project, ensure the uploaded dataset has the following two columns:

- A column containing the values that identify the partition column, either cross-validation or train/validation/holdout (TVH). If cross-validation is used, the values represent the folds, for example (5 CV fold example): `1`, `2`, `3`, `4`, and `5`. For TVH, the values are typically `T`, `V`, and `H`. This column will later be referenced in the advanced option **Partition Feature** strategy. In the following example, the column is named `partition_column`.
- A column of external model prediction values (the "external predictions column"). The descriptions below use the name `Model1_output` as an example of the prediction values.

> [!NOTE] Note
> External model prediction values must be numeric. For binary classification projects, the prediction values must be between `[0.0, 1.0]`. For regression projects, the prediction values must be between `(-inf, inf)`.
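As a sketch of the dataset layout described above, the following builds a minimal binary classification upload file with both required columns. The column names `partition_column` and `Model1_output` follow the examples on this page; the feature and target names, values, and file name are hypothetical.

```python
import pandas as pd

# Hypothetical training data for a binary classification project.
df = pd.DataFrame({
    "feature_a": [3.1, 0.4, 7.8, 2.2, 5.0, 1.9],
    "target":    [1,   0,   1,   0,   1,   0],
    # TVH partition labels: Train / Validation / Holdout.
    "partition_column": ["T", "T", "V", "V", "H", "H"],
    # External model predictions; for binary classification these
    # must be numeric values in [0.0, 1.0].
    "Model1_output": [0.91, 0.12, 0.77, 0.35, 0.88, 0.07],
})

# Sanity check before upload: binary-classification prediction
# values must fall inside [0.0, 1.0].
assert df["Model1_output"].between(0.0, 1.0).all()

df.to_csv("external_preds_dataset.csv", index=False)
```

For a regression project, the same layout applies but the prediction column may hold any numeric values.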

## Set advanced options

To prepare for modeling:

1. Open the **External Predictions** tab in advanced options. Enter the external predictions column name(s) from your dataset (up to 100 columns). You are prompted to ensure that **Partitioning** is set.
2. Click **Set Partition Feature** to open the appropriate tab. From the **Partitioning** tab, identify the partition column.

## Add an external model

You can add an external model on the Leaderboard either:

- As an individual model using Manual mode.
- As one of many models using full Autopilot, Quick, or Comprehensive mode. In this case, the external model is added at the end of the model recommendation process.

For example, to add a single external model:

1. From the **Start** page, change the modeling mode to **Manual**. (This allows you to select your external model from the Repository.) Click **Start** to begin EDA2.
2. Once EDA2 finishes, open the **Data** page. In the Importance column, the external model prediction values column, `Model1_Output`, is labeled **External** and the partition feature, `partition_column`, is labeled **Partition**.
3. Open the model Repository, search for `Model1_Output`, and select it. Notice that in the resulting task setup fields, the feature list and sample size are not available for modification. This is because DataRobot cannot know which features from the training data, or what sample size, were used to train the external model.
4. Click **Run Task**.

## Evaluate the external model

When model building finishes, the model becomes available on the Leaderboard for comparison and further investigation. It is marked with the EXTERNAL PREDICTIONS label.

> [!NOTE] Note
> The Leaderboard metric score (such as LogLoss) will be consistent with the equivalent validation, cross validation, and holdout metric scores calculated by scikit-learn.
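The note above can be checked directly: scoring the external predictions on the validation rows with scikit-learn should reproduce the Leaderboard's validation LogLoss. The dataset below is a hypothetical fragment reusing the `partition_column` / `Model1_output` names from this page.

```python
import pandas as pd
from sklearn.metrics import log_loss

# Hypothetical dataset fragment with the columns described earlier.
df = pd.DataFrame({
    "target":           [1,   0,   1,   0,   1,   0],
    "partition_column": ["T", "T", "V", "V", "H", "H"],
    "Model1_output":    [0.91, 0.12, 0.77, 0.35, 0.88, 0.07],
})

# Reproduce the validation LogLoss the Leaderboard would report:
# score the external predictions on the validation ("V") rows only.
val = df[df["partition_column"] == "V"]
val_logloss = log_loss(val["target"], val["Model1_output"], labels=[0, 1])
print(f"Validation LogLoss: {val_logloss:.4f}")
```

The same pattern applies to the holdout (`"H"`) rows, or to each cross-validation fold when a CV partition column is used.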

The following insights are supported:

| Insight | Project type |
| --- | --- |
| Lift Chart | All |
| Residuals | Regression |
| ROC Curve | Classification |
| Profit Curve | Classification |
| Model comparison | All |
| Model compliance documentation | All; note only a subset of sections are generated due to the limited knowledge DataRobot has of the external model. |
| Bias and Fairness | Classification; see below. |

## Bias and fairness testing

Additionally, if the dataset creates a binary classification project, you can set up Bias and Fairness options for bias testing of the external model.

1. Complete the fields on the **Bias and Fairness > Settings** page. Click **Save** and DataRobot retrieves the necessary data.
2. Open the **Per-Class Bias** tab to help identify whether a model is biased and, if so, by how much and toward or against whom.
3. Open the **Cross-Class Accuracy** tab to view calculated evaluation metrics and ROC curve-related scores, segmented by class, for each protected feature.
