# Challengers tab

> Challengers tab - How to use the Challengers tab to submit challenger models that shadow a deployed
> model and replay predictions made against the deployed model. If a challenger outperforms the
> deployed model, you can replace the model.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.574773+00:00` (UTC).

## Primary page

- [Challengers tab](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html): Full documentation for this topic (HTML).

## Sections on this page

- [Enable challenger models](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#enable-challenger-models): In-page section heading.
- [Select a challenger model](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#select-a-challenger-model): In-page section heading.
- [Add challengers to a deployment](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#add-challengers-to-a-deployment): In-page section heading.
- [Replay predictions](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#replay-predictions): In-page section heading.
- [Schedule prediction replay](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#schedule-prediction-replay): In-page section heading.
- [View challenger job history](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#view-challenger-job-history): In-page section heading.
- [Challenger models overview](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#challenger-models-overview): In-page section heading.
- [Challenger performance metrics](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#challenger-performance-metrics): In-page section heading.
- [Predictions chart](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#predictions-chart): In-page section heading.
- [Accuracy chart](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#accuracy-chart): In-page section heading.
- [Data Errors chart](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#data-errors-chart): In-page section heading.
- [Challenger model comparisons](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#challenger-model-comparisons): In-page section heading.
- [Generate model comparisons](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#generate-model-comparisons): In-page section heading.
- [View model comparisons](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#view-model-comparisons): In-page section heading.
- [Replace champion with challenger](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#replace-champion-with-challenger): In-page section heading.
- [Challengers for external deployments](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#challengers-for-external-deployments): In-page section heading.
- [Add challenger models to external deployments](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#add-challenger-models-to-external-deployments): In-page section heading.
- [Add external challenger comparison dataset](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#add-external-challenger-comparison-dataset): In-page section heading.
- [Manage challengers for external deployments](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#manage-challengers-for-external-deployments): In-page section heading.
- [Challenger promotion to champion](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#challenger-promotion-to-champion): In-page section heading.
- [Champion demotion to challenger](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#champion-demotion-to-challenger): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [MLOps](https://docs.datarobot.com/en/docs/classic-ui/mlops/index.html): Linked from this page.
- [Performance monitoring](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/index.html): Linked from this page.
- [prediction row storage](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/challengers-settings.html): Linked from this page.
- [creating a deployment](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/deploy-methods/add-deploy-info.html#challenger-analysis): Linked from this page.
- [Feature Discovery](https://docs.datarobot.com/en/docs/classic-ui/data/transform-data/feature-discovery/fd-overview.html): Linked from this page.
- [Unstructured custom inference](https://docs.datarobot.com/en/docs/api/code-first-tools/drum/unstructured-custom-models.html): Linked from this page.
- [modeling process](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/build-basic/model-data.html): Linked from this page.
- [custom model](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/registry/reg-create.html#add-a-custom-inference-model): Linked from this page.
- [Model Registry](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/registry/index.html): Linked from this page.
- [Organization](https://docs.datarobot.com/en/docs/platform/admin/admin-overview.html#what-are-organizations): Linked from this page.
- [Owners](https://docs.datarobot.com/en/docs/reference/misc-ref/roles-permissions.html#deployment-roles): Linked from this page.
- [date slider](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html#use-the-time-range-and-resolution-dropdowns): Linked from this page.
- [Deployments > Prediction Jobs](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#manage-prediction-jobs): Linked from this page.
- [Prediction environments](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/prediction-env/index.html): Linked from this page.
- [Accuracy chart](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#accuracy-chart): Linked from this page.
- [accuracy](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/deploy-accuracy.html): Linked from this page.
- [set an association ID](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/accuracy-settings.html#select-an-association-id): Linked from this page.
- [data error rate](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/service-health.html): Linked from this page.
- [stacked predictions](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/data-partitioning.html#what-are-stacked-predictions): Linked from this page.
- [snapshotted](https://docs.datarobot.com/en/docs/reference/glossary/index.html#snapshot): Linked from this page.
- [dual lift chart](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/other/model-compare.html#dual-lift-chart): Linked from this page.
- [lift chart](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/lift-chart-classic.html): Linked from this page.
- [ROC curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html): Linked from this page.
- [remote prediction environments](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/ext-model-prep/ext-pred-env.html): Linked from this page.

## Documentation content

# Challengers tab

> [!NOTE] Availability information
> The Challengers tab is a feature exclusive to DataRobot MLOps users. Contact your DataRobot representative for information on enabling it.

During model development, many models are often compared to one another until one is chosen to be deployed into a production environment. The Challengers tab provides a way to continue model comparison post-deployment. You can submit challenger models that shadow a deployed model and replay predictions made against the deployed model. This allows you to compare the predictions made by the challenger models to the currently deployed model (the "champion") to determine if there is a superior DataRobot model that would be a better fit.

## Enable challenger models

To enable challenger models for a deployment, you must enable the Challengers tab and [prediction row storage](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/challengers-settings.html). To do so, configure the deployment's data drift settings either when [creating a deployment](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/deploy-methods/add-deploy-info.html#challenger-analysis) or on the [Challengers > Settings](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/challengers-settings.html) tab. If you enable Challenger models, prediction row storage is automatically enabled for the deployment. It cannot be turned off, as it is required for challengers.

> [!NOTE] Availability information
> To enable challengers and replay predictions against them, the deployed model must support target drift tracking and not be a [Feature Discovery](https://docs.datarobot.com/en/docs/classic-ui/data/transform-data/feature-discovery/fd-overview.html) or [Unstructured custom inference](https://docs.datarobot.com/en/docs/api/code-first-tools/drum/unstructured-custom-models.html) model.

## Select a challenger model

Before adding a challenger model to a deployment, you must first build and select the model to be added as a challenger. Complete the [modeling process](https://docs.datarobot.com/en/docs/classic-ui/modeling/build-models/build-basic/model-data.html) and choose a model from the Leaderboard, or deploy a [custom model](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/registry/reg-create.html#add-a-custom-inference-model) as a model package. When selecting a challenger model, consider the following:

- It must have the same target type as the champion model.
- It cannot be the same Leaderboard model as an existing champion or challenger; each challenger must be a unique model. If you create multiple model packages from the same Leaderboard model, you can't use those models as challengers in the same deployment.
- It cannot be a Feature Discovery model.
- It does not need to be trained on the same feature list as the champion model; however, it must share some features, and, to successfully replay predictions, you must send the union of all features required for the champion and challengers.
- It does not need to be built from the same project as the champion model.
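To make the feature-list requirement concrete, here is a small sketch (with hypothetical feature lists, not the DataRobot API) of the union of columns a prediction request must carry:

```python
# Hypothetical feature lists for a champion and two challengers;
# each challenger shares at least one feature with the champion.
champion_features = {"age", "income", "tenure"}
challenger_features = [
    {"age", "income", "region"},
    {"age", "tenure", "last_purchase"},
]

# Every challenger must overlap with the champion's feature list.
assert all(champion_features & f for f in challenger_features)

# To replay predictions successfully, a request must include the
# union of all features required by the champion and challengers.
required_columns = set(champion_features)
for features in challenger_features:
    required_columns |= features

print(sorted(required_columns))
# → ['age', 'income', 'last_purchase', 'region', 'tenure']
```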

When you have selected a model to serve as a challenger, navigate from the Leaderboard to Predict > Deploy and click Register to deploy. This creates a [registered model version](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/registry/reg-create.html#access-registered-models-and-versions) for the selected model in the [Model Registry](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/registry/index.html), so you can add the model to a deployment as a challenger.

## Add challengers to a deployment

To add a challenger model to a deployment, navigate to the Challengers tab and select + Add challenger model > Select existing model. You can add up to four challengers to each deployment. This means that in total, with the champion model included, up to five models can be compared during challenger analysis.

> [!NOTE] Note
> The selection list contains only model packages where the target type and name are the same as the champion model.

The modal prompts you to select a model package from the registry to serve as the challenger model. Choose the model to add and click Select model version.

DataRobot verifies that the model shares features and a target type with the champion model. Once verified, click Add Challenger. The model is now added to the deployment as a challenger.

## Replay predictions

After adding a challenger model, you can replay stored predictions made with the champion model for all challengers, allowing you to compare performance metrics such as predicted values, accuracy, and data errors across each model.

To replay predictions, select Update challenger predictions.

The champion model computes and stores up to 100,000 prediction rows per hour. The challengers replay the first 10,000 rows of the prediction requests made for each hour within the time range specified by the [date slider](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html#use-the-time-range-and-resolution-dropdowns). Note that for time series deployments, this limit does not apply. All prediction data is used by the challengers to compare statistics.
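As a back-of-the-envelope sketch of these limits for non-time-series deployments (the hourly caps come from the paragraph above; the function itself is illustrative):

```python
# The champion stores up to 100,000 prediction rows per hour;
# challengers replay the first 10,000 stored rows for each hour.
STORED_PER_HOUR = 100_000
REPLAYED_PER_HOUR = 10_000

def rows_replayed(predictions_per_hour):
    """Rows a challenger replays for one hour of champion traffic."""
    stored = min(predictions_per_hour, STORED_PER_HOUR)
    return min(stored, REPLAYED_PER_HOUR)

print(rows_replayed(3_000))    # → 3000 (all stored rows fit)
print(rows_replayed(250_000))  # → 10000 (capped at the replay limit)
```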

After predictions are made, click Refresh on the date slider to view an updated display of [performance metrics](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#challenger-performance-metrics) for the challenger models.

## Schedule prediction replay

You can replay predictions with challengers on a periodic schedule instead of doing so manually. Navigate to a deployment's [Challengers > Settings](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/challengers-settings.html) tab, enable the Automatically replay challengers toggle, and configure the preferred cadence and time of day for replaying predictions:

> [!NOTE] Note
> Only the deployment [Owner](https://docs.datarobot.com/en/docs/reference/misc-ref/roles-permissions.html#deployment-roles) can schedule challenger replay.

Once enabled, the replay will trigger at the configured time for all challengers. Note that if you have a deployment with prediction requests made in the past and choose to add challengers at the current time, the scheduled job scores the newly added challenger models upon the next run cycle.

## View challenger job history

After adding one or more challenger models and replaying predictions, you can view challenger prediction jobs for a deployment's challengers on the [Deployments > Prediction Jobs](https://docs.datarobot.com/en/docs/classic-ui/predictions/batch/batch-dep/batch-pred-jobs.html#manage-prediction-jobs) page.

To view challenger prediction jobs, click Job History.

The Prediction Jobs page opens, filtered to display challenger jobs for the deployment from which you accessed the Job History.

## Challenger models overview

The Challengers tab displays information about the champion model and each challenger.

|  | Element | Description |
| --- | --- | --- |
| (1) | Display Name | The display name for each model. Use the pencil icon to edit the display name. This field is useful for describing the purpose or strategy of each challenger (e.g., "reference model," "former champion," "reduced feature list"). |
| (2) | Challenger models | The list of challenger models. Each model is associated with a color. These colors allow you to compare the models using visualization tools. |
| (3) | Model data | The metadata for each model, including the project name, model name, and the execution environment type. |
| (4) | Prediction Environment | The environment a model uses to make deployment predictions. For more information, see Prediction environments. |
| (5) | Accuracy | The model's accuracy metric calculation for the selected date range and, for challengers, a comparison with the champion's accuracy metric calculation. Use the Accuracy metric dropdown menu to compare different metrics. For more information on model accuracy, see the Accuracy chart. |
| (6) | Training Data | The filename of the data used to train the model. |
| (7) | Actions | The actions available for each model. Replace: promotes a challenger to the champion (the currently deployed model) and demotes the current champion to a challenger model. Remove: removes the model from the deployment as a challenger. Only challengers can be deleted; a champion must be demoted before it can be deleted. |

### Challenger performance metrics

After prediction data is replayed for challenger models, you can examine the charts below, which capture the performance metrics recorded for each model.

Each model is listed with its corresponding color. Uncheck a model's box to stop displaying the model's performance data on the charts.

#### Predictions chart

The Predictions chart records the average predicted value of the target for each model over time. Hover over a point to compare the average value for each model at a specific point in time.

For binary classification projects, use the Class dropdown to select the class for which you want to analyze the average predicted values. The chart also includes a toggle that allows you to switch between continuous and binary modes. Continuous mode shows the positive class predictions as probabilities between 0 and 1 without taking the prediction threshold into account. Binary mode takes the prediction threshold into account and shows, for all predictions made, the percentage for each possible class.
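The two display modes can be sketched as follows, assuming hypothetical positive-class probabilities and a 0.5 threshold:

```python
# Continuous vs. binary display modes for a binary deployment
# (toy probabilities for the positive class; threshold assumed 0.5).
threshold = 0.5
positive_probs = [0.10, 0.35, 0.62, 0.80, 0.55]

# Continuous mode: average predicted probability, threshold ignored.
continuous_avg = sum(positive_probs) / len(positive_probs)

# Binary mode: apply the threshold, then report per-class percentages.
positive = sum(p >= threshold for p in positive_probs)
pct_positive = 100 * positive / len(positive_probs)
pct_negative = 100 - pct_positive

print(round(continuous_avg, 3))    # → 0.484
print(pct_positive, pct_negative)  # → 60.0 40.0
```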

#### Accuracy chart

The Accuracy chart records the change in a selected [accuracy](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/deploy-accuracy.html) metric value (LogLoss in this example) over time. These metrics are identical to those used for the evaluation of the model before deployment. Use the dropdown to change the accuracy metric. You can select from [any of the supported metrics](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/deploy-accuracy.html#available-accuracy-metrics) for the deployment's modeling type.

> [!NOTE] Important
> You must [set an association ID](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/accuracy-settings.html#select-an-association-id) before making predictions to include those predictions in accuracy tracking.
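For reference, LogLoss is the negative average log-likelihood of the actuals under the predicted probabilities; a minimal sketch with toy values (the platform computes this from replayed predictions, not this code):

```python
import math

def log_loss(actuals, probs, eps=1e-15):
    """Binary LogLoss: lower is better."""
    total = 0.0
    for y, p in zip(actuals, probs):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(actuals)

# Toy actuals and predicted positive-class probabilities.
print(round(log_loss([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.3]), 4))  # → 0.1976
```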

#### Data Errors chart

The Data Errors chart records the [data error rate](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/service-health.html) for each model over time. Data error rate measures the percentage of requests that result in a 4xx error (problems with the prediction request submission).
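A minimal sketch of that calculation over a toy request log (note that 5xx responses are server errors, not data errors):

```python
# Data error rate: the share of prediction requests that returned
# a 4xx status code (toy request log; values are illustrative).
status_codes = [200, 200, 422, 200, 400, 200, 200, 500, 200, 200]

data_errors = sum(1 for s in status_codes if 400 <= s < 500)
error_rate = 100 * data_errors / len(status_codes)
print(f"{error_rate:.1f}%")  # → 20.0%
```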

## Challenger model comparisons

MLOps allows you to compare challenger models against each other and against the currently deployed model (the "champion") to ensure that your deployment uses the best model for your needs. After evaluating DataRobot's model comparison visualizations, you can replace the champion model with a better-performing challenger.

DataRobot renders visualizations based on a dedicated comparison dataset, which you select, ensuring that you're comparing predictions based on the same dataset and partition while still allowing you to train champion and challenger models on different datasets. For example, you may train a challenger model on an updated snapshot of the same data source used by the champion.

> [!WARNING] Warning
> Make sure your comparison dataset is out-of-sample for the models being compared (i.e., it doesn't include the training data from any models included in the comparison).

### Generate model comparisons

After you [enable challengers](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#enable-challenger-models) and [add one or more challengers](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#add-challengers-to-a-deployment) to a deployment, you can generate comparison data and visualizations.

1. On the Deployments page, locate and expand the deployment with the champion and challenger models you want to compare.
2. Click the Challengers tab.
3. On the Challengers Summary tab, if necessary, add a challenger model and replay the predictions for challengers.
4. Click the Model Comparison tab. The following table describes the elements of the Model Comparison tab:

   |  | Element | Description |
   | --- | --- | --- |
   | (1) | Model 1 | Defaults to the champion model (the currently deployed model). Click to select a different model to compare. |
   | (2) | Model 2 | Defaults to the first challenger model in the list. Click to select a different model to compare. If the list doesn't contain a model you want to compare to Model 1, click the Challengers Summary tab to add a new challenger. |
   | (3) | Open model package | Click to view the model's details. The details display in the Model Packages tab in the Model Registry. |
   | (4) | Promote to champion | If the challenger model in the comparison is the best model, click Promote to champion to replace the deployed model (the "champion") with this model. |
   | (5) | Add comparison dataset | Select a dataset for generating insights on both models. Be sure to select a dataset that is out-of-sample for both models (see [stacked predictions](https://docs.datarobot.com/en/docs/reference/pred-ai-ref/data-partitioning.html#what-are-stacked-predictions)). Holdout and validation partitions for Model 1 and Model 2 are available as options if these partitions exist for the original model. By default, the holdout partition for Model 1 is selected. To specify a different dataset, click + Add comparison dataset and choose a local file or a [snapshotted](https://docs.datarobot.com/en/docs/reference/glossary/index.html#snapshot) dataset from the AI Catalog. |
   | (6) | Prediction environment | Select a [prediction environment](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/prediction-env/index.html) for scoring both models. |
   | (7) | Model Insights | Compare model predictions, metrics, and more. |

5. Scroll to the Model Insights section of the Challengers page and click Compute insights.

You can generate new insights using a different dataset by clicking + Add comparison dataset, then selecting Compute insights again.

### View model comparisons

Once you compute model insights, the Model Insights page displays the following tabs depending on the project type:

> [!NOTE] Note
> Multiclass classification projects only support accuracy comparison.

|  | Accuracy | Dual lift | Lift | ROC | Predictions Difference |
| --- | --- | --- | --- | --- | --- |
| Regression | ✔ | ✔ | ✔ |  | ✔ |
| Binary | ✔ | ✔ | ✔ | ✔ | ✔ |
| Multiclass | ✔ |  |  |  |  |
| Time series | ✔ | ✔ | ✔ |  | ✔ |

**Accuracy:**
After DataRobot computes model insights for the deployment, you can compare model accuracy.

Under Model Insights, click the Accuracy tab to compare accuracy metrics:

![Accuracy metric comparison](https://docs.datarobot.com/en/docs/images/challenger-compare-accuracy.png)

The two columns show the metrics for each model. Highlighted numbers represent favorable values. In this example, the champion, Model 1, outperforms Model 2 for most metrics shown.

For time series projects, you can evaluate accuracy metrics by applying the following filters:

- Forecast distance: View accuracy for the selected forecast distance row within the forecast window range.
- For all x series: View accuracy scores by metric. This view reports scores in all available accuracy metrics for both models across the entire time series range (x).
- Per series: View accuracy scores by series within a multiseries comparison dataset. This view reports scores in a single accuracy metric (selected in the Metric dropdown menu) for each Series ID (e.g., store number) in the dataset for both models.

For multiclass projects, you can evaluate accuracy metrics by applying the following filters:

- For all x classes: View accuracy scores by metric. This view reports scores in all available accuracy metrics for both models across the entire multiclass range (x).
- Per class: View accuracy scores by class within a multiclass classification problem. This view reports scores in a single accuracy metric (selected in the Metric dropdown menu) for each Class (e.g., buy, sell, or hold) in the dataset for both models.

**Dual lift:**
A [dual lift chart](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/other/model-compare.html#dual-lift-chart) is a visualization comparing two selected models against each other. This visualization can reveal how models underpredict or overpredict the actual values across the distribution of their predictions. The prediction data is evenly distributed into equal size bins in increasing order.
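A simplified sketch of the equal-size binning described above, using toy (prediction, actual) pairs for a single model; the real chart computes this for both models on the comparison dataset:

```python
# Sort rows by predicted value, split into equal-size bins, then
# average the predicted and actual values per bin (toy data, 2 bins).
rows = [(0.2, 0), (0.9, 1), (0.4, 1), (0.7, 0), (0.1, 0), (0.8, 1)]
rows.sort(key=lambda r: r[0])  # increasing order of predicted value

n_bins = 2
size = len(rows) // n_bins
bins = []
for i in range(n_bins):
    chunk = rows[i * size:(i + 1) * size]
    avg_pred = sum(p for p, _ in chunk) / size
    avg_actual = sum(a for _, a in chunk) / size
    bins.append((round(avg_pred, 3), round(avg_actual, 3)))

print(bins)  # → [(0.233, 0.333), (0.8, 0.667)]
```

Underprediction or overprediction shows up wherever the per-bin average prediction diverges from the per-bin average of the actuals.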

To view the dual lift chart for the two models being compared, under Model Insights, click the Dual lift tab:

![Dual lift chart](https://docs.datarobot.com/en/docs/images/challenger-compare-dual-lift.png)

The curves for the two models represented on this chart maintain the color they were assigned when added to the deployment (as either a champion or challenger). To interact with the dual lift chart, you can hide the model curves and the actual curve.

- The + icons in the plot area of the chart represent the models' predicted values. Click the + icon next to a model name in the header to hide or show the curve for a particular model.
- The orange o icons in the plot area of the chart represent the actual values. Click the orange o icon next to Actual to hide or show the curve representing the actual values.

**Lift:**
A [lift chart](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/lift-chart-classic.html) depicts how well a model segments the target population and how capable it is of predicting the target, allowing you to visualize the model's effectiveness.

To view the lift chart for the models being compared, under Model Insights, click the Lift tab:

![Lift chart](https://docs.datarobot.com/en/docs/images/challenger-compare-lift.png)

The curves for the two models represented on this chart maintain the color they were assigned when added to the deployment (as either a champion or challenger).

**ROC:**
> [!NOTE] Note
> The ROC tab is only available for binary classification projects.

An [ROC curve](https://docs.datarobot.com/en/docs/classic-ui/modeling/analyze-models/evaluate/roc-curve-tab/roc-curve-classic.html) plots the true-positive rate against the false-positive rate for a given data source. Use the ROC curve to explore classification, performance, and statistics for the models you're comparing.
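The two rates the curve is built from can be sketched directly (toy labels and scores; the tab computes these from the comparison dataset):

```python
# True-positive and false-positive rates at a single threshold; an
# ROC curve traces these two quantities as the threshold varies.
def tpr_fpr(actuals, scores, threshold):
    tp = sum(1 for y, s in zip(actuals, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(actuals, scores) if y == 0 and s >= threshold)
    positives = sum(actuals)
    negatives = len(actuals) - positives
    return tp / positives, fp / negatives

actuals = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.6, 0.4, 0.7, 0.3, 0.1]
print(tuple(round(v, 3) for v in tpr_fpr(actuals, scores, 0.5)))
# → (0.667, 0.333)
```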

To view the ROC curves for the models being compared, under Model Insights, click the ROC tab:

![ROC curves](https://docs.datarobot.com/en/docs/images/challenger-compare-roc.png)

The curves for the two models represented on this chart maintain the color they were assigned when added to the deployment (as either a champion or challenger). You can update the prediction thresholds for the models by clicking the pencil icons.

**Predictions Difference:**
Click the Predictions Difference tab to compare the predictions of two models on a row-by-row basis. The histogram shows the percentage of predictions that fall within the match threshold you specify in the Prediction match threshold field (along with the corresponding numbers of rows).

The header of the histogram displays the percentage of predictions:

- Between the positive and negative values of the match threshold (shown in green)
- Greater than the upper (positive) match threshold (shown in red)
- Less than the lower (negative) match threshold (shown in red)
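A sketch of that three-way split, using hypothetical per-row prediction differences and the default threshold:

```python
# Bucket per-row prediction differences (model 1 minus model 2)
# into the three header percentages (toy values, default threshold).
threshold = 0.0025
diffs = [0.001, -0.002, 0.03, -0.5, 0.0, 0.002, -0.004, 0.0025]

within = sum(-threshold <= d <= threshold for d in diffs)  # green
above = sum(d > threshold for d in diffs)                  # red
below = sum(d < -threshold for d in diffs)                 # red

n = len(diffs)
print(100 * within / n, 100 * above / n, 100 * below / n)
# → 62.5 12.5 25.0
```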

![Predictions Difference histogram](https://docs.datarobot.com/en/docs/images/challenger-compare-predictions-diff-1.png)

How are bin sizes calculated?

The size of the Predictions Difference bins in the histogram depends on the Prediction match threshold you set. The value of the prediction match threshold bin is equal to the difference between the upper match threshold (positive) and the lower match threshold (negative). The default prediction match threshold value is 0.0025, so for that value, the center bin is 0.005 (0.0025 + |−0.0025|). The bins on either side of the central bin are ten times larger than the previous bin. The last bin on either end expands to fit the full Prediction Difference range. For example, based on the default Prediction match threshold, the bin sizes would be as follows (where x is the difference between 250 and the maximum Prediction Difference):

| Bin | Range | Size |
| --- | --- | --- |
| Bin −5 | (−250 + x) to −25 | 225 + x |
| Bin −4 | −25 to −2.5 | 22.5 |
| Bin −3 | −2.5 to −0.25 | 2.25 |
| Bin −2 | −0.25 to −0.025 | 0.225 |
| Bin −1 | −0.025 to −0.0025 | 0.0225 |
| Bin 0 | −0.0025 to +0.0025 | 0.005 |
| Bin 1 | +0.0025 to +0.025 | 0.0225 |
| Bin 2 | +0.025 to +0.25 | 0.225 |
| Bin 3 | +0.25 to +2.5 | 2.25 |
| Bin 4 | +2.5 to +25 | 22.5 |
| Bin 5 | +25 to (+250 + x) | 225 + x |

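The edge construction described here can be sketched as follows (the outermost edges are nominal; the real chart expands the last bin on each side to cover the full range):

```python
# Center bin is [-t, +t]; each bin outward is ten times wider than
# the previous one. With the default t = 0.0025 and five bins per
# side, this reproduces the nominal bin edges described above.
def bin_edges(threshold=0.0025, bins_per_side=5):
    positive = [threshold]
    for _ in range(bins_per_side):
        positive.append(positive[-1] * 10)   # 10x wider each step
    return [-e for e in reversed(positive)] + positive

edges = bin_edges()
print(edges)
```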
If many matches dilute the histogram, you can toggle Scale y-axis to ignore perfect matches to focus on the mismatches.

The bottom section of the Predictions Difference tab shows the 1000 most divergent predictions (in terms of absolute value).

![Most divergent predictions](https://docs.datarobot.com/en/docs/images/challenger-compare-predictions-diff-2.png)

The Difference column shows how far apart the predictions are.


### Replace champion with challenger

After comparing models, if you find a model that outperforms the deployed model, you can set it as the new champion.

1. Evaluate the comparison model insights to determine the best-performing model.
2. If a challenger model outperforms the deployed model, click Promote to champion.
3. Select a Replacement Reason and click Accept and Replace.

The challenger model is now the champion (deployed) model.

## Challengers for external deployments

External deployments with [remote prediction environments](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/ext-model-prep/ext-pred-env.html) can also use the Challengers tab. Remote models can serve as the champion model, and you can compare them to DataRobot and custom models serving as challengers.

The [workflow](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#enable-challenger-models) for adding challenger models is largely the same; however, the differences unique to external deployments are outlined below.

### Add challenger models to external deployments

To enable challenger support, access an external deployment (one created with an external model package). In the Settings tab, under the Data Drift header, enable challenger models and [prediction row storage](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment-settings/challengers-settings.html).

The Challengers tab is now accessible. To add challenger models to the deployment, navigate to the tab and click Add challenger model > Select existing model.

Select a model package for the challenger you want to add (custom and DataRobot models only). Additionally, you must indicate the prediction environment used by the model package; this specifies where the model runs predictions. DataRobot and custom models serving as challengers can only use a DataRobot prediction environment (unlike the champion model, which is deployed to an external prediction environment). When you have chosen the desired prediction environment, click Select.

The tab updates to display the model package you wish to add, verifying that the features used in the model package match the deployed model. Select Add challenger.

The model package is now serving as a challenger model for the remote deployment.

### Add external challenger comparison dataset

To compare an external model challenger, you need to provide a dataset that includes the actuals and the prediction results. When you upload the comparison dataset, you can specify a column containing the prediction results.

To add a comparison dataset for an external model challenger, follow the [Generate model comparisons](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/challengers.html#generate-model-comparisons) process, and on the Model Comparison tab, upload your comparison dataset with a Prediction column identifier. Make sure the prediction dataset you provide includes the prediction results generated by the external model at the location identified by the Prediction column.

### Manage challengers for external deployments

You can manage challenger models for remote deployments with various actions:

- To edit the prediction environment used by a challenger, select the pencil icon and choose a new prediction environment from the dropdown.
- To replace the deployed model with a challenger, the challenger must have a compatible prediction environment. Once replaced, the champion does not become a challenger because remote models are ineligible.

#### Challenger promotion to champion

A deployment's champion can't switch between an external prediction environment and a DataRobot prediction environment. When a challenger replaces a champion running in an external prediction environment, that challenger inherits the external environment of the former champion. If the Management Agent isn't configured in the external prediction environment, you must manually deploy the new champion in the external environment to continue making predictions.

#### Champion demotion to challenger

If the former champion isn't an external model package, it is compatible with DataRobot hosting and can become a challenger. In that scenario, the former champion moves to a DataRobot prediction environment where the deployment can replay the champion's predictions against it.
