# Challenger models

> Challenger models - Create and manage challenger models to compare against the deployed champion
> with the Python API client.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.281278+00:00` (UTC).

## Primary page

- [Challenger models](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html): Full documentation for this topic (HTML).

## Sections on this page

- [Best practices](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#best-practices): In-page section heading.
- [When to use challengers](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#when-to-use-challengers): In-page section heading.
- [Challenger management](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#challenger-management): In-page section heading.
- [Prerequisites](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#prerequisites): In-page section heading.
- [Create a challenger model](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#create-a-challenger-model): In-page section heading.
- [Basic challenger model creation](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#basic-challenger-model-creation): In-page section heading.
- [Create a challenger that waits for completion](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#create-a-challenger-that-waits-for-completion): In-page section heading.
- [List challengers](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#list-challengers): In-page section heading.
- [List all challengers for a deployment](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#list-all-challengers-for-a-deployment): In-page section heading.
- [Get a specific challenger](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#get-a-specific-challenger): In-page section heading.
- [Access challenger properties](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#access-challenger-properties): In-page section heading.
- [Update a challenger model](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#update-a-challenger-model): In-page section heading.
- [Update a challenger model name](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#update-a-challenger-model-name): In-page section heading.
- [Update the prediction environment](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#update-the-prediction-environment): In-page section heading.
- [Delete a challenger](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#delete-a-challenger): In-page section heading.
- [Manage challenger settings](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#manage-challenger-settings): In-page section heading.
- [Get challenger settings](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#get-challenger-settings): In-page section heading.
- [Update challenger settings](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#update-challenger-settings): In-page section heading.
- [Work with challenger predictions](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#work-with-challenger-predictions): In-page section heading.
- [Understanding challenger predictions](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#understanding-challenger-predictions): In-page section heading.
- [Score challenger models](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#score-challenger-models): In-page section heading.
- [Common workflows](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#common-workflows): In-page section heading.
- [Create multiple challengers from different models](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#create-multiple-challengers-from-different-models): In-page section heading.
- [Compare challenger information](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#compare-challenger-information): In-page section heading.
- [Clean up old challenger models](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#clean-up-old-challenger-models): In-page section heading.
- [Replace a challenger with a new model](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#replace-a-challenger-with-a-new-model): In-page section heading.
- [Considerations](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/challengers.html#considerations): In-page section heading.

## Related documentation

- [Developer documentation](https://docs.datarobot.com/en/docs/api/index.html): Linked from this page.
- [Developer learning](https://docs.datarobot.com/en/docs/api/dev-learning/index.html): Linked from this page.
- [Python API client user guide](https://docs.datarobot.com/en/docs/api/dev-learning/python/index.html): Linked from this page.
- [MLOps](https://docs.datarobot.com/en/docs/api/dev-learning/python/mlops/index.html): Linked from this page.
- [Challenger models](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-mitigation/nxt-challengers.html): Linked from this page.

## Documentation content

# Challenger models

[Challenger models](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-mitigation/nxt-challengers.html) are alternative models that you can compare against your currently deployed model (the champion model). This allows you to test new models in production, compare their performance, and make data-driven decisions about whether to replace the champion with a better-performing challenger. This page describes how to create, manage, and work with challenger models.

## Best practices

### When to use challengers

- A/B testing: Test new models against the current champion in production.
- Model validation: Validate that a new model performs well before replacing the champion.
- Performance comparison: Compare multiple model candidates simultaneously.
- Risk mitigation: Test models with different characteristics (e.g., different algorithms, feature sets).

### Challenger management

1. Naming convention: Use descriptive names that indicate the model type or purpose (e.g., "XGBoost_v2_Challenger").
2. Limit active challenger models: Too many challengers can impact prediction performance; two to three challengers are typically sufficient.
3. Monitor performance: Regularly review challenger performance metrics before deciding to promote one to champion.
4. Clean up: Remove challengers that are no longer being evaluated to keep your deployment clean.
5. Document: Keep track of why each challenger was created and what makes it different from the champion.
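
The "limit active challenger models" guideline above can be enforced with a small helper before creating a new challenger. This is an illustrative sketch, not part of the DataRobot client: `MAX_ACTIVE_CHALLENGERS` is an assumed team policy rather than an API limit, and the helper works on any sequence of challenger objects.

```python
MAX_ACTIVE_CHALLENGERS = 3  # assumption: a team policy, not an API-enforced limit


def challengers_over_limit(challengers, limit=MAX_ACTIVE_CHALLENGERS):
    """Return the challengers beyond the allowed limit.

    Challengers are kept in the order the API lists them, so the
    earliest-listed ones survive and the surplus is returned as
    candidates for deletion.
    """
    return list(challengers[limit:])
```

Against a live deployment you could then remove the surplus with `for extra in challengers_over_limit(deployment.list_challengers()): extra.delete()`.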

### Prerequisites

Before creating a challenger model, ensure you have:

- A registered model version (model package) ready to use.
- An appropriate prediction environment configured.
- Challenger models enabled for the deployment.
- An understanding of what you want to test or compare.
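
The checklist above can be folded into a small pre-flight function that reports anything missing before you call `dr.Challenger.create`. This is an illustrative sketch: the `enabled` key is an assumed shape for the settings returned by `deployment.get_challenger_models_settings()`, so adapt it to what your client version actually returns.

```python
def challenger_prerequisite_problems(registered_model_version_id,
                                     prediction_environments,
                                     challenger_settings):
    """Return a list of human-readable problems; an empty list means ready."""
    problems = []
    if not registered_model_version_id:
        problems.append("No registered model version (model package) selected")
    if not prediction_environments:
        problems.append("No prediction environment configured")
    # Assumed settings shape: {'enabled': bool, ...}
    if not challenger_settings.get('enabled', False):
        problems.append("Challenger models are not enabled for the deployment")
    return problems
```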

## Create a challenger model

To create a challenger model, you need a deployment, a model package (registered model version), and a prediction environment. The challenger will use the specified model package and prediction environment to make predictions alongside the champion model.

### Basic challenger model creation

```
import datarobot as dr

deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')

# Get a model package (registered model version) to use as challenger
project = dr.Project.get('6527eb38b9e5dead5fc12491')
model = project.get_models()[0]
registered_model_version = dr.RegisteredModelVersion.create_for_leaderboard_item(
    model_id=model.id,
    name="Challenger Model Version",
    registered_model_name='My Registered Model'
)

# Get a prediction environment
prediction_environments = dr.PredictionEnvironment.list()
prediction_environment = prediction_environments[0]

# Create the challenger
challenger = dr.Challenger.create(
    deployment_id=deployment.id,
    model_package_id=registered_model_version.id,
    prediction_environment_id=prediction_environment.id,
    name='Elastic-Net Classifier Challenger'
)
```

### Create a challenger that waits for completion

By default, challenger creation is asynchronous. You can specify a maximum wait time for the creation to complete:

```
# Wait up to 600 seconds for creation to complete
challenger = dr.Challenger.create(
    deployment_id=deployment.id,
    model_package_id=registered_model_version.id,
    prediction_environment_id=prediction_environment.id,
    name='Random Forest Challenger',
    max_wait=600
)
```

## List challengers

You can retrieve all challengers associated with a deployment.

### List all challengers for a deployment

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
challengers = dr.Challenger.list(deployment_id=deployment.id)
```

You can also use the `Deployment.list_challengers()` method:

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
challengers = deployment.list_challengers()

for challenger in challengers:
    print(f"{challenger.name}: {challenger.id}")
```

## Get a specific challenger

To retrieve a single challenger by its ID:

```
challenger = dr.Challenger.get(
    deployment_id='5c939e08962d741e34f609f0',
    challenger_id='5c939e08962d741e34f609f1'
)
```

### Access challenger properties

```
challenger = dr.Challenger.get(
    deployment_id=deployment.id,
    challenger_id='5c939e08962d741e34f609f1'
)

print(challenger.name)
print(challenger.model)
print(challenger.model_package)
print(challenger.prediction_environment)
```

## Update a challenger model

You can update a challenger model's name and prediction environment.

### Update a challenger model name

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
challenger = deployment.list_challengers()[0]
challenger.update(name='Updated Challenger Name')
```

### Update the prediction environment

```
# Get a different prediction environment
prediction_environments = dr.PredictionEnvironment.list()
new_environment = prediction_environments[1]

challenger.update(prediction_environment_id=new_environment.id)
```

To update both the name and prediction environment:

```
challenger.update(
    name='Final Challenger Name',
    prediction_environment_id=new_environment.id
)
```

## Delete a challenger

To remove a challenger from a deployment:

```
challenger = dr.Challenger.get(
    deployment_id=deployment.id,
    challenger_id='5c939e08962d741e34f609f1'
)
challenger.delete()

# Verify the challenger is no longer listed
challengers = deployment.list_challengers()
assert challenger.id not in [c.id for c in challengers]

## Manage challenger settings

You can enable or disable challenger models for a deployment and configure challenger-related settings.

### Get challenger settings

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
settings = deployment.get_challenger_models_settings()
```

### Update challenger settings

To enable or disable challenger models for a deployment:

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.update_challenger_models_settings(challenger_models_enabled=True)
```

To disable challenger models:

```
deployment.update_challenger_models_settings(challenger_models_enabled=False)
```

## Work with challenger predictions

Challengers make predictions alongside the champion model, allowing you to compare their performance.

### Understanding challenger predictions

When challengers are enabled, prediction requests sent to the deployment are also scored by the challenger models. This allows you to:

- Compare prediction outputs between champion and challengers.
- Monitor challenger performance metrics.
- Make informed decisions about model replacement.
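
As a minimal sketch of the first point, comparing champion and challenger outputs can be as simple as an agreement rate over matched prediction rows. The function below is illustrative and assumes you have already exported both prediction series as aligned sequences:

```python
def agreement_rate(champion_preds, challenger_preds):
    """Fraction of rows where champion and challenger produce the same label."""
    if len(champion_preds) != len(challenger_preds):
        raise ValueError("Prediction series must be the same length")
    if not champion_preds:
        return 0.0
    matches = sum(a == b for a, b in zip(champion_preds, challenger_preds))
    return matches / len(champion_preds)
```

For example, `agreement_rate(['yes', 'no', 'yes', 'yes'], ['yes', 'no', 'no', 'yes'])` returns `0.75`.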

### Score challenger models

You can trigger challenger scoring for existing prediction requests:

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
# Score challengers for predictions with a specific timestamp
deployment.score_challenger_predictions(timestamp='2024-01-15T10:00:00Z')
```

## Common workflows

### Create multiple challengers from different models

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
project = dr.Project.get(deployment.model['project_id'])

# Get top 3 models from the project
models = project.get_models()[:3]
prediction_environment = dr.PredictionEnvironment.list()[0]

challengers = []
for i, model in enumerate(models):
    registered_model_version = dr.RegisteredModelVersion.create_for_leaderboard_item(
        model_id=model.id,
        name=f"Challenger {i+1}",
        registered_model_name=f'Challenger Model {i+1}'
    )
    challenger = dr.Challenger.create(
        deployment_id=deployment.id,
        model_package_id=registered_model_version.id,
        prediction_environment_id=prediction_environment.id,
        name=f'{model.model_type} Challenger'
    )
    challengers.append(challenger)

print(f"Created {len(challengers)} challengers")
```

### Compare challenger information

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
challengers = deployment.list_challengers()

print("Challenger Comparison:")
print(f"{'Name':<40} {'Model Type':<30} {'Model Package ID':<20}")
print("-" * 90)
for challenger in challengers:
    model_type = challenger.model.get('type', 'Unknown')
    model_package_id = challenger.model_package.get('id', 'Unknown')
    print(f"{challenger.name:<40} {model_type:<30} {model_package_id:<20}")
```

### Clean up old challenger models

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
challengers = deployment.list_challengers()

# Delete challengers older than a certain date or based on criteria
# Example: delete challengers with specific naming pattern
for challenger in challengers:
    if 'Old' in challenger.name:
        print(f"Deleting challenger: {challenger.name}")
        challenger.delete()
```

### Replace a challenger with a new model

```
deployment = dr.Deployment.get(deployment_id='5c939e08962d741e34f609f0')
project = dr.Project.get(deployment.model['project_id'])

# Get the existing challenger and record its settings before deleting it
old_challenger = deployment.list_challengers()[0]
old_name = old_challenger.name
old_environment_id = old_challenger.prediction_environment['id']

# Register a different model as a new version
new_model = project.get_models()[1]
new_registered_model_version = dr.RegisteredModelVersion.create_for_leaderboard_item(
    model_id=new_model.id,
    name="Updated Challenger Version",
    registered_model_name='Challenger Model'
)

# Delete the old challenger
old_challenger.delete()

# Create a new challenger with the same name and environment, but the new model
new_challenger = dr.Challenger.create(
    deployment_id=deployment.id,
    model_package_id=new_registered_model_version.id,
    prediction_environment_id=old_environment_id,
    name=old_name
)
```

## Considerations

- Challenger creation is an asynchronous process. The `max_wait` parameter controls how long to wait for creation to complete.
- A deployment can have multiple challenger models active simultaneously.
- Challengers use the same prediction requests as the champion, allowing for direct comparison.
- The champion model (the currently deployed model) cannot be deleted while challengers exist that reference it.
- Challenger models must use compatible prediction environments with the deployment.
- Model packages used for challengers must have the same target type and compatible settings as the champion model.
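
The last two considerations can be checked before calling `dr.Challenger.create` so that incompatibilities fail fast with a clear message. The helper below is a hypothetical sketch; the inputs (a target type string and a set of allowed environment IDs) are assumptions you would populate from your own deployment and model package metadata:

```python
def validate_challenger_compatibility(champion_target_type,
                                      challenger_target_type,
                                      challenger_environment_id,
                                      allowed_environment_ids):
    """Raise ValueError early instead of letting challenger creation fail later."""
    if champion_target_type != challenger_target_type:
        raise ValueError(
            f"Target type mismatch: champion is {champion_target_type!r}, "
            f"challenger is {challenger_target_type!r}"
        )
    if challenger_environment_id not in allowed_environment_ids:
        raise ValueError(
            f"Prediction environment {challenger_environment_id!r} "
            "is not compatible with this deployment"
        )
```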
