Deployment overview¶
When you select a deployment from the Deployments dashboard, DataRobot opens the Overview page for that deployment. The Overview page provides a model- and environment-specific summary that describes the deployment, including the information you supplied when creating the deployment and any model replacement activity.
Details¶
The Details section of the Overview tab lists information about the deployment, including model- and environment-specific details. At the top of the Overview page, you can view the deployment name and description; click the edit icon to update this information.
Note
The information included in this list differs for deployments using custom models and external environments. It can also include information dependent on the target type.
Field | Description |
---|---|
Deployment ID | The ID number of the current deployment. Click the copy icon to save it to your clipboard. |
Predictions | A visual representation of the relative prediction frequency, per day, over the past week. |
Importance | The importance level assigned during deployment creation. Click the edit icon to update the deployment importance. |
Approval status | The deployment's approval policy status for governance purposes. |
Prediction environment | The environment on which the deployed model makes predictions. |
Build environment | The build environment used by the deployment's current model (e.g., DataRobot, Python, R, or Java). |
Flags | Indicators providing a variety of deployment metadata, including deployment status—Active, Inactive, Errored, Warning, Launching—and deployment type—Batch, LLM. |
Target type | The type of prediction the model makes. For Classification model deployments, you can also see the Positive Class, Negative Class, and Prediction Threshold. |
Target | The feature name of the target used by the deployment's current model. |
Modeling features | The features included in the model's feature list. Click View details to review the list of features sorted by importance. |
Created by | The name of the user who created the model. |
Last prediction | The number of days since the last prediction. Hover over the field to see the full date and time. |
Custom model information | |
Custom model | The name and version of the custom model registered and deployed from the model workshop. |
Custom environment | The name and version of the custom model environment on which the registered custom model runs. |
Resource bundle | Preview feature. The CPU or GPU bundle selected for the custom model in the resource settings. |
Resource replicas | Preview feature. The number of replicas defined for the custom model in the resource settings. |
External model information | |
Deployment Console URL | The URL of the deployment in the NextGen Console. |
External Predictions URL | The URL of the external prediction environment for the external model. |
Generative model information | |
Target | The feature name of the target column used by the deployment's current generative model. This feature is the generative model's answer to a prompt; for example, `resultText`, `answer`, or `completion`. |
Prompt column name | The feature name of the prompt column used by the deployment's current generative model. This feature is the prompt the generative model responds to; for example, `promptText`, `question`, or `prompt`. |
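Many of the fields above, such as the deployment ID, are also available programmatically. As a minimal sketch (assuming the public `GET /api/v2/deployments/{deploymentId}/` endpoint and a placeholder instance URL and token; in practice you would typically use the DataRobot Python client instead), the request can be constructed like this:

```python
from urllib.request import Request

# Assumption: your DataRobot instance URL; adjust for self-managed installs.
API_BASE = "https://app.datarobot.com/api/v2"

def deployment_details_request(deployment_id: str, api_token: str) -> Request:
    """Build a GET request for a deployment's Overview details."""
    return Request(
        url=f"{API_BASE}/deployments/{deployment_id}/",
        headers={"Authorization": f"Bearer {api_token}"},
    )

# Hypothetical deployment ID and token, for illustration only.
req = deployment_details_request("65f2c1d0abc123", "YOUR_API_TOKEN")
print(req.full_url)  # → https://app.datarobot.com/api/v2/deployments/65f2c1d0abc123/
```

Sending the request (for example with `urllib.request.urlopen`) returns a JSON body containing fields such as the deployment's ID and label.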
Related items¶
The Related items section contains a list of the assets associated with a deployment. Depending on the currently deployed model, you can see different related items. Click Show more to reveal all related items:
Field | Description |
---|---|
Registered model | The name and ID of the registered model associated with the deployment. Click to open the model directory to the registered model. |
Registered model version | The name and ID of the registered model version associated with the deployment. Click to open the model directory to the registered model version. |
DataRobot NextGen model information | |
Use Case | The name and ID of the Use Case in which the deployment's current model was created. Click to open the Use Case in Workbench. |
Experiment | The name and ID of the experiment in which the deployment's current model was created. Click to open the experiment in Workbench. |
Model | The name and ID of the deployment's current model. Click to open the model overview in a Workbench experiment. You can view the model ID of any models deployed in the past from the deployment logs (History > Logs). |
DataRobot Classic model information | |
Project | The name and ID of the project in which the deployment's current model was created. Click to open the project. |
Model | The name and ID of the deployment's current model. Click to open the model blueprint. You can view the Model ID of any models deployed in the past from the deployment logs (History > Logs). |
Custom model information | |
Custom model | The name, version, and ID of the custom model associated with the deployment. Click to open the model workshop to the Assemble tab for the custom model. |
Custom model version | The version and ID of the custom model version associated with the deployment. Click to open the model workshop to the Versions tab for the custom model. |
Training dataset | The filename and ID of the training dataset used to create the currently deployed custom model. |
External model information | |
Training dataset | The filename and ID of the training dataset used to create the currently deployed external model. |
Note
If you don't have access to a related item, a lock icon appears at the end of the item's row.
Evaluation and moderation¶
Availability information
Evaluation and moderation guardrails are a premium feature. Contact your DataRobot representative or administrator for information on enabling this feature.
Feature flag: Enable Moderation Guardrails (Premium), Enable Global Models in the Model Registry (Premium), Enable Additional Custom Model Output in Prediction Responses
When a text generation model with guardrails is registered and deployed, you can view the Evaluation and moderation section on the deployment's Overview tab.
Tags¶
In the Tags section, click + Add new and enter a Name and a Value for each key-value pair you want to tag the deployment with. Deployment tags can help you categorize and search for deployments in the dashboard.
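Key-value tags make it possible to slice a large deployment inventory by attributes the dashboard doesn't track natively. The sketch below is a purely local illustration of that idea; the deployment names and tag keys (`team`, `stage`) are invented examples, not DataRobot defaults:

```python
# Invented sample inventory: each entry mirrors a deployment with its tags.
deployments = [
    {"name": "churn-model-prod", "tags": {"team": "marketing", "stage": "prod"}},
    {"name": "churn-model-dev", "tags": {"team": "marketing", "stage": "dev"}},
    {"name": "pricing-model", "tags": {"team": "finance", "stage": "prod"}},
]

def filter_by_tag(items, key, value):
    """Return the deployments whose tag `key` equals `value`."""
    return [d for d in items if d["tags"].get(key) == value]

prod = filter_by_tag(deployments, "stage", "prod")
print([d["name"] for d in prod])  # → ['churn-model-prod', 'pricing-model']
```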
Runtime parameters¶
Preview
The ability to edit custom model runtime parameters on a deployment is on by default.
Feature flag: Enable Editing Custom Model Runtime-Parameters on Deployments
On a custom model deployment's Overview tab, you can access the Runtime parameters section. If the deployed custom model defines runtime parameters through `runtimeParameterDefinitions` in the `model-metadata.yaml` file, you can manage these parameters in this section. To do this, first make sure the deployment is inactive, then click Edit.
Each runtime parameter's row includes the following controls:
Icon | Setting | Description |
---|---|---|
 | Edit | Open the Edit a Key dialog box to edit the runtime parameter's Value. |
 | Reset to default | Reset the runtime parameter's Value to the `defaultValue` set in the `model-metadata.yaml` file (defined in the source custom model). |
If you edit any of the runtime parameters, click Apply to save your changes.
For more information on how to define runtime parameters and use them in custom model code, see the Define custom model runtime parameters documentation.
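Inside a custom model, runtime parameters are typically read with `datarobot_drum.RuntimeParameters.get()`. As a local, dependency-free sketch, the helper below assumes each parameter is injected as an environment variable named `MLOPS_RUNTIME_PARAM_<fieldName>` carrying a JSON payload; that payload shape is an assumption here, made for illustration:

```python
import json
import os

def get_runtime_param(field_name: str, default=None):
    """Read a runtime parameter from its injected environment variable.
    In a real custom model, prefer datarobot_drum.RuntimeParameters.get();
    the MLOPS_RUNTIME_PARAM_* payload shape below is assumed for this sketch.
    """
    raw = os.environ.get(f"MLOPS_RUNTIME_PARAM_{field_name}")
    if raw is None:
        return default
    return json.loads(raw)["payload"]

# Simulate the injected variable for local testing:
os.environ["MLOPS_RUNTIME_PARAM_API_TIMEOUT"] = json.dumps(
    {"type": "numeric", "payload": 30}
)
print(get_runtime_param("API_TIMEOUT"))  # → 30
print(get_runtime_param("MISSING", default="n/a"))  # → n/a
```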
History¶
Tracking deployment events in a deployment's History section is essential when a deployed model supports a critical use case. You can help maintain deployment stability by monitoring Governance and Logs events, such as when the model was deployed or replaced. The deployment history links these events to the user responsible for the change.
Many organizations, especially those in highly regulated industries, need greater control over model deployment and management. Administrators can define deployment approval policies to facilitate this enhanced control. However, by default, there aren't any approval requirements before deploying.
You can find a deployment's available governance log details under History > Governance, including an audit trail for any deployment approval policies triggered for the deployment.
When a model begins to experience data or accuracy drift, you should collect a new dataset, train a new model, and replace the old model. The details of this deployment lifecycle are recorded, including timestamps for model creation and deployment and a record of the user responsible for the recorded action. Any user with deployment owner permissions can replace the deployed model.
You can find a deployment's model-related events under History > Logs, including the creation and deployment dates and any model replacement events. Each model replacement event reports the replacement date and justification (if provided). In addition, you can find and copy the model ID of any previously deployed model.
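To show how a replacement audit trail like the one under History > Logs can be consumed downstream, here is a local sketch over hypothetical event records; the field names and values are invented for illustration and do not reflect the DataRobot API schema:

```python
# Invented event records mimicking a deployment's model-related history.
events = [
    {"event": "model_deployed", "model_id": "a1", "timestamp": "2024-01-05T10:00:00"},
    {"event": "model_replaced", "model_id": "b2", "reason": "DATA_DRIFT",
     "timestamp": "2024-03-12T09:30:00"},
    {"event": "model_replaced", "model_id": "c3", "reason": "ACCURACY",
     "timestamp": "2024-06-01T14:45:00"},
]

def replacement_history(log):
    """Return (timestamp, model_id, reason) for each replacement, oldest first."""
    rows = [(e["timestamp"], e["model_id"], e.get("reason"))
            for e in log if e["event"] == "model_replaced"]
    return sorted(rows)

for ts, model_id, reason in replacement_history(events):
    print(ts, model_id, reason)
```

This mirrors the information the Logs tab surfaces: each replacement's date, the replacement justification (if provided), and the model ID of the previously deployed model.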