# Generative model monitoring

> Generative model monitoring - The text generation target type for DataRobot custom and external
> models is compatible with generative Large Language Models (LLMs), allowing you to deploy generative
> models, make predictions, monitor model performance statistics, export data, and create custom
> metrics.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-04-24T16:03:56.578089+00:00` (UTC).

## Primary page

- [Generative model monitoring](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/generative-model-monitoring.html): Full documentation for this topic (HTML).

## Sections on this page

- [Create and deploy a generative custom model](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/generative-model-monitoring.html#create-and-deploy-a-generative-custom-model): In-page section heading.
- [Add a generative custom model](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/generative-model-monitoring.html#add-a-generative-custom-model): In-page section heading.
- [Assemble and deploy a generative custom model](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/generative-model-monitoring.html#assemble-and-deploy-a-generative-custom-model): In-page section heading.
- [Create and deploy an external generative model](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/generative-model-monitoring.html#create-and-deploy-an-external-generative-model): In-page section heading.
- [Monitor a deployed generative model](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/generative-model-monitoring.html#monitor-a-deployed-generative-model): In-page section heading.
- [Data drift for generative models](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/generative-model-monitoring.html#feature-details-for-generative-models): In-page section heading.

## Related documentation

- [Classic UI documentation](https://docs.datarobot.com/en/docs/classic-ui/index.html): Linked from this page.
- [MLOps](https://docs.datarobot.com/en/docs/classic-ui/mlops/index.html): Linked from this page.
- [Performance monitoring](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/index.html): Linked from this page.
- [enable public network access for custom models](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-resource-mgmt.html): Linked from this page.
- [Custom metrics](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-custom-metrics.html): Linked from this page.
- [during](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-model-directory/nxt-deploy-models.html#custom-metrics): Linked from this page.
- [after](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-custom-metrics-settings.html): Linked from this page.
- [Custom Model Workshop](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/index.html): Linked from this page.
- [implement the Bolt-on Governance API](https://docs.datarobot.com/en/docs/api/code-first-tools/drum/structured-custom-models.html#chat): Linked from this page.
- [testing](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-test.html): Linked from this page.
- [deploying](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/deploy-methods/deploy-custom-inf-model.html): Linked from this page.
- [drop-in model environments](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-environments/drop-in-environments.html): Linked from this page.
- [custom environments](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-environments/custom-environments.html#create-a-custom-environment): Linked from this page.
- [remote repository](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-repos.html): Linked from this page.
- [runtime parameters](https://docs.datarobot.com/en/docs/api/code-first-tools/drum/custom-model-runtime-parameters.html): Linked from this page.
- [libraries (and versions)](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-dependencies.html): Linked from this page.
- [add training data](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-training-data.html): Linked from this page.
- [provide the model information](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-reg.html): Linked from this page.
- [configure the deployment settings](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/deploy-methods/add-deploy-info.html): Linked from this page.
- [make predictions](https://docs.datarobot.com/en/docs/api/dev-learning/python/predictions/index.html): Linked from this page.
- [monitoring agent](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/mlops-agent/index.html): Linked from this page.
- [add the required information](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/ext-model-prep/ext-model-reg.html): Linked from this page.
- [locate and deploy the generative model](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/registry/reg-deploy.html): Linked from this page.
- [service health](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/service-health.html): Linked from this page.
- [usage](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/deploy-usage.html): Linked from this page.
- [deployment data](https://docs.datarobot.com/en/docs/api/reference/sdk/data-exploration.html): Linked from this page.
- [custom metrics](https://docs.datarobot.com/en/docs/api/reference/sdk/custom-metrics.html): Linked from this page.
- [data drift](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html): Linked from this page.

## Documentation content

# Generative model monitoring

> [!NOTE] Availability information
> Monitoring support for generative models is a premium feature. Contact your DataRobot representative or administrator for information on enabling this feature.

The text generation target type for custom and external models, a premium LLMOps feature, lets you deploy generative Large Language Models (LLMs), make predictions, monitor service, usage, and data drift statistics, and create custom metrics. DataRobot supports LLMs through two deployment methods:

- Create a text generation model as a custom inference model in DataRobot: Create and deploy a text generation model using DataRobot's Custom Model Workshop, calling the LLM's API to generate text instead of performing inference directly and allowing DataRobot MLOps to access the LLM's input and output for monitoring. To call the LLM's API, you should [enable public network access for custom models](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-resource-mgmt.html).
- Monitor a text generation model running externally: Create and deploy a text generation model on your infrastructure (local or cloud), using the monitoring agent to communicate the input and output of your LLM to DataRobot for monitoring.

> [!TIP] Custom metrics for evaluation and moderation require an association ID
> For the metrics added when you configure evaluations and moderations, to view data on the Custom metrics tab, ensure that you set an association ID and enable prediction storage before you start making predictions through the deployed LLM. If you don't set an association ID and provide association IDs alongside the LLM's predictions, the metrics for the moderations won't be calculated on the [Custom metrics](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-monitoring/nxt-custom-metrics.html) tab. After you define the association ID, you can enable automatic association ID generation to ensure these metrics appear on the Custom metrics tab. You can enable this setting [during](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-model-directory/nxt-deploy-models.html#custom-metrics) or [after](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-settings/nxt-custom-metrics-settings.html) deployment.
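
To make the association ID concrete, here is a hedged sketch of a real-time prediction request that carries one. The endpoint path and headers follow DataRobot's prediction API; the `association_id` and `promptText` column names are hypothetical and must match the association ID and prompt feature configured on your deployment.

```python
import requests

# Placeholder values -- substitute your own server, deployment, and credentials.
PREDICTION_SERVER = "https://example.datarobot.com"
DEPLOYMENT_ID = "your-deployment-id"
API_TOKEN = "your-api-token"
DATAROBOT_KEY = "your-datarobot-key"  # required on the managed AI Platform only

# The CSV payload includes an association ID column; its name must match the
# association ID configured on the deployment so custom-metric values can be
# joined back to individual predictions.
csv_payload = (
    "association_id,promptText\n"
    "req-0001,Summarize our refund policy in two sentences.\n"
)

response = requests.post(
    f"{PREDICTION_SERVER}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions",
    headers={
        "Content-Type": "text/csv; charset=UTF-8",
        "Authorization": f"Bearer {API_TOKEN}",
        "DataRobot-Key": DATAROBOT_KEY,
    },
    data=csv_payload,
)
response.raise_for_status()
print(response.json())
```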

## Create and deploy a generative custom model

Custom inference models are user-created, pretrained models that you can upload to DataRobot (as a collection of files) via the [Custom Model Workshop](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/index.html). You can then upload a model artifact to create, test, and deploy custom inference models to DataRobot's centralized deployment hub.

Generative custom models can also [implement the Bolt-on Governance API](https://docs.datarobot.com/en/docs/api/code-first-tools/drum/structured-custom-models.html#chat), which makes them particularly useful for building conversational applications.
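
The page links to the Bolt-on Governance API rather than showing the hook itself, so the following is only a minimal sketch of a chat-capable `custom.py`. It assumes DRUM's `chat` hook signature (`completion_create_params`, `model`) and the `openai` client; verify both against the linked reference.

```python
# custom.py -- minimal sketch of a chat-capable generative custom model.
# The chat hook signature is assumed from the Bolt-on Governance API docs.
import os
from openai import OpenAI

def load_model(code_dir):
    # Reuse one client across requests; the key should arrive via a runtime
    # parameter or environment variable, never a file bundled with the model.
    return OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def chat(completion_create_params, model):
    # Remap the requested model name to the upstream LLM (hypothetical name
    # here), then forward the OpenAI-style request; DataRobot monitors the
    # request and response for the deployment.
    params = dict(completion_create_params, model="gpt-4o-mini")
    return model.chat.completions.create(**params)
```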

### Add a generative custom model

To add a generative model to the Custom Model Workshop:

1. Click **Model Registry > Custom Model Workshop** and, on the **Models** tab, click **+ Add new model**.
2. In the **Add Custom Inference Model** dialog box, under **Target type**, click **Text Generation**.
3. Enter a **Model name** and **Target name**. In addition, you can click **Show Optional Fields** to define the language used to build the model and provide a description.
4. Click **Add Custom Model**. The new custom model opens to the **Assemble** tab.

### Assemble and deploy a generative custom model

To assemble, test, and deploy a generative model from the Custom Model Workshop:

1. On the right side of the **Assemble** tab, under **Model Environment**, select a model environment from the **Base Environment** list. The model environment is used for [testing](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-test.html) and [deploying](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/deploy-methods/deploy-custom-inf-model.html) the custom model. **Note**: The **Base Environment** pulldown menu includes [drop-in model environments](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-environments/drop-in-environments.html), if any exist, as well as [custom environments](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-environments/custom-environments.html#create-a-custom-environment) that you can create.
2. On the left side of the **Assemble** tab, under **Model**, drag and drop files or click **Browse local files** to upload your LLM's custom model artifacts. Alternatively, you can import model files from a [remote repository](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-repos.html). **Important**: If you click **Browse local files**, you have the option of adding a **Local Folder**. The local folder should contain dependent files and additional assets required by your model, not the model itself. If the model file is included in the folder, it will not be accessible to DataRobot; instead, the model file must exist at the root level. The root file can then point to the dependencies in the folder. A basic LLM assembled in the Custom Model Workshop should include the following files (a minimal `custom.py` sketch follows these steps):

    | File | Contents |
    | --- | --- |
    | `custom.py` | The custom model code, calling the LLM service's API through public network access for custom models. |
    | `model-metadata.yaml` | The [runtime parameters](https://docs.datarobot.com/en/docs/api/code-first-tools/drum/custom-model-runtime-parameters.html) required by the generative model. |
    | `requirements.txt` | The [libraries (and versions)](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-dependencies.html) required by the generative model. |

    The dependencies from `requirements.txt` appear under **Model Environment** in the **Model Dependencies** box.

3. After you add the required model files, [add training data](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-training-data.html). To provide a training baseline for drift monitoring, you should upload a dataset containing at least 20 rows of prompts and responses relevant to the topic your generative model is intended to answer questions about. These prompts and responses can be taken from documentation, manually created, or generated.
4. Next, click the **Test** tab, click **+ New test**, and then click **Start test** to run the **Startup** and **Prediction error** tests, the only tests supported for the **Text Generation** target type.
5. Click **Register to deploy**, [provide the model information](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/custom-models/custom-model-workshop/custom-model-reg.html), and click **Add to registry**. The model opens on the **Registered Models** tab.
6. In the registered model version header, click **Deploy**, and then [configure the deployment settings](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/deploy-methods/add-deploy-info.html). You can now [make predictions](https://docs.datarobot.com/en/docs/api/dev-learning/python/predictions/index.html) as you would with any other DataRobot model.
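
As referenced in step 2, here is a minimal, hypothetical `custom.py` for a text generation custom model. It assumes DRUM's `load_model`/`score` hooks; the prompt column (`promptText`), output column (`resultText`), and upstream model name are placeholders to align with the target and feature names you configured, and the OpenAI call is just one example of calling an LLM service's API.

```python
# custom.py -- minimal sketch of a text generation custom model that calls
# an external LLM API (requires public network access for custom models).
import os
import pandas as pd
from openai import OpenAI

def load_model(code_dir):
    return OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def score(data: pd.DataFrame, model, **kwargs) -> pd.DataFrame:
    completions = []
    for prompt in data["promptText"]:  # placeholder prompt feature name
        resp = model.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical upstream model
            messages=[{"role": "user", "content": prompt}],
        )
        completions.append(resp.choices[0].message.content)
    # The output column should match the Target name entered in the workshop;
    # "resultText" is a placeholder.
    return pd.DataFrame({"resultText": completions})
```

For this sketch, `requirements.txt` would list `openai` and `pandas`, and `model-metadata.yaml` would declare the runtime parameter carrying the API key.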

## Create and deploy an external generative model

External model packages allow you to register and deploy external generative models. You can use the [monitoring agent](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/mlops-agent/index.html) to access MLOps monitoring capabilities with these model types.

To create and deploy a model package for an external generative model:

1. Click **Model Registry** and, on the **Registered Models** tab, click **Add new package** and select **New external model package**.
2. In the **Register new external model** dialog box, from the **Prediction type** list, click **Text generation** and [add the required information](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/ext-model-prep/ext-model-reg.html) about the agent-monitored generative model. To provide a training baseline for drift monitoring, in the **Training data** field, you should upload a dataset containing at least 20 rows of prompts and responses relevant to the topic your generative model is intended to answer questions about. These prompts and responses can be taken from documentation, manually created, or generated.
3. After you define all fields for the model package, click **Register**. The package is registered in the **Model Registry** and is available for use.
4. From the **Model Registry > Registered Models** tab, [locate and deploy the generative model](https://docs.datarobot.com/en/docs/classic-ui/mlops/deployment/registry/reg-deploy.html).
5. Add deployment information and complete the deployment.
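
For the agent-monitored path, a hedged sketch of reporting an external LLM's inputs and outputs with the `datarobot-mlops` Python library follows. The spooler path and IDs are placeholders, and the exact spooler configuration depends on how your monitoring agent is set up.

```python
# Sketch: report an external generative model's predictions to DataRobot
# through the monitoring agent spool (datarobot-mlops library).
import pandas as pd
from datarobot.mlops.mlops import MLOps

mlops = (
    MLOps()
    .set_deployment_id("your-deployment-id")
    .set_model_id("your-model-id")
    .set_filesystem_spooler("/tmp/mlops-spool")  # agent reads this directory
    .init()
)

features = pd.DataFrame({"promptText": ["Summarize our refund policy."]})
outputs = ["Refunds are issued within 30 days of purchase."]

# (number of predictions, execution time in milliseconds)
mlops.report_deployment_stats(1, 840)
mlops.report_predictions_data(
    features_df=features,
    predictions=outputs,
    association_ids=["req-0001"],  # enables custom-metric joins
)
mlops.shutdown()
```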

## Monitor a deployed generative model

To monitor a generative model in production, you can view [service health](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/service-health.html) and [usage](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/deploy-usage.html) statistics, export [deployment data](https://docs.datarobot.com/en/docs/api/reference/sdk/data-exploration.html), create [custom metrics](https://docs.datarobot.com/en/docs/api/reference/sdk/custom-metrics.html), and identify [data drift](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html).

**Service Health:**

![Service Health](https://docs.datarobot.com/en/docs/images/text-generation-service-health.png)

**Usage:**

![Usage](https://docs.datarobot.com/en/docs/images/text-generation-usage.png)

**Data Exploration:**

![Data Exploration](https://docs.datarobot.com/en/docs/images/text-generation-data-export.png)

**Custom Metrics:**

![Custom Metrics](https://docs.datarobot.com/en/docs/images/text-generation-custom-metrics.png)

**Data Drift:**

![Data Drift](https://docs.datarobot.com/en/docs/images/text-generation-data-drift.png)
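
Beyond the UI, these statistics are also reachable programmatically. Here is a short sketch with the DataRobot Python client (`datarobot` package); the deployment ID and endpoint are placeholders.

```python
# Sketch: pull monitoring statistics for a deployed generative model.
import datarobot as dr

dr.Client(token="your-api-token", endpoint="https://app.datarobot.com/api/v2")
deployment = dr.Deployment.get("your-deployment-id")

# Service health metrics: request volume, latency, error rates, and so on.
stats = deployment.get_service_stats()
print(stats.metrics)

# Per-feature data drift, e.g., for the prompt and completion text features.
for feature in deployment.get_feature_drift():
    print(feature.name, feature.drift_score)
```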


### Data drift for generative models

To monitor drift in a generative model's prediction data, DataRobot compares new prompts and responses to the prompts and responses in the training data you uploaded during model creation. To provide an adequate training baseline for comparison, the uploaded training dataset should contain at least 20 rows of prompts and responses relevant to the topic your model is intended to answer questions about. These prompts and responses can be taken from documentation, manually created, or generated.
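
The page doesn't state the exact drift metric used for text features, but as an illustration only, a population-stability-index-style comparison of token frequencies shows why a representative baseline of at least 20 prompt/response rows matters: the score grows as the scoring-period vocabulary diverges from the training baseline.

```python
# Illustration only: a PSI-style drift score over token frequencies.
# This is not DataRobot's documented algorithm.
import math
from collections import Counter

def token_distribution(texts, eps=1e-6):
    counts = Counter(tok for text in texts for tok in text.lower().split())
    total = sum(counts.values())
    return {tok: (n + eps) / (total + eps) for tok, n in counts.items()}

def psi(baseline_texts, scoring_texts, eps=1e-6):
    base = token_distribution(baseline_texts, eps)
    new = token_distribution(scoring_texts, eps)
    return sum(
        (new.get(tok, eps) - base.get(tok, eps))
        * math.log(new.get(tok, eps) / base.get(tok, eps))
        for tok in set(base) | set(new)
    )

baseline = ["how do i reset my password", "what is the refund policy"]
scoring = ["why does the app crash on startup", "how do i reset my password"]
print(f"PSI-style drift: {psi(baseline, scoring):.3f}")
```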

On the Data Drift tab for a generative model, you can view the [Feature Drift vs. Feature Importance](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html#feature-drift-vs-feature-importance-chart), [Feature Details](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/generative-model-monitoring.html#feature-details-for-generative-models), and [Drift Over Time](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html#drift-over-time-chart) charts.

To learn how to adjust the Data Drift dashboard to focus on the model, time period, or feature you're interested in, see the [Configure the Data Drift dashboard](https://docs.datarobot.com/en/docs/classic-ui/mlops/monitor/data-drift.html#configure-the-data-drift-dashboard) documentation.

The Feature Details chart includes new functionality for text generation models, providing a word cloud visualizing differences in the data distribution for each token in the dataset between the training and scoring periods. By default, the Feature Details chart includes information about the question (or prompt) and answer (or model completion/output):

| Feature | Description |
| --- | --- |
| question | A word cloud visualizing the difference in data distribution for each user prompt token between the training and scoring periods and revealing how much each token contributes to data drift in the user prompt data. |
| answer | A word cloud visualizing the difference in data distribution for each model output token between the training and scoring periods and revealing how much each token contributes to data drift in the model output data. |

> [!NOTE] Note
> The feature names for the generative model's input and output depend on the feature names in your model's data; therefore, the question and answer features in the example above will be replaced by the names of the input and output columns in your model's data.

You can also designate other features for data drift tracking; for example, you could decide to track the model's temperature, monitoring the level of creativity in the generative model's responses from high creativity (1) to low (0).
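
Features designated this way are only monitored while drift tracking is enabled on the deployment. Here is a sketch of enabling it with the DataRobot Python client; the deployment ID is a placeholder.

```python
# Sketch: ensure target and feature drift tracking are enabled so designated
# features (e.g., a "temperature" input column) are monitored.
import datarobot as dr

dr.Client(token="your-api-token", endpoint="https://app.datarobot.com/api/v2")
deployment = dr.Deployment.get("your-deployment-id")
deployment.update_drift_tracking_settings(
    target_drift_enabled=True,
    feature_drift_enabled=True,
)
```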

To interpret the feature drift word cloud for a text feature like question or answer, hover over a user prompt or model output token to view the following details:

| Chart element | Description |
| --- | --- |
| Token | The tokenized text represented by the word in the word cloud. Text size represents the token's drift contribution and text color represents the dataset prevalence. Stop words are hidden from this chart. |
| Drift contribution | How much this particular token contributes to the feature's drift value, as reported in the Feature Drift vs. Feature Importance chart. |
| Data distribution | How much more often this particular token appears in the training data or the predictions data. Blue: This token appears X% more often in training data. Red: This token appears X% more often in predictions data. |

> [!TIP] Tip
> When your pointer is over the word cloud, you can scroll up to zoom in and view the text of smaller tokens.
