

Enable accuracy monitoring

You can monitor a deployment for accuracy using the Accuracy tab, which lets you analyze the performance of the model deployment over time, using standard statistical measures and exportable visualizations.

To enable accuracy monitoring, you need to:

  • Select an association ID for the deployment.
  • Enable target monitoring.
  • Add actuals that correspond to the deployment's predictions.

To activate the Accuracy tab for a deployment, first select an association ID, a foreign key that links predictions with future results (actuals).

Select an association ID

In the Settings > Data tab for a deployment, the Inference section has a field for the column name containing the association IDs:

The association ID functions as a foreign key for your prediction dataset so you can later match up actuals with those predictions. It corresponds to an event for which you want to track the outcome. For example, you may want to track a series of loans to see if any of them have defaulted or not.

The column name entered in the Association ID field must match the column name that contains the association IDs in the prediction dataset for your model. Each cell for this column in your prediction dataset will contain a unique ID that pairs to a corresponding unique ID in the actuals payload (the association IDs).

For example, look at this sample dataset of transactions:

The third column, transaction_num, contains your association IDs. The unique ID in each row ties together the other features in that row (transaction_amnt and annual_inc in this example) and, later, the actual outcome. By entering transaction_num in the Association ID field on the Settings tab, you allow DataRobot to use these unique IDs to match each row of prediction data and its predicted outcome with the actual outcome.
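A minimal sketch of such a prediction dataset: the column names (transaction_amnt, annual_inc, transaction_num) come from the example above, but the row values here are invented for illustration.

```python
# Hypothetical rows mirroring the sample transactions dataset: each row
# carries a unique transaction_num that serves as the association ID.
predictions = [
    {'transaction_amnt': 120.50, 'annual_inc': 65000, 'transaction_num': 'TX-0001'},
    {'transaction_amnt': 89.99, 'annual_inc': 48000, 'transaction_num': 'TX-0002'},
]

ids = [row['transaction_num'] for row in predictions]
assert len(ids) == len(set(ids))  # association IDs must be unique per row
```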

You can set the column name containing the association IDs on a deployment at any time, whether or not predictions have been made against that deployment. Once set, you can update the association ID only until predictions that include that ID have been made; after that, it cannot be changed.

Association IDs (the contents of each row in the designated column) must be 128 characters or fewer; longer values are truncated to that length. If truncation occurs, the actuals you upload must use the truncated association IDs in order to properly generate accuracy statistics.
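The practical implication for actuals uploads can be sketched as follows; the 128-character limit is from the text above, while the helper name is hypothetical.

```python
MAX_ASSOCIATION_ID_LEN = 128  # IDs longer than this are truncated by DataRobot

def truncated_id(association_id: str) -> str:
    """Return the ID as it is stored after truncation, so actuals can match it."""
    return association_id[:MAX_ASSOCIATION_ID_LEN]

long_id = 'x' * 200
# Actuals uploaded for this prediction must use the 128-character form.
stored_id = truncated_id(long_id)
```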

Association IDs for time series deployments

For time series deployments, prediction requests already contain the data needed to uniquely identify individual predictions. Even so, it is important to choose the feature used as the association ID carefully.

For single-series deployments, DataRobot recommends using the Forecast Date column as the association ID because it is the date you are making predictions for. For example, if today is June 15th, 2022 and you are forecasting daily total sales for a store, you may wish to know what the sales will be on July 15th, 2022. You will have a single actual total sales figure for this date, so you can use “2022-07-15” as the association ID (the forecast date).

For multiseries deployments, DataRobot recommends using a custom column containing Forecast Date + Series ID as the association ID. If a single model can predict daily total sales for a number of stores, then you can use, for example, the association ID “2022-07-15 1234” for sales on July 15th, 2022 for store #1234.
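One way to construct such a composite association ID, sketched in Python. The space-separated "forecast date + series ID" format follows the example above; the function name is an assumption, since DataRobot only requires that the combined value uniquely identify a forecast point per series.

```python
from datetime import date

def multiseries_association_id(forecast_date: date, series_id: str) -> str:
    """Combine a forecast date and a series ID into one association ID."""
    return '{} {}'.format(forecast_date.isoformat(), series_id)

# Sales on July 15th, 2022 for store #1234:
multiseries_association_id(date(2022, 7, 15), '1234')  # → '2022-07-15 1234'
```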

For all time series deployments, you may want to forecast the same date multiple times as the date approaches. For example, you might forecast daily sales 30 days in advance, then again 14 days in advance, and again 7 days in advance. These forecasts all have the same forecast date, and therefore the same association ID.

Be aware that models may produce different forecasts when predicting closer to the forecast date. Predictions for multiple forecast distances are each tracked individually so that accuracy can be properly calculated for each forecast distance.
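The forecast-distance idea can be made concrete with a small sketch; the sales figures are invented, and the arithmetic only illustrates how forecasts made at different distances share one association ID.

```python
from datetime import date

# Hypothetical example: daily sales forecasts for the same forecast date
# (so the same association ID), made 30, 14, and 7 days in advance.
forecast_date = date(2022, 7, 15)
forecasts = [
    {'made_on': date(2022, 6, 15), 'predicted_sales': 1050.0},
    {'made_on': date(2022, 7, 1), 'predicted_sales': 980.0},
    {'made_on': date(2022, 7, 8), 'predicted_sales': 1010.0},
]

# Forecast distance is the gap between when the prediction was made and the
# date being predicted; accuracy is tracked separately for each distance.
for f in forecasts:
    f['forecast_distance_days'] = (forecast_date - f['made_on']).days

distances = [f['forecast_distance_days'] for f in forecasts]  # → [30, 14, 7]
```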

Enable target monitoring

In order to enable accuracy monitoring, you must also enable target monitoring in the Inference section of the Data tab:

If target monitoring is turned off, a message displays on the Accuracy tab to remind you to enable target monitoring.

Add actuals

You can directly upload datasets containing actuals to a deployment from the Settings > Data tab (described here) or from the API. The deployment's prediction data must correspond to the actuals data you upload. Review the row limits for uploading actuals before proceeding.

  1. To use actuals with your deployment, click Choose file under the Actuals header. Either upload a file directly or select the file from the AI Catalog. If you upload a local file, it is added to the AI Catalog after uploading.

    Actuals must be snapshotted to use them with a deployment. Confirm that the snapshot operation was successful before proceeding. (If it was not, you will see a yellow “Not snapshotted” warning.)

  2. Once uploaded, complete the fields that populate in the Actuals section. Each field has a dropdown menu that allows you to select any of the columns from your dataset.

    | Field | Description |
    |-------|-------------|
    | Actual Response | The column in your dataset that contains the actual values. |
    | Association ID | The column that contains the association IDs. |
    | Was acted on (optional) | The column that indicates whether the prediction was acted on in a way that could have affected the actual outcome (the values for rows in this column must be "yes" or "no"). For example, if a hospital patient is predicted to be readmitted within 30 days, extra procedures or new medication might be given to mitigate this problem, influencing the actual outcome of the prediction. In this case, the Was acted on field would use the "Mitigation" column from the dataset, which contains "yes" or "no" values. |
    | Timestamp (optional) | The column containing the date/time when the actual values were obtained, formatted according to RFC 3339 (for example, 2018-04-12T23:20:50.52Z). |


Note that the column names for the association ID in the prediction and actuals datasets do not need to match. The only requirement is that each dataset contain a column whose identifiers match those in the other dataset. For example, if the column "event id" contains the association ID in the prediction dataset that you will use to identify a row and match it to an actual result, enter "event id" as the association ID in the Inference section. Then, in the Actuals section's Association ID field, enter the column name in the actuals dataset that contains the matching identifiers ("associationId" in this example).
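A small sketch of that matching logic, using the hypothetical column names "event id" and "associationId" from the paragraph above and invented values: the column names differ, but the identifier values line up.

```python
# Prediction dataset uses "event id"; actuals dataset uses "associationId".
predictions = [
    {'event id': 'EV-1', 'prediction': 0.82},
    {'event id': 'EV-2', 'prediction': 0.31},
]
actuals = [
    {'associationId': 'EV-1', 'actualValue': 1},
    {'associationId': 'EV-2', 'actualValue': 0},
]

# Join on the identifier values, not the column names.
actual_by_id = {row['associationId']: row['actualValue'] for row in actuals}
matched = [
    {'id': p['event id'], 'prediction': p['prediction'],
     'actual': actual_by_id[p['event id']]}
    for p in predictions
]
```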

Once complete, click Save 1 change at the top of the page. After doing this and making predictions with a dataset containing the association ID entered in the Inference section, the Accuracy page is enabled for your deployment.

Upload actuals with the API

This workflow outlines how to enable the Accuracy tab for deployments using the DataRobot API.

  1. From the Settings > Data tab, access the Inference section.

  2. In the field Association ID, enter the column name containing the association IDs in your prediction dataset.

  3. Enable Require association ID in prediction requests. This requires your prediction dataset to have a column name that matches the name you entered in the Association ID field. You will get an error if the column is missing.


    Note that you can set an association ID without enabling this setting if you are sending prediction requests that do not include the association ID and do not want them to fail. However, until the setting is enabled, you cannot monitor accuracy for your deployment.

  4. Make predictions using a dataset that includes the association ID.

  5. Submit the actual values via the DataRobot API (for details, refer to the API documentation by signing in to DataRobot, clicking the question mark on the upper right, and selecting API Documentation; in the API documentation, select Deployments > Submit Actuals - JSON).

    Note that the actuals payload must contain the column names associationId and actualValue. Use the optional column wasActedOn to indicate if the prediction was acted on in a way that could have affected the actual outcome. If you submit multiple actuals with the same association ID value, either in the same or a subsequent request, DataRobot uses the latest actuals value. Review the row limits for uploading actuals before proceeding.

    Use the following snippet to submit the actual values through the API:

    import requests

    API_TOKEN = ''
    USERNAME = ''
    DEPLOYMENT_ID = '5cb314xxxxxxxxxxxa755'
    LOCATION = ''

    def submit_actuals(data, deployment_id):
        """POST an actuals payload to the deployment's fromJSON endpoint."""
        headers = {
            'Content-Type': 'application/json',
            'Authorization': 'Token {}'.format(API_TOKEN),
        }
        url = '{location}/api/v2/deployments/{deployment_id}/actuals/fromJSON/'.format(
            deployment_id=deployment_id, location=LOCATION
        )
        resp = requests.post(url, json=data, headers=headers)
        if resp.status_code >= 400:
            raise RuntimeError(resp.content)
        return resp.content

    def main():
        deployment_id = DEPLOYMENT_ID
        payload = {
            'data': [
                {
                    'actualValue': 1,
                    'associationId': '5d8138fb9600000000000000',  # str
                    'wasActedOn': False,  # optional bool
                },
                {
                    'actualValue': 0,
                    'associationId': '5d8138fb9600000000000001',
                    'wasActedOn': False,
                },
            ]
        }
        submit_actuals(payload, deployment_id)

    if __name__ == "__main__":
        main()

After submitting at least 100 actuals for a non-time series deployment (there is no minimum for time series deployments) and making predictions with corresponding association IDs, the Accuracy tab becomes available for your deployment.

Actuals upload limit

The number of actuals you can upload to a deployment is limited per request and per hour. These limits vary depending on the endpoint used:

The fromJSON endpoint:

  • 10,000 rows per request
  • 10,000,000 rows per hour

The fromDataset endpoint:

  • 5,000,000 rows per request
  • 10,000,000 rows per hour
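When submitting a large set of actuals through the fromJSON endpoint, you would need to split it into requests of at most 10,000 rows each. A minimal sketch, assuming the per-request limit above (the helper name is hypothetical, and the per-hour limit still applies across batches):

```python
FROM_JSON_ROW_LIMIT = 10_000  # fromJSON per-request limit

def chunk_actuals(rows, limit=FROM_JSON_ROW_LIMIT):
    """Yield successive batches of actuals that each fit one fromJSON request."""
    for start in range(0, len(rows), limit):
        yield rows[start:start + limit]

rows = [{'associationId': str(i), 'actualValue': 0} for i in range(25_000)]
batches = list(chunk_actuals(rows))  # → 3 batches: 10,000 + 10,000 + 5,000 rows
```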

Updated May 31, 2022