Prediction API snippets
DataRobot provides sample Python code containing the commands and identifiers needed to submit a CSV or JSON file for scoring. You can use this code with the Prediction API.
You can also read below for more information on:
- Disabling data drift tracking for individual prediction requests
- Using the monitoring snippet with deployments
To use the Prediction API Scripting Code, follow the sample and make the changes necessary to integrate the model, via the API, into your production application.
Copy the sample code and modify as necessary. The following table describes some of the relevant elements on the page:
|Content|Description|In actual code...|
|---|---|---|
|Prediction type (1)|Determines the prediction method used (single or batch jobs). The snippet updates accordingly.|Click the format to toggle.|
|Interface (2)|Determines the type of prediction script to make. Either: CLI, a standalone script using the DataRobot API Client; API Client, an example snippet using the Python-based DataRobot API Client; or HTTP, an example snippet using raw Python HTTP requests.|Click the format to toggle.|
|The code overview screen (3)|Displays the code to download and run on your local machine.|Edit to fit your needs.|
To deploy the code, copy the sample and modify it as needed for your environment.
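For orientation, the raw-HTTP variant can be sketched with only the standard library. The base URL, deployment ID, credentials, and CSV payload below are placeholders, and the endpoint path is an assumption modeled on the generated snippet; copy the real values from the sample code DataRobot provides (managed cloud deployments may also require additional headers):

```python
import urllib.request

def build_prediction_request(base_url, deployment_id, api_key, csv_bytes):
    """Assemble (but do not send) a CSV scoring request for a deployment."""
    # Endpoint path assumed to follow the generated snippet's pattern.
    url = f"{base_url}/predApi/v1.0/deployments/{deployment_id}/predictions"
    headers = {
        "Content-Type": "text/plain; charset=UTF-8",
        "Authorization": f"Bearer {api_key}",  # DataRobot API key
    }
    return urllib.request.Request(url, data=csv_bytes, headers=headers, method="POST")

# Placeholder values; copy the real ones from your generated snippet.
req = build_prediction_request(
    "https://example.datarobot.com", "DEPLOYMENT_ID", "API_KEY",
    b"feature_1,feature_2\n1.0,2.0\n",
)
# urllib.request.urlopen(req) would submit the file for scoring.
```

Building the request separately from sending it keeps credentials and payload handling easy to test before wiring the call into a production application.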
Disable data drift
You can disable data drift tracking for individual prediction requests by applying a unique header to the request. This can be useful, for example, when you are scoring synthetic data that has no real-world consequences.
Insert the header `X-DataRobot-Skip-Drift-Tracking: 1` into the request snippet. For example:

```python
headers['X-DataRobot-Skip-Drift-Tracking'] = '1'
requests.post(url, auth=(USERNAME, API_KEY), data=data, headers=headers)
```
Once the header is applied, drift is not calculated for the request; however, service stats (data errors, system errors, execution time, and more) are still provided.
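As a sketch, the opt-out can be wrapped in a small helper so the header is applied consistently. The function name and placeholder values here are illustrative, not part of the DataRobot snippet:

```python
def build_scoring_headers(api_key, skip_drift=False):
    """Build request headers, optionally opting out of drift tracking."""
    headers = {
        "Content-Type": "text/plain; charset=UTF-8",
        "Authorization": f"Bearer {api_key}",  # DataRobot API key
    }
    if skip_drift:
        # Drift is not calculated for this request; service stats still are.
        headers["X-DataRobot-Skip-Drift-Tracking"] = "1"
    return headers

headers = build_scoring_headers("API_KEY", skip_drift=True)
# requests.post(url, data=data, headers=headers) would then score the data
# without contributing to the deployment's drift statistics.
```

Making the opt-out an explicit flag helps prevent synthetic-data test traffic from silently polluting drift metrics when the same code path also serves real predictions.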
Monitoring snippets

When you create an external model deployment, you are notified that the deployment requires monitoring snippets to report deployment statistics with the MLOps agent.
You can follow the link at the bottom of the page, or navigate to Predictions > Monitoring for your deployment to view the snippet:
The monitoring snippet configures your MLOps library to send a model's statistics to DataRobot MLOps and represent those statistics in the deployment. Use this functionality to report Scoring Code metrics back to your deployment.
To instrument your Scoring Code with a deployment, select the Java language and copy the snippet to your clipboard when you are ready to use it. For further instructions, reference the Quick Start guide available in the MLOps agent internal documentation.
If you have not yet configured the MLOps agent to monitor your deployment, a download of the MLOps agent tarball is available from a link in the Monitoring tab. Additional documentation for setting up the agent is included in the tarball.