DataRobot offers several methods for getting predictions on new data (also known as scoring) from a model. The following sections describe each of them:
- Using the Prediction API
- Using Scoring Code
- Using batch scoring
- Using the UI to make predictions
- Using the Portable Prediction Server

Note that DataRobot’s exportable model and independent prediction environment option, which lets you export a model from the model-building environment to a dedicated, isolated prediction environment, is not available for Managed AI Cloud deployments.
Predictions with the UI¶
The simplest method for making predictions is to score data through the UI. This option works well, for example, for analysts who build quarterly reports in Excel: upload new data to score, let DataRobot make predictions, and then download the results.
Prediction server to deploy a model¶
You can use a DataRobot prediction server with a REST API for a more automated, real-time scoring method. This method integrates easily with other IT systems, applications, or code that queries a DataRobot model and returns predictions. The prediction server can be hosted in both cloud and on-premise environments. You can also use the Portable Prediction Server, which runs outside of DataRobot, disconnected from the main installation environment.
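As a minimal sketch of querying a prediction server over REST: the snippet below builds a scoring request with only the standard library. The host, deployment ID, token, endpoint path, and feature names are placeholders, not values from this document; check your deployment's integration tab for the exact URL and required headers.

```python
import json
import urllib.request

# Hypothetical values -- replace with your own deployment details.
API_HOST = "https://example.datarobot.com"
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"
API_TOKEN = "YOUR_API_TOKEN"

def build_prediction_request(rows):
    """Build an HTTP request that sends rows of feature data for scoring.

    The endpoint path and headers are illustrative; confirm them against
    your own deployment's integration snippet.
    """
    url = f"{API_HOST}/predApi/v1.0/deployments/{DEPLOYMENT_ID}/predictions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    }
    body = json.dumps(rows).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers)

# Usage (requires network access and valid credentials):
# request = build_prediction_request([{"feature_1": 1.0, "feature_2": "a"}])
# with urllib.request.urlopen(request) as response:
#     predictions = json.load(response)
```

Because the request is plain HTTP, the same pattern works from any system that can make REST calls, which is what makes this method easy to embed in existing applications.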
Scoring Code to deploy a model¶
You can export Scoring Code from DataRobot in Java or Python to make predictions. Scoring Code is portable and executable in any computing environment. This method is useful for low-latency applications that cannot fully support REST API performance or lack network access.
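Exported Scoring Code in Java is typically packaged as a self-contained JAR that can score files locally, with no network access. The sketch below assembles such a batch-scoring command from Python; the `csv --input/--output` arguments are an assumption about the JAR's command-line interface, so verify the exact flags against the documentation shipped with your exported model.

```python
import subprocess

def score_with_jar(jar_path, input_csv, output_csv):
    """Build the command line for batch scoring with an exported Scoring Code JAR.

    The subcommand and flag names here are illustrative; check the usage
    notes bundled with your exported JAR for the exact interface.
    """
    return [
        "java", "-jar", jar_path,
        "csv",
        f"--input={input_csv}",
        f"--output={output_csv}",
    ]

# Usage (requires a Java runtime and an exported model JAR):
# subprocess.run(
#     score_with_jar("model.jar", "new_data.csv", "predictions.csv"),
#     check=True,
# )
```

Because everything runs in-process on the local machine, this approach suits the low-latency and air-gapped scenarios mentioned above.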
Make predictions and monitor model health¶
Whichever of the methods above you use, DataRobot lets you deploy a model and monitor its prediction output and performance over a selected time period.
A critical part of the model management process is identifying when a model starts to deteriorate and addressing it quickly. Once trained, models make predictions on new data that you provide. However, prediction data changes over time—businesses expand to new cities, new products enter the market, policies or processes change. Any of these changes can result in data drift, the term for newer data moving away from the original training data, which can degrade prediction performance over time.
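To make "data moving away from the training data" concrete, here is a small sketch of the Population Stability Index (PSI), a common drift measure. It is offered only as an illustration of the concept, not as the metric DataRobot computes internally; the bin proportions and the ~0.2 threshold are conventional assumptions.

```python
import math

def population_stability_index(expected, actual):
    """Population Stability Index, a common measure of data drift.

    `expected` and `actual` are lists of bin proportions (each summing to 1)
    for a feature in the training data and in new prediction data. A rough
    rule of thumb reads PSI above ~0.2 as significant drift.
    """
    eps = 1e-6  # avoid log(0) for empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Identical distributions give a PSI near zero; a shifted distribution
# (e.g., a feature skewing toward higher bins) gives a larger PSI.
baseline = [0.25, 0.25, 0.25, 0.25]
shifted = [0.10, 0.20, 0.30, 0.40]
print(round(population_stability_index(baseline, baseline), 4))  # ~0.0
print(round(population_stability_index(baseline, shifted), 4))
```

Tracking a statistic like this per feature over time is one simple way to see the drift that deployment dashboards surface automatically.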
Use the deployment dashboard to analyze a model's performance metrics: prediction response time, model health, accuracy, data drift analysis, and more. When a model deteriorates, the most common response is to train a replacement. Deployments let you swap in the new model without re-deploying, so your integration code stays unchanged and DataRobot can track and represent the entire history of models used for a particular use case.