Deploy custom inference models
While you can deploy your custom inference model to an environment without testing, DataRobot strongly recommends your model pass testing before deployment.
To deploy a custom inference model:
Navigate to Model Registry > Custom Model Workshop > Models and select the model you want to deploy.
In the Assemble tab, click the Deploy link in the middle of the screen.
If your model is not tested, you are prompted to Test now or to Deploy package without testing. DataRobot recommends verifying that your model can make predictions before deploying it.
After uploading your model, you are directed to the deployment information page. Most information for your custom model is automatically provided.
Under the Model header, provide functional validation data. This data is a partition of the model's training data and is used to evaluate model performance.
Once a custom inference model is deployed, it can make predictions using API calls to a dedicated prediction server managed by DataRobot. You can find more information about using the prediction API in the Predictions documentation.
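As a rough sketch of such an API call, the snippet below assembles a prediction request with the `requests`-style URL, headers, and JSON body that DataRobot prediction servers expect. The server URL, deployment ID, and keys are hypothetical placeholders; check your deployment's Predictions tab for the exact endpoint and required headers.

```python
import json

# Hypothetical placeholders -- substitute the values shown on your
# deployment's Predictions tab.
API_KEY = "YOUR_API_KEY"
DATAROBOT_KEY = "YOUR_DATAROBOT_KEY"  # required on DataRobot-managed prediction servers
PREDICTION_SERVER = "https://your-prediction-server.example.com"
DEPLOYMENT_ID = "YOUR_DEPLOYMENT_ID"


def build_prediction_request(rows):
    """Assemble the URL, headers, and JSON body for a prediction call.

    `rows` is a list of dicts, one per record to score, keyed by feature name.
    """
    url = (
        f"{PREDICTION_SERVER}/predApi/v1.0/deployments/"
        f"{DEPLOYMENT_ID}/predictions"
    )
    headers = {
        "Content-Type": "application/json; charset=UTF-8",
        "Authorization": f"Bearer {API_KEY}",
        "DataRobot-Key": DATAROBOT_KEY,
    }
    return url, headers, json.dumps(rows)


# To send the request (requires the third-party `requests` package):
#   import requests
#   url, headers, body = build_prediction_request([{"feature_1": 1.0}])
#   response = requests.post(url, data=body, headers=headers)
#   response.raise_for_status()
#   print(response.json())
```

Keeping the request assembly separate from the network call makes it easy to inspect or log the exact payload when troubleshooting failed prediction requests.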
When you deploy a custom model, it generates log reports unique to this type of deployment, allowing you to debug custom code and troubleshoot prediction request failures from within DataRobot.
To view the logs for a deployed model, navigate to the deployment, open the actions menu, and select View Logs.
You can access two types of logs:
Runtime logs are used to troubleshoot failed prediction requests (via the Predictions tab or the API). The logs are captured from the Docker container running the deployed custom model and contain up to 1 MB of data. The logs are cached for 5 minutes after you make a prediction request. You can re-request the logs by clicking Refresh.
Deployment logs are automatically captured if the custom model fails while deploying. The logs are stored permanently as part of the deployment.
Note that DataRobot only provides logs from inside the Docker container in which the custom model runs. If a custom model fails to deploy or fails to execute a prediction request because of an error outside the container, no logs are available.
Use the Search bar to find specific references within the logs. Click Download Log to save a local copy of the logs.