After you create and configure a deployment, you can use the settings tabs for individual features to add or update deployment functionality:
| Feature | Description |
|---------|-------------|
| Set up service health monitoring | Enable segmented analysis to assess service health, data drift, and accuracy statistics, filtered by unique segment attributes and values. |
| Set up data drift monitoring | Enable data drift monitoring on a deployment's Data Drift Settings tab. |
| Set up accuracy monitoring | Enable accuracy monitoring on a deployment's Accuracy Settings tab. |
| Set up fairness monitoring | Enable fairness monitoring on a deployment's Fairness Settings tab. |
| Set up humility rules | Enable humility monitoring by creating rules that allow models to recognize, in real time, when they make uncertain predictions or receive data they have not seen before. |
| Configure retraining | Enable Automated Retraining for a deployment by defining the general retraining settings and then creating retraining policies. |
| Configure challengers | Enable challenger comparison by configuring a deployment to store prediction request data at the row level and replay predictions on a schedule. |
| Review predictions settings | Review the Predictions Settings tab to view details about your deployment's inference data. |
| Enable data export | Enable data export to compute and monitor custom business or performance metrics. |
| Set up custom metrics monitoring | Enable custom metrics monitoring by defining the "at risk" and "failing" thresholds for the custom metrics you created. |
| Set prediction intervals for time series deployments | Enable prediction intervals in the prediction response for deployed time series models. |
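Settings like these are typically also adjustable programmatically rather than through the UI tabs alone. The sketch below is illustrative only: it builds a settings payload for enabling drift and accuracy monitoring, and the key names, endpoint path, `API_BASE`, `DEPLOYMENT_ID`, and `API_TOKEN` are all assumptions, not documented names from this product's API.

```python
import json

def build_settings_payload(enable_drift=True, enable_accuracy=True,
                           enable_challengers=False):
    """Assemble a hypothetical deployment-settings payload.

    Key names here are illustrative stand-ins, not the documented
    API schema; consult the product's API reference for real fields.
    """
    return {
        "targetDrift": {"enabled": enable_drift},
        "featureDrift": {"enabled": enable_drift},
        "accuracy": {"enabled": enable_accuracy},
        "challengerModels": {"enabled": enable_challengers},
    }

payload = build_settings_payload(enable_drift=True, enable_accuracy=False)
print(json.dumps(payload, indent=2))

# A real update would PATCH this payload to the deployment's settings
# endpoint, e.g. (hypothetical URL and auth header):
#   requests.patch(f"{API_BASE}/deployments/{DEPLOYMENT_ID}/settings/",
#                  headers={"Authorization": f"Bearer {API_TOKEN}"},
#                  json=payload)
```

Batching the toggles into one payload mirrors how the settings tabs group related options: drift, accuracy, and challenger settings are independent switches that can be updated together in a single request.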
Updated August 1, 2023