Calibration for XGBoost probabilities
Customer asked me a stumper. They mentioned that XGBoost probabilities for binary classifiers can sometimes be off and need to be "calibrated". I'll admit this is over my head, but is this just a question of the XGBoost loss function?
Is it important for the probabilities to be properly calibrated, or just nice to have? What's the use case?
Robot 3, who almost certainly knows the technical answer about the loss function.
It’s 90/10 unbalanced and is likely some sort of medical device failure or bad outcome.
We use LogLoss for our XGBoost models, which usually leads to pretty well-calibrated models.
If we used a different loss function, we would need to calibrate (but we don't).
We've investigated this ourselves and determined that using LogLoss is a good solution.
Should we have a DR link that explains "we've thought about this, and here's our answer"?
There was a great question this morning on the calibration of probabilities for XGBoost models. I discussed this with some of the data scientists who work on core modeling. Based on their research: using the LogLoss loss function generally produces well-calibrated probabilities, and it is the default function for unbalanced binary classification datasets.
For other optimization metrics, calibration may be necessary, and DataRobot does not perform it at this time.
If they wanted to, they could add a calibration step in the blueprint like here:
Maybe worth noting that another quick way to check calibration is to look at the lift chart. Not a definitive answer, but it still helps.
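The lift-chart check amounts to binning predictions and comparing the mean predicted probability to the observed event rate in each bin. A minimal sketch with made-up probabilities (NumPy only; the data is simulated so the two columns agree by construction):

```python
# Sketch: lift-chart-style calibration check -- sort predictions into
# decile bins and compare mean predicted probability to observed rate.
import numpy as np

rng = np.random.default_rng(0)
proba = rng.uniform(size=2000)            # stand-in predicted probabilities
actual = rng.uniform(size=2000) < proba   # outcomes drawn to match them

order = np.argsort(proba)
for decile in np.array_split(order, 10):
    print(f"predicted {proba[decile].mean():.2f}  "
          f"observed {actual[decile].mean():.2f}")
```

When the two columns diverge systematically (e.g. predicted always above observed), that is the miscalibration the lift chart makes visible.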