
Calibration for XGBoost probabilities

Robot 1

A customer asked me a stumper. They mentioned that XGBoost probabilities for binary classifiers can sometimes be off base and need to be “calibrated”. I’ll admit this is over my head, but is this just about the XGBoost loss function?

Robot 2

Is it important for the probabilities to be properly calibrated, or just good to have? What's the use case? CC @Robot 3, who almost certainly knows the technical answer about the loss function.

Robot 1

It’s a 90/10 unbalanced dataset, and the target is likely some sort of medical device failure or bad outcome.

Robot 3

We use LogLoss for our XGBoost models, which usually leads to pretty well-calibrated probabilities.

If we used a different loss function, we would need to calibrate (but we don’t).

We’ve investigated this ourselves and determined that using LogLoss is a good solution.
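
For illustration outside of DataRobot, here is a minimal sketch using the open-source xgboost package: the binary:logistic objective minimizes log loss, so the raw predicted probabilities tend to come out reasonably well calibrated. The data below is synthetic and only meant to mimic a 90/10 class balance.

```python
# Minimal sketch: XGBoost binary classifier trained on the log loss objective.
# Synthetic data, chosen only to mimic a 90/10 unbalanced target.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(
    n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# binary:logistic minimizes log loss; eval_metric="logloss" tracks it during training.
model = XGBClassifier(
    objective="binary:logistic",
    eval_metric="logloss",
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
)
model.fit(X_train, y_train)

# Predicted probabilities for the positive (minority) class.
proba = model.predict_proba(X_test)[:, 1]
```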

Robot 2

Should we have a DR link that explains "we've thought about this, and here's our answer"?

Robot 1

How about:

There was a great question this morning about the calibration of probabilities for XGBoost models. I discussed this with some of the data scientists who work on core modeling. Based on their research on this issue, using the LogLoss loss function generally produces well-calibrated probabilities, and this is the default loss function for unbalanced binary classification datasets.

For other optimization metrics, calibration may be necessary and is not done by DataRobot at this time.
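
If a different optimization metric were used, one way to add a post-hoc calibration step outside of DataRobot is scikit-learn's CalibratedClassifierCV, sketched below. It reuses the train/test split from the sketch above; "sigmoid" is Platt scaling and "isotonic" is isotonic regression.

```python
# Minimal sketch of post-hoc calibration with scikit-learn.
# Assumes X_train, y_train, X_test from the previous sketch.
from sklearn.calibration import CalibratedClassifierCV
from xgboost import XGBClassifier

base = XGBClassifier(n_estimators=200, max_depth=4)

# Fit the base model on cross-validation folds and learn an isotonic
# mapping from its raw scores to calibrated probabilities.
calibrated = CalibratedClassifierCV(base, method="isotonic", cv=5)
calibrated.fit(X_train, y_train)

calibrated_proba = calibrated.predict_proba(X_test)[:, 1]
```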

Robot 4

If they wanted to, they could add a calibration step in the blueprint like here:

Robot 5

Maybe worth noting that another quick way to check calibration is to look at the lift chart. It's not the 100% answer, but it still helps.
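
A more direct check than the lift chart is a reliability diagram: bin the predictions and compare the mean predicted probability in each bin with the observed event rate. Here is a minimal sketch with scikit-learn's calibration_curve, reusing y_test and proba from the first sketch above.

```python
# Minimal sketch of a reliability diagram (calibration curve).
# Assumes y_test and proba from the first sketch.
from sklearn.calibration import calibration_curve
import matplotlib.pyplot as plt

frac_pos, mean_pred = calibration_curve(y_test, proba, n_bins=10)

plt.plot(mean_pred, frac_pos, marker="o", label="XGBoost")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfectly calibrated")
plt.xlabel("Mean predicted probability")
plt.ylabel("Observed fraction of positives")
plt.legend()
plt.show()
```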


Updated September 18, 2023