The Cross-Class Accuracy tab calculates, for each protected feature, evaluation metrics and ROC curve-related scores segmented by class. Use these metrics to better understand how well the model performs, and how it behaves, on a given protected feature/class segment.
Cross-Class Accuracy table¶
Use the Cross-Class Accuracy table to understand the model's accuracy performance for each protected class. Change the protected feature using the dropdown at the top.
The table below describes each accuracy metric:
| Metric | Description |
| --- | --- |
| Optimization metric (LogLoss in this example) | Displays the optimization metric selected on the Data page before model building. |
| F1 | Reports the model's accuracy score, computed from precision and recall. |
| AUC (Area under the curve) | Measures how well the model can distinguish between classes. |
| Accuracy | Measures the percentage of correctly classified instances. |
The example above compares LogLoss (the project's optimization metric) between male and female. Because LogLoss is an error metric (lower is better), the lower score for females means the model predicts salary rate more accurately for females than for males.
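To make the comparison concrete, the per-class scores in the table can be reproduced outside the UI by segmenting predictions on the protected feature and scoring each segment separately. The sketch below is illustrative, not the product's internal implementation: the column names (`sex`, `actual`, `pred_proba`) and the synthetic data are assumptions, and a real project would use the model's actual predictions.

```python
# Hedged sketch: computing per-class accuracy metrics for one protected
# feature, using synthetic data in place of real model output.
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score, log_loss, roc_auc_score

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "sex": rng.choice(["Female", "Male"], size=n),      # protected feature (assumed name)
    "actual": rng.integers(0, 2, size=n),               # true binary labels
})
# Toy predicted probabilities that loosely track the labels; a real
# project would take these from the trained model.
df["pred_proba"] = np.clip(
    df["actual"] * 0.6 + rng.uniform(0.0, 0.4, size=n), 0.01, 0.99
)
df["pred_label"] = (df["pred_proba"] >= 0.5).astype(int)

# Score each protected-class segment separately, mirroring the
# Cross-Class Accuracy table's per-class rows.
for value, seg in df.groupby("sex"):
    print(
        value,
        f"LogLoss={log_loss(seg['actual'], seg['pred_proba']):.3f}",
        f"F1={f1_score(seg['actual'], seg['pred_label']):.3f}",
        f"AUC={roc_auc_score(seg['actual'], seg['pred_proba']):.3f}",
        f"Accuracy={accuracy_score(seg['actual'], seg['pred_label']):.3f}",
    )
```

Comparing the printed LogLoss values across segments is the same comparison the tab makes: the segment with the lower LogLoss is the one the model predicts more accurately.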