I am training a logistic regression classifier to separate two classes using Python scikit-learn. The data is extremely imbalanced (about 14300:1). I'm getting almost 100% accuracy and ROC-AUC, but 0% in precision, recall, and F1 score. I understand that accuracy is usually not useful on very imbalanced data, but why is the ROC-AUC measure close to perfect as well?
from sklearn.metrics import roc_curve, auc

# Get the ROC curve from the classifier's decision scores
y_score = classifierUsed2.decision_function(X_test)
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_score)
roc_auc = auc(false_positive_rate, true_positive_rate)
print('AUC=', roc_auc)
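For completeness, metrics like the accuracy, classification report, and confusion matrix listed below can be obtained with scikit-learn's built-in helpers, roughly as follows (a sketch using the same classifierUsed2, X_test, y_test as above):

from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Hard class predictions at the default decision threshold
y_pred = classifierUsed2.predict(X_test)

print('Accuracy:', accuracy_score(y_test, y_pred))
print('Classification report:')
print(classification_report(y_test, y_pred))
print('Confusion matrix:')
print(confusion_matrix(y_test, y_pred))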
1 = class1
0 = class2

Class count:
0    199979
1        21

Accuracy: 0.99992

Classification report:
             precision    recall  f1-score   support
          0       1.00      1.00      1.00     99993
          1       0.00      0.00      0.00         7
avg / total       1.00      1.00      1.00    100000

Confusion matrix:
[[99992     1]
 [    7     0]]

AUC= 0.977116255281
The above is with logistic regression; below is with a decision tree. The confusion matrices look almost identical, but the AUC values are very different.
1 = class1
0 = class2

Class count:
0    199979
1        21

Accuracy: 0.99987

Classification report:
             precision    recall  f1-score   support
          0       1.00      1.00      1.00     99989
          1       0.00      0.00      0.00        11
avg / total       1.00      1.00      1.00    100000

Confusion matrix:
[[99987     2]
 [   11     0]]

AUC= 0.4999899989
One must understand the crucial difference between ROC AUC and "point-wise" metrics like accuracy, precision, etc.: ROC is a function of a threshold. Given a model (classifier) that outputs the probability of belonging to each class, we usually predict the class with the highest probability (support). However, sometimes we can get better scores by changing this rule and requiring one support to be, say, 2 times bigger than the other before we actually assign that class. This is often true for imbalanced datasets; this way you are effectively modifying the learned prior of the classes to better fit your data.

ROC looks at "what would happen if I changed this threshold to every possible value", and ROC AUC is then the integral of the resulting curve.
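To make this concrete, here is a minimal sketch of that threshold sweep (not the code from the question: it uses a synthetic make_classification dataset, a plain LogisticRegression, and arbitrarily chosen thresholds) showing how the same scores can give a high AUC while the default 0.5 cut-off yields zero precision and recall on the minority class:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic, heavily imbalanced data (roughly 1000:1) -- illustrative only
X, y = make_classification(n_samples=50000, weights=[0.999, 0.001],
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]

# Threshold-free ranking metric: can be high even if the default decision
# rule never predicts the minority class
print('AUC=', roc_auc_score(y_test, proba))

# Point-wise metrics depend entirely on the threshold you pick
for thr in (0.5, 0.1, 0.01, 0.001):
    pred = (proba >= thr).astype(int)
    print('threshold=%.3f  precision=%.3f  recall=%.3f' % (
        thr,
        precision_score(y_test, pred, zero_division=0),
        recall_score(y_test, pred, zero_division=0)))

Lowering the threshold trades precision for recall on the rare class; AUC summarizes that whole trade-off rather than the single operating point the classification report is based on.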
Consequently: