I am building a model with 12 features and {0,1} labels using logistic regression in sklearn. I need to be very confident about label 0; I am OK if some true 0s get misclassified as 1. The purpose of this is that I would like to exclude data from further processing whenever it is classified as 0.
How can I tune the parameters?
You are basically looking for specificity, which is defined as TN/(TN+FP), where TN is the number of true negatives and FP the number of false positives. You can read more about this in this blog post and in more detail here. To implement this in sklearn, wrap a custom function based on the confusion_matrix metric with make_scorer, as follows:
from sklearn.metrics import confusion_matrix
from sklearn.metrics import make_scorer

def get_TN_rate(y_true, y_pred):
    # For binary labels, confusion_matrix(...).ravel() returns tn, fp, fn, tp
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    specificity = float(tn) / (float(tn) + float(fp))
    return specificity

tn_rate = make_scorer(get_TN_rate, greater_is_better=True)
Now you can pass tn_rate as the scoring function when tuning your classifier.
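For instance, a minimal sketch of plugging the scorer into GridSearchCV to tune LogisticRegression's C (the dataset here is synthetic, generated with make_classification purely for illustration; your own 12-feature data would replace it):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, make_scorer
from sklearn.model_selection import GridSearchCV

def get_TN_rate(y_true, y_pred):
    # Specificity: fraction of actual negatives (label 0) predicted as 0
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return float(tn) / (float(tn) + float(fp))

tn_rate = make_scorer(get_TN_rate, greater_is_better=True)

# Synthetic stand-in for the real 12-feature dataset (illustration only)
X, y = make_classification(n_samples=500, n_features=12, random_state=0)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    scoring=tn_rate,  # cross-validation now maximizes specificity, not accuracy
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

Any scikit-learn tuner that accepts a scoring argument (RandomizedSearchCV, cross_val_score, etc.) can take tn_rate the same way.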