I am building a model with 12 features and {0, 1} labels using logistic regression in sklearn. I need to be very confident about label 0; I am OK if some actual '0' samples get misclassified as 1. The purpose is that I would like to exclude data from further processing if it is classified as 0.
How can I tune the parameters?
You are basically looking for specificity, which is defined as TN / (TN + FP), where TN is the number of true negatives and FP the number of false positives. You can read more about this in this blog post and in more detail here.

To implement this you can use `make_scorer` together with `confusion_matrix` from sklearn.
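A minimal sketch of how that could look, assuming 0 is the negative class (the helper name `specificity` and the grid-search example are just illustrative; `tn_rate` is the wrapped scorer):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, make_scorer
from sklearn.model_selection import GridSearchCV

def specificity(y_true, y_pred):
    # Specificity = TN / (TN + FP); assumes binary labels with 0 as the negative class
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tn / (tn + fp)

# Wrap the metric so it can be passed as a `scoring` argument
tn_rate = make_scorer(specificity, greater_is_better=True)

# Illustrative example: tune the regularisation strength of a logistic regression
# against specificity
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    scoring=tn_rate,
    cv=5,
)
# grid.fit(X, y)  # X: (n_samples, 12) feature matrix, y: {0, 1} labels
```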
Now you can use `tn_rate` as a scoring function to train your classifier.