I am new to machine learning and to scikit-learn.
My problem:
(Please correct any misconceptions.)
I have a dataset which is a big JSON file; I retrieve it and store it in a trainList variable.
I pre-process it in order to be able to work with it.
Once I have done that, I start the classification:
- I use the k-fold cross validation method in order to obtain the mean accuracy, and I train a classifier.
- I make the predictions and obtain the accuracy and the confusion matrix of that fold.
- After this, I would like to obtain the True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN) values. I would use these parameters to obtain the sensitivity and the specificity, and I would send them, together with the total of the TPs, to an HTML page in order to show a chart with the TPs of each label.
Code:
The variables I have for the moment:
trainList #It is a list with all the data of my dataset in JSON form
labelList #It is a list with all the labels of my data
Most of the method:
# Imports needed by this snippet; vec and qda are not defined in the question,
# so a DictVectorizer and a QDA classifier are assumed here
from sklearn.feature_extraction import DictVectorizer
from sklearn import preprocessing
from sklearn.model_selection import KFold
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import accuracy_score, confusion_matrix

vec = DictVectorizer()
qda = QuadraticDiscriminantAnalysis()

# I transform the data from JSON form to a numerical one
X = vec.fit_transform(trainList)
# I scale the matrix (don't know why, but without it I get an error)
X = preprocessing.scale(X.toarray())
# I generate a KFold in order to make cross validation
kf = KFold(n_splits=10, shuffle=True, random_state=1)
# I start the cross validation
for train_indices, test_indices in kf.split(X):
    X_train = [X[ii] for ii in train_indices]
    X_test = [X[ii] for ii in test_indices]
    y_train = [labelList[ii] for ii in train_indices]
    y_test = [labelList[ii] for ii in test_indices]
    # I train the classifier
    trained = qda.fit(X_train, y_train)
    # I make the predictions
    predicted = qda.predict(X_test)
    # I obtain the accuracy of this fold
    ac = accuracy_score(y_test, predicted)
    # I obtain the confusion matrix
    cm = confusion_matrix(y_test, predicted)
    # I should calculate the TP, TN, FP and FN
    # I don't know how to continue
If you have more than one class in your classifier, you might want to use pandas-ml for that part. The ConfusionMatrix of pandas-ml gives more detailed information. Check it out.
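A minimal sketch of that, assuming pandas-ml is installed and that y_test and predicted are the lists from one fold of your loop:

from pandas_ml import ConfusionMatrix

confusion = ConfusionMatrix(y_test, predicted)
# Prints the matrix itself plus per-class statistics
# (TP, TN, FP, FN, sensitivity, specificity, and so on)
confusion.print_stats()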
Here's a fix to invoketheshell's buggy code (which currently appears as the accepted answer):
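The fixed code itself is not reproduced here, but a corrected element-wise counter in that spirit, assuming binary labels encoded as 0/1 and an illustrative function name, would look like:

def performance_measure(y_actual, y_hat):
    # Count element-wise agreements/disagreements between the two label lists
    TP = FP = TN = FN = 0
    for actual, pred in zip(y_actual, y_hat):
        if actual == 1 and pred == 1:
            TP += 1
        elif actual == 0 and pred == 1:
            FP += 1
        elif actual == 0 and pred == 0:
            TN += 1
        else:  # actual == 1 and pred == 0
            FN += 1
    return TP, FP, TN, FN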
For the multi-class case, everything you need can be found in the confusion matrix. For example, if your confusion matrix looks like this:
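For illustration, suppose a hypothetical 3-class problem (cat, dog, rabbit) gives the following matrix, with rows as the true classes and columns as the predicted ones:

             pred cat  pred dog  pred rabbit
true cat           10         2            3
true dog            1        15            2
true rabbit         4         1           20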
Then what you're looking for, per class, can be found like this:
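Writing C[i, j] for the number of samples of true class i predicted as class j (the same convention scikit-learn uses):
- TP of class i is the diagonal entry C[i, i]
- FP of class i is the sum of column i minus C[i, i]
- FN of class i is the sum of row i minus C[i, i]
- TN of class i is the grand total of the matrix minus TP, FP and FN of class i
For the cat class above, for instance: TP = 10, FN = 2 + 3 = 5, FP = 1 + 4 = 5, and TN = 58 - 10 - 5 - 5 = 38.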
Using pandas/numpy, you can do this for all classes at once like so:
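A sketch with NumPy, assuming cm is the confusion matrix as an array (for instance the one returned by sklearn.metrics.confusion_matrix); each result is an array with one entry per class:

import numpy as np

TP = np.diag(cm)                # correct predictions per class
FP = cm.sum(axis=0) - TP        # column totals minus the diagonal
FN = cm.sum(axis=1) - TP        # row totals minus the diagonal
TN = cm.sum() - (TP + FP + FN)  # everything else, per class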
If you have two lists that contain the predicted and actual values, as it appears you do, you can pass them to a function that will calculate TP, FP, TN and FN with something like this:
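The answer's original function is not reproduced here; a vectorized sketch of the same idea, assuming binary labels encoded as 0/1, could be:

import numpy as np

def perf_measure(y_actual, y_hat):
    y_actual = np.asarray(y_actual)
    y_hat = np.asarray(y_hat)
    TP = int(np.sum((y_actual == 1) & (y_hat == 1)))
    FP = int(np.sum((y_actual == 0) & (y_hat == 1)))
    TN = int(np.sum((y_actual == 0) & (y_hat == 0)))
    FN = int(np.sum((y_actual == 1) & (y_hat == 0)))
    return TP, FP, TN, FN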
From here I think you will be able to calculate the rates of interest to you, and other performance measures like specificity and sensitivity.
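For example, continuing with the sketch above and the y_test / predicted lists from your loop:

TP, FP, TN, FN = perf_measure(y_test, predicted)
sensitivity = TP / (TP + FN)  # true positive rate, a.k.a. recall
specificity = TN / (TN + FP)  # true negative rate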
According to the scikit-learn documentation,
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html#sklearn.metrics.confusion_matrix
By definition a confusion matrix C is such that C[i, j] is equal to the number of observations known to be in group i but predicted to be in group j.
Thus in binary classification, the count of true negatives is C[0,0], false negatives is C[1,0], true positives is C[1,1] and false positives is C[0,1].
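For the binary case this means the four counts can be unpacked directly from the flattened matrix, for example:

from sklearn.metrics import confusion_matrix

y_true = [0, 1, 0, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
# ravel() flattens the 2x2 matrix in the order tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 2 1 1 2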