I am using LIBSVM, a library for Support Vector Machines with both Python and Matlab interfaces, to perform classification in a digit recognition algorithm and in a face recognition algorithm.
I am facing a very strange problem with SVM classification: both the training and the testing accuracy vary drastically when I run the program on different computers, even though the code base, the interpreter (Python in my case) and the training and testing data are all identical.
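One check I can run on each machine to confirm that the data really is identical is to hash the loaded arrays (a minimal sketch, assuming load_data returns NumPy arrays, as in my code below):

import hashlib
import numpy as np

def digest(arr):
    # Hash the raw bytes of the array so two machines can compare output.
    a = np.ascontiguousarray(np.asarray(arr, dtype=np.float64))
    return hashlib.sha256(a.tobytes()).hexdigest()

train_features, train_labels = load_data('ocr_data/training/')
print(digest(train_features))
print(digest(train_labels))

If both machines print the same digests, the loaded data is byte-identical.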
Here is just the classification part of the digit recognition algorithm (number of training images = 1409, number of test images = 997):
from svmutil import *

# Data for training and testing (load_data returns NumPy arrays)
train_features, train_labels = load_data('ocr_data/training/')
test_features, test_labels = load_data('ocr_data/testing/')

# LIBSVM expects plain Python lists
train_features = train_features.tolist()
test_features = test_features.tolist()

# Train a linear-kernel SVM ('-t 0') and save the model
problem = svm_problem(train_labels, train_features)
parameter = svm_parameter('-t 0')
model = svm_train(problem, parameter)
svm_save_model('ocr.model', model)

# svm_predict prints the accuracy and returns
# (predicted labels, (accuracy, MSE, SCC), decision values)
train_result = svm_predict(train_labels, train_features, model)
test_result = svm_predict(test_labels, test_features, model)
The accuracy output after training and testing on a machine with 4 GB RAM running 64-bit Windows 8:
Accuracy = 97.0901% (1368/1409) (classification)
Accuracy = 35.6068% (355/997) (classification)
The output for the same code and data on a laptop with 2 GB RAM running 32-bit Windows 7:
Accuracy = 100.00% (1409/1409) (classification)
Accuracy = 99.29% (990/997) (classification)
Other computers give yet other accuracy figures, which I am not including here. I have also faced the same problem with a face recognition algorithm programmed in Matlab.
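To narrow down whether training or prediction is where the runs diverge, I can copy the 'ocr.model' file saved above from one computer to the other and evaluate that same model there (a sketch; test_features and test_labels are loaded as in the code above):

from svmutil import svm_load_model, svm_predict

# Evaluate the model trained on the other machine against the same test set.
model = svm_load_model('ocr.model')
p_labels, p_acc, p_vals = svm_predict(test_labels, test_features, model)
print('accuracy, MSE, squared correlation:', p_acc)

If the copied model gives matching predictions on both machines, the difference must come from training; if not, prediction itself behaves differently.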
Is this a general problem with SVMs, or am I doing something wrong? If this is a known issue with SVM classification, please let me know the actual reason for it. Any solution is appreciated. Thanks.