Pickling a trained classifier yields different results

Posted 2019-06-06 01:47

Question:

I'm trying to pickle a trained SVM classifier from the scikit-learn library so that I don't have to train it over and over again. But when I pass the test data to the classifier loaded from the pickle, I get unusually high values for accuracy, F-measure, etc. If the test data is passed directly to the classifier that was never pickled, it gives much lower values. I don't understand why pickling and unpickling the classifier object changes the way it behaves. Can someone please help me out with this?

I'm doing something like this:

from sklearn.externals import joblib  # in scikit-learn >= 0.23, use `import joblib` instead
joblib.dump(grid, 'grid_trained.pkl')

Here, grid is the trained classifier object. When I unpickle it, it behaves very differently from when it is used directly.
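For reference, pickling a fitted estimator and loading it back should reproduce its fitted state and predictions exactly. Here's a minimal round-trip sketch using only the standard library; `MajorityClassifier` is a hypothetical stand-in for any fitted estimator such as `grid`:

```python
import pickle

class MajorityClassifier:
    """Toy stand-in for a fitted estimator (hypothetical, for illustration only)."""
    def fit(self, X, y):
        # Remember the most frequent label seen during training.
        self.majority_ = max(set(y), key=y.count)
        return self

    def predict(self, X):
        return [self.majority_ for _ in X]

clf = MajorityClassifier().fit([[0], [1], [2]], ['a', 'a', 'b'])

# Dump and reload, the same round trip joblib.dump/joblib.load performs.
blob = pickle.dumps(clf)
restored = pickle.loads(blob)

# The restored object carries the same fitted state...
assert restored.majority_ == clf.majority_
# ...and therefore produces identical predictions.
assert restored.predict([[9], [10]]) == clf.predict([[9], [10]])
```

If the loaded model's metrics differ, the fitted state of something in the evaluation path (often a vectorizer or scaler) changed between the two runs, not the pickled classifier itself.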

Answer 1:

There should not be any difference, as @AndreasMueller stated. Here's a modified example from http://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html#loading-the-20-newgroups-dataset using pickle:

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB

# Set labels and data
categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)

# Vectorize data
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)

# TF-IDF transformation
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)  # unused below; kept from the original tutorial
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)

# Train classifier
clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)

# Tag new data
docs_new = ['God is love', 'OpenGL on the GPU is fast']
X_new_counts = count_vect.transform(docs_new)
X_new_tfidf = tfidf_transformer.transform(X_new_counts)
predicted = clf.predict(X_new_tfidf)

answers = [(doc, twenty_train.target_names[category]) for doc, category in zip(docs_new, predicted)]


# Pickle the classifier
import pickle
with open('clf.pk', 'wb') as fout:
    pickle.dump(clf, fout)

# Let's clear the classifier
clf = None

with open('clf.pk', 'rb') as fin:
    clf = pickle.load(fin)

# Retag new data
docs_new = ['God is love', 'OpenGL on the GPU is fast']
X_new_counts = count_vect.transform(docs_new)
X_new_tfidf = tfidf_transformer.transform(X_new_counts)
predicted = clf.predict(X_new_tfidf)

answers_from_loaded_clf = [(doc, twenty_train.target_names[category]) for doc, category in zip(docs_new, predicted)]

assert answers_from_loaded_clf == answers
print("Answers from freshly trained classifier and loaded pre-trained classifier are the same!")

The result is the same when using sklearn.externals.joblib (deprecated in scikit-learn >= 0.21; use `import joblib` directly in newer versions):

# Pickle the classifier
from sklearn.externals import joblib
joblib.dump(clf, 'clf.pk')

# Let's clear the classifier
clf = None

# Loads the pretrained classifier
clf = joblib.load('clf.pk')

# Retag new data
docs_new = ['God is love', 'OpenGL on the GPU is fast']
X_new_counts = count_vect.transform(docs_new)
X_new_tfidf = tfidf_transformer.transform(X_new_counts)
predicted = clf.predict(X_new_tfidf)

answers_from_loaded_clf = [(doc, twenty_train.target_names[category]) for doc, category in zip(docs_new, predicted)]

assert answers_from_loaded_clf == answers
print("Answers from freshly trained classifier and loaded pre-trained classifier are the same!")
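A common cause of the symptom described in the question (treat this as a hypothetical illustration, since the question doesn't show the evaluation code): only the classifier is pickled, and the vectorizer is re-fit on the test corpus at load time. A re-fit vectorizer learns a different vocabulary, so the feature columns no longer line up with what the classifier was trained on. A minimal sketch with a toy vectorizer:

```python
import pickle

class ToyVectorizer:
    """Toy bag-of-words vectorizer (hypothetical, for illustration only)."""
    def fit(self, docs):
        # The vocabulary is learned from whatever corpus fit() sees.
        vocab = sorted({w for d in docs for w in d.split()})
        self.vocab_ = {w: i for i, w in enumerate(vocab)}
        return self

    def transform(self, docs):
        return [[d.split().count(w) for w in self.vocab_] for d in docs]

train = ['spam spam ham', 'ham eggs']
test = ['spam eggs eggs']

vec = ToyVectorizer().fit(train)

# Correct: persist the fitted vectorizer and reuse it at evaluation time.
vec_loaded = pickle.loads(pickle.dumps(vec))
same = vec_loaded.transform(test)

# Wrong: re-fitting on the test corpus learns a different vocabulary,
# so the features no longer match the classifier's training-time columns.
refit = ToyVectorizer().fit(test).transform(test)

assert same == vec.transform(test)   # round trip is exact
assert same != refit                 # different feature space -> different metrics
```

This is why the examples above keep reusing the already-fitted `count_vect` and `tfidf_transformer`: pickle every fitted object in the pipeline, not just the classifier.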