Addressing synonyms in Supervised Learning for Text Classification

Posted 2019-07-20 04:33

Question:

I am using a scikit-learn supervised learning method for text classification. I have a training dataset with input text fields and the categories they belong to. I use a tf-idf + SVM classifier pipeline to create the model. The solution works well for normal test cases, but if a new text contains a word that is only a synonym of something in the training set, it fails to classify correctly. For example, the word 'run' might be in the training data, but if I test with the word 'sprint', the classification fails.

What is the best approach here? Adding all synonyms for every word in the training dataset does not look like a scalable approach to me.
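
For reference, a minimal sketch of the kind of tf-idf + SVM pipeline described above; the texts, labels, and category names are invented for illustration:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy training data standing in for the real text fields and categories
train_texts = ["I had a good run this morning", "The quarterly budget report is due"]
train_labels = ["exercise", "work"]

model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svm", LinearSVC()),
])
model.fit(train_texts, train_labels)

# A test sentence that uses 'sprint' instead of 'run' -- tf-idf has never seen 'sprint'
print(model.predict(["I had a great sprint"]))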

Answer 1:

You should look into word vectors and dense document embeddings. Right now you are passing scikit-learn a matrix X, where each row is a numerical representation of a document in your dataset. You are getting this representation with tf-idf, but as you noticed it doesn't capture word similarities, and you also have problems with out-of-vocabulary words.
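
To see the limitation concretely, here is a small illustration (not from the original answer): under tf-idf, 'run' and 'sprint' share no features, so their vectors are orthogonal, and an unseen word contributes nothing at all.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vec = TfidfVectorizer().fit(["run fast", "sprint fast"])
a = vec.transform(["run"])
b = vec.transform(["sprint"])
print(cosine_similarity(a, b))     # 0.0 -- tf-idf sees no relation between the two words
print(vec.transform(["jog"]).nnz)  # 0 -- an out-of-vocabulary word maps to an empty vector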

A possible improvement is to represent each word with a dense vector of, let's say, dimension 300, such that words with similar meanings are close together in this 300-dimensional space. Fortunately you don't need to build these vectors from scratch (look up gensim word2vec and spaCy). Another good thing is that by using word embeddings pre-trained on a very large corpus such as Wikipedia, you incorporate a lot of linguistic knowledge about the world that you could not infer from your own corpus (such as the fact that 'sprint' and 'run' are synonyms).
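
As a sketch, pre-trained vectors can be loaded through gensim's downloader; the model name below is just one of the available options, and the download is on the order of a hundred megabytes:

import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")   # pre-trained GloVe word vectors
print(wv.similarity("run", "sprint"))      # noticeably higher than for unrelated word pairs
print(wv.most_similar("sprint", topn=3))   # nearest neighbours in the embedding space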

Once you have good semantic numeric representations for words, you need a vector representation for each document. The simplest way is to average the word vectors of all the words in the document.
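
A minimal sketch of that averaging step, assuming `wv` is a word-vector lookup such as the gensim model above (`doc_vector` is a hypothetical helper, not part of any library):

import numpy as np

def doc_vector(text, wv):
    # Keep only the words the embedding model knows about
    words = [w for w in text.lower().split() if w in wv]
    if not words:
        return np.zeros(wv.vector_size)
    # The document representation is the mean of its word vectors
    return np.mean([wv[w] for w in words], axis=0)

spaCy's doc.vector, used below, does essentially this averaging for you.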

An example to get you started (use a spaCy model that ships with word vectors, such as en_core_web_md):

>>> import spacy

>>> nlp = spacy.load('en_core_web_md')
>>> doc1 = nlp('I had a good run')
>>> doc1.vector
array([  6.17495403e-02,   2.07064897e-02,  -1.56451517e-03,
         1.02607915e-02,  -1.30429687e-02,   1.60102192e-02, ...

Now let's try a different document:

>>> doc2 = nlp('I had a great sprint')
>>> doc2.vector
array([ 0.02453461, -0.00261007,  0.01455955, -0.01595449, -0.01795897,
   -0.02184369, -0.01654281,  0.01735667,  0.00054854, ...

>>> doc2.similarity(doc1)
0.8820845113100807

Note how the vectors are similar (in the sense of cosine similarity) even when the words are different. Because the vectors are similar, a scikit-learn classifier will learn to assign them to the same category. With a tf-idf representation this would not be the case.
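
For the record, doc2.similarity(doc1) is essentially the cosine of the angle between the two document vectors, which you could also compute by hand:

import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(doc1.vector, doc2.vector))   # matches the similarity value above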

This is how you can use these vectors in scikit-learn:

# Dense document vectors as the feature matrix, one row per text
X = [nlp(text).vector for text in corpus]
# Any scikit-learn classifier (e.g. an SVM) can now be trained on them
clf.fit(X, y)
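
Putting the pieces together, a hedged end-to-end sketch might look like this; the texts, labels, and the expected prediction are invented for illustration:

import spacy
from sklearn.svm import LinearSVC

nlp = spacy.load("en_core_web_md")   # any spaCy model that ships with word vectors

corpus = ["I had a good run this morning", "The quarterly budget report is due"]
y = ["exercise", "work"]

clf = LinearSVC()
clf.fit([nlp(text).vector for text in corpus], y)

# 'sprint' never appears in the training texts, but its vector is close to 'run',
# so the classifier should assign the document to the same category
print(clf.predict([nlp("I had a great sprint").vector]))   # expected: ['exercise']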