How can I deal with sparse features with very high dimensionality?

Published 2019-08-05 00:00

Question:

I have a Twitter-like (i.e., another microblog) data set with 1.6 million data points, and I am trying to predict each post's retweet count from its content. I extracted keywords from each post and used them as bag-of-words features, which gives me a feature space of 1.2 million dimensions. The feature vectors are very sparse: usually only about ten dimensions are non-zero in a given data point. I use SVR to do the regression, and training has now been running for 2 days. I suspect the training time will be very long. Is it normal to approach the task this way, and is there any way (or any need) to optimize it?
BTW, if in this case I don't use any kernel, and the machine has 32 GB of RAM and a 16-core i7, roughly how long should I expect training to take? I used the PyML library.
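For reference, a minimal sketch of the pipeline described above, assuming scikit-learn rather than PyML (so the names and API here are an illustration, not the original code); with a linear model there is no kernel matrix to build, which matters at 1.6 million points:

    # Hypothetical sketch: sparse bag-of-words features fed to a linear SVR.
    # scikit-learn is an assumption (the question uses PyML); `posts` and
    # `retweet_counts` are placeholder names for the 1.6M texts and their targets.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVR

    posts = ["some microblog text about cats", "another post about dogs"]
    retweet_counts = [3, 0]

    vectorizer = CountVectorizer()                 # returns a scipy.sparse matrix
    X = vectorizer.fit_transform(posts)            # ~10 non-zeros per row on real data

    model = LinearSVR(C=1.0, epsilon=0.1, max_iter=2000)   # linear model: no kernel matrix
    model.fit(X, retweet_counts)
    predictions = model.predict(X)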

Answer 1:

You need to find a dimensionality reduction approach that works for your problem.

I've worked on a similar problem to yours and I found that Information Gain worked well, but there are others.

I found this paper (Fabrizio Sebastiani, Machine Learning in Automated Text Categorization, ACM Computing Surveys, Vol. 34, No. 1, pp. 1-47, 2002) to be a good theoretical treatment of text classification, including feature reduction by a variety of methods from the simple (Term Frequency) to the complex (Information-Theoretic).

These functions try to capture the intuition that the best terms for ci are the ones distributed most differently in the sets of positive and negative examples of ci. However, interpretations of this principle vary across different functions. For instance, in the experimental sciences χ2 is used to measure how the results of an observation differ (i.e., are independent) from the results expected according to an initial hypothesis (lower values indicate lower dependence). In DR we measure how independent tk and ci are. The terms tk with the lowest value for χ2(tk, ci) are thus the most independent from ci; since we are interested in the terms which are not, we select the terms for which χ2(tk, ci) is highest.

These techniques help you choose terms that are most useful in separating the training documents into the given classes; the terms with the highest predictive value for your problem.
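As a concrete illustration of the χ² criterion described in the quote, here is a hedged sketch assuming scikit-learn (the answer names no library); χ² needs discrete classes, so a regression target such as a retweet count would first have to be binned:

    # Hypothetical sketch: keep the K terms whose distribution depends most on the class.
    import numpy as np
    from scipy.sparse import csr_matrix
    from sklearn.feature_selection import SelectKBest, chi2

    # toy sparse term-document matrix: rows = documents, columns = terms
    X = csr_matrix(np.random.poisson(0.2, size=(100, 300)))
    retweets = np.random.poisson(3, size=100)
    y_class = np.digitize(retweets, bins=[1, 10, 100])   # bucket the counts into classes

    selector = SelectKBest(chi2, k=50)                   # keep the 50 most class-dependent terms
    X_reduced = selector.fit_transform(X, y_class)
    print(X_reduced.shape)                               # (100, 50)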

I've been successful using Information Gain for feature reduction and found this paper (Christine Largeron, Christophe Moulin, and Mathias Géry, Entropy Based Feature Selection for Text Categorization, SAC 2011, pp. 924-928) to be a very good practical guide.

Here the authors present a simple formulation of entropy-based feature selection that's useful for implementation in code:

Given a term tj and a category ck, ECCD(tj, ck) can be computed from a contingency table. Let A be the number of documents in the category containing tj; B, the number of documents in the other categories containing tj; C, the number of documents of ck which do not contain tj; and D, the number of documents in the other categories which do not contain tj (with N = A + B + C + D):

                          ck      not ck
    tj present             A         B
    tj absent              C         D

Using this contingency table, Information Gain can be estimated as the entropy of the category minus the entropy of the category conditioned on the presence or absence of the term:

    IG(tj, ck) = - (A+C)/N * log((A+C)/N) - (B+D)/N * log((B+D)/N)
                 + A/N * log(A/(A+B)) + B/N * log(B/(A+B))
                 + C/N * log(C/(C+D)) + D/N * log(D/(C+D))

This approach is easy to implement and provides very good Information-Theoretic feature reduction.
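A minimal sketch of that computation (my own illustration, not code from the paper): build the A/B/C/D counts for every term of a sparse term-document matrix and evaluate the formula above.

    # Hypothetical sketch: per-term Information Gain from the A/B/C/D contingency counts.
    import numpy as np
    from scipy.sparse import csr_matrix

    def _safe_div(num, den):
        num = np.asarray(num, dtype=float)
        den = np.asarray(den, dtype=float)
        return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

    def _xlogy(x, y):
        # x * log2(y), with the convention 0 * log(0) = 0
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        out = np.zeros_like(x)
        mask = (x > 0) & (y > 0)
        out[mask] = x[mask] * np.log2(y[mask])
        return out

    def information_gain(X, y, category):
        """X: sparse term-document matrix, y: class labels, category: the class ck."""
        present = (X > 0).astype(float)                       # 1 if the document contains the term
        in_cat = (y == category).astype(float)                # 1 if the document belongs to ck
        N = float(X.shape[0])

        A = np.asarray(present.T.dot(in_cat)).ravel()         # ck documents containing the term
        B = np.asarray(present.T.dot(1.0 - in_cat)).ravel()   # other documents containing the term
        C = in_cat.sum() - A                                  # ck documents without the term
        D = (N - in_cat.sum()) - B                            # other documents without the term

        return (-_xlogy((A + C) / N, (A + C) / N)
                - _xlogy((B + D) / N, (B + D) / N)
                + _xlogy(A / N, _safe_div(A, A + B))
                + _xlogy(B / N, _safe_div(B, A + B))
                + _xlogy(C / N, _safe_div(C, C + D))
                + _xlogy(D / N, _safe_div(D, C + D)))         # one score per term

    # toy usage
    X = csr_matrix(np.random.poisson(0.2, size=(200, 50)))
    y = np.random.randint(0, 2, size=200)
    scores = information_gain(X, y, category=1)
    top_terms = np.argsort(scores)[::-1][:10]                 # indices of the 10 highest-IG terms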

You needn't use a single technique either; you can combine them. Term Frequency is simple, but it can also be effective. I've successfully combined the Information Gain approach with Term Frequency for feature selection, as in the sketch below. You should experiment with your data to see which technique or techniques work most effectively.
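One way such a combination could look, reusing the information_gain sketch above (again my own illustration, not the authors' recipe): drop rare terms by document frequency first, then rank the survivors by Information Gain.

    # Hypothetical sketch: a document-frequency pre-filter followed by Information Gain ranking.
    doc_freq = np.asarray((X > 0).sum(axis=0)).ravel()     # number of documents containing each term
    frequent = np.where(doc_freq >= 5)[0]                  # keep terms seen in at least 5 documents

    scores = information_gain(X[:, frequent], y, category=1)
    keep = frequent[np.argsort(scores)[::-1][:10]]         # the 10 best of the surviving terms
    X_selected = X[:, keep]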



Answer 2:

First, you can simply remove all words with very high frequency and all words with very low frequency, because neither kind tells you much about the content of a text; after that, apply word stemming.
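A hedged sketch of that preprocessing (my own illustration; NLTK's Porter stemmer is an assumption and only makes sense for English tokens, and the cut-offs are toy values to tune on real data):

    # Hypothetical sketch: drop very frequent and very rare words, then stem what remains.
    from collections import Counter
    from nltk.stem import PorterStemmer

    documents = [["tiny", "example", "post", "about", "cats"],
                 ["another", "tiny", "post", "about", "dogs"],
                 ["cats", "and", "dogs", "in", "one", "post"]]

    doc_freq = Counter(word for doc in documents for word in set(doc))
    n_docs = len(documents)
    min_df, max_df = 2, 0.9                  # toy thresholds: at least 2 docs, at most 90% of docs
    vocab = {w for w, df in doc_freq.items() if df >= min_df and df / n_docs <= max_df}

    stemmer = PorterStemmer()
    processed = [[stemmer.stem(w) for w in doc if w in vocab] for doc in documents]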

After that you can try to reduce the dimensionality of your space with feature hashing, or with a more advanced dimensionality-reduction technique (PCA, ICA), or even both.
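A hedged sketch of the hashing route, assuming scikit-learn (the answer names no library), with TruncatedSVD standing in for PCA because it accepts sparse input directly:

    # Hypothetical sketch: feature hashing into a fixed-size space, then a truncated SVD.
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.decomposition import TruncatedSVD

    posts = ["a tiny example post about cats",
             "another tiny post about dogs",
             "dogs and cats in one post",
             "a completely unrelated post"]

    hasher = HashingVectorizer(n_features=2**18)   # every token is hashed into one of 262,144 columns
    X_hashed = hasher.fit_transform(posts)         # stays sparse

    svd = TruncatedSVD(n_components=2)             # PCA-like reduction that works on sparse input
    X_dense = svd.fit_transform(X_hashed)          # shape: (4, 2)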