I'm learning classification and have read about representing text as vectors, but I can't find an algorithm that translates a text made of words into a vector. Is it about generating a hash of each word and adding a 1 at that hash's location in the vector?
When most people talk about turning text into a feature vector, all they mean is recording the presence (and count) of each word (token).
There are two main ways to encode such a vector. One is explicit, where you store a 0 for every vocabulary word that is not present in the text. The other is implicit, like a sparse matrix (but just a single vector), where you only encode terms with a frequency >= 1.
Bag of words model
The main article that explains this the best is most likely the bag of words model, which is used extensively for natural language processing applications.
Explicit BoW vector example:
Suppose you have the vocabulary:
{brown, dog, fox, jumped, lazy, over, quick, the, zebra}
The sentence
"the quick brown fox jumped over the lazy dog"
could be encoded as: <1, 1, 1, 1, 1, 1, 1, 2, 0>
Remember, position is important.
The sentence
"the zebra jumped"
even though it is shorter in length, would then be encoded as: <0, 0, 0, 1, 0, 0, 0, 1, 1>
The problem with the explicit approach is that if you have hundreds of thousands of vocabulary terms, each document will also have hundreds of thousands of terms (with mostly zero values).
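The explicit encoding above can be sketched in a few lines of Python. This is just an illustration (the function name `bow_vector` is mine, not a standard API); in practice you would use something like scikit-learn's `CountVectorizer`:

```python
from collections import Counter

def bow_vector(text, vocabulary):
    """Encode a text as an explicit bag-of-words count vector.

    The vector has one position per vocabulary term, in vocabulary
    order; absent terms get a 0, present terms get their count.
    """
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

vocabulary = ["brown", "dog", "fox", "jumped", "lazy",
              "over", "quick", "the", "zebra"]

print(bow_vector("the quick brown fox jumped over the lazy dog", vocabulary))
# [1, 1, 1, 1, 1, 1, 1, 2, 0]
print(bow_vector("the zebra jumped", vocabulary))
# [0, 0, 0, 1, 0, 0, 0, 1, 1]
```

Note that both sentences map to vectors of the same length (the vocabulary size), which is exactly what a classifier needs.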
Implicit BoW vector example:
In this case, the sentence
"the zebra jumped"
could be encoded as: <'jumped': 1, 'the': 1, 'zebra': 1>
where the order is arbitrary.
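The implicit (sparse) form is even simpler to sketch: just count the tokens that actually occur and skip the zeros. Again, `sparse_bow` is an illustrative name of my own, not a library function:

```python
from collections import Counter

def sparse_bow(text):
    """Encode only the terms that occur, as a term -> count mapping."""
    return dict(Counter(text.lower().split()))

print(sparse_bow("the zebra jumped"))
# {'the': 1, 'zebra': 1, 'jumped': 1}
```

Because the representation is a mapping rather than a positional vector, the order of the entries does not matter.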
If you are learning classification, I would start with the easier and more intuitive bag-of-words representation of your text.
If, however, you are interested in a feature-hashing method, particularly if you have a large data set, I would suggest this article, which describes the use of hashing in text representation and classification.
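To answer the hashing part of your question directly: yes, the "hashing trick" works roughly as you describe. Each token is hashed, and the count at the hashed index is incremented, so the vector has a fixed size regardless of vocabulary. A minimal sketch (I use `md5` here only because it is stable across Python runs; the bucket count of 16 is arbitrary, and real implementations like scikit-learn's `HashingVectorizer` use far more buckets and a faster hash):

```python
import hashlib

def hashed_vector(text, n_buckets=16):
    """Feature hashing: map each token to a bucket in a fixed-size vector."""
    vec = [0] * n_buckets
    for token in text.lower().split():
        # Hash the token to an integer, then take it modulo the vector size.
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
        vec[h % n_buckets] += 1
    return vec

print(hashed_vector("the zebra jumped"))
```

The trade-off is that different words can collide in the same bucket, which is usually acceptable when the number of buckets is large.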