converting a text corpus to a text document with vocabulary ids and tf-idf scores

Posted 2019-09-07 12:15

Question:

I have a text corpus with, say, 5 documents; each document is separated from the next by \n. I want to assign an id to every word in the document and compute its tf-idf score. For example, suppose we have a text corpus named "corpus.txt" containing:

"Stack over flow text vectorization scikit python scipy sparse csr"

I calculate the tf-idf using

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

with open("corpus.txt") as f:
    mylist = f.read().split("\n")  # one document per line

vectorizer = CountVectorizer()
x_counts = vectorizer.fit_transform(mylist)
tfidf_transformer = TfidfTransformer()
x_tfidf = tfidf_transformer.fit_transform(x_counts)

the output is

(0,12) 0.1234 #for 1st document
(1,8) 0.3456  #for 2nd  document
(1,4) 0.8976
(2,15) 0.6754 #for third document
(2,14) 0.2389
(2,3) 0.7823
(3,11) 0.9897 #for fourth document
(3,13) 0.8213
(3,5) 0.7722
(3,6) 0.2211
(4,7) 0.1100 # for fifth document
(4,10) 0.6690
(4,2) 0.0912
(4,9) 0.2345
(4,1) 0.1234

I converted this scipy.sparse.csr_matrix to COO format to strip the document id, keeping only the vocabulary id and its tf-idf score:

m = x_tfidf.tocoo()
mydata = {k: v for k, v in zip(m.col, m.data)}  # m.row (the document id) is discarded here
key_val_pairs = [str(k) + ":" + str(v) for k, v in mydata.items()]

but the problem is that the vocabulary ids and their tf-idf scores come out sorted in ascending order, with no reference to the document they belong to.
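The loss of document information happens one step earlier than the sorting: a dict keyed only by column id merges entries from different documents, because the same vocabulary id can occur in several rows. A minimal sketch with a toy matrix (not the corpus above) that shows the collision:

```python
from scipy.sparse import csr_matrix

# toy tf-idf matrix: 2 documents (rows), 2 vocabulary entries (columns)
x = csr_matrix([[0.5, 0.2],
                [0.9, 0.0]])
m = x.tocoo()

# keying only by column id merges entries from different documents:
# column 0 occurs in both rows, so 0.9 silently overwrites 0.5
mydata = {k: v for k, v in zip(m.col, m.data)}
print(len(mydata))  # 2 entries, although the matrix holds 3 nonzeros
print(m.row)        # the document ids that were thrown away
```

To keep the per-document grouping, `m.row` has to be consulted while iterating, as the answer below does.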

For example, for the corpus given above, my current output (dumped to a text file using json) looks like:

1:0.1234
2:0.0912
3:0.7823
4:0.8976
5:0.7722
6:0.2211
7:0.1100
8:0.3456
9:0.2345
10:0.6690
11:0.9897
12:0.1234
13:0.8213
14:0.2389
15:0.6754

whereas I want my text file to look like this:

12:0.1234
8:0.3456 4:0.8976
15:0.1234 14:0.2389 3:0.7823
11:0.9897 13:0.8213 5:0.7722 6:0.2211
7:0.1100 10:0.6690 2:0.0912 9:0.2345 1:0.1234

Any idea how to get this done?

Answer 1:

I guess this is what you need. Here corpus is a collection of documents.

from sklearn.feature_extraction.text import TfidfVectorizer
corpus = ["stack over flow stack over flow text vectorization scikit", "stack over flow"]

vectorizer = TfidfVectorizer()
x = vectorizer.fit_transform(corpus) # corpus is a collection of documents

print(vectorizer.vocabulary_) # vocabulary terms and their index
print(x) # tf-idf weights for each terms belong to a particular document

This prints:

{'vectorization': 5, 'text': 4, 'over': 1, 'flow': 0, 'stack': 3, 'scikit': 2}
  (0, 2)    0.33195438857 # first document, word = scikit
  (0, 5)    0.33195438857 # word = vectorization
  (0, 4)    0.33195438857 # word = text
  (0, 0)    0.472376562969 # word = flow
  (0, 1)    0.472376562969 # word = over
  (0, 3)    0.472376562969 # word = stack
  (1, 0)    0.57735026919 # second document
  (1, 1)    0.57735026919
  (1, 3)    0.57735026919
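The word annotations in the comments above can also be produced programmatically by inverting vectorizer.vocabulary_; a small sketch reusing the same corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["stack over flow stack over flow text vectorization scikit",
          "stack over flow"]
vectorizer = TfidfVectorizer()
vectorizer.fit(corpus)

# vocabulary_ maps term -> column index; invert it for index -> term
inv_vocab = {idx: term for term, idx in vectorizer.vocabulary_.items()}
print(inv_vocab[2])  # scikit
```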

From this information, you can represent the documents in your desired way as follows:

cx = x.tocoo()
doc_id = -1
for i, j, v in zip(cx.row, cx.col, cx.data):
    if doc_id not in (-1, i):
        print()  # start a new line for each new document
    print("{}:{:.4f}".format(j, v), end=' ')
    doc_id = i

This prints:

2:0.3320 5:0.3320 4:0.3320 0:0.4724 1:0.4724 3:0.4724 
0:0.5774 1:0.5774 3:0.5774
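Since the question asks for a text file rather than console output, the same grouping can also be read straight off the csr matrix's indptr/indices/data arrays, one row at a time. A sketch reusing the answer's corpus and writing to a hypothetical file name tfidf.txt:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["stack over flow stack over flow text vectorization scikit",
          "stack over flow"]
x = TfidfVectorizer().fit_transform(corpus)  # csr matrix

# in csr format, row r's nonzeros sit in x.indices/x.data
# between positions x.indptr[r] and x.indptr[r + 1]
with open("tfidf.txt", "w") as f:
    for row in range(x.shape[0]):
        start, end = x.indptr[row], x.indptr[row + 1]
        pairs = ("{}:{:.4f}".format(j, v)
                 for j, v in zip(x.indices[start:end], x.data[start:end]))
        f.write(" ".join(pairs) + "\n")
```

This writes one line per document, each line holding that document's vocabulary_id:tfidf pairs, which matches the layout the question asks for.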