Is there a library that will take a list of documents and compute the n×n matrix of pairwise distances en masse, given a supplied word2vec model? I can see that gensim lets you do this between two documents, but I need a fast comparison across all docs, like sklearn's cosine_similarity.
Answer 1:
The "Word Mover's Distance" (earth-mover's distance applied to groups of word-vectors) is a fairly involved optimization calculation dependent on every word in each document.
I'm not aware of any tricks that would help it go faster when calculating many at once – even many distances to the same document.
So the only way to calculate all pairwise distances is with nested loops that consider each (order-insensitive, unique) pairing.
For example, assuming your list of documents (each a list-of-words) is docs, a gensim word-vector model is in model, and numpy is imported as np, you could calculate the array of pairwise distances D with:
D = np.zeros((len(docs), len(docs)))
for i in range(len(docs)):
    for j in range(len(docs)):
        if i == j:
            continue  # self-distance is 0.0
        if i > j:
            D[i, j] = D[j, i]  # WMD is symmetric: re-use the earlier calculation
            continue           # (without this, the pair would be recomputed)
        D[i, j] = model.wmdistance(docs[i], docs[j])
It may take a while, but you'll then have all pairwise distances in array D.
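If you would rather compute each unordered pair only once, here is a minimal equivalent sketch (assuming the same docs and model as above) that walks just the upper triangle with itertools.combinations and mirrors each result:

import itertools
import numpy as np

# Assumes docs (list of token lists) and model (a gensim word-vector
# model exposing .wmdistance) are defined as above.
D = np.zeros((len(docs), len(docs)))
for i, j in itertools.combinations(range(len(docs)), 2):
    d = model.wmdistance(docs[i], docs[j])  # one WMD call per unordered pair
    D[i, j] = d
    D[j, i] = d  # mirror the value: WMD is symmetric
# the diagonal stays 0.0 (self-distance)

This performs the same n*(n-1)/2 distance calculations as the nested loops above; it just avoids the index bookkeeping.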