Question:
I need to calculate the cosine similarity between two lists, say list 1, which is dataSetI, and list 2, which is dataSetII. I cannot use anything such as numpy or a statistics module. I must use common modules (math, etc.), and as few modules as possible, at that, to reduce the time spent.
Let's say dataSetI is [3, 45, 7, 2] and dataSetII is [2, 54, 13, 15]. The lengths of the lists are always equal. Of course, the cosine similarity is between 0 and 1, and for the sake of it, it will be rounded to the third or fourth decimal with format(round(cosine, 3)).
Thank you very much in advance for helping.
Answer 1:
You should try SciPy. It has a bunch of useful scientific routines, for example "routines for computing integrals numerically, solving differential equations, optimization, and sparse matrices." It uses the superfast, optimized NumPy for its number crunching. See here for installation instructions.
Note that spatial.distance.cosine computes the distance, not the similarity. So you must subtract the value from 1 to get the similarity.
from scipy import spatial
dataSetI = [3, 45, 7, 2]
dataSetII = [2, 54, 13, 15]
result = 1 - spatial.distance.cosine(dataSetI, dataSetII)
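If you also want the rounding the question mentions, you could then do something like:
print(format(round(result, 3)))  # prints 0.972 for the vectors above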
Answer 2:
Another version, based on numpy only:
from numpy import dot
from numpy.linalg import norm

a = [3, 45, 7, 2]    # e.g. dataSetI from the question
b = [2, 54, 13, 15]  # e.g. dataSetII from the question
cos_sim = dot(a, b) / (norm(a) * norm(b))
Answer 3:
You can use the cosine_similarity function from sklearn.metrics.pairwise (docs):
In [23]: from sklearn.metrics.pairwise import cosine_similarity
In [24]: cosine_similarity([[1, 0, -1]], [[-1,-1, 0]])
Out[24]: array([[-0.5]])
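Applied to the question's two lists, the same call gives roughly:
In [25]: cosine_similarity([[3, 45, 7, 2]], [[2, 54, 13, 15]])
Out[25]: array([[0.97228425]])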
Answer 4:
I don't suppose performance matters much here, but I can't resist. The zip() function completely recopies both vectors (more of a matrix transpose, actually) just to get the data in "Pythonic" order. It would be interesting to time the nuts-and-bolts implementation:
import math

def cosine_similarity(v1, v2):
    "compute cosine similarity of v1 to v2: (v1 dot v2)/(||v1||*||v2||)"
    sumxx, sumxy, sumyy = 0, 0, 0
    for i in range(len(v1)):
        x = v1[i]; y = v2[i]
        sumxx += x*x
        sumyy += y*y
        sumxy += x*y
    return sumxy/math.sqrt(sumxx*sumyy)
v1,v2 = [3, 45, 7, 2], [2, 54, 13, 15]
print(v1, v2, cosine_similarity(v1,v2))
Output: [3, 45, 7, 2] [2, 54, 13, 15] 0.972284251712
That goes through the C-like noise of extracting elements one at a time, but does no bulk array copying, gets everything important done in a single for loop, and uses a single square root.
ETA: Updated the print call to be a function. (The original was written for Python 2.7, not 3.3; the current version runs under Python 2.7 with a from __future__ import print_function statement.) The output is the same either way.
CPython 2.7.3 on a 3.0 GHz Core 2 Duo:
>>> timeit.timeit("cosine_similarity(v1,v2)",setup="from __main__ import cosine_similarity, v1, v2")
2.4261788514654654
>>> timeit.timeit("cosine_measure(v1,v2)",setup="from __main__ import cosine_measure, v1, v2")
8.794677709375264
So, the unpythonic way is about 3.6 times faster in this case.
Answer 5:
I did a benchmark based on several answers to the question, and the following snippet turned out to be the best choice:
import math
import operator

def dot_product2(v1, v2):
    return sum(map(operator.mul, v1, v2))

def vector_cos5(v1, v2):
    prod = dot_product2(v1, v2)
    len1 = math.sqrt(dot_product2(v1, v1))
    len2 = math.sqrt(dot_product2(v2, v2))
    return prod / (len1 * len2)
The result surprised me: the implementation based on scipy is not the fastest one. Profiling shows that scipy's cosine spends a lot of time casting a vector from a Python list to a numpy array.
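A minimal sketch of how that conversion overhead could be checked (using the question's vectors; the absolute timings will of course vary by machine):
import timeit

setup = """
import numpy as np
from scipy import spatial
v1 = [3, 45, 7, 2]
v2 = [2, 54, 13, 15]
a1, a2 = np.array(v1), np.array(v2)
"""

# cosine distance on plain Python lists (forces a list-to-array conversion on every call)
print(timeit.timeit("spatial.distance.cosine(v1, v2)", setup=setup, number=100000))
# cosine distance on pre-converted numpy arrays
print(timeit.timeit("spatial.distance.cosine(a1, a2)", setup=setup, number=100000))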
Answer 6:
import math
from itertools import izip  # Python 2 only; on Python 3, use the built-in zip instead

def dot_product(v1, v2):
    return sum(map(lambda x: x[0] * x[1], izip(v1, v2)))

def cosine_measure(v1, v2):
    prod = dot_product(v1, v2)
    len1 = math.sqrt(dot_product(v1, v1))
    len2 = math.sqrt(dot_product(v2, v2))
    return prod / (len1 * len2)
You can round it after computing:
cosine = format(round(cosine_measure(v1, v2), 3))
If you want it really short, you can use this one-liner (note that the tuple-unpacking lambda parameter and izip make it Python 2 only):
from math import sqrt
from itertools import izip

def cosine_measure(v1, v2):
    return (lambda (x, y, z): x / sqrt(y * z))(reduce(lambda x, y: (x[0] + y[0] * y[1], x[1] + y[0]**2, x[2] + y[1]**2), izip(v1, v2), (0, 0, 0)))
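A rough Python 3 equivalent of the same idea (a sketch, since lambda tuple parameters and izip were removed in Python 3 and reduce now lives in functools) could look like:
from math import sqrt
from functools import reduce

def cosine_measure(v1, v2):
    # accumulate (dot(v1, v2), dot(v1, v1), dot(v2, v2)) in one pass
    x, y, z = reduce(
        lambda acc, p: (acc[0] + p[0] * p[1], acc[1] + p[0] ** 2, acc[2] + p[1] ** 2),
        zip(v1, v2),
        (0, 0, 0),
    )
    return x / sqrt(y * z)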
Answer 7:
Without using any imports: math.sqrt(x) can be replaced with x ** .5.
Without using numpy.dot(), you have to create your own dot function using a comprehension:
def dot(A, B):
    return sum(a * b for a, b in zip(A, B))
and then it's just a simple matter of applying the cosine similarity formula:
def cosine_similarity(a, b):
    return dot(a, b) / ((dot(a, a) ** .5) * (dot(b, b) ** .5))
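Applied to the question's two lists, and rounded as requested, that would give something like:
dataSetI = [3, 45, 7, 2]
dataSetII = [2, 54, 13, 15]
print(format(round(cosine_similarity(dataSetI, dataSetII), 3)))  # 0.972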
Answer 8:
You can do this in Python using a simple function:
import math

def get_cosine(text1, text2):
    # treat each input list as a sparse {index: value} vector
    vec1 = dict(enumerate(text1))
    vec2 = dict(enumerate(text2))
    intersection = set(vec1.keys()) & set(vec2.keys())
    numerator = sum([vec1[x] * vec2[x] for x in intersection])
    sum1 = sum([vec1[x]**2 for x in vec1.keys()])
    sum2 = sum([vec2[x]**2 for x in vec2.keys()])
    denominator = math.sqrt(sum1) * math.sqrt(sum2)
    if not denominator:
        return 0.0
    else:
        return round(float(numerator) / denominator, 3)

dataSet1 = [3, 45, 7, 2]
dataSet2 = [2, 54, 13, 15]
get_cosine(dataSet1, dataSet2)
Answer 9:
Using numpy, compare one list of numbers to multiple lists (a matrix):
import numpy as np

def cosine_similarity(vector, matrix):
    # note: the trailing [::-1] returns the similarities in reverse row order
    return (np.sum(vector * matrix, axis=1) /
            (np.sqrt(np.sum(matrix**2, axis=1)) * np.sqrt(np.sum(vector**2))))[::-1]
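A quick sketch of how it might be called (the inputs here are illustrative; both vector and matrix need to be numpy arrays for the elementwise operations to work):
import numpy as np

vector = np.array([3, 45, 7, 2])
matrix = np.array([[2, 54, 13, 15],
                   [1, 0, 0, 0]])
print(cosine_similarity(vector, matrix))  # similarities against each row, last row first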
Answer 10:
You can use this simple function to calculate the cosine similarity:
import math

def cosine_similarity(a, b):
    return sum([i * j for i, j in zip(a, b)]) / (
        math.sqrt(sum([i * i for i in a])) * math.sqrt(sum([i * i for i in b])))
Answer 11:
If you happen to be using PyTorch already, you should go with their CosineSimilarity implementation.
Suppose you have two n-dimensional numpy.ndarrays, v1 and v2, i.e. their shapes are both (n,). Here's how you get their cosine similarity:
import torch
import torch.nn as nn
cos = nn.CosineSimilarity()
cos(torch.tensor([v1]), torch.tensor([v2])).item()
Or suppose you have two numpy.ndarrays, w1 and w2, whose shapes are both (m, n). The following gets you a list of cosine similarities, each being the cosine similarity between a row in w1 and the corresponding row in w2:
cos(torch.tensor(w1), torch.tensor(w2)).tolist()
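For the question's two lists specifically, a quick self-contained check (building the tensors directly rather than from numpy arrays) might look like:
import torch
import torch.nn as nn

cos = nn.CosineSimilarity()
a = torch.tensor([[3.0, 45.0, 7.0, 2.0]])    # dataSetI as a 1 x n tensor
b = torch.tensor([[2.0, 54.0, 13.0, 15.0]])  # dataSetII as a 1 x n tensor
print(round(cos(a, b).item(), 3))  # 0.972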