Question:
I have a collection of n-dimensional points and I want to find which 2 are the closest. The best I could come up with for 2 dimensions is:
from numpy import *

myArr = array([[1, 2],
               [3, 4],
               [5, 6],
               [7, 8]])

n = myArr.shape[0]
cross = [[sum((myArr[i] - myArr[j]) ** 2), i, j]
         for i in range(n)
         for j in range(n)
         if i != j]

print(min(cross))
which gives
[8, 0, 1]
But this is too slow for large arrays. What kind of optimisation can I apply to it?
RELATED:
Euclidean distance between points in two different Numpy arrays, not within
Answer 1:
Try scipy.spatial.distance.pdist(myArr). This will give you a condensed distance matrix. You can use argmin on it to find the index of the smallest value, which can then be converted back into the pair of point indices.
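A minimal sketch of that approach (numpy.triu_indices is used here to map the condensed index back to a pair of point indices; it is not part of the original answer):

import numpy as np
from scipy.spatial.distance import pdist

myArr = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])
d = pdist(myArr)                         # condensed distances, length n*(n-1)/2
k = np.argmin(d)                         # position of the smallest distance
i, j = np.triu_indices(len(myArr), k=1)  # (row, col) for each condensed entry
print(d[k], (i[k], j[k]))                # smallest distance and its point pair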
Answer 2:
There's a whole Wikipedia page on just this problem, see: http://en.wikipedia.org/wiki/Closest_pair_of_points
Executive summary: you can achieve O(n log n) with a recursive divide and conquer algorithm (outlined on the Wiki page, above).
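A compact sketch of that divide-and-conquer idea in 2-D (this is the simpler variant that re-sorts the strip at each level, so O(n log^2 n) rather than O(n log n); it assumes at least two input points):

import math

def closest_pair(pts):
    pts = sorted(pts)  # sort once by x-coordinate
    def rec(p):
        n = len(p)
        if n <= 3:  # base case: brute force the few remaining points
            return min(((math.dist(a, b), a, b)
                        for i, a in enumerate(p) for b in p[i + 1:]),
                       key=lambda t: t[0])
        mid = n // 2
        midx = p[mid][0]
        best = min(rec(p[:mid]), rec(p[mid:]), key=lambda t: t[0])
        # check the strip of width 2*d around the dividing line
        strip = sorted((q for q in p if abs(q[0] - midx) < best[0]),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:  # at most 7 neighbours matter
                if b[1] - a[1] >= best[0]:
                    break
                d = math.dist(a, b)
                if d < best[0]:
                    best = (d, a, b)
        return best
    return rec(pts)

print(closest_pair([(1, 2), (3, 4), (5, 6), (7, 8)]))  # (2.828..., (1, 2), (3, 4))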
Answer 3:
You could take advantage of the latest version of SciPy's (v0.9) Delaunay triangulation tools. You can be sure that the closest two points will be an edge of a simplex in the triangulation, which is a much smaller subset of pairs than doing every combination.
Here's the code (updated for general N-D):
import numpy
from scipy import spatial

dim = 3
N = 1000 * dim
pts = numpy.random.random(N).reshape(N // dim, dim)

def closest_pts(pts):
    # set up the triangulation
    # let Delaunay do the heavy lifting
    mesh = spatial.Delaunay(pts)
    # TODO: eliminate redundant edges (numpy.unique?)
    # (mesh.simplices was called mesh.vertices in SciPy 0.9)
    edges = numpy.vstack((mesh.simplices[:, :dim], mesh.simplices[:, -dim:]))
    # the rest is easy
    x = mesh.points[edges[:, 0]]
    y = mesh.points[edges[:, 1]]
    dists = numpy.sum((x - y) ** 2, 1)
    idx = numpy.argmin(dists)
    # print('distance:', dists[idx])
    # print('coords:\n', pts[edges[idx, :2]])
    return edges[idx, :2]  # indices of the two endpoints
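For example, a hypothetical driver for the function above (not part of the original answer):

pair = closest_pts(pts)
print('closest pair of indices:', pair)
print('squared distance:', numpy.sum((pts[pair[0]] - pts[pair[1]]) ** 2))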
Empirically, the running time seems to scale close to O(n).
Answer 4:
There is a scipy function pdist that will get you the pairwise distances between points in an array in a fairly efficient manner:
http://docs.scipy.org/doc/scipy/reference/spatial.distance.html
It outputs the N*(N-1)/2 unique pairs (since r_ij == r_ji). You can then search on the minimum value and avoid the whole loop mess in your code.
Answer 5:
Perhaps you could proceed along these lines:
In []: from numpy import arange, where
In []: from numpy.random import randn
In []: from scipy.spatial.distance import pdist as pd, squareform as sf
In []: m = 1234
In []: n = 123
In []: p = randn(m, n)
In []: d = sf(pd(p))
In []: a = arange(m)
In []: d[a, a] = d.max()  # mask the zero self-distances on the diagonal
In []: where(d < d.min() + 1e-9)
Out[]: (array([701, 730]), array([730, 701]))
With substantially more points you need to be able to somehow utilize the hierarchical structure of your clustering.
Answer 6:
How fast is it compared to just doing a nested loop and keeping track of the shortest pair? I think creating a huge cross array is what might be hurting you. Even O(n^2) is still pretty quick if you're only doing 2 dimensional points.
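A minimal sketch of that bookkeeping (track the best pair seen so far instead of materialising every pair):

import numpy as np

def brute_force_closest(pts):
    # plain O(n^2) scan; no huge intermediate list of all pairs
    n = len(pts)
    best = (np.inf, -1, -1)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum((pts[i] - pts[j]) ** 2)
            if d < best[0]:
                best = (d, i, j)
    return best

print(brute_force_closest(np.array([[1, 2], [3, 4], [5, 6], [7, 8]])))  # (8, 0, 1)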
Answer 7:
The accepted answer is OK for small datasets, but its execution time scales as n**2. However, as pointed out by @payne, an optimal solution can achieve n*log(n) computation time scaling.
This optimal solution can be obtained using sklearn.neighbors.BallTree as follows.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.neighbors import BallTree as tree
n = 10
dim = 2
xy = np.random.uniform(size=[n, dim])
# This solution is optimal when xy is very large
res = tree(xy)
dist, ids = res.query(xy, 2)
mindist = dist[:, 1] # second nearest neighbour
minid = np.argmin(mindist)
plt.plot(*xy.T, 'o')
plt.plot(*xy[ids[minid]].T, '-o')
This procedure scales well for very large sets of xy values and even for large dimensions dim (although the example illustrates the case dim=2). The resulting output is a plot of all the points with the closest pair joined by a line segment.
An identical solution can be obtained using scipy.spatial.cKDTree, by replacing the sklearn import with the following SciPy one. Note, however, that cKDTree, unlike BallTree, does not scale well for high dimensions:

from scipy.spatial import cKDTree as tree
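With that import, the rest of the snippet runs unchanged, since cKDTree follows the same build-then-query pattern (a sketch of the equivalent calls):

res = tree(xy)                # build the k-d tree on the points
dist, ids = res.query(xy, 2)  # same (distances, indices) return as BallTree.query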