I have a point cloud C, where each point has an associated value. Let's say the points are in 2-D space, so each point can be represented by the triplet (x, y, v).
I'd like to find the subset of points which are local maxima. That is, for some radius R, I would like to find the subset of points S in C such that for any point Pi (with value vi) in S, there is no point Pj in C within R distance of Pi whose value vj is greater than vi.
I see how I could do this in O(N^2) time, but that seems wasteful. Is there an efficient way to do this?
Side Notes:
- The source of this problem is that I'm trying to find local maxima in a sparse matrix, so in my case x, y are ordered integer indices - if this simplifies the problem, let me know!
- I'm perfectly happy if the solution only works for a Manhattan distance or the like.
- I'm in Python, so if there's some kind of nice vectorized numpy way to do this, that's just great.
Use a 2-d tree (the 2D instance of a k-d tree). After O(N log N) preprocessing, it lets you perform a fixed-radius near-neighbor search around each of your points in about O(log N + K) time (where K is the average number of neighbors found), for a total of O(N log N + K N). It also works perfectly well with the Manhattan distance.
Following up on Yves' suggestion, here's an answer that uses scipy's KDTree:
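A minimal sketch of that approach, assuming `KDTree.query_ball_tree` for the fixed-radius neighbor search (function and variable names here are illustrative):

```python
import numpy as np
from scipy.spatial import KDTree

def local_maxima_kdtree(points, values, radius):
    """Indices of points with no higher-valued point within `radius`."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    tree = KDTree(points)
    # for every point, the indices of all points within `radius` (itself included)
    neighborhoods = tree.query_ball_tree(tree, r=radius)
    # a point is a local maximum if its value is >= every value in its neighborhood
    return [i for i, nbrs in enumerate(neighborhoods)
            if values[i] >= values[nbrs].max()]

# example usage on random data
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(500, 2))
vals = rng.uniform(size=500)
peaks = local_maxima_kdtree(pts, vals, radius=1.0)
```

Passing `p=1` to `query_ball_tree` switches the search to Manhattan distance, which matches the side note in the question.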
I found this solution, but it's probably O(N^2):
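Something along these lines, sketched here as a plain loop with Manhattan distance (names are illustrative):

```python
import numpy as np

def local_maxima_naive(points, values, radius):
    """O(N^2): compare every point against all others using Manhattan distance."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    maxima = []
    for i in range(len(points)):
        # Manhattan distance from point i to every point in the cloud
        dist = np.abs(points - points[i]).sum(axis=1)
        # keep i only if nothing within `radius` has a strictly larger value
        if not np.any((dist <= radius) & (values > values[i])):
            maxima.append(i)
    return maxima
```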
The idea is to define sliding windows/filters over your matrix. All the other solutions that come to my mind would rather find absolute maxima (e.g. histogramming)...
However, I hope my approach is useful to some extent...
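A minimal sketch of the sliding-window idea on a dense grid, assuming `scipy.ndimage.maximum_filter` with a square window of side 2R+1 (names are illustrative):

```python
import numpy as np
from scipy import ndimage

def local_maxima_grid(matrix, radius):
    """Local maxima of a dense 2-D array under a (2*radius+1) square window."""
    # maximum over each cell's sliding window (Chebyshev neighborhood of `radius`)
    window_max = ndimage.maximum_filter(matrix, size=2 * radius + 1)
    # a cell is a local maximum if it equals the maximum over its own window
    return np.argwhere(matrix == window_max)
```

Since the question's matrix is sparse, you would densify it first, and you may want to mask out empty cells so that all-zero neighborhoods don't count as maxima.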
EDIT: here's another solution, which should be faster than the first but is still O(N^2), and it does not depend on rectilinear gridded data:
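A broadcast-based sketch of such a vectorized O(N^2) version, assuming Manhattan distance (names are illustrative):

```python
import numpy as np

def local_maxima_broadcast(points, values, radius):
    """Still O(N^2), but fully vectorized; works for arbitrary (non-gridded) coordinates."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    # full pairwise Manhattan distance matrix, shape (N, N)
    dist = np.abs(points[:, None, :] - points[None, :, :]).sum(axis=-1)
    within = dist <= radius
    # point i is dominated if some point within `radius` has a strictly larger value
    dominated = (within & (values[None, :] > values[:, None])).any(axis=1)
    return np.flatnonzero(~dominated)
```

Note that building the full distance matrix also costs O(N^2) memory, so this trades memory for speed compared with the looped version.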