Consider some lists of integers:
#--------------------------------------
0 [0,1,3]
1 [1,0,3,4,5,10,...]
2 [2,8]
3 [3,1,0,...]
...
n []
#--------------------------------------
The question is how to merge all lists that share at least one common element. For the given part alone, the results will be as follows:
#--------------------------------------
0 [0,1,3,4,5,10,...]
2 [2,8]
#--------------------------------------
What is the most efficient way to do this on large data (the elements are just numbers)? Is a tree structure something to think about? I do the job now by converting the lists to sets and iterating over intersections, but it is slow! Furthermore, I have a feeling it is so elementary! In addition, the implementation lacks something (unknown), because some lists sometimes remain unmerged! Having said that, if you propose a self-implementation, please be generous and provide simple sample code [apparently Python is my favorite :)] or pseudo-code.
Update 1:
Here is the code I was using:
#--------------------------------------
lsts = [[0, 1, 3],
        [1, 0, 3, 4, 5, 10, 11],
        [2, 8],
        [3, 1, 0, 16]]
#--------------------------------------
The function is (buggy!!):
#--------------------------------------
def merge(lsts):
    # Convert each list to a set for fast intersection/union operations.
    sts = [set(l) for l in lsts]
    i = 0
    while i < len(sts):
        j = i + 1
        while j < len(sts):
            if len(sts[i].intersection(sts[j])) > 0:
                sts[i] = sts[i].union(sts[j])
                sts.pop(j)
            else:
                j += 1  # ---corrected
        # Remaining bug: once i advances, sts[i] is never re-tested
        # against later merges, so chained overlaps can stay unmerged.
        i += 1
    lst = [list(s) for s in sts]
    return lst
#--------------------------------------
The result is:
#--------------------------------------
>>> merge(lsts)
[[0, 1, 3, 4, 5, 10, 11, 16], [8, 2]]
#--------------------------------------
Update 2:
In my experience, the code given by Niklas Baumstark below proved to be a bit faster for the simple cases. I have not tested the method given by "Hooked" yet, since it is a completely different approach (by the way, it seems interesting). The testing procedure for all of these could be really hard, or even impossible, to verify: the real data set I will use is so large and complex that it is impossible to trace any error just by repetition. That is, I need to be 100% confident of the reliability of the method before pushing it into its place within a large code as a module. For now, Niklas's method is faster, and the answer for simple sets is of course correct.
However, how can I be sure that it works well for a really large data set, since I will not be able to trace the errors visually?
Update 3: Note that reliability of the method is much more important than speed for this problem. I hope to finally be able to translate the Python code to Fortran for maximum performance.
Update 4:
There are many interesting points in this post, generously given answers, and constructive comments. I would recommend reading all of them thoroughly. Please accept my appreciation for the development of the question, the amazing answers, and the constructive comments and discussion.
Here's an implementation using a disjoint-set data structure (specifically a disjoint forest), thanks to comingstorm's hint about merging sets which have even one element in common. I'm using path compression for a slight (~5%) speed improvement; it's not entirely necessary (and it prevents find from being tail-recursive, which could slow things down). Note that I'm using a dict to represent the disjoint forest; given that the data are ints, an array would also work, although it might not be much faster.

This approach is comparable to the other best algorithms on Rik's benchmarks.
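The answer's actual code is not reproduced above; the following is only a minimal sketch of such a dict-backed disjoint forest (the function names and the iterative path compression are my assumptions):
#--------------------------------------
def merge_via_disjoint_forest(lists):
    # Illustrative sketch, not the answer's exact code.
    parent = {}  # maps each element to its parent in the forest

    def find(x):
        # Find the root of x, compressing the path along the way
        # (iterative, so tail recursion is not a concern here).
        root = x
        while parent[root] != root:
            root = parent[root]
        while parent[x] != root:
            parent[x], x = root, parent[x]
        return root

    def union(x, y):
        parent[find(x)] = find(y)

    for lst in lists:
        for item in lst:
            parent.setdefault(item, item)  # register unseen elements
        for item in lst[1:]:
            union(lst[0], item)  # link the whole list into one tree

    # Collect the merged groups by their root.
    groups = {}
    for item in parent:
        groups.setdefault(find(item), set()).add(item)
    return list(groups.values())
#--------------------------------------
For the question's example input, this returns the two groups {0, 1, 3, 4, 5, 10, 11, 16} and {2, 8} (as sets, in some order).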
My solution works well on small lists and is quite readable without dependencies.
Benchmarking it:
lists = [[1,2,3],[3,5,6],[8,9,10],[11,12,13]]
%timeit merge_list(lists)
100000 loops, best of 3: 4.9 µs per loop
This would be my updated approach:
Note: empty lists are removed during the merging.
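The updated code itself is not included above. As a rough sketch only (the merge_list name is taken from the benchmark; the body is an assumption, not the answerer's actual code), such a dependency-free approach could look like this:
#--------------------------------------
def merge_list(lists):
    # Sketch only: grow each group by repeatedly absorbing every list
    # that overlaps it, then move on to the next untouched list.
    sets = [set(lst) for lst in lists if lst]  # empty lists are dropped
    results = []
    while sets:
        current = sets.pop()
        changed = True
        while changed:
            changed = False
            remaining = []
            for s in sets:
                if current & s:  # overlap found: merge it in
                    current |= s
                    changed = True
                else:
                    remaining.append(s)
            sets = remaining
        results.append(current)
    return results
#--------------------------------------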
Update: Reliability.
You need two tests for 100% reliability of success:
Check that all the resulting sets are mutually disjoint.
Check that the merged sets cover the original data (as suggested by katrielalex).
I think this will take some time, but maybe it'll be worth it if you want to be 100% sure. Both checks are sketched below.
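A minimal sketch of both checks (assuming the merge result is a list of sets and the original data is a list of lists):
#--------------------------------------
from itertools import combinations

def verify(merged, original):
    # Check 1: all resulting sets are mutually disjoint.
    for a, b in combinations(merged, 2):
        assert a.isdisjoint(b), "overlapping result sets: %r %r" % (a, b)

    # Check 2: the merged sets cover exactly the original data
    # (as suggested by katrielalex).
    assert set().union(*merged) == set().union(*map(set, original))
#--------------------------------------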
My attempt:
Benchmark:
These timings are obviously dependent on the specific parameters of the benchmark, such as the number of classes, number of lists, list size, and so on. Adapt those parameters to your needs to get more helpful results.
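Purely for illustration, a hypothetical data generator exposing those parameters (this is not Niklas's actual benchmark code) could look like this:
#--------------------------------------
import random

def generate_lists(num_classes, num_lists, list_size, value_range=100000):
    # Hypothetical helper: split the value range into one disjoint pool
    # per class, so lists drawn from the same pool can only ever merge
    # with other lists from that pool.
    values = list(range(value_range))
    random.shuffle(values)
    step = len(values) // num_classes
    pools = [values[i * step:(i + 1) * step] for i in range(num_classes)]
    return [random.sample(random.choice(pools), min(list_size, step))
            for _ in range(num_lists)]
#--------------------------------------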
Below are some example outputs on my machine for different parameters. They show that all the algorithms have their strengths and weaknesses, depending on the kind of input they get:
I tried to summarize everything that's been said and done about this topic, in this question and in the duplicate one.
I tried to test and time every solution (all the code here).
Testing
This is the TestCase from the testing module:

This test assumes a list of sets as the result, so I couldn't test a couple of solutions that worked with lists.
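The repository's actual TestCase is not shown here; an illustrative sketch of its shape (assumed, comparing results as sets of frozensets so that ordering doesn't matter) might be:
#--------------------------------------
import unittest

class MergeTestCase(unittest.TestCase):
    # Illustrative sketch, not the repository's actual test code.

    def setUp(self):
        self.lsts = [[0, 1, 3], [1, 0, 3, 4, 5, 10, 11],
                     [2, 8], [3, 1, 0, 16]]
        self.expected = [{0, 1, 3, 4, 5, 10, 11, 16}, {2, 8}]

    def check(self, merge_func):
        result = merge_func(self.lsts)
        # A list of sets is expected; compare order-insensitively.
        self.assertEqual({frozenset(s) for s in result},
                         {frozenset(s) for s in self.expected})
#--------------------------------------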
I couldn't test the following:
Among the ones I could test, two failed:
Timing
The performance is strongly related to the test data employed.

So far, three answers have tried to time their own and the other solutions. Since they used different testing data, they got different results.

Niklas's benchmark is very tweakable. With his benchmark one can run different tests by changing some parameters.
I've used the same three sets of parameters he used in his own answer, and I put them in three different files:
These are the results that I got:
From file: timing_1.txt

From file: timing_2.txt

From file: timing_3.txt
With Sven's testing data I got the following results:
And finally with Agf's benchmark I got:
As I said at the beginning, all the code is available at this git repository. All the merging functions are in a file called core.py; every function there whose name ends with _merge will be auto-loaded during the tests, so it shouldn't be hard to add/test/improve your own solution.

Let me also know if there's something wrong; it's been a lot of coding and I could use a couple of fresh eyes :)
This new function does only the minimum necessary number of disjointness tests, something the other similar solutions fail to do. It also uses a deque to avoid as many linear-time operations as possible, such as list slicing and deletion from early in the list.

The less overlap between the sets in a given set of data, the better this will do compared to the other functions.
Here is an example case. If you have 4 sets, you need to compare: 1 with 2, 3, 4; 2 with 3, 4; and 3 with 4.
If 1 overlaps with 3, then 2 needs to be re-tested to see if it now overlaps with 1, in order to safely skip testing 2 against 3.
There are two ways to deal with this. The first is to restart the testing of set 1 against the other sets after each overlap and merge. The second is to continue with the testing by comparing 1 with 4, then going back and re-testing. The latter results in fewer disjointness tests, as more merges happen in a single pass, so on the re-test pass, there are fewer sets left to test against.
The problem is to track which sets have to be re-tested. In the above example, 1 needs to be re-tested against 2 but not against 4, since 1 was already in its current state before 4 was tested the first time.
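As a minimal sketch of this bookkeeping (a reconstruction from the description above, not necessarily the answer's exact code):
#--------------------------------------
from collections import deque

def merge_min_tests(lists):
    # Sketch: rotate candidate sets through a deque and count how many
    # consecutive sets have tested disjoint from the current one; once
    # that count reaches the deque's length, everything left is truly
    # disjoint and needs no further testing.
    sets = deque(set(lst) for lst in lists if lst)
    results = []
    while sets:
        current = sets.popleft()
        disjoint = 0  # consecutive disjoint tests since the last merge
        while sets and disjoint < len(sets):
            candidate = sets.popleft()
            if current.isdisjoint(candidate):
                sets.append(candidate)  # rotate back for later re-testing
                disjoint += 1
            else:
                current |= candidate  # merge; sets that tested disjoint
                disjoint = 0          # earlier may now overlap, so reset
        results.append(current)
    return results
#--------------------------------------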
The disjoint counter allows this to be tracked.

My answer doesn't help with the main problem of finding an improved algorithm for recoding into Fortran; it is just what appears to me to be the simplest and most elegant way to implement the algorithm in Python.
According to my testing (or the test in the accepted answer), it's slightly (up to 10%) faster than the next fastest solution.
No need for the un-Pythonic counters (i, range) or complicated mutation (del, pop, insert) used in the other implementations. It uses only simple iteration, merges overlapping sets in the simplest manner, and builds a single new list on each pass through the data; a sketch of this idea follows below.

My (faster and simpler) version of the test code:
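The answer's test code is not reproduced here. As for the merging approach itself, a minimal sketch of the pass-based rebuilding described above (an illustration, not the author's exact code) could be:
#--------------------------------------
def merge_simple(lists):
    # Sketch: on each pass, build one new list of result sets, folding
    # every input set into the first result it overlaps; repeat until a
    # full pass performs no merges.
    sets = [set(lst) for lst in lists if lst]
    merged = True
    while merged:
        merged = False
        results = []
        for s in sets:
            for r in results:
                if not r.isdisjoint(s):
                    r |= s  # fold into an existing group
                    merged = True
                    break
            else:
                results.append(s)  # no overlap: start a new group
        sets = results
    return sets
#--------------------------------------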