I have two large files. Their contents look like this:
134430513
125296589
151963957
125296589
Each file contains an unsorted list of ids. Some ids may appear more than once in a single file.
Now I want to find the intersection of the two files, that is, the ids that appear in both files.
I just read the two files into two sets, s1 and s2, and get the intersection with s1.intersection(s2). But it consumes a lot of memory and seems slow.
So is there any better or more Pythonic way to do this? If the files contain so many ids that they cannot be read into a set with limited memory, what can I do?
EDIT: I read the files into 2 sets using a generator:

def id_gen(path):
    for line in open(path):
        tmp = line.split()
        yield int(tmp[0])

c1 = id_gen(path)
s1 = set(c1)
All of the ids are numeric, and the max id may be 5000000000. If I use a bitarray, it will consume even more memory.
So the algorithm is not necessarily tied to Python, but rather generic, whenever you cannot represent all ids in an in-memory set. If the range of the integers is limited, an approach would be to use a large bitarray. First you read the first file and mark each integer as present in the bitarray. Then you read the second file and output all numbers that are also present in the bitarray.

If even this is not sufficient, you can split the range using multiple sweeps. I.e., in the first pass you only consider integers smaller than 0x200000000 (a 1 GB bitarray). Then you reset the bitarray and read the files again, only considering integers from 0x200000000 to 0x400000000 (and subtracting 0x200000000 before handling the integer).

This way you can handle LARGE amounts of data with reasonable runtime.
A sample for a single sweep would be:
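(The original sample did not survive; below is a minimal sketch of the single-sweep idea. It uses a stdlib bytearray as the bit set to stay dependency-free — the bitarray package the answer mentions works the same way, one bit per possible id. The function name and signature are assumptions.)

```python
def intersect_bitset(path1, path2, max_id):
    # One bit per possible id: max_id / 8 bytes (~625 MB for max_id = 5e9).
    bits = bytearray((max_id >> 3) + 1)
    with open(path1) as f:
        for line in f:                      # mark every id from the first file
            n = int(line)
            bits[n >> 3] |= 1 << (n & 7)
    common = set()
    with open(path2) as f:
        for line in f:                      # report ids whose bit is set
            n = int(line)
            if bits[n >> 3] & (1 << (n & 7)):
                common.add(n)
    return common
```

For the multi-sweep variant, wrap this in a loop over sub-ranges, skipping ids outside the current range and subtracting the range offset before indexing.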
AFAIK there is no efficient way to do this with Python, especially if you are dealing with massive amounts of data.
I like rumpel's solution. But please note that bitarray is a C extension.
I would use shell commands to handle this. You can pre-process files to save time & space:
Then you can use diff to find out the similarities. Of course it is possible to combine everything into a single command, without creating intermediary files.
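(The command samples are missing; here is a hedged reconstruction — the file names s1.txt and s2.txt are assumptions. Since plain diff reports differences rather than common lines, this sketch uses comm, in line with the update below.)

```shell
# Pre-process: sort and drop duplicates (comm requires sorted input).
sort -u s1.txt > s1.sorted
sort -u s2.txt > s2.sorted

# -12 suppresses lines unique to each file, printing only the common ones.
comm -12 s1.sorted s2.sorted
```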
UPDATE
According to Can's recommendation, comm is the more appropriate command.

You need not create both s1 and s2. First read in the lines from the first file, converting each line to an integer (this saves memory) and putting it in s1. Then, for each line in the second file, convert it to an integer and check whether the value is in s1. That way you'll save memory, both from storing strings and from having two sets.
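A sketch of this single-set approach (the function name is mine):

```python
def intersect_one_set(path1, path2):
    # Only the first file is materialized, as a set of ints (not strings).
    with open(path1) as f:
        s1 = set(int(line) for line in f)
    common = set()
    with open(path2) as f:
        for line in f:              # stream the second file line by line
            n = int(line)
            if n in s1:
                common.add(n)
    return common
```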
Others have shown the more idiomatic ways of doing this in Python, but if the size of the data really is too big, you can use the system utilities to sort and eliminate duplicates, then use the fact that a File is an iterator which returns one line at a time, doing something like:
This avoids having more than one line at a time (for each file) in memory (and the system sort should be faster than anything Python can do, as it is optimized for this one task).
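The "something like" code is gone; below is a sketch under the stated assumption that both files have already been passed through the system sort with duplicate elimination (e.g. sort -nu), so one merge-style pass over the two file iterators suffices:

```python
def intersect_sorted(path1, path2):
    # Assumes both files are numerically sorted with duplicates removed,
    # e.g. via `sort -nu file > file.sorted`.
    with open(path1) as f1, open(path2) as f2:
        line1, line2 = f1.readline(), f2.readline()
        while line1 and line2:
            n1, n2 = int(line1), int(line2)
            if n1 == n2:            # present in both: emit and advance both
                yield n1
                line1 = f1.readline()
                line2 = f2.readline()
            elif n1 < n2:           # advance whichever file is behind
                line1 = f1.readline()
            else:
                line2 = f2.readline()
```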
Using s1 & s2, which is equivalent to using intersection, is the most Pythonic way. You might be able to speed it up by converting each line to int before adding it to the set, since then you'll be storing and comparing integers rather than strings. This only works if all ids are numeric, of course.
If it's still not fast enough, you can turn to a slightly more imperative style. If both files are guaranteed not to contain duplicates, you can also use a list to speed things up.
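The code blocks in this answer were lost; here are reconstructed sketches of the variants it describes (function names are mine):

```python
# Most Pythonic: the & operator on two sets of ints.
def common_ids(path1, path2):
    with open(path1) as f1, open(path2) as f2:
        return set(map(int, f1)) & set(map(int, f2))

# More imperative: one set, explicit membership tests while streaming file 2.
def common_ids_imperative(path1, path2):
    with open(path1) as f:
        s1 = set(map(int, f))
    result = set()
    with open(path2) as f:
        for line in f:
            n = int(line)
            if n in s1:
                result.add(n)
    return result

# If neither file contains duplicates, appending to a list skips the
# re-hashing that result.add() would do for repeated hits.
def common_ids_list(path1, path2):
    with open(path1) as f:
        s1 = set(map(int, f))
    with open(path2) as f:
        return [n for n in map(int, f) if n in s1]
```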
Be sure to measure various solutions, and check whether you're not actually waiting for disk or network input.
For data larger than memory, you can split each data file into 10 files, bucketed by the last digit of the id:
so all ids in s1.txt that end with 0 will be saved in s1_0.txt, and so on.
Then use set() to find the intersection of s1_0.txt and s2_0.txt, of s1_1.txt and s2_1.txt, ...
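A sketch of this bucketing scheme (the file-naming convention follows the answer; function and parameter names are mine):

```python
import os

def split_by_last_digit(path, prefix):
    # Write each id to <prefix>_<d>.txt, where d is the id's last digit.
    outs = [open(f"{prefix}_{d}.txt", "w") for d in range(10)]
    try:
        with open(path) as f:
            for line in f:
                id_str = line.strip()
                if id_str:
                    outs[int(id_str[-1])].write(id_str + "\n")
    finally:
        for out in outs:
            out.close()

def bucketed_intersection(prefix1, prefix2):
    # Intersect matching buckets; each bucket pair fits in memory on its own.
    common = set()
    for d in range(10):
        with open(f"{prefix1}_{d}.txt") as f1, open(f"{prefix2}_{d}.txt") as f2:
            common |= set(map(int, f1)) & set(map(int, f2))
    return common
```

Because an id's last digit is the same in both files, matching ids always land in buckets with the same suffix, so intersecting bucket pairs is equivalent to intersecting the whole files.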