I'm on a Linux machine (Red Hat) and I have an 11GB text file. Each line in the text file contains the data for a single record, and the first n characters of the line contain a unique identifier for the record. The file contains a little over 27 million records.
I need to verify that there are no two records with the same unique identifier in the file. I also need to perform this process on an 80GB text file, so any solution that requires loading the entire file into memory would not be practical.
I would never recommend that you try to filter such a massive text file in Python. No matter how you tackle it, you will need to go through some complicated steps to make sure that you do not run out of memory.
The first thing that comes to mind is creating a hash of the lines and then using the hash to find duplicates. If you save the line number as well, you can then directly compare the text to make sure that there are no hash collisions.
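A minimal sketch of that idea (the file name records.txt is a placeholder): keep a dict mapping each digest to the line number where it was first seen, and when a digest repeats, re-read that earlier line and compare the text directly to rule out a collision. Following the wording above this hashes the whole line, so it flags fully duplicated records; hash just the identifier prefix instead if records may differ after the identifier.

```python
import hashlib

def line_at(path, target):
    # Re-read the file to fetch line number `target` for a direct comparison.
    with open(path, "rb") as f:
        for lineno, line in enumerate(f):
            if lineno == target:
                return line
    return None

path = "records.txt"   # hypothetical file name
seen = {}              # sha256 digest -> line number of first occurrence

with open(path, "rb") as f:
    for lineno, line in enumerate(f):
        digest = hashlib.sha256(line).digest()
        if digest in seen:
            # Same digest: compare the actual text to rule out a hash collision.
            if line_at(path, seen[digest]) == line:
                print("duplicate record on lines", seen[digest], "and", lineno)
        else:
            seen[digest] = lineno
```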
But the easiest solution would be to convert the text file into a database that allows you to quickly sort, search and filter out duplicate items. You can then re-create the text file from the database if that is really a requirement.
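For example, a sketch using the sqlite3 module from the standard library (the file names, the table layout and the 7-character identifier width are all assumptions): loading the identifier and the full line lets you find duplicates with a GROUP BY and still re-create the text file later.

```python
import sqlite3

ID_LEN = 7  # assumed identifier width; adjust to your record layout

conn = sqlite3.connect("records.db")   # hypothetical database file
conn.execute("CREATE TABLE IF NOT EXISTS records (ident TEXT, line TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_ident ON records (ident)")

with open("records.txt") as f:         # hypothetical input file
    conn.executemany(
        "INSERT INTO records (ident, line) VALUES (?, ?)",
        ((line[:ID_LEN], line.rstrip("\n")) for line in f),
    )
conn.commit()

# Identifiers that occur more than once, with their counts.
for ident, count in conn.execute(
    "SELECT ident, COUNT(*) FROM records GROUP BY ident HAVING COUNT(*) > 1"
):
    print(ident, count)
conn.close()
```

The index is only needed if you plan to query individual identifiers afterwards; for a one-off duplicate check the GROUP BY alone is enough.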
Assuming I couldn't use a database, I'd try something like the sketch below. It will output any duplicate identifiers and how many times they appeared.
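A minimal sketch, assuming the identifier is the first 7 characters of each line (adjust ID_LEN to your layout) and that the set of distinct identifiers, though not the whole file, fits in memory; records.txt is a placeholder name.

```python
from collections import Counter

ID_LEN = 7  # assumed identifier width; adjust to your record layout

counts = Counter()
with open("records.txt") as f:   # hypothetical file name
    for line in f:               # reads one line at a time
        counts[line[:ID_LEN]] += 1

for ident, n in counts.items():
    if n > 1:
        print(ident, n)
```

Only the identifiers are held in memory; for the 80GB file that may still be several gigabytes, in which case the database route above is the safer choice.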
Read the file line-by-line, so you don't have to load it all into memory.
For each line (record) create a sha256 hash (32 bytes), unless your identifier is shorter.
Store the hashes/identifiers in a numpy.array. That is probably the most compact way to store them. 27 million records times 32 bytes/hash is 864 MB. That should fit into the memory of a decent machine these days.
To speed up access you could use the first e.g. 2 bytes of the hash as the key of a collections.defaultdict and put the rest of the hashes in a list in the value. This would in effect create a hash table with 65536 buckets. For 27e6 records, each bucket would contain on average a list of around 400 entries. It would mean faster searching than a numpy array, but it would use more memory.
I haven't tried this on a file quite that large, but assuming that the identifier is in the first 7 characters of each line, and that the lines aren't longer than 999+7 characters, this might do the job:
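A sketch of the bucketed defaultdict idea under those assumptions (7-character identifier, hypothetical file name): the first 2 bytes of each digest pick one of the 65536 buckets, and the remaining 30 bytes are stored in that bucket's list.

```python
import hashlib
from collections import defaultdict

ID_LEN = 7  # assumed: the identifier is the first 7 characters of each line

# 65536 buckets: the first 2 bytes of the digest choose the bucket,
# the remaining 30 bytes are appended to that bucket's list.
buckets = defaultdict(list)

with open("records.txt", "rb") as f:   # hypothetical file name
    for lineno, line in enumerate(f, start=1):
        digest = hashlib.sha256(line[:ID_LEN]).digest()
        key, rest = digest[:2], digest[2:]
        if rest in buckets[key]:       # linear scan of ~400 entries on average
            print("duplicate identifier on line", lineno, ":",
                  line[:ID_LEN].decode(errors="replace"))
        else:
            buckets[key].append(rest)
```

If the identifier is shorter than the 32-byte digest, as it is here, you could store the identifier itself in the buckets instead of its hash and skip hashlib entirely.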
There is a related question: Read large text files in Python, line by line without loading it in to memory.
The answer to that question was this:
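In essence, iterate over the file object itself, which yields one line at a time; a minimal version with placeholder names:

```python
def process(line):
    # Placeholder for whatever you need to do with each record.
    pass

with open("records.txt") as infile:   # hypothetical file name
    for line in infile:               # the file object yields one line at a time
        process(line)
```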
Perhaps that will help you somehow. Good luck.