Consider a really huge file (maybe more than 4 GB) on disk. I want to scan through this file and count the number of times a specific binary pattern occurs.
My thought is:
Use a memory-mapped file (CreateFileMapping or boost's mapped_file) to map the file into virtual memory.
For each 100 MB of mapped memory, create one thread to scan it and tally the matches.
Is this feasible? Is there a better method to do this?
Update:
A memory-mapped file turned out to be a good choice; scanning through a 1.6 GB file was handled within 11 s.
Thanks.
I would have one thread read the file (possibly as a stream) into an array and have another thread process it. I wouldn't map several regions at one time because of disk seeks. I would probably have a ManualResetEvent to tell my processing thread when the next chunk of bytes is ready to be processed. Assuming your processing code is faster than the HDD, I would have two buffers, one to fill and the other to process, and just switch between them each time.
I would do it with asynchronous reads into a double buffer: when one buffer has been read from the file, start reading the next buffer while scanning the first one. That way you do CPU work and I/O in parallel. Another advantage is that you always have the data on both sides of a buffer boundary in memory, so a pattern that straddles the boundary can still be matched. However, I don't know whether double buffering is possible with memory-mapped files.
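With plain reads, the idea looks roughly like this (a minimal sketch, not tuned against a real drive; BUFFER_SIZE, countMatches and scanFile are names I made up, and matches that straddle a buffer boundary are not handled here):

```cpp
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <future>
#include <vector>

static const std::size_t BUFFER_SIZE = 4 * 1024 * 1024;   // 4 MB per buffer

// Naive count of pattern occurrences inside one buffer.
std::size_t countMatches(const unsigned char* data, std::size_t len,
                         const unsigned char* pattern, std::size_t patLen)
{
    std::size_t count = 0;
    if (patLen == 0 || len < patLen) return count;
    for (std::size_t i = 0; i + patLen <= len; ++i)
        if (std::memcmp(data + i, pattern, patLen) == 0)
            ++count;
    return count;
}

std::size_t scanFile(const char* path,
                     const unsigned char* pattern, std::size_t patLen)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return 0;

    std::vector<unsigned char> current(BUFFER_SIZE), next(BUFFER_SIZE);
    std::size_t total = 0;

    // Prime the first buffer synchronously.
    std::size_t bytes = std::fread(current.data(), 1, BUFFER_SIZE, f);

    while (bytes > 0) {
        // Start reading the next buffer in the background...
        std::future<std::size_t> pending = std::async(std::launch::async,
            [&] { return std::fread(next.data(), 1, BUFFER_SIZE, f); });

        // ...while the CPU scans the buffer we already have.
        total += countMatches(current.data(), bytes, pattern, patLen);

        bytes = pending.get();      // wait for the disk to finish
        current.swap(next);         // switch buffers and repeat
    }

    std::fclose(f);
    return total;
}
```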
Using a memory-mapped file has the additional benefit of avoiding a copy from the file-system cache to the (managed) application memory if you use a read-only view (although you then have to use byte* pointers to access the memory). And instead of creating many threads, use one thread to scan sequentially through the file using, for example, 100 MB memory-mapped views into the file (don't map the entire file into the process address space at once).
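A rough sketch of that sequential-view approach (read-only Win32 mapping as described; the 100 MB view size, the single-byte needle and the function name are my own assumptions, and error handling is minimal):

```cpp
#include <windows.h>

// Scan a huge file through successive read-only views instead of mapping it all at once.
// VIEW_SIZE must be a multiple of the 64 KB allocation granularity; 100 MB is.
const unsigned long long VIEW_SIZE = 100ull * 1024 * 1024;

unsigned long long scanWithViews(const wchar_t* path, unsigned char needle)
{
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 0;

    LARGE_INTEGER size = {};
    GetFileSizeEx(file, &size);
    HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);

    unsigned long long total = 0;
    for (unsigned long long offset = 0;
         mapping && offset < (unsigned long long)size.QuadPart;
         offset += VIEW_SIZE)
    {
        unsigned long long remaining = (unsigned long long)size.QuadPart - offset;
        SIZE_T bytes = (SIZE_T)(remaining < VIEW_SIZE ? remaining : VIEW_SIZE);

        // Read-only view: access is through a raw byte pointer.
        const unsigned char* view = (const unsigned char*)MapViewOfFile(
            mapping, FILE_MAP_READ,
            (DWORD)(offset >> 32), (DWORD)(offset & 0xFFFFFFFFull), bytes);
        if (!view) break;

        for (SIZE_T i = 0; i < bytes; ++i)
            if (view[i] == needle)
                ++total;

        UnmapViewOfFile(view);
    }

    if (mapping) CloseHandle(mapping);
    CloseHandle(file);
    return total;
}
```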
I'd go with only one thread too, not only because of HDD performance issues, but also because you might have trouble managing side effects when splitting your file: what if there's an occurrence of your pattern right where you split it?
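One common fix, if you do split the work into chunks, is to let each chunk look patternLength - 1 bytes past its end and credit a match to the chunk in which it starts, so nothing is missed or counted twice. A sketch, assuming the whole file is addressable (e.g. already mapped) and with made-up names:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>

// Each chunk may read up to patternLength - 1 bytes beyond its own end, but a
// match is only counted in the chunk where it starts, so splits are safe.
std::size_t countWithOverlap(const unsigned char* data, std::size_t fileSize,
                             const unsigned char* pattern, std::size_t patternLength,
                             std::size_t chunkSize)
{
    std::size_t count = 0;
    if (patternLength == 0) return count;
    for (std::size_t start = 0; start < fileSize; start += chunkSize) {
        std::size_t limit = std::min(start + chunkSize, fileSize);  // matches must start before the split
        for (std::size_t i = start; i < limit && i + patternLength <= fileSize; ++i)
            if (std::memcmp(data + i, pattern, patternLength) == 0)
                ++count;
    }
    return count;
}
```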
Multithreading is only going to make this go slower, unless you want to scan multiple files, each on a different hard drive. Otherwise you are just going to seek.
I wrote a simple test function using memory-mapped files. With a single thread, a 1.4 GB file took about 20 seconds to scan. With two threads, each taking half the file (even 1 MB chunks to one thread, odd to the other), it took more than 80 seconds.
That's right: two threads were four times slower than one thread!
The code I used was along the lines of the sketch below; this is the single-threaded version. I used a 1-byte scan pattern, so the code to locate matches that straddle map boundaries went untested.
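A simplified version (using boost::iostreams::mapped_file_source for brevity; the file name and pattern byte are placeholders, and a 32-bit process would need to map smaller views rather than the whole file):

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <boost/iostreams/device/mapped_file.hpp>

int main()
{
    const unsigned char needle = 0x00;                         // the 1-byte scan pattern
    boost::iostreams::mapped_file_source file("huge.bin");     // read-only, whole-file mapping

    const unsigned char* data =
        reinterpret_cast<const unsigned char*>(file.data());
    const std::ptrdiff_t hits = std::count(data, data + file.size(), needle);

    std::cout << "occurrences: " << hits << '\n';
    return 0;
}
```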
Creating 20 threads, each supposed to handle some 100 MB of the file, is likely to only worsen performance, since the HD will have to read from several unrelated places at the same time.
HD performance is at its peak when it reads sequential data. So assuming your huge file is not fragmented, the best thing to do would probably be to use just one thread and read from start to end in chunks of a few (say 4) MB.
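A sketch of that single-threaded, sequential approach (std::ifstream in 4 MB chunks; countPattern and the chunk size are my own choices, and the last patLen - 1 bytes of each chunk are carried over so matches across chunk boundaries are still found):

```cpp
#include <cstddef>
#include <cstring>
#include <fstream>
#include <vector>

std::size_t countPattern(const char* path,
                         const unsigned char* pattern, std::size_t patLen)
{
    const std::size_t CHUNK = 4 * 1024 * 1024;            // read 4 MB at a time
    std::ifstream in(path, std::ios::binary);
    if (!in || patLen == 0) return 0;

    std::vector<unsigned char> buf(CHUNK + patLen - 1);
    std::size_t carried = 0;                              // tail bytes kept from the previous chunk
    std::size_t count = 0;

    for (;;) {
        in.read(reinterpret_cast<char*>(buf.data() + carried), CHUNK);
        std::size_t got = static_cast<std::size_t>(in.gcount());
        if (got == 0) break;

        std::size_t valid = carried + got;
        for (std::size_t i = 0; i + patLen <= valid; ++i)
            if (std::memcmp(buf.data() + i, pattern, patLen) == 0)
                ++count;

        // Keep the last patLen - 1 bytes: a match starting there runs past the
        // data read so far and has not been counted yet, so carrying it over
        // neither misses nor double-counts anything.
        carried = (valid >= patLen - 1) ? patLen - 1 : valid;
        std::memmove(buf.data(), buf.data() + valid - carried, carried);
    }
    return count;
}
```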
But what do I know. File systems and caches are complex. Do some testing and see what works best.