Suppose I have a dataset that is an array of 1e12 32-bit ints (4 TB) stored in a file on a 4 TB HDD with an ext4 filesystem.
Consider that the data is most likely random (or at least seems random).
// pseudo-code
for (long long i = 0; i < (1LL << 40); i++)
    SetFileIntAt(i) = GetRandInt();
Further, consider that I wish to read individual int elements in an unpredictable order and that the algorithm runs indefinitely (it is ongoing).
// pseudo-code
while (true)
    UseInt(GetFileInt(GetRand(1LL << 40)));
We are on Linux x86_64 with gcc. You can assume the system has 4 GB of RAM (i.e. 1000x less than the dataset).
The following are two ways to architect access:
(A) mmap the file to a 4TB block of memory, and access it as an int array
(B) open(2) the file and use lseek(2) and read(2) to read the ints.
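For concreteness, a rough sketch of the two designs (hypothetical file name "data.bin", error handling omitted):

// sketch, not production code
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define N_INTS  (1ULL << 40)                    /* number of 32-bit ints */
#define N_BYTES (N_INTS * sizeof(int32_t))      /* ~4 TB of data         */

/* (A) mmap the whole file and index it as an array. */
static int32_t get_int_mmap(const int32_t *map, uint64_t idx)
{
    return map[idx];    /* a page fault pulls in the containing 4 KiB page */
}

/* (B) explicit positioned read of a single element. */
static int32_t get_int_pread(int fd, uint64_t idx)
{
    int32_t v;
    pread(fd, &v, sizeof v, (off_t)(idx * sizeof v));
    return v;
}

int main(void)
{
    int fd = open("data.bin", O_RDONLY);
    const int32_t *map = mmap(NULL, N_BYTES, PROT_READ, MAP_SHARED, fd, 0);

    /* Example accesses; the real workload draws random indices forever. */
    int32_t a = get_int_mmap(map, 123456789ULL);
    int32_t b = get_int_pread(fd, 123456789ULL);
    (void)a; (void)b;
    return 0;
}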
Out of A and B, which will have the better performance, and why?
Is there another design that will give better performance than either A or B?
I'd say performance should be similar if access is truly random. The OS will use a similar caching strategy whether the data page is mapped from a file or the file data is simply held in the page cache without being mapped into your address space.
Assuming the cache is ineffective:
- You can use fadvise to declare your access pattern in advance and disable readahead (see the sketch at the end of this answer).
- Due to address space layout randomization, there might not be a contiguous block of 4 TB in your virtual address space.
- If your data set ever expands, the address space issue might become more pressing.
So I'd go with explicit reads.
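A minimal sketch of the explicit-read path with readahead disabled via posix_fadvise(2), the portable interface to fadvise (hypothetical file name, error handling omitted):

// sketch
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

static int32_t get_file_int(int fd, uint64_t idx)
{
    int32_t v;
    pread(fd, &v, sizeof v, (off_t)(idx * sizeof v));
    return v;
}

int main(void)
{
    int fd = open("data.bin", O_RDONLY);

    /* Declare a random access pattern: the kernel disables readahead. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);

    /* Example access; the real loop would draw random indices forever. */
    int32_t v = get_file_int(fd, 123456789ULL);
    (void)v;
    return 0;
}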
On the one hand, with mmap you make heavy use of the paging machinery, resulting in page faults (major faults whenever the page must be read from disk) that are transparent to the application. On the other hand, with explicit reads you issue numerous system calls, with their well-known overhead. The Wikipedia page on memory-mapped files seems quite clear to me; it covers the pros and cons comprehensively.
I think a 64-bit architecture plus a very large file calls for the memory-mapped approach, at least to avoid complicating the application; I have been told that complexity often leads to poor performance. However, mmap() is usually suited to sequential access, which is not the purpose here.
Because this is pure random access, there is little chance that two accesses will hit the same RAM-resident page. A full 4 KB page will be read from the HDD into RAM just for 4 bytes of data... This wastes bus and I/O bandwidth and will probably result in poor performance.
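If mmap is used anyway, madvise(2) with MADV_RANDOM at least tells the kernel not to read ahead around each faulting page. A sketch, with a hypothetical file name and error handling omitted:

// sketch
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    uint64_t n_ints  = 1ULL << 40;
    size_t   n_bytes = (size_t)n_ints * sizeof(int32_t);   /* ~4 TB */

    int fd = open("data.bin", O_RDONLY);
    int32_t *map = mmap(NULL, n_bytes, PROT_READ, MAP_SHARED, fd, 0);

    /* Random access pattern: fault in single pages, skip readahead. */
    madvise(map, n_bytes, MADV_RANDOM);

    /* ... while (true) UseInt(map[GetRand(n_ints)]); ... */
    return 0;
}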
Hope this helps.
For a 4 TB linear dataset you probably don't need a file system at all; raw device access may bring some performance benefit.
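One hedged sketch of what that could look like: opening the block device (or the file) with O_DIRECT bypasses the page cache entirely, but it requires aligned buffers and offsets, so each 4-byte element still has to be fetched as a whole aligned block (hypothetical device path, error handling omitted):

// sketch
#define _GNU_SOURCE                       /* for O_DIRECT */
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK 4096                        /* assumed alignment/block size */

static int32_t get_int_direct(int fd, void *buf, uint64_t idx)
{
    off_t byte_off = (off_t)(idx * sizeof(int32_t));
    off_t blk_off  = byte_off & ~((off_t)BLOCK - 1);     /* align down  */

    pread(fd, buf, BLOCK, blk_off);                      /* whole block */
    return ((int32_t *)buf)[(byte_off - blk_off) / sizeof(int32_t)];
}

int main(void)
{
    int fd = open("/dev/sdX", O_RDONLY | O_DIRECT);      /* hypothetical device */

    void *buf;
    posix_memalign(&buf, BLOCK, BLOCK);   /* O_DIRECT requires aligned memory */

    int32_t v = get_int_direct(fd, buf, 123456789ULL);   /* example index */
    (void)v;

    free(buf);
    return 0;
}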
Also, perhaps there is a way to optimize the queries or the data structure so that caching can be used more effectively?
Seek performance depends heavily on your file system implementation. ext4 should be a good choice, as it uses extent trees. Also, if your file is allocated as a single contiguous run, the extent tree consists of a single entry, which makes seeks trivially efficient.
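If you control how the file is created, preallocating the full size with fallocate(2) before writing gives ext4 the best chance of laying the file out in a few large contiguous extents; filefrag(8) then shows the resulting extent count. A sketch (hypothetical file name, error handling omitted):

// sketch
#define _GNU_SOURCE                       /* for fallocate */
#include <fcntl.h>
#include <stdint.h>

int main(void)
{
    off_t n_bytes = (off_t)(1ULL << 40) * sizeof(int32_t);   /* ~4 TB */

    int fd = open("data.bin", O_WRONLY | O_CREAT, 0644);

    /* Reserve the full range up front so ext4 can allocate large extents. */
    fallocate(fd, 0, 0, n_bytes);

    /* ... write the data, then check the layout with: filefrag -v data.bin ... */
    return 0;
}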