mmap problem, allocates huge amounts of memory

Published 2020-02-17 05:18

Question:

I have some huge files I need to parse, and people have been recommending mmap because it should avoid having to read the entire file into memory.

But looking at top, it does look like I'm reading the entire file into memory, so I think I must be doing something wrong. top shows more than 2.1 GB.

This is a code snippet that shows what I'm doing.

Thanks

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/mman.h>

int main (int argc, char *argv[]) {
  struct stat sb;

  // open a file descriptor
  int fd = open (argv[1], O_RDONLY);
  if (fd == -1) {
    perror ("open");
    return 1;
  }
  // stat the file to get its size
  if (fstat (fd, &sb) == -1) {
    perror ("fstat");
    return 1;
  }
  // do the actual mmap, and keep a pointer to the first byte
  char *p = (char *) mmap (0, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
  if (p == MAP_FAILED) {
    perror ("mmap");
    return 1;
  }
  // count the number of lines; the mapping is not NUL-terminated,
  // so iterate over sb.st_size bytes instead of scanning for '\0'
  size_t numlines = 0;
  for (off_t i = 0; i < sb.st_size; i++)
    if (p[i] == '\n')
      numlines++;
  fprintf (stderr, "numlines:%zu\n", numlines);
  // unmap it
  if (munmap (p, sb.st_size) == -1) {
    perror ("munmap");
    return 1;
  }
  if (close (fd) == -1) {
    perror ("close");
    return 1;
  }
  return 0;
}

Answer 1:

No, what you're doing is mapping the file into memory. This is different to actually reading the file into memory.

Were you to read it in, you would have to transfer the entire contents into memory. By mapping it, you let the operating system handle it. If you attempt to read or write to a location in that memory area, the OS will load the relevant section for you first. It will not load the entire file unless the entire file is needed.

That is where you get your performance gain. If you map the entire file but only change one byte then unmap it, you'll find that there's not much disk I/O at all.

Of course, if you touch every byte in the file, then yes, it will all be loaded at some point but not necessarily in physical RAM all at once. But that's the case even if you load the entire file up front. The OS will swap out parts of your data if there's not enough physical memory to contain it all, along with that of the other processes in the system.

The main advantages of memory mapping are:

  • You defer reading file sections until they're needed (and, if they're never needed, they don't get loaded). There's no big upfront cost of loading the entire file; the cost is amortised.
  • Writes are automated: you don't have to write out every byte yourself. Just unmap it and the OS will write out the changed sections. I think this also happens when the memory is swapped out (in low physical memory situations), since your buffer is simply a window onto the file.

Keep in mind that there is most likely a disconnect between your address space usage and your physical memory usage. You can allocate 4G of address space (ideally, though there may be OS, BIOS or hardware limitations) on a 32-bit machine with only 1G of RAM. The OS handles the paging to and from disk.
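The address-space-versus-physical-memory distinction can be demonstrated directly. A minimal sketch (the function names here are mine, not from the answer): reserving a gigabyte of address space with PROT_NONE succeeds without committing any physical RAM, because the kernel only records the reservation until pages are made accessible and touched.

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Reserve 'len' bytes of address space without committing physical RAM.
 * PROT_NONE pages can be neither read nor written, so no page frames
 * are allocated until the protection is changed and the pages are
 * touched. Returns NULL on failure. */
static void *reserve_address_space(size_t len) {
    void *p = mmap(NULL, len, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}

static void release_address_space(void *p, size_t len) {
    munmap(p, len);
}
```

On a 64-bit machine this reservation costs essentially nothing; `top` would show it under virtual size but not under resident memory.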

And to answer your further request for clarification:

Just to clarify: so if I need the entire file, mmap will actually load the entire file?

Yes, but it may not be in physical memory all at once. The OS will swap out bits back to the filesystem in order to bring in new bits.

But it will also do that if you've read the entire file in manually. The difference between those two situations is as follows.

With the file read into memory manually, the OS will swap parts of your address space (which may or may not include that data) out to the swap file, and you will need to manually rewrite the file when you're finished with it.

With memory mapping, you have effectively told it to use the original file as an extra swap area for that file/memory only. And, when data is written to that swap area, it affects the actual file immediately. So there's no manual rewriting when you're done, and (usually) no effect on the normal swap.
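The write-through behaviour described above can be sketched with a small helper (the function name is mine, for illustration): a store through a MAP_SHARED mapping modifies the page-cache page backing the file itself, and msync() forces the dirty page to disk.

```c
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Overwrite one byte of a file through a MAP_SHARED mapping.
 * Because the mapping is shared, the store hits the file's own
 * page-cache page; msync() flushes it to disk. Returns 0 on
 * success, -1 on any failure. */
static int write_byte_via_mmap(const char *path, off_t offset, char value) {
    int fd = open(path, O_RDWR);
    if (fd == -1) return -1;
    struct stat sb;
    if (fstat(fd, &sb) == -1 || offset >= sb.st_size) { close(fd); return -1; }
    char *p = mmap(NULL, sb.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }
    p[offset] = value;              /* the store goes straight to the file's pages */
    msync(p, sb.st_size, MS_SYNC);  /* force the dirty page out to disk */
    munmap(p, sb.st_size);
    close(fd);
    return 0;
}
```

No explicit write() is ever issued; the kernel writes the changed page back on msync() (or eventually on its own).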

It really is just a window onto the file.

Answer 2:

You can also use madvise(2) (and posix_fadvise(2); see also posix_madvise(3)) to mark the mmap'ed file (or parts of it) as read-once/sequential.

#include <sys/mman.h> 

int madvise(void *start, size_t length, int advice);

The advice is indicated in the advice parameter, which can be

MADV_SEQUENTIAL 

Expect page references in sequential order. (Hence, pages in the given range can be aggressively read ahead, and may be freed soon after they are accessed.)

Portability: posix_madvise and posix_fadvise are part of the ADVANCED REALTIME option of IEEE Std 1003.1-2004, and the constants are POSIX_MADV_SEQUENTIAL and POSIX_FADV_SEQUENTIAL.
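Applied to the line-counting example from the question, the portable call looks like this (a sketch; the helper function name is mine):

```c
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Count newlines through a read-only mapping, after advising the
 * kernel that access will be sequential (aggressive read-ahead,
 * early reclaim of pages already passed). Returns -1 on failure. */
static long count_lines_sequential(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd == -1) return -1;
    struct stat sb;
    if (fstat(fd, &sb) == -1) { close(fd); return -1; }
    if (sb.st_size == 0) { close(fd); return 0; }
    char *p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }
    /* advisory only: the count is correct with or without it */
    posix_madvise(p, sb.st_size, POSIX_MADV_SEQUENTIAL);
    long n = 0;
    for (off_t i = 0; i < sb.st_size; i++)
        if (p[i] == '\n')
            n++;
    munmap(p, sb.st_size);
    close(fd);
    return n;
}
```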



Answer 3:

top has many memory-related columns. Most of them are based on the size of the address space mapped into the process, including any shared libraries, swapped-out RAM, and mmap'ed space.

Check the RES column; it reflects the physical RAM currently in use. I think (but am not sure) it includes the RAM used to cache the mmap'ed file.



Answer 4:

You may have been offered the wrong advice.

Memory-mapped files (mmap) will use more and more memory as you parse through them. When physical memory becomes low, the kernel will unmap sections of the file from physical memory based on its LRU (least recently used) algorithm. But the LRU is global: it may also force other processes to swap pages to disk and shrink the disk cache. This can have a severely negative effect on the performance of other processes and the system as a whole.

If you are linearly reading through files, like counting the number of lines, mmap is a bad choice, as it will fill physical memory before releasing memory back to the system. It would be better to use traditional I/O methods which stream or read in a block at a time. That way memory can be released immediately afterward.

If you are randomly accessing a file, mmap is an okay choice. It's not optimal, since you would still be relying on the kernel's general LRU algorithm, but it's faster than writing your own caching mechanism.

In general, I would never recommend anyone use mmap, except for some extreme performance edge cases, like accessing the file from multiple processes or threads at the same time, or when the file is small in relation to the amount of free available memory.



Answer 5:

"allocate the whole file in memory" conflates two issues. One is how much virtual memory you allocate; the other is which parts of the file are read from disk into memory. Here you are allocating enough space to contain the whole file. However, only the pages that you touch will actually be changed on disk. And, they will be changed correctly no matter what happens with the process, once you have updated the bytes in the memory that mmap allocated for you. You can allocate less memory by mapping only a section of the file at a time by using the "size" and "offset" parameters of mmap. Then you have to manage a window into the file yourself by mapping and unmapping, perhaps moving the window through the file. Allocating a big chunk of memory takes appreciable time. This can introduce an unexpected delay into the application. If your process is already memory-intensive, the virtual memory may have become fragmented and it may be impossible to find a big enough chunk for a large file at the time you ask. It may therefore necessary to try to do the mapping as early as possible, or to use some strategy to keep a large enough chunk of memory available until you need it.

However, since you say that you need to parse the file, why not avoid this entirely by organizing your parser to operate on a stream of data? Then the most you will need is some look-ahead and some history, instead of mapping discrete chunks of the file into memory.



Answer 6:

The system will certainly try to keep all your data in physical memory. What you conserve is swap.



Answer 7:

You need to specify a size smaller than the total size of the file in the mmap call if you don't want the entire file mapped into memory at once. Using the offset parameter and a smaller size, you can map "windows" of the larger file, one piece at a time.

If your parsing is a single pass through the file, with minimal lookback or look-forward, then you won't actually gain anything by using mmap instead of standard library buffered I/O. In the example you gave of counting the newlines in the file, it'd be just as fast to do that with fread(). I assume that your actual parsing is more complex, though.
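For the single-pass case, the fread() version mentioned above is short enough to show (a sketch; the function name and buffer size are my choices): memory use stays bounded by the buffer regardless of file size.

```c
#include <stdio.h>

/* Count newlines with plain buffered I/O: read a block at a time
 * and scan it. Returns -1 if the file cannot be opened. */
static long count_lines_fread(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    char buf[1 << 16];   /* 64 KiB scratch buffer */
    long n = 0;
    size_t got;
    while ((got = fread(buf, 1, sizeof buf, f)) > 0)
        for (size_t i = 0; i < got; i++)
            if (buf[i] == '\n')
                n++;
    fclose(f);
    return n;
}
```

Note that, unlike a mmap'ed buffer, reading past the data here is impossible: fread simply returns 0 at end of file.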

If you need to read from more than one part of the file at a time, you'll have to manage multiple mmap regions, which can quickly get complicated.



Answer 8:

A little off topic.

I don't quite agree with Mark's answer. Actually, mmap is faster than fread.

Although fread takes advantage of the system's disk cache, it also has an internal buffer, and in addition, the data is copied into the user-supplied buffer on each call.

By contrast, mmap just returns a pointer into the system's buffer, so two memory copies are saved.

But using mmap is a little dangerous. You must make sure the pointer never runs past the end of the file, or you will get a segmentation fault, whereas fread in that case merely returns zero.



Tags: c++ c memory mmap