How does NSMutableData allocate memory?

Posted 2019-05-06 12:10

When I run the following code, it slowly eats up my memory and even starts using swap:

 long long length = 1024ull * 1024ull * 1024ull * 2ull; // 2 GB

 db = [NSMutableData dataWithLength:length];

 char *array = [db mutableBytes];

 for(long long i = 0; i < length - 1; i++) {
      array[i] = i % 256;
 }

If I run it without the for loop, no memory is used at all:

 long long length = 1024ull * 1024ull * 1024ull * 2ull;
 db = [NSMutableData dataWithLength:length];
 char *array = [db mutableBytes];
 /* for(long long i = 0; i < length - 1; i++) {
      array[i] = i % 256;
 } */

I can only conclude that NSMutableData merely "reserves" the memory, and only when it is accessed does it really "allocate" it. How exactly is this done?

Is this done in hardware (by the CPU)?

Is there a way for NSMutableData to catch memory writes in its "reserved" memory and only then do the "allocation"?

Does this also mean that a call to [NSMutableData dataWithLength:length] can never fail? Can it allocate memory of any size, using swap if needed?

If it can fail, will my db variable be nil?

In Apple's "NSMutableData Class Reference" I have seen only vague sentences about these topics.

1 Answer

一夜七次 · 2019-05-06 12:59

This is not so much an NSMutableData issue as a kernel/OS issue. When a process requests a (big) chunk of memory, the kernel will normally just say "that's fine, here you go", but the memory is only really ("physically") allocated when it is actually used. This makes sense: if a program that starts with a 2 GB allocation (as yours does here) were backed by physical memory immediately, it would instantly push other programs out to swap, while in practice the program often won't use the whole 2 GB right away.

When you access a memory page that is not present in physical memory, the CPU raises a page fault and the kernel handles it. If the page should be there (because it lies within your 2 GB chunk), it is put in place (possibly paged in from swap) and you won't even notice. If it shouldn't be there (because the address is not mapped in your virtual address space), you get a segmentation fault (a SIGSEGV, or an EXC_BAD_ACCESS kind of error on Apple platforms).

A related topic is "overcommit(ment)", where the kernel promises more memory than is actually available. This can cause serious problems if all processes start using the memory they were promised; the exact behaviour is OS dependent.

There are a lot of pages on the internet explaining this better and in more detail; this is just a short intro so you have the right terms to put into Google.

Edit: I just tested this. Linux will happily promise me 4 TB of memory, while, I assure you, there is not even 1 TB of total disk storage in that machine. You can imagine that, if not taken care of, this can cause some headaches when building mission-critical systems.
