C++ dynamic memory allocation limit

Posted 2019-05-23 02:06

Question:

Is there any kind of limit, system or otherwise, for dynamic allocation with new or malloc in C++? The system is 64-bit and I want to allocate an array of some ~800 million structs.

Edit: The reason I didn't test it on my own before was because I don't currently have access to a machine with enough memory, so I felt that there was no point testing it on my current machine.

After running my own tests, I can allocate 800 million elements fine, but malloc returns NULL once I hit ~850 million. The struct contains 7 floats, so the total size is about 22 GB. What's the reason behind this seemingly arbitrary limit? This machine has 4 GB of RAM and 4 GB of virtual memory, so I'm not sure why I'm even able to allocate that much.
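For reference, a minimal sketch of the kind of test described above; Record is a hypothetical stand-in for the 7-float struct, and the overcommit comment is a general observation rather than something stated in the question:

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    // Hypothetical stand-in for the struct in the question: 7 floats, ~28 bytes.
    struct Record {
        float f[7];
    };

    int main() {
        const std::size_t count = 800'000'000;            // ~800 million elements
        const std::size_t bytes = count * sizeof(Record); // ~22 GB

        Record* data = static_cast<Record*>(std::malloc(bytes));
        if (data == nullptr) {
            std::printf("malloc returned NULL for %zu bytes\n", bytes);
            return 1;
        }
        // On systems that overcommit memory, a successful malloc does not
        // guarantee that all of the pages can actually be backed by RAM + swap.
        std::printf("malloc succeeded for %zu bytes\n", bytes);
        std::free(data);
        return 0;
    }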

Answer 1:

There is no way to tell you that other than actually running the code.

The "bitness" just indicate the OS and the architecture that you are targeting, i also want to stress the fact that every OS that support C++ programs has is own implementation of the standard C++ library ( if you are using the std library ) and as coder you are just using headers and namespaces that belongs to the std library and you are relying on the C/C++ library that usually comes with the OS to actually run your code.

I also suggest relying on testing and keeping memory use to a minimum. Some OSes have overflow-protection mechanisms or similar, and may see a massive allocation like this as a threat to system stability; heavy use of RAM also puts a big load on the memory controller, as is normal on an x86 architecture. Usually what you are trying to do is not a good idea: it either ends badly or ends up tied to one very specific machine and OS as the only target for the application you are trying to create.

Finally, you are writing C code, not C++ code!

malloc() is a function from the C world and involves manual memory management (explicit allocation and de-allocation), and your hardware also has to perform a lot, and I mean a lot, of indirections with ~800 million structs.

I suggest switching to a real C++ structure like std::vector (better than std::list for performance), or switching to a language with its own garbage collector and no manual memory management, like C# or Java.
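As a sketch of that std::vector route (Record is again a hypothetical 7-float element, not code from the question), note that the vector reports failure by throwing std::bad_alloc instead of returning NULL:

    #include <cstdio>
    #include <new>
    #include <vector>

    // Hypothetical 7-float element, as in the question.
    struct Record {
        float f[7];
    };

    int main() {
        try {
            std::vector<Record> data;
            data.reserve(800'000'000); // single up-front allocation of ~22 GB
            data.resize(800'000'000);  // value-initializes the elements
            std::printf("allocated %zu elements\n", data.size());
        } catch (const std::bad_alloc&) {
            std::printf("allocation failed with std::bad_alloc\n");
            return 1;
        }
        return 0;
    }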

The answer to your question is no. From a pragmatic point of view, you will also face a big problem optimizing your code, and probably, and I say probably, your life would be easier with a different language than C++, like C# or Java; but keep in mind that garbage collectors are usually memory-hungry, so the best solution in your case is probably C++ with a little extra effort and testing on your part.



Answer 2:

The limit is approximately your free RAM plus the space allowed for swapping to disk. For the record, 800 million bytes = 800 MB, so with small structs you might sit well on the safe side; swapping might not even be required (and should be avoided). Just try it out and see where it crashes ;-)

64 bit: 2^64 / 2^30 ≈ 17 × 10^9 gigabytes (for a byte-addressable architecture, 1 GB = 2^30 bytes), so no worries here.

32 bit: 2^32 bytes ≈ 4 gigabytes, so even here you could be on the safe side.

Divide by two for signed values; you still have plenty of room left, at least on a 64-bit system.
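A small sketch of that arithmetic, printing the pointer width and the sizes involved (the constants and output format are illustrative only):

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Pointer width on this build: 64 bits on a 64-bit target, 32 on 32-bit.
        std::printf("pointer width: %zu bits\n", sizeof(void*) * 8);

        // Largest size malloc/new can even be asked for on this platform.
        const double size_max = static_cast<double>(SIZE_MAX);
        std::printf("SIZE_MAX ~= %.1f GB\n", size_max / (1 << 30));

        // The figure from the question: 800 million structs of 7 floats each.
        const double bytes = 800e6 * 7 * sizeof(float);
        std::printf("800M * 7 floats ~= %.1f GB\n", bytes / (1 << 30));
        return 0;
    }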



Answer 3:

For dynamic allocation, essentially the same restrictions as for static allocation apply, i.e. you are only limited by the amount of memory available, which in turn is limited by the size of pointers. The main difference between 32-bit and 64-bit systems is the pointer size: on a 32-bit system you are restricted to 32-bit pointers, so 4,294,967,296 bytes (4 GB) can be addressed, and since the system reserves some of that, you end up with about 2.5 GB. On a 64-bit system it is far more: 2^64 bytes = 16 exabytes in theory; in practice, because current hardware only implements part of the 64 address bits, it is about 256 terabytes to 4 petabytes, way more than you will need. If you don't have enough memory (and not enough swap space), the allocation might still fail, though.
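To illustrate how that failure shows up with new rather than malloc, here is a minimal sketch (Record is a hypothetical 7-float element like the one in the question): plain new[] throws std::bad_alloc, while the nothrow form returns a null pointer like malloc:

    #include <cstddef>
    #include <cstdio>
    #include <new>

    // Hypothetical 7-float element, as in the question.
    struct Record {
        float f[7];
    };

    int main() {
        const std::size_t count = 850'000'000; // where malloc started failing for the asker

        // Plain new[] reports failure by throwing std::bad_alloc.
        try {
            Record* p = new Record[count];
            std::puts("new[] succeeded");
            delete[] p;
        } catch (const std::bad_alloc&) {
            std::puts("new[] threw std::bad_alloc");
        }

        // The nothrow form behaves like malloc and yields nullptr on failure.
        Record* q = new (std::nothrow) Record[count];
        if (q == nullptr) {
            std::puts("new (nothrow) returned nullptr");
        } else {
            std::puts("new (nothrow) succeeded");
            delete[] q;
        }
        return 0;
    }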