Memory Allocation Pattern On 'AnyCPU' Platform

Posted 2020-04-10 02:47

I'm purposefully leaking memory inside a simple C# program, to understand more about how .NET manages this aspect. The program allocates an int[] array of 10 million elements every 100 ms. The elements of the arrays are never "touched" (i.e. assigned a value), so that the data is not brought into the process's working set:

const int BlockSIZE = 10000000;  // 10 million ints per array (~40 MB each)
const int noOfBlocks = 500;
int[][] intArray = new int[noOfBlocks][];

for (int k = 0; k < noOfBlocks; k++) {
    // Allocate (commit) the array but never write to it, so its pages stay out of the working set
    intArray[k] = new int[BlockSIZE];
    Console.WriteLine("Allocated (but not touched) for array {0}: {1} bytes", k, (long)BlockSIZE * sizeof(int));
    System.Threading.Thread.Sleep(100);
}

I'm using VMMap (the tool built by Mark Russinovich) to see how the memory is being allocated. The version is a recent one (3.25, released in 2018), so it knows about the managed heap.
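As a cross-check on VMMap's numbers, the snippet below (my own sketch, not part of the original program) prints the process's private bytes alongside the GC's view of the managed heap from inside the loop; Process.PrivateMemorySize64 and GC.GetTotalMemory are standard .NET APIs, but the ReportMemory helper name and output format are just illustrative.

using System;
using System.Diagnostics;

// Hypothetical helper, callable from inside the allocation loop: prints figures
// roughly comparable to VMMap's "Private" and "Managed Heap" columns.
static void ReportMemory(int iteration)
{
    using (Process p = Process.GetCurrentProcess())
    {
        long privateBytes = p.PrivateMemorySize64;     // committed private memory of the process
        long managedBytes = GC.GetTotalMemory(false);  // the GC's estimate of allocated managed memory
        Console.WriteLine("[{0}] private: {1:N0} bytes, managed: {2:N0} bytes",
            iteration, privateBytes, managedBytes);
    }
}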

Visual Studio 2015 is used on an x64 Windows 10 machine with 8 GB of RAM to compile and generate the .exe file. Depending on the Platform target setting in the project's Build section, different outcomes can be seen in how the memory is allocated, as follows.
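A quick way to see which bitness the loader actually picked for a given build (useful especially with Any CPU) is to print Environment.Is64BitProcess and IntPtr.Size at startup. This check is my own addition and is not part of the original program:

// At the top of Main (assumed): report the bitness the process actually runs with.
Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);   // false => running as 32-bit (possibly under WOW64)
Console.WriteLine("Pointer size:   {0} bytes", IntPtr.Size);            // 4 = 32-bit, 8 = 64-bit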

Platform target = x86

When Platform target is set to x86, the committed memory grows until close to the 2 GB mark, before an out-of-memory exception is thrown. Since each array commits roughly 40 MB (10 million ints at 4 bytes each), the limit is reached after about 50 arrays. The value is to be expected, since 2 GB is the limit for user virtual address space on the x86 architecture (I'm not using IncreaseUserVA, which would have brought this up to 3 GB; later edit: this is not entirely correct, see David's answer below). VMMap's output in this case is below. Most of the committed data falls under the Managed Heap category, as expected.
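For the 32-bit runs it can be handy to catch the failure instead of letting the process crash. The variant below is a sketch of my own (using the same constants as above, not the original code) that stops at the OutOfMemoryException and reports how much had been requested:

int allocated = 0;
try {
    for (int k = 0; k < noOfBlocks; k++) {
        intArray[k] = new int[BlockSIZE];
        allocated = k + 1;
        System.Threading.Thread.Sleep(100);
    }
}
catch (OutOfMemoryException) {
    Console.WriteLine("Out of memory after {0} arrays (~{1:N0} bytes requested)",
        allocated, (long)allocated * BlockSIZE * sizeof(int));
}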

Platform target = x64

When Platform target is set to x64, the committed area keeps growing, as expected. Eventually the app has to be killed, since it keeps allocating memory. This was also expected: as long as the total amount of available RAM plus paging file(s) can accommodate the growth, the theoretical limit on a 64-bit Windows 10 box is 128 TB of user virtual address space per process (current processors only use 48 of the 64 bits available in a virtual address). VMMap's output is below. Again, most of the committed bytes fall under the Managed Heap category.

[VMMap output for Platform target = x64]
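Since the 64-bit build would otherwise grow until the machine starts paging heavily, one option (again just a sketch of my own, not the original code) is to cap the loop once the process has committed more than a chosen threshold:

const long commitCap = 8L * 1024 * 1024 * 1024;  // stop once ~8 GB is privately committed (arbitrary cap)

for (int k = 0; k < noOfBlocks; k++) {
    intArray[k] = new int[BlockSIZE];
    using (var p = System.Diagnostics.Process.GetCurrentProcess()) {
        if (p.PrivateMemorySize64 > commitCap) {
            Console.WriteLine("Stopping after {0} arrays: {1:N0} private bytes committed", k + 1, p.PrivateMemorySize64);
            break;
        }
    }
    System.Threading.Thread.Sleep(100);
}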

Platform target = Any CPU (Prefer 32-bit)

When Platform target is set to Any CPU and Prefer 32-bit is ticked (this is actually the default setting in Visual Studio 2015), the outcome is not so straightforward. First of all, an out-of-memory exception is thrown when the committed memory reaches about 3.5 GB. Secondly, the private bytes within the Managed Heap category only grow to about 1.2 GB, after which the Private Data category registers the data that is allocated next. VMMap's output is below.

[VMMap output for Platform target = Any CPU with Prefer 32-bit]

Why is the allocation happening as described in the last paragraph for the Any CPU + Prefer 32-bit setting? Specifically, why is a considerable amount of data listed under Private Data instead of Managed Heap?

Later Edit: Added the pictures inline for better clarity.

Tags: c# .net memory
1 Answer

SAY GOODBYE · answered 2020-04-10 03:05

A LARGEADDRESSAWARE 32-bit process running under Windows-on-Windows 64 (WOW64) has 4 GB of user-mode virtual address space (VAS), because the kernel is 64-bit and its memory doesn't need to be mapped into the 4 GB addressable with 32-bit pointers. And you don't have to boot Windows with the /3GB switch (IncreaseUserVA) to get it.

When compiling for x86 you might expect identical behavior on 32-bit and 64-bit platforms, so it makes sense not to set the LARGEADDRESSAWARE flag. This was probably also motivated by backwards compatibility: in the very old days some 32-bit libraries (mis)used the high-order bit of pointers, so historically restricting 32-bit programs to 2 GB was a safety measure.

AnyCPU + Prefer 32-bit is a newer setting and gets the LARGEADDRESSAWARE flag set by default, to give you better access to resources on 64-bit platforms.
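You can confirm this on the compiled .exe with dumpbin /headers (look for "Application can handle large (>2GB) addresses"), or, if you prefer doing it from code, by reading the Characteristics field of the PE file header and testing the IMAGE_FILE_LARGE_ADDRESS_AWARE bit (0x0020). The helper below is my own sketch; the file path in the usage comment is just a placeholder:

using System;
using System.IO;

// Sketch: returns true if the executable has the LARGEADDRESSAWARE flag set.
static bool IsLargeAddressAware(string exePath)
{
    using (var fs = File.OpenRead(exePath))
    using (var br = new BinaryReader(fs))
    {
        fs.Position = 0x3C;                     // e_lfanew: file offset of the "PE\0\0" signature
        int peOffset = br.ReadInt32();
        fs.Position = peOffset + 4 + 18;        // skip signature + IMAGE_FILE_HEADER fields up to Characteristics
        ushort characteristics = br.ReadUInt16();
        return (characteristics & 0x0020) != 0; // IMAGE_FILE_LARGE_ADDRESS_AWARE
    }
}

// e.g. Console.WriteLine(IsLargeAddressAware(@"bin\Debug\MyLeakTest.exe"));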
