C#: new[] does not seem to allocate memory in Debug mode

Posted 2020-07-30 03:15

I am attempting to allocate a large chunk of memory as a byte array (400 MB):

    public static void Main(string[] args)
    {
        const int MegaByte = 1048576;

        byte[] bytes = new byte[400 * MegaByte];

        for (int i = 0; i < bytes.Length; i++)
        {
            bytes[i] = 3;
        }

        Console.WriteLine("Total array length : {0}", bytes.Length);
        Console.WriteLine("First byte = '{0}' Last byte = '{1}'", bytes[0], bytes[bytes.Length - 1]);
        Console.Read();
    }

Output:

Total array length : 419430400
First byte = '3' Last byte = '3'

As expected, I see a large jump in Windows Task Manager for the memory being used, since I have just allocated 400 MB. However, if I replace the for loop with writes to only the first and last bytes:

    public static void Main(string[] args)
    {
        const int MegaByte = 1048576;

        byte[] bytes = new byte[400 * MegaByte];

        bytes[0] = 0;
        bytes[bytes.Length - 1] = 2;

        Console.WriteLine("Total array length : {0}", bytes.Length);
        Console.WriteLine("First byte = '{0}' Last byte = '{1}'", bytes[0], bytes[bytes.Length - 1]);
        Console.Read();
    }

Output:

Total array length : 419430400
First byte = '0' Last byte = '2'

No large jump in used memory is observed, even though the reported length of the allocated array is the same. I assumed that Debug builds would not perform such optimizations. Are there optimizations applied even in 'Debug' builds, or does this get optimized at a lower level when the IL is compiled?

2 Answers

我命由我不由天 · 2020-07-30 04:02

Debug/Release mode optimizations are only instruction-level optimizations (IL-to-native compilation, inlining, reordering of instructions, etc.; everything except constant folding, which is always applied). They don't change the way memory is managed; they only change the flow of your code and the way memory is accessed. You can see this when you look at the IL code and compare it to the corresponding assembler code: most IL just defines how memory is accessed and used, while allocation and deallocation are always managed by the GC (with shorthands such as newobj and newarr for that).

The only way you can influence memory behavior is to influence the GC by changing its settings. The relevant GC settings are documented here: http://msdn.microsoft.com/en-us/library/6bs4szyc(v=vs.110).aspx
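
For example, a couple of those settings are visible from code through System.Runtime.GCSettings (a minimal sketch; IsServerGC and SustainedLowLatency assume .NET 4.5 or later):

    using System;
    using System.Runtime;

    class GcSettingsDemo
    {
        static void Main()
        {
            // Report the GC flavor the process was started with.
            Console.WriteLine("Server GC    : {0}", GCSettings.IsServerGC);
            Console.WriteLine("Latency mode : {0}", GCSettings.LatencyMode);

            // LatencyMode is one of the few knobs adjustable at runtime;
            // server vs. workstation GC is chosen via configuration, not code.
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
        }
    }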

So, in all cases, memory is managed by the GC, which grabs memory from the kernel (I guess using calloc) and releases memory (free) to the kernel. In your case, memory is allocated (address space is reserved for data), but not committed. Once you access a page (4 KB), it's committed and it shows up.
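
You can watch the two views diverge with a small sketch (assuming a 64-bit process so the 400 MB allocation succeeds): the managed heap reports the full array size immediately, while the working set, which is roughly what Task Manager shows, stays small until the pages are touched:

    using System;

    class ReserveVsCommit
    {
        static void Main()
        {
            const int MegaByte = 1048576;

            byte[] bytes = new byte[400 * MegaByte];

            // The managed heap counts the array immediately...
            Console.WriteLine("GC heap size : {0:N0} bytes", GC.GetTotalMemory(false));

            // ...but the working set stays small, because the
            // pages of the array have not been touched yet.
            Console.WriteLine("Working set  : {0:N0} bytes", Environment.WorkingSet);

            GC.KeepAlive(bytes);
        }
    }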

一纸荒年 Trace · 2020-07-30 04:02

To my limited knowledge, there are no differences in object allocation between debug and non-debug builds.

A guess: most likely the number you are looking at in Task Manager is for committed pages, while allocating a large array on the large object heap probably just reserves address space and lets the OS commit/zero the memory as it is accessed. You should be able to somewhat confirm this by accessing every 4 KB of the array, as in the sketch below.
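
A minimal sketch of that experiment (assuming the usual 4096-byte page size, with Environment.WorkingSet standing in for the Task Manager number):

    using System;

    class TouchPages
    {
        static void Main()
        {
            const int MegaByte = 1048576;
            const int PageSize = 4096; // assumed x86/x64 page size

            byte[] bytes = new byte[400 * MegaByte];
            Console.WriteLine("Working set after new[] : {0:N0} bytes", Environment.WorkingSet);

            // Touch one byte per page; each touch faults the page in,
            // so the working set should grow by roughly the full 400 MB.
            for (int i = 0; i < bytes.Length; i += PageSize)
            {
                bytes[i] = 1;
            }

            Console.WriteLine("Working set after touch : {0:N0} bytes", Environment.WorkingSet);
        }
    }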
