So I was just testing the CLR Profiler from Microsoft, and I wrote a little program that created a List<> with 1,000,000 doubles in it. I checked the heap, and it turns out the List<> size was around 124 KB (I don't remember exactly, but it was around that). This really rocked my world: how could it be 124 KB if it held 1 million doubles? Anyway, after that I decided to check a double[1000000]. And to my surprise (well, not really, since this is what I expected with the List<> =P), the array size is 7.6 MB. HUGE difference!!

How come they're different? How does the List<> manage its items so (incredibly) memory-efficiently? I mean, it's not like the other 7.5 MB were somewhere else, because the size of the application was only around 3 or 4 KB bigger after I created the 1 million doubles.
List<T> uses an array to store values/references, so I doubt there will be any difference in size apart from what little overhead List<T> adds.

Given code that fills a List<double> with 1,000,000 values, the heap looks as follows for the relevant objects.
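The original code listing did not survive the formatting here; a minimal sketch of a program that produces this heap state might look like the following (names are illustrative, and the final ReadLine just keeps the process alive so the heap can be inspected with a profiler or debugger):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Fill a List<double> with 1,000,000 values.
        var doubles = new List<double>();
        for (int i = 0; i < 1000000; i++)
        {
            doubles.Add(i);
        }

        Console.WriteLine(doubles.Count);

        // Keep the process alive for heap inspection.
        Console.ReadLine();
    }
}
```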
So the List<double> is 8,000,036 bytes, and the underlying array is 8,000,012 bytes. This fits well with the usual 12 bytes of overhead for a reference type (the Array) plus 1,000,000 times 8 bytes for the doubles. On top of that, List<T> adds another 24 bytes of overhead for its internal fields (the reference to the backing array, the count, and so on).

Conclusion: I don't see any evidence that List<double> will take up less space than double[] for the same number of elements.

Please note that the List is grown dynamically, usually doubling the capacity every time you hit the internal buffer size. Hence, a new list starts with something like a 4-element array, and adding the 5th element causes an internal reallocation that doubles the buffer to (4 * 2) = 8 elements.
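You can watch this growth happen through the Capacity property. A small sketch (the initial capacity of 4 and the doubling factor are implementation details of the current .NET runtimes, not a documented guarantee):

```csharp
using System;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        var list = new List<double>();
        int lastCapacity = -1;

        // Add elements one at a time and report each time the
        // internal buffer is reallocated (i.e. Capacity changes).
        for (int i = 0; i < 20; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                Console.WriteLine($"Count = {list.Count,2}, Capacity = {list.Capacity}");
                lastCapacity = list.Capacity;
            }
        }
        // On current .NET this typically reports capacities 4, 8, 16, 32.
    }
}
```

If you know the final size up front, `new List<double>(1000000)` avoids all of these intermediate reallocations by sizing the backing array once.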