Does initialization of a local variable with null impact performance?

Published 2019-04-07 10:56

Question:

Let's compare two pieces of code:

String str = null;
//Possibly do something...
str = "Test";
Console.WriteLine(str);

and

String str;
//Possibly do something...
str = "Test";
Console.WriteLine(str);

I always thought these two pieces of code were equivalent. But after building them (Release mode, with optimization enabled) and comparing the generated IL, I noticed that the first sample contains two extra IL instructions:

1st sample code IL:

.maxstack 1
.locals init ([0] string str)
IL_0000: ldnull
IL_0001: stloc.0
IL_0002: ldstr "Test"
IL_0007: stloc.0
IL_0008: ldloc.0
IL_0009: call void [mscorlib]System.Console::WriteLine(string)
IL_000e: ret

2nd sample code IL:

.maxstack 1
.locals init ([0] string str)
IL_0000: ldstr "Test"
IL_0005: stloc.0
IL_0006: ldloc.0
IL_0007: call void [mscorlib]System.Console::WriteLine(string)
IL_000c: ret

Is this code perhaps optimized by the JIT compiler? So does initializing a local method variable with null impact performance (I understand it is a very simple operation, but still), and should we avoid it? Thanks in advance.
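For reference, the explicit null only seems strictly necessary when the later assignment is conditional; without it the C# compiler rejects the second form under its definite-assignment rules (error CS0165, "Use of unassigned local variable"). A minimal sketch, with a purely illustrative condition:

String str = null;          // without this line, the WriteLine below is a CS0165 compile error
if (DateTime.Now.Hour > 12) // illustrative condition only
{
    str = "Test";
}
Console.WriteLine(str);     // str may still be null here

In the straight-line samples above, though, the null is overwritten before it is ever read, so it is just a dead store.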

Answer 1:

http://www.codinghorror.com/blog/2005/07/for-best-results-dont-initialize-variables.html

To summarize the article: after running various benchmarks, initializing an object to a value (whether as part of its definition, in the class's constructor, or in a separate initialization method) can be anywhere from roughly 10-35% slower on .NET 1.1 and 2.0. Newer compilers may optimize away initialization at the definition. The article closes by recommending that you avoid redundant initialization as a general rule.
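One rough way to sanity-check this on a current runtime is to time the two patterns in a tight loop. The sketch below is only illustrative (the method names, iteration count, and use of Stopwatch are my own choices; a serious measurement would use something like BenchmarkDotNet and inspect the JIT output):

using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;

class InitBenchmark
{
    const int Iterations = 100_000_000;

    static void Main()
    {
        // Warm up both paths so JIT compilation is not included in the timing.
        WithNullInit();
        WithoutInit();

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) WithNullInit();
        Console.WriteLine($"with null init:    {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < Iterations; i++) WithoutInit();
        Console.WriteLine($"without null init: {sw.ElapsedMilliseconds} ms");
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    static string WithNullInit()
    {
        string str = null;   // dead store: overwritten before it is ever read
        str = "Test";
        return str;
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    static string WithoutInit()
    {
        string str;
        str = "Test";
        return str;
    }
}

In a Release build, do not be surprised if the two timings are indistinguishable: an optimizing JIT is generally free to eliminate a store whose value is never read.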



Answer 2:

It is slightly slower, as Jon.Stromer.Galley's link points out. But the difference is amazingly small; likely on the order of nanoseconds. At that level, the overhead from using a high-level language like C# dwarfs any performance difference. If performance is that much of an issue, you may as well be coding in C or ASM or something.

The value of writing clear code (whatever that means to you) will far outweigh the 0.00001ms performance increase in terms of cost vs. benefit. That's why C# and other high-level languages exist in the first place.
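For example, when the value is known up front, initialize the local with it directly; when it depends on earlier work, let that work produce the value and declare the local at the point of first use (ComputeValue below is just a hypothetical placeholder for that earlier work):

// Clearest form when the value is known immediately:
String str = "Test";
Console.WriteLine(str);

// When the value comes from earlier logic, declare at first use instead
// of pre-initializing with null (ComputeValue is hypothetical):
String message = ComputeValue() ?? "Test";
Console.WriteLine(message);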

I get that this is probably meant as an academic question, and I don't discount the value of understanding the internals of the CLR. But in this case, it just seems like the wrong thing to focus on.