Question:
I use C#, .NET, VS.NET 2008.
Besides being able to address more memory, what are the advantages to compiling my application to 64-bit?
Is it going to be faster or smaller? Why?
Does it make the application more compatible with an x64 system (compared to a 32-bit application)?
Answer 1:
For native applications, you get benefits like increased address space and whatnot. However, .NET applications run on the CLR which abstracts away any underlying architecture differences.
Assuming you're just dealing with managed code, there isn't any benefit to targeting a specific platform; you're better off just compiling with the "anycpu" flag set (which is on by default). This will generate platform agnostic assemblies that will run equally well on any of the architectures the CLR runs on.
Specifically targeting (say) x64 isn't going to give you any performance boost, and will prevent your assemblies from working on a 32-bit platform.
This article has a bit more information on the subject.
Update: Scott Hanselman just posted a good overview of this topic as well.
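The anycpu behavior described above is easy to observe at runtime: `IntPtr.Size` reports the pointer width of the process the CLR started, so the very same binary prints 4 on a 32-bit OS and 8 on a 64-bit OS. A minimal sketch (the class name is just for illustration):

```csharp
using System;

class BitnessDemo
{
    static void Main()
    {
        // Under an anycpu build, the same assembly runs as a 32-bit process
        // on x86 Windows and as a 64-bit process on x64 Windows.
        // IntPtr.Size is the pointer width of the current process, in bytes.
        Console.WriteLine("Pointer size: " + IntPtr.Size + " bytes");
        Console.WriteLine("Running as a " + (IntPtr.Size * 8) + "-bit process");
    }
}
```

(`Environment.Is64BitProcess` would be the more direct check, but it only exists from .NET 4.0 on; `IntPtr.Size` works on the .NET 3.5 / VS 2008 setup in the question.)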
Answer 2:
In theory, a program compiled for x64 will run faster than one compiled for x86, because the x64 architecture has more general-purpose registers. 32-bit x86 has only 8 general-purpose registers, several of which have dedicated roles; AMD's x64 extensions add another 8, for 16 in total. This allows fewer memory loads and (slightly) faster performance.
In reality, this doesn't make a huge difference in performance, but it should make a slight one.
The size of the binary and the memory footprint will increase somewhat from using 64-bit instructions, but because x64 keeps x86's variable-length CISC encoding, code size does not double: most 64-bit instructions are just the existing 32-bit encodings with a one-byte REX prefix added, so most instructions remain only a few bytes long.
Answer 3:
As a matter of fact, 64-bit applications that do not require a large memory space tend to run slower. One reason is that you have to move more data around: pointers are twice as wide, so pointer-heavy data structures take up more cache. If you cannot make use of a >2 GB address space (for caching, for example), I wouldn't recommend it.
Here's an interesting link I just found with a lot of info: http://www.osnews.com/story/5768
Answer 4:
I doubt it (given the C#/.NET platform), unless you are using native code. Remember, .NET managed code is compiled to IL, and the platform switch defaults to anycpu, so you should already get better performance on a 64-bit OS with your existing binary:
http://blogs.msdn.com/gauravseth/archive/2006/03/07/545104.aspx
This article has a ton of useful information, including details on the CorFlags tool, which lets you inspect an assembly's PE header.
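If you'd rather check the PE header from code than run CorFlags, the reflection API `Module.GetPEKind` exposes the same flags. A small sketch, here inspecting the currently executing assembly:

```csharp
using System;
using System.Reflection;

class PeKindDemo
{
    static void Main()
    {
        // GetPEKind reads the flags CorFlags reports: ILOnly with neither
        // Required32Bit nor PE32Plus set corresponds to an anycpu assembly.
        PortableExecutableKinds peKind;
        ImageFileMachine machine;
        Assembly.GetExecutingAssembly().ManifestModule.GetPEKind(out peKind, out machine);

        Console.WriteLine("PE kind: " + peKind);
        Console.WriteLine("Machine: " + machine);
    }
}
```

To inspect some other binary on disk, you could load it with `Assembly.ReflectionOnlyLoadFrom` first and call `GetPEKind` on its manifest module.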
In general, for native code binaries, yes.
Answer 5:
I'm really not an expert on CPU architectures, so take my comments lightly. Wikipedia has an article describing the x86-64 architecture.
x86-64 has more registers, which alone should help make a program faster. The new architecture also offers new instruction sets, which could improve speed if the compiler takes advantage of them.
Another factor to take into account is the set of instructions available. When a program is compiled for x86, the target is usually to run on all existing 32-bit CPUs (Pentium 1, 2, 3, 4, Core, etc.). Each new CPU generation adds new instructions, and those instructions can't be used by a program that wants to remain binary-portable across all x86 CPUs. Because x86-64 is a newer architecture, recompiling for it gives the compiler a wider guaranteed baseline instruction set (SSE2, for example) to use, without worrying much about binary compatibility among different 64-bit CPUs.