In C, why does n++ execute faster than n = n + 1?
(int n = ...; n++;)
(int n = ...; n = n + 1;)
Our instructor asked that question in today's class. (This is not homework.)
That would be true only if you were working with a "stone-age" compiler.
With a "stone-age" compiler, ++n is faster than n++, which is faster than n = n + 1. Machines usually have an "increment x" instruction as well as an "add const to x" instruction. With n++, you have only 2 memory accesses (read n, increment n, write n). With n = n + 1, you have 3 memory accesses (read n, read const, add n and const, write n). But today's compilers will automatically convert n = n + 1 to ++n, and they will do more than you may imagine!
Also, on today's out-of-order processors, even in the "stone-age" compiler case, runtime may not be affected at all in many cases!
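A minimal sketch of the access counts described above, with the hypothetical "stone-age" instruction sequences written out as comments (the sequences are illustrative, not the output of any real compiler):
int n;

void post_inc(void) {
    n++;        /* read n into a register, increment, write back: 2 memory accesses */
}

void add_one(void) {
    n = n + 1;  /* read n, read the constant 1, add, write back: 3 memory accesses */
}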
On GCC 4.4.3 for x86, with or without optimizations, they compile to exactly the same assembly code, and thus take the same amount of time to execute. As you can see in the assembly, GCC simply converts n++ into n = n + 1, then optimizes it into the one-instruction add (with -O2).
Your instructor's suggestion that n++ is faster only applies to very old, non-optimizing compilers, which were not smart enough to select the in-place update instructions for n = n + 1. These compilers have been obsolete in the PC world for years, but may still be found for weird proprietary embedded platforms.
C code:
int n;

void nplusplus() {
    n++;
}

void nplusone() {
    n = n + 1;
}
Output assembly (no optimizations):
.file "test.c"
.comm n,4,4
.text
.globl nplusplus
.type nplusplus, @function
nplusplus:
pushl %ebp
movl %esp, %ebp
movl n, %eax
addl $1, %eax
movl %eax, n
popl %ebp
ret
.size nplusplus, .-nplusplus
.globl nplusone
.type nplusone, @function
nplusone:
pushl %ebp
movl %esp, %ebp
movl n, %eax
addl $1, %eax
movl %eax, n
popl %ebp
ret
.size nplusone, .-nplusone
.ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
.section .note.GNU-stack,"",@progbits
Output assembly (with -O2 optimizations):
.file "test.c"
.text
.p2align 4,,15
.globl nplusplus
.type nplusplus, @function
nplusplus:
pushl %ebp
movl %esp, %ebp
addl $1, n
popl %ebp
ret
.size nplusplus, .-nplusplus
.p2align 4,,15
.globl nplusone
.type nplusone, @function
nplusone:
pushl %ebp
movl %esp, %ebp
addl $1, n
popl %ebp
ret
.size nplusone, .-nplusone
.comm n,4,4
.ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
.section .note.GNU-stack,"",@progbits
The compiler will optimize n + 1 into nothingness. Do you mean n = n + 1? If so, they will compile to identical assembly. (Assuming that optimizations are on and that they're statements, not expressions.)
Who says it does? Your compiler optimizes it all away, really, making it a moot point.
Modern compilers should be able to recognize both forms as equivalent and convert them to whatever format works best on your target platform. There is one exception to this rule: variable accesses that have side effects. For example, if n is some memory-mapped hardware register, reading from it and writing to it may do more than just transfer a data value (reading might clear an interrupt, for instance). You would use the volatile keyword to let the compiler know that it needs to be careful about optimizing accesses to n, and in that case the compiler might generate different code for n++ (an increment operation) and n = n + 1 (read, add, and store operations). For normal variables, however, the compiler should optimize both forms to the same thing.
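A minimal sketch of the memory-mapped-register case, assuming a hypothetical register at address 0x40000000 (the address and the HW_REG name are made up for illustration):
#include <stdint.h>

/* volatile tells the compiler that every access must actually happen */
#define HW_REG (*(volatile uint32_t *)0x40000000u)

void bump(void) {
    HW_REG = HW_REG + 1;  /* one real read and one real write; the
                             compiler may not cache or fold them away */
}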
It doesn't really. The compiler will make changes specific to the target architecture. Micro-optimizations like this often have dubious benefits, and, more importantly, are certainly not worth the programmer's time.
Actually, the reason is that the operator is defined differently for post-fix than for pre-fix. ++n will increment "n" and return a reference to "n", while n++ will increment "n" while returning a const copy of "n". Hence, the phrase n = n + 1 will be more efficient. But I have to agree with the posters above: a good compiler should optimize away an unused return object.
In the C language, the side effect of the n++ expression is by definition equivalent to the side effect of the n = n + 1 expression. Since your code relies on the side effects only, it is immediately obvious that the correct answer is that these expressions always have exactly equivalent performance. (Regardless of any optimization settings in the compiler, by the way, since the issue has absolutely nothing to do with optimization.)
Any practical divergence in the performance of these expressions is only possible if the compiler is intentionally (and maliciously!) trying to introduce that divergence. But in that case it could go either way, of course, i.e. whichever way the compiler's author wanted to skew it.
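To make "relies on the side effects only" concrete: in both statements below the expression's value is discarded, so only the identical side effect remains:
n++;        /* value of the expression (the old n) is discarded */
n = n + 1;  /* value of the expression (the new n) is discarded */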
I think it's more of a hardware question than a software one... If I remember correctly, on older CPUs n = n + 1 required two memory locations, whereas ++n was simply a single increment instruction. But I doubt this applies to modern architectures...
All of those things depend on the compiler, the processor, and the compilation directives, so making assumptions about "what is faster in general" is not a good idea.