I'm a beginner in assembly language and have noticed that the x86 code emitted by compilers usually keeps the frame pointer around even in release/optimized mode when it could use the EBP register for something else.
I understand why the frame pointer might make code easier to debug, and might be necessary if alloca()
is called within a function. However, x86 has very few registers, and using two of them to hold the location of the stack frame when one would suffice just doesn't make sense to me. Why is omitting the frame pointer considered a bad idea even in optimized/release builds?
The frame pointer is a reference point that lets a debugger find a local variable or an argument at a single constant offset. Although ESP's value changes over the course of execution, EBP stays the same, making it possible to reach the same variable at the same offset (for example, the first parameter is always at EBP+8, while its ESP-relative offset can change significantly as you push and pop things).
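To make the offsets concrete, here is a minimal sketch of a conventional 32-bit prologue and epilogue (NASM-style syntax; the function name and the exact offsets are illustrative, not taken from any particular compiler's output):

    my_func:                    ; hypothetical function: one argument, two dword locals
        push ebp                ; save the caller's frame pointer
        mov  ebp, esp           ; EBP now marks this frame and stays put
        sub  esp, 8             ; reserve 8 bytes for locals

        mov  eax, [ebp+8]       ; first argument: always at EBP+8
        mov  [ebp-4], eax       ; first local: always at EBP-4

        mov  eax, [esp+16]      ; the same argument via ESP (ESP == EBP-8 right now)
        push ecx                ; ESP just moved down by 4...
        mov  eax, [esp+20]      ; ...so the ESP-relative offset changed; EBP+8 did not
        pop  ecx

        mov  esp, ebp           ; tear the frame down
        pop  ebp
        ret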
Why don't compilers throw away the frame pointer? Because with a frame pointer, the debugger can figure out where local variables and arguments are using the symbol table, since they are guaranteed to be at a constant offset from EBP. Otherwise there isn't an easy way to figure out where a local variable is at an arbitrary point in the code.
As Greg mentioned, it also helps a debugger with stack unwinding, since the saved EBP values form a reverse linked list of stack frames, letting the debugger figure out the size of each function's stack frame (local variables + arguments).
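As a rough sketch of what that unwinding looks like (32-bit, assuming every function in the chain keeps a standard EBP frame; the label is illustrative):

        mov  eax, ebp           ; start at the current frame
    walk_frame:
        mov  edx, [eax+4]       ; [EBP+4] of that frame is the return address into its caller
        mov  eax, [eax]         ; [EBP] holds the caller's saved EBP: follow the link
        test eax, eax           ; a zeroed saved EBP conventionally marks the outermost frame
        jnz  walk_frame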
Most compilers provide an option to omit frame pointers, although it makes debugging really hard. That option should never be used globally, even in release code. You don't know when you'll need to debug a user's crash.
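For contrast, this is roughly what the hypothetical function above looks like when the frame pointer is omitted: everything is addressed relative to ESP, and EBP is left for the register allocator (again a sketch, not any particular compiler's output):

    my_func:                    ; same hypothetical function, frame pointer omitted
        sub  esp, 8             ; reserve locals; no push ebp / mov ebp, esp
        mov  eax, [esp+12]      ; first argument: the return address sits at ESP+8
        mov  [esp], eax         ; first local lives at ESP+0
        ; EBP is free for the register allocator here (it is still callee-saved,
        ; so any use would have to preserve its old value)
        add  esp, 8             ; release the locals
        ret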
Just adding my two cents to the already good answers.
It's part of a good language architecture to have a chain of stack frames. The BP points to the current frame, where subroutine-local variables are stored. (Locals are at negative offsets, and arguments are at positive offsets.)
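Sketched as a single frame in that chain (32-bit, purely illustrative offsets):

    ; one frame, offsets relative to its EBP:
    ;   [ebp+12]  second argument
    ;   [ebp+8]   first argument
    ;   [ebp+4]   return address
    ;   [ebp]     caller's saved EBP  -> link to the previous frame in the chain
    ;   [ebp-4]   first local
    ;   [ebp-8]   second local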
The objection that it keeps a perfectly good register from being used in optimization raises the question: when and where is optimization actually worthwhile?
Optimization is only worthwhile in tight loops that 1) do not call functions, 2) are where the program counter spends a significant fraction of its time, and 3) are in code the compiler will actually ever see (i.e. non-library functions). This is usually a very small fraction of the overall code, especially in large systems.
Other code can be twisted and squeezed to get rid of cycles, and it simply won't matter, because the program counter is practically never there.
I know you didn't ask this, but in my experience, 99% of performance problems have nothing at all to do with compiler optimization. They have everything to do with over-design.
It depends on the compiler, certainly. I've seen optimized code emitted by x86 compilers that freely uses the EBP register as a general-purpose register. (I don't recall which compiler I noticed that with, though.)
Compilers may also choose to maintain the EBP register to assist with stack unwinding during exception handling, but again this depends on the precise compiler implementation.
"However, x86 has very few registers"
This is true only in the sense that opcodes can only address 8 registers. The processor itself actually has many more registers than that and uses register renaming, pipelining, speculative execution, and other processor buzzwords to get around that limit. Wikipedia has a good introductory paragraph on what an x86 processor can do to overcome the register limit: http://en.wikipedia.org/wiki/X86#Current_implementations.
Using stack frames has gotten incredibly cheap on any hardware that is even remotely modern. If stack frames are cheap, then saving a couple of registers isn't as important. I'm sure fast stack frames vs. more registers was an engineering trade-off, and fast stack frames won.
How much are you saving by going pure register? Is it worth it?