I've heard of stackless languages. However, I don't have any idea how such a language would be implemented. Can someone explain?
Call me ancient, but I can remember when the FORTRAN and COBOL standards did not support recursive calls and therefore didn't require a stack. Indeed, I recall the implementations for CDC 6000 series machines, where there wasn't a stack, and FORTRAN would do strange things if you tried to call a subroutine recursively.
For the record, instead of a call stack, the CDC 6000 series instruction set used the RJ instruction to call a subroutine. This saved the current PC value at the call-target location and then branched to the location following it. At the end, the subroutine would perform an indirect jump through the call-target location, which reloaded the saved PC, effectively returning to the caller.
Obviously, that does not work with recursive calls. (And my recollection is that the CDC FORTRAN IV compiler would generate broken code if you did attempt recursion ...)
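To make the failure mode concrete, here is a small C emulation (not CDC assembly; all names are invented for illustration) of an RJ-style linkage, where the return address lives in a single fixed slot per subroutine:

```c
/* Emulation of RJ-style linkage: each subroutine has ONE fixed word
 * where the caller's PC is stored. A recursive call overwrites that
 * word, losing the outer caller's return address. */
static int return_slot;  /* the single word at the subroutine's entry */

/* "Call" the subroutine from call site `site`; if `recurse` is set,
 * the body makes one recursive call from a different site (999). */
static int rj_sub(int site, int recurse) {
    return_slot = site;        /* RJ stores the return address here */
    if (recurse)
        rj_sub(999, 0);        /* the inner call clobbers the slot */
    return return_slot;        /* where the "indirect jump" would go */
}
```

A non-recursive call returns to its own call site (rj_sub(42, 0) yields 42), but after a recursive call the outer invocation "returns" to the inner caller's site, 999, rather than 42, which is exactly why recursion was off-limits.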
There is an easy-to-understand description of continuations in this article: http://www.defmacro.org/ramblings/fp.html
Continuations are something you can pass into a function in a stack-based language, but which can also be used by a language's own semantics to make it "stackless". Of course the stack is still there, but as Ira Baxter described, it's not one big contiguous segment.
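To give a flavor of how a continuation looks as an ordinary value, here is a hedged sketch of continuation-passing style written in plain C (illustrative names; note that C compilers are not required to eliminate the tail calls, so this shows the style rather than a guaranteed stackless implementation):

```c
/* Continuation-passing style: instead of returning, a function takes
 * "the rest of the computation" as a function pointer and invokes it
 * with its result. Every call then becomes a tail call. */
typedef void (*cont)(int result);

static int final_result;
static void done(int result) { final_result = result; }

/* Factorial in CPS, with an accumulator so the self-call is a tail call. */
static void fact_cps(int n, int acc, cont k) {
    if (n <= 1)
        k(acc);                       /* invoke the continuation */
    else
        fact_cps(n - 1, n * acc, k);  /* tail call: no pending work */
}
```

Calling fact_cps(5, 1, done) drives the whole computation through the continuation and leaves 120 in final_result.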
Stackless Python still has a Python stack (though it may have tail call optimization and other call frame merging tricks), but it is completely divorced from the C stack of the interpreter.
Haskell (as commonly implemented) does not have a call stack; evaluation is based on graph reduction.
In the stackless environments I'm more or less familiar with (Turing machine, assembly, and Brainfuck), it's common to implement your own stack. There is nothing fundamental about having a stack built into the language.
In the most practical of these, assembly, you just choose a region of memory available to you, set the stack register to point to the bottom, then increment or decrement to implement your pushes and pops.
EDIT: I know some architectures have dedicated stacks, but they aren't necessary.
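For a concrete picture, the assembly recipe above translates almost line for line into C (the region size and names are arbitrary):

```c
/* A stack is just a chosen region of memory plus a pointer you move
 * yourself. Here the "stack register" is sp and the stack grows upward. */
#define STACK_WORDS 64

static int memory[STACK_WORDS];  /* the region of memory we chose */
static int *sp = memory;         /* point the stack register at the bottom */

static void push(int v) { *sp++ = v; }    /* store, then advance */
static int  pop(void)   { return *--sp; } /* retreat, then load */
```

After push(10) and push(20), pop() yields 20 and then 10: last in, first out. A real implementation would also check for overflow and underflow.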
Say you wanted to implement stackless C. The first thing to realize is that evaluating a plain expression like a == b doesn't need a stack. But does a call like isequal(a, b) need one? No, because a smart compiler will inline calls to isequal, turning them into a == b. So why not just inline everything? Sure, you will generate more code, but if getting rid of the stack is worth it to you, then this is easy with a small tradeoff.

What about recursion? No problem. A tail-recursive function like:
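(The original code sample did not survive extraction; a representative tail-recursive function, reconstructed here with an accumulator so the recursive call is the very last operation, might be:)

```c
/* Hypothetical reconstruction: sums 1..n. The recursive call is in
 * tail position -- nothing remains to be done after it returns. */
int sum_to(int n, int acc) {
    if (n == 0)
        return acc;
    return sum_to(n - 1, acc + n);  /* tail call */
}
```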
Can still be inlined, because really it's just a for loop in disguise:
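(This sample is also missing; the loop the function reduces to would look like:)

```c
/* The tail call rewritten as the loop it secretly is: the parameters
 * become loop variables that are updated in place. */
int sum_to(int n, int acc) {
    for (; n != 0; n--)
        acc += n;
    return acc;
}
```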
In theory a really smart compiler could figure that out for you. But a less-smart one could still flatten it as a goto:
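(Again the sample is lost; a goto-flattened version would be:)

```c
/* A cruder flattening: replace the tail call with assignments to the
 * parameters and a goto back to the top of the function. */
int sum_to(int n, int acc) {
top:
    if (n == 0)
        return acc;
    acc += n;
    n -= 1;
    goto top;  /* stands in for the recursive call */
}
```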
There is one case where you have to make a small trade off. This can't be inlined:
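(The missing example here is, judging from the next sentence, the naive Fibonacci function:)

```c
/* Naive Fibonacci: two recursive calls, with an addition still pending
 * when each one returns. That pending work is exactly what a stack
 * frame holds, so this cannot be flattened away by inlining. */
int fib(int x) {
    if (x < 2)
        return x;
    return fib(x - 1) + fib(x - 2);
}
```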
Stackless C simply cannot do this. Are you giving up a lot? Not really. This is something normal C can't do very well either. If you don't believe me, just call fib(1000) and see what happens to your precious computer.

Please feel free to correct me if I'm wrong, but I would think that allocating memory on the heap for each function call frame would cause extreme memory thrashing. The operating system does, after all, have to manage this memory. I would think that the way to avoid this thrashing would be a cache for call frames. So if you need a cache anyway, we might as well make it contiguous in memory and call it a stack.
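To illustrate what per-call heap allocation means, here is a hypothetical sketch (illustrative names only) of activation records kept as a malloc'd linked list instead of a contiguous stack:

```c
#include <stdlib.h>

/* Each call allocates its own activation record on the heap and links
 * it to the caller's; each return frees it. The malloc/free traffic on
 * every single call is the management cost in question. */
typedef struct frame {
    int n;                 /* this call's argument */
    struct frame *caller;  /* link to the caller's frame */
} frame;

/* Factorial with explicit heap-allocated frames. */
static int fact_heap(int n, frame *caller) {
    frame *f = malloc(sizeof *f);   /* "push" = allocate */
    f->n = n;
    f->caller = caller;
    int r = (n <= 1) ? 1 : n * fact_heap(n - 1, f);
    free(f);                        /* "pop" = free */
    return r;
}
```

fact_heap(5, NULL) computes 120 while churning through five heap allocations, one per call.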