When viewing the MIT 6.001 course video below, at 28:00 the instructor marks this algorithm as iteration, but at 30:27 he says both this and the actual "recursive" algorithm are recursive. The function calls itself and has a base case, so how is this iteration?
https://www.youtube.com/watch?v=dlbMuv-jix8&list=PLE18841CABEA24090&index=2
private int iterativeSum(int x, int y)
{
    System.out.println("x " + x + " y " + y);
    if (x == 0)
    {
        return y;
    }
    return iterativeSum(--x, ++y);
}
There are two separate senses of the word "recursive" being used here. One is syntactic -- any function that calls itself is syntactically recursive.
The other concerns the essential behaviour of the computational process encoded by a given piece of code -- whether it runs in constant stack space (and so is essentially iterative) or not (and so is essentially recursive).
Scheme has tail-call optimization, so a direct Scheme transcription of your code is equivalent to a standard while loop, because a tail-recursive function call reuses the function call frame.

In the sense that the function calls itself, it is recursive. However, it has the important attribute that the result of the invocation depends solely on the result of another function call; no values from the current stack frame are needed. The result is provided directly by the tail call, return iterativeSum(--x, ++y);, not by something like return 1 + iterativeSum(--x, y); (an illustrative non-tail variant), which would require "coming back" from the recursive call, doing something with the result, and then returning that. Because the result doesn't require anything from the current stack frame, an implementation (in some languages, depending on the semantics) can eliminate or reuse the current stack frame. That's called tail-call elimination, and it's mandated in some languages, like Scheme. That's why the Scheme implementation of that algorithm is essentially iterative: it doesn't require unbounded amounts of stack space.
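To make the contrast concrete in the question's own Java, here is a hedged sketch (the method names and the 1 + ... variant are illustrative, not from the lecture):

```java
public class SumVariants {
    // Tail-recursive: the result IS the result of the recursive call.
    // Nothing from this frame is needed afterwards, so an implementation
    // with tail-call elimination could reuse the frame.
    static int tailSum(int x, int y) {
        if (x == 0) return y;
        return tailSum(x - 1, y + 1);    // call in tail position
    }

    // Not tail-recursive: after the call returns, we still have to add 1,
    // so every frame must stay alive until its callee finishes.
    static int nonTailSum(int x, int y) {
        if (x == 0) return y;
        return 1 + nonTailSum(x - 1, y); // work remains after the call
    }

    // The loop a tail-call-eliminating compiler would effectively produce.
    static int loopSum(int x, int y) {
        while (x != 0) { x = x - 1; y = y + 1; }
        return y;
    }

    public static void main(String[] args) {
        System.out.println(tailSum(3, 4));    // 7
        System.out.println(nonTailSum(3, 4)); // 7
        System.out.println(loopSum(3, 4));    // 7
    }
}
```

All three compute the same sum; only tailSum and loopSum do it in constant stack space (given tail-call elimination, which the JVM does not perform).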
In Scheme, tail-call elimination means the implementation is essentially an iterative driver: iterativeSumDriver acts as a trampoline of sorts, iterating over the results provided by iterativeSumInternal.
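A hedged Java reconstruction of that shape (the answer's original code is not shown here, so the representation of "next state" is an assumption): iterativeSumInternal performs one step and reports either a final answer or the next (x, y) state, and iterativeSumDriver loops until the answer appears.

```java
public class TrampolinedSum {
    // One step of the computation: a 1-element array means "done, here is
    // the answer"; a 2-element array is the next (x, y) state to try.
    static int[] iterativeSumInternal(int x, int y) {
        if (x == 0) return new int[] { y };
        return new int[] { x - 1, y + 1 };
    }

    // The driver ("trampoline of sorts"): plain iteration, constant stack.
    static int iterativeSumDriver(int x, int y) {
        int[] state = new int[] { x, y };
        while (state.length == 2) {
            state = iterativeSumInternal(state[0], state[1]);
        }
        return state[0];
    }

    public static void main(String[] args) {
        System.out.println(iterativeSumDriver(3, 4)); // 7
    }
}
```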
A Proper Trampoline
As Will Ness pointed out, a proper trampoline doesn't really know about the states used in a computation; it just needs something to call until a non-callable thing gets returned.
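A hedged Java sketch of such a state-agnostic trampoline (the answer's original version is not shown here; using java.util.function.Supplier as the "callable thing" is an assumption of this sketch):

```java
import java.util.function.Supplier;

public class Trampoline {
    // The trampoline knows nothing about the computation's state: it keeps
    // calling whatever it is handed until a non-callable value comes back.
    @SuppressWarnings("unchecked")
    static <T> T run(Object step) {
        while (step instanceof Supplier) {
            step = ((Supplier<Object>) step).get();
        }
        return (T) step; // non-callable: the final answer
    }

    // Each step returns either the answer (an Integer) or the next step.
    static Object sumStep(int x, int y) {
        if (x == 0) return y;
        return (Supplier<Object>) () -> sumStep(x - 1, y + 1);
    }

    public static void main(String[] args) {
        // Runs in constant stack space even for large x: each Supplier
        // makes only one shallow call before returning to the driver loop.
        int result = Trampoline.<Integer>run(sumStep(100_000, 0));
        System.out.println(result); // 100000, with no stack overflow
    }
}
```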
I think this is based on the definitions in SICP. Here is the relevant section. In short, a recursive function can generate an iterative process if the recursive call is in tail position: none of the local variables' current values need to be remembered, so their space can be cleared or reused. (It is somewhat easier to see in Lisp, where everything is an expression and one can watch the size of the expression stay constant in an iterative process.)
A recursive process, in contrast, is not finished when the recursive call finishes. For example, a variant of this function that returned 1 + iterativeSum(--x, y) would generate a recursive process, as an extra piece of work (the 1 + (...) addition) still needs to be done after the call returns.

Whether the compiler would actually perform tail-call optimization (TCO) is a different matter. AFAIK, to date the JVM does not support it. A function calling itself in tail position is in general easy to optimize (as a loop). The difficulty comes when one function calls another, and the other calls the first function back, and so on.
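That harder case can be sketched with the classic mutually recursive pair (an illustrative example, not from the lecture): each function's tail call goes to the other, so neither can be rewritten as a self-contained loop on its own.

```java
public class MutualRecursion {
    // isEven's tail call goes to isOdd and vice versa. A compiler cannot
    // turn either method into a loop by itself; it would have to merge
    // them. (The JVM performs no tail-call elimination, so a very large n
    // would overflow the stack here.)
    static boolean isEven(int n) { return n == 0 || isOdd(n - 1); }
    static boolean isOdd(int n)  { return n != 0 && isEven(n - 1); }

    public static void main(String[] args) {
        System.out.println(isEven(10)); // true
        System.out.println(isOdd(10));  // false
    }
}
```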
He seems to be more interested in how the code is executed rather than how it is written. There's a big difference between those two, but that's a whole other conversation (notably, some languages will compile recursion into iteration, as an example).
In this case, it is iteration when you hold the whole state in one place and repeatedly work on that one piece of data. It is recursion when you have a stack of states and add to the stack, then eventually collapse the stack back down to an answer.
In his example at 31:00, he shows this as being iteration when there is one bit of paper that holds the entire state of the work done so far, and any one guy can take it and eventually produce the final answer.
In the example of recursion at 32:20, Joe has his own notes on the problem, and only passes on notes about a subsection of the problem. Harry then has enough information for his subsection of the problem, but the entire problem still requires Joe to hold on to his own information to process Harry's results when he gets them back from Harry.
You have a whole stack of people, with more people being added to the stack until one of them has a problem simple enough to solve by himself. He can return his answer right away, which means the second-to-last guy now has a simpler problem and can return his answer, and so on, until the entire stack of people collapses back down to one last (first) person, who then produces the final answer.
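The one-piece-of-paper versus stack-of-notes contrast can be sketched in the question's Java (illustrative names; not from the lecture):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TwoShapes {
    // Iteration: the whole state is one piece of data, (x, y),
    // reworked in place -- the single sheet of paper.
    static int iterate(int x, int y) {
        while (x != 0) { x--; y++; }
        return y;
    }

    // Recursion: a stack of pending notes is built up, then collapsed
    // back down into the final answer -- each pushed 1 stands for one
    // person's "add 1 to whatever comes back".
    static int withStack(int x, int y) {
        Deque<Integer> pending = new ArrayDeque<>();
        while (x != 0) { pending.push(1); x--; }          // grow the stack
        int result = y;
        while (!pending.isEmpty()) result += pending.pop(); // collapse it
        return result;
    }

    public static void main(String[] args) {
        System.out.println(iterate(3, 4));   // 7
        System.out.println(withStack(3, 4)); // 7
    }
}
```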