Is there a noticeable computation time difference between recursion-style Fibonacci vs. loop-style Fibonacci? I keep running Fibonacci to 40 places using recursion and then using a loop directly afterwards. It seems as though the computation time difference is only academic.
Written in C
Recursive solution:
#include <stdio.h>

long fibonacci(long n);

int main(int argc, const char * argv[]) {
    int n, i = 0, c;
    printf("Please enter an integer:\n");
    scanf("%d", &n);
    for ( c = 1 ; c <= n ; c++ )
    {
        printf("%ld ", fibonacci(i));
        i++;
    }
    return 0;
}

long fibonacci(long n)
{
    if ( n == 0 )
        return 0;
    else if ( n == 1 )
        return 1;
    else
        return ( fibonacci(n-1) + fibonacci(n-2) );
}
For-loop solution:
#include <stdio.h>

int main(int argc, const char * argv[]) {
    int n, first = 0, second = 1, next, c;
    printf("Please enter an integer:\n");
    scanf("%d", &n);
    for ( c = 0 ; c < n ; c++ )
    {
        if ( c <= 1 )
            next = c;
        else
        {
            next = first + second;
            first = second;
            second = next;
        }
        printf("%d ", next);
    }
    return 0;
}
A for-loop is not necessarily faster. In general, in languages like Java, C, and Python, recursion is fairly expensive compared to iteration because it requires the allocation of a new stack frame for each call.
It is possible to eliminate this overhead in C/C++ by enabling compiler optimizations that perform tail-call elimination, which transforms certain kinds of recursion (more precisely, certain kinds of tail calls) into jumps instead of function calls. For the compiler to perform this optimization, the last thing a function does before it returns must be a call to another function (in this case, itself).
An example of a tail-recursive Fibonacci function could look like this:
At the assembly level, with compiler optimization enabled, it will be implemented as a loop, i.e. the same stack frame is reused between calls.
I recently wrote an article about it.
Hope it helps.
How did you measure the speed difference?
The naive recursive implementation of the Fibonacci function takes about 100 million function calls to calculate f(40). On a modern computer that will be fast enough that you cannot time it with a stopwatch.
Calculating f(50) takes about 10 billion function calls, which will be a noticeable delay. f(60) takes over a trillion function calls, or about an hour. f(70) takes about 200 trillion function calls, or a few days. f(80) takes about 20 quadrillion function calls, or about a year.
I wouldn't call that difference academic.
The conventional recursive method is extremely slow compared to the tail-recursive and iterative versions. The example code below for the iterative version uses an unfolded loop along with Duff's Device to enter the loop. For 32-bit unsigned integers the limit is fib(47); for 64-bit unsigned integers the limit is fib(93).
Timing was done on an Intel 2600K at 3.4 GHz, XP X64, 64-bit mode. On XP or XP X64 the high-performance counter frequency is the same as the CPU clock, 3.4 GHz, but operating-system overhead (like interrupts) affects the timing if the duration is small.
Timings for fib(40):
Timing for 94 loops, n = 0 to 93:
Example code:
Alternate version of fibi(), without the n<2 check. What f0 and f1 represent changes within the loop, which is designed to end up with the final sum in f0, so the initial state of what f0 and f1 represent depends on whether n is even or odd. If n is even, f0 = fib(0) = 0 and f1 = fib(-1) = 1; if n is odd, f1 = fib(0) = 0 and f0 = fib(-1) = 1. (In case you're curious, fib(-1) = 1, fib(-2) = -1, fib(-3) = 2, fib(-4) = -3, fib(-5) = 5, fib(-6) = -8, ...).
To explain the logic here, for the n even case, fib(-1) = f1 = 1, fib(0) = f0 = 0, then fib(1) = (f1 += f0), fib(2) = (f0 += f1), fib(3) = (f1 += f0), fib(4) = (f0 += f1), ... .
The for-loop solution is faster. Reasons: computing the nth element iteratively uses n additions (the loop iterates n times), while the recursive solution uses one addition per function call, which sums to O(1.6^n) calls, hence O(1.6^n) additions. The cost comes from the double recursive call: when the recursive function is asked for the nth element, it has to compute the (n-1)th and (n-2)th elements again from scratch, because it does not remember them.