I started to read SICP recently, and I'm very interested in converting a recursive procedure into a tail-recursive form.
For "one dimensional" situations (linear ones), like the Fibonacci series or factorial computation, it is not hard to do the conversion.
For example, as the book says, we can rewrite the Fibonacci computation as follows:

(define (fib n)
  (fib-iter 1 0 n))

(define (fib-iter a b count)
  (if (= count 0)
      b
      (fib-iter (+ a b) a (- count 1))))
And this form is obviously tail recursive.
However, for a "two dimensional" situation, like calculating Pascal's triangle (Ex 1.12 in SICP), we can still easily write a recursive solution as follows:
(define (pascal x y)
  (cond ((or (<= x 0) (<= y 0) (< x y)) 0)
        ((or (= 1 y) (= x y)) 1)
        (else (+ (pascal (- x 1) y)
                 (pascal (- x 1) (- y 1))))))
The question is: how do we convert this into a tail-recursive form?
UPDATE: This problem has a far easier mathematical solution that you can get down to O(row) using only the factorial function, since each entry of Pascal's triangle is a binomial coefficient.
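A sketch of that closed-form idea (the helper names are mine): the entry at row x, column y is the binomial coefficient C(x-1, y-1), computed here with a tail-recursive factorial.

```scheme
;; pascal(x, y) = C(x-1, y-1) = (x-1)! / ((y-1)! (x-y)!)
(define (factorial n)
  ;; tail-recursive: the multiplication happens in the accumulator
  (let loop ((n n) (acc 1))
    (if (<= n 1)
        acc
        (loop (- n 1) (* acc n)))))

(define (pascal x y)
  (if (or (< y 1) (< x y))
      0
      (/ (factorial (- x 1))
         (* (factorial (- y 1))
            (factorial (- x y))))))
```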
Old answer:
You need to study the patterns. Basically, you want to iterate from the beginning of the triangle until you have enough information to produce a result. It's obvious that you need the previous row to compute the next, so the current row must be an argument you pass along, and the procedure must keep producing the next row until the requested row is reached. This solution is tail recursive and lightning fast.
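A sketch of that row-by-row idea (the helper names are my own): the current row is carried as an argument, and each step builds the next row from it, so every recursive call is a tail call.

```scheme
;; (next-row '(1 3 3 1)) => (1 4 6 4 1): add the row to a
;; shifted copy of itself, padding each with a zero
(define (next-row row)
  (map + (cons 0 row) (append row '(0))))

;; iterate from the first row, carrying the current row along
(define (pascal-row n)
  (let loop ((i 1) (row '(1)))
    (if (= i n)
        row
        (loop (+ i 1) (next-row row)))))  ; tail call

(define (pascal x y)
  (if (or (< y 1) (< x y))
      0
      (list-ref (pascal-row x) (- y 1))))
```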
To add to Óscar's answer, we can use continuation-passing style to convert any program to use tail calls:
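Here is one way that transformation could look for pascal (a sketch; pascal-cps is my name for the transformed procedure). Every call is now in tail position; the pending additions live in the continuation k instead of on the control stack.

```scheme
;; CPS version of the recursive pascal: instead of returning a
;; value, each call passes its result to the continuation k
(define (pascal-cps x y k)
  (cond ((or (<= x 0) (<= y 0) (< x y)) (k 0))
        ((or (= 1 y) (= x y)) (k 1))
        (else
         (pascal-cps (- x 1) y
           (lambda (left)
             (pascal-cps (- x 1) (- y 1)
               (lambda (right)
                 (k (+ left right)))))))))

(define (pascal x y)
  (pascal-cps x y (lambda (v) v)))  ; identity continuation
```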
You may say this program is not as satisfactory, since the continuation closures "grow" as the recursion deepens. But those closures are allocated on the heap. In the general case, the point of having tail calls is not so much performance as space safety: you don't blow up the evaluation context.
First of all, the recursive-process pascal procedure can be expressed in a simpler way (assuming non-negative, valid inputs).

Now for the question. It is possible to transform the recursive-process implementation into an iterative-process version that uses tail recursion. But it's trickier than it seems, and to fully understand it you have to grasp how dynamic programming works. For a detailed explanation of this algorithm, please refer to Steven Skiena's The Algorithm Design Manual, 2nd edition, page 278.
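The simpler recursive-process version mentioned above might look like this (a sketch, assuming 1 <= y <= x):

```scheme
;; with valid inputs we only need to distinguish the edges of
;; the triangle (always 1) from the interior
(define (pascal x y)
  (if (or (= y 1) (= y x))
      1
      (+ (pascal (- x 1) y)
         (pascal (- x 1) (- y 1)))))
```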
This is the kind of algorithm that doesn't lend itself to an idiomatic solution in Scheme, because it requires that we mutate state as part of the solution (in this case, we're updating the partial results in a vector). It's a rather contrived solution, and I optimized the table's memory usage so only one row is needed at a time - and here it goes:
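A sketch of that single-row dynamic-programming idea: keep one vector and update it in place from right to left, so that the entries we read are still the previous row's values.

```scheme
(define (pascal x y)
  (let ((row (make-vector x 0)))
    (vector-set! row 0 1)
    ;; build rows 2..x, reusing the same vector for every row
    (do ((i 1 (+ i 1)))
        ((= i x))
      ;; walk right-to-left: row[j] += row[j-1] reads values
      ;; that have not been overwritten yet
      (do ((j i (- j 1)))
          ((= j 0))
        (vector-set! row j (+ (vector-ref row j)
                              (vector-ref row (- j 1))))))
    (vector-ref row (- y 1))))
```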
In fact, in this case it would be more natural to write a straight iteration, mutating variables along the way. In Racket, this is how it looks:
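A sketch of what that straight iteration might look like: the same single-vector update as before, but written with Racket's for loops and in-place mutation.

```racket
(define (pascal x y)
  (define row (make-vector x 0))
  (vector-set! row 0 1)
  (for ([i (in-range 1 x)])
    ;; walk right-to-left so row still holds previous-row values
    (for ([j (in-range i 0 -1)])
      (vector-set! row j (+ (vector-ref row j)
                            (vector-ref row (- j 1))))))
  (vector-ref row (- y 1)))
```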
We can print the results and check that all three of the implementations shown work. Again, in Racket:
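Such a check might look like this (a sketch; show-triangle is my name for the helper, and it takes whichever pascal implementation you want to verify as an argument):

```racket
;; the recursive pascal from the question, for reference
(define (pascal x y)
  (cond ((or (<= x 0) (<= y 0) (< x y)) 0)
        ((or (= 1 y) (= x y)) 1)
        (else (+ (pascal (- x 1) y)
                 (pascal (- x 1) (- y 1))))))

;; print the first `rows` rows produced by a given implementation
(define (show-triangle pascal rows)
  (for ([x (in-range 1 (add1 rows))])
    (for ([y (in-range 1 (add1 x))])
      (printf "~a " (pascal x y)))
    (newline)))
```

For example, (show-triangle pascal 5) prints the first five rows of the triangle, ending with 1 4 6 4 1, and the same call with any of the other implementations should print identical rows.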