Dynamic programming and divide and conquer

Posted 2019-03-20 03:54

Question:

I was reading notes on Dynamic programming, and I encountered the following comment.

If the subproblems are not independent, i.e. subproblems share subsubproblems, then a divide-and-conquer algorithm repeatedly solves the common subsubproblems. Thus, it does more work than necessary.

What does this mean? Can you give me examples to make the above point clear?

Answer 1:

The author refers to the fact that many divide-and-conquer algorithms have subproblems that overlap with one another. Consider, for example, this very simple Fibonacci implementation:

// Naive divide-and-conquer: each call recomputes its
// subproblems from scratch, even ones solved before.
int Fibonacci(int n) {
    if (n <= 1) return n;

    return Fibonacci(n - 1) + Fibonacci(n - 2);
}

If you trace out the calls made to compute Fibonacci(4), you get:

  • Fibonacci(4) calls Fibonacci(3) and Fibonacci(2)
  • Fibonacci(3) calls Fibonacci(2) and Fibonacci(1)
  • Fibonacci(2) calls Fibonacci(1) and Fibonacci(0)
  • Fibonacci(2) (the other one) calls Fibonacci(1) and Fibonacci(0)
  • Fibonacci(1) terminates.
  • Fibonacci(1) terminates.
  • Fibonacci(1) terminates.
  • Fibonacci(0) terminates.
  • Fibonacci(0) terminates.

In other words, nine total function calls are made, but there are only five unique calls here (Fibonacci of 0 to 4, inclusive). This algorithm could be made much more efficient if the result of each recursive call were shared across the subproblems rather than recomputed from scratch each time. This is one of the key ideas behind dynamic programming.
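One standard way to get that sharing is memoization: keep the top-down recursion, but cache each result the first time it's computed. Here's a minimal sketch of that idea (the helper FibonacciMemo and the memo table are illustrative names, not part of the code above):

#include <vector>

// Memoized variant: Fibonacci(k) is computed at most once for each k,
// then looked up from the table on every later call.
long long FibonacciMemo(int n, std::vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];    // already solved; reuse the answer
    memo[n] = FibonacciMemo(n - 1, memo) + FibonacciMemo(n - 2, memo);
    return memo[n];
}

long long Fibonacci(int n) {
    std::vector<long long> memo(n + 1, -1);   // -1 marks "not yet computed"
    return FibonacciMemo(n, memo);
}

With the cache in place, each distinct subproblem triggers at most one real computation, so the running time drops from exponential to O(n).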

To compute F(n) (the nth Fibonacci number), the above code will make a total of 2F(n+1) - 1 recursive calls. Since the Fibonacci numbers grow exponentially quickly, this requires an exponential amount of work. However, it's possible to use the fact that many recursive calls are identical to simplify this dramatically. Rather than starting at Fibonacci(4) and working down, let's start at Fibonacci(0) and work up. Specifically, we'll build a table (let's call it FTable) of length 5 and fill it in as follows:
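If you want to check that call count, note that the number of calls C(n) made by the naive code satisfies C(0) = C(1) = 1 and C(n) = 1 + C(n - 1) + C(n - 2), which works out to 2F(n+1) - 1. A tiny helper (CountCalls is a hypothetical name, just for illustration) can confirm it for small inputs:

// Counts exactly how many calls the naive Fibonacci above makes.
long long CountCalls(int n) {
    if (n <= 1) return 1;    // a base case is one call with no recursion
    return 1 + CountCalls(n - 1) + CountCalls(n - 2);
}

// CountCalls(4) == 9, matching the nine calls traced above.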

  • FTable[0] = 0
  • FTable[1] = 1

Now, suppose we want to compute FTable[2]. This requires us to know FTable[0] and FTable[1], and we already know those values because they're in the table. We can thus set

  • FTable[2] = 1

Using similar logic, we can compute FTable[3] from FTable[2] and FTable[1]:

  • FTable[3] = 2

And FTable[4] from FTable[2] and FTable[3]:

  • FTable[4] = 3

Notice how we avoided making lots of overlapping recursive calls by just building the values up in the reverse order! This now computes Fibonacci numbers in time O(n), which is exponentially faster than before. Using some more advanced math we can do even better still, but this already illustrates why dynamic programming can take infeasible problems and make them suddenly feasible.
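For concreteness, here's what that bottom-up table-filling might look like as code, a sketch that reuses the FTable name from the walkthrough above:

#include <vector>

// Bottom-up dynamic programming: fill the table from the base cases
// upward, so every subproblem is solved exactly once.
long long Fibonacci(int n) {
    if (n <= 1) return n;

    std::vector<long long> FTable(n + 1);   // long long to delay overflow
    FTable[0] = 0;
    FTable[1] = 1;

    // Each entry depends only on the two entries before it,
    // so a single left-to-right pass fills the whole table.
    for (int i = 2; i <= n; i++) {
        FTable[i] = FTable[i - 1] + FTable[i - 2];
    }
    return FTable[n];
}

And since each entry only needs the two entries before it, you could even drop the table and keep just two running variables, cutting the extra space from O(n) to O(1).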

Hope this helps!