Good examples, articles, books for understanding dynamic programming


Question:

I can't figure out the principles of dynamic programming, and I really want to. DP is very powerful; it can solve problems like this one:

Getting the lowest possible sum from numbers' difference

So, can you suggest good books or articles (preferably with real code examples) that would explain to me what dynamic programming is? I really want simple examples first of all; then I'll move on.

Answer 1:

Dynamic programming is a useful type of algorithm that can be used to optimize hard problems by breaking them up into smaller subproblems. By storing and re-using partial solutions, it manages to avoid the pitfalls of using a greedy algorithm. There are two kinds of dynamic programming: bottom-up and top-down.

In order for a problem to be solvable using dynamic programming, it must possess what is called optimal substructure. This means that an optimal solution to the problem can be constructed from optimal solutions to its subproblems: if you break the problem into subproblems and solve each one optimally, combining those answers yields an optimal answer to the whole problem. A problem that does not have this structure cannot be solved with dynamic programming.

Top-Down

Top-down is better known as memoization. It is the idea of storing past calculations in order to avoid re-calculating them each time.

Given a recursive function, say:

fib(n) = 0 if n = 0
         1 if n = 1
         fib(n - 1) + fib(n - 2) if n >= 2

We can easily write this recursively from its mathematical form as:

function fib(n)
  if(n == 0 || n == 1)
    n
  else
    fib(n-1) + fib(n-2)

Now, anyone who has been programming for a while or knows a thing or two about algorithmic efficiency will tell you that this is a terrible idea. The reason is that the same values fib(i), for i in 2..n-2, are re-calculated again and again, giving exponential running time.

A more efficient version stores these values as it computes them, creating a top-down dynamic programming algorithm.

m = map(int, int)
m[0] = 0
m[1] = 1
function fib(n)
  if(m[n] does not exist)
    m[n] = fib(n-1) + fib(n-2)
  m[n]

By doing this, we calculate fib(i) at most once.
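
For concreteness, here is the same top-down approach as runnable Python (my own translation of the pseudocode above; the question didn't ask for a particular language):

# Cache of already-computed values, playing the role of the map m above.
m = {0: 0, 1: 1}

def fib(n):
    if n not in m:
        m[n] = fib(n - 1) + fib(n - 2)  # each value is computed once, then remembered
    return m[n]

print(fib(50))  # 12586269025, found in linear time instead of exponential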


Bottom-Up

Bottom-up stores sub-results just as top-down does. The difference, however, is that bottom-up fills in a table of subproblem solutions in the order dictated by the recurrence relations, so every value needed at each step has already been computed.

In most bottom-up dynamic programming problems, you are trying to either minimize or maximize some quantity. At any given point you have two (or more) options, and you have to decide which is better for the problem you're trying to solve. These decisions, however, are based on the choices you made earlier.

By making the optimal decision at each point (each subproblem), you ensure that your overall result is optimal.

The most difficult part of these problems is finding the recurrence relations that solve your problem.

To pay for a bunch of algorithm textbooks, you plan to rob a store that has n items. The problem is that your tiny knapsack can hold at most W kg. Knowing the weight (w[i]) and value (v[i]) of each item, you want to maximize the value of your stolen goods, which all together must weigh at most W. For each item, you must make a binary choice: take it or leave it.

Now, you need to find what the subproblem is. Being a very bright thief, you realize that the maximum value obtainable from the first i items under a weight limit w can be represented as m[i, w]. In addition, m[0, w] (0 items, weight limit w) and m[i, 0] (i items, weight limit 0) will always be equal to 0 value.

so,

m[i, w] = 0 if i = 0 or w = 0

With your thinking full-face mask on, you notice that for each item there are only two cases. If the item's weight w[i] exceeds the current limit w, it can't go in the bag, so the best you can do is whatever you could achieve without it. Otherwise you may either leave the item or take it, in which case you gain its value v[i] but must fit the remaining items within the reduced limit w - w[i]; you pick whichever choice is worth more.

 m[i, w] = 0 if i = 0 or w = 0
           m[i - 1, w] if w[i] > w
           max(m[i - 1, w], m[i - 1, w - w[i]] + v[i]) if w[i] <= w

These are the recurrence relations described above. Once you have these relations, writing the algorithm is very easy (and short!).

v = values from item1..itemn
w = weights from item1..itemn
n = number of items
W = maximum weight of knapsack

m = array(0..n, 0..W)
function knapsack
  for w=0..W
    m[0, w] = 0
  for i=1 to n
    m[i, 0] = 0
    for w=1..W
      if w[i] <= w
        if v[i] + m[i-1, w - w[i]] > m[i-1, w]
           m[i, w] = v[i] + m[i-1, w - w[i]]
        else
           m[i, w] = m[i-1, w]
      else
        m[i, w] = m[i-1, w]

  return m[n, W]
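
Translated directly into runnable Python (a sketch of my own; the item weights and values below are made up for illustration):

def knapsack(w, v, W):
    # w[i], v[i]: weight and value of item i; W: knapsack capacity.
    n = len(w)
    # m[i][j] = best value achievable with the first i items and weight limit j.
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if w[i - 1] <= j:
                # Take item i or leave it, whichever is worth more.
                m[i][j] = max(m[i - 1][j], v[i - 1] + m[i - 1][j - w[i - 1]])
            else:
                m[i][j] = m[i - 1][j]  # item i doesn't fit at this limit
    return m[n][W]

print(knapsack([3, 4, 5, 6], [4, 5, 6, 7], 10))  # 12: take the items weighing 4 and 6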

Additional Resources

  1. Introduction to Algorithms
  2. Programming Challenges
  3. Algorithm Design Manual

Example Problems

Luckily, dynamic programming has become really popular in competitive programming. Check out Dynamic Programming on UVa Online Judge for some practice problems that will test your ability to find and implement recurrences for dynamic programming problems.



Answer 2:

In short, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems, that is, solving a problem step by step.

  1. Dynamic programming;
  2. Introduction to Dynamic Programming;
  3. MIT's Introduction to Algorithms, Lecture 15: Dynamic Programming;
  4. Algorithm Design (book).

I hope these links help at least a bit.



Answer 3:

Start with

  • the Wikipedia article about dynamic programming, then
  • this article on TopCoder, which I suggest you read
  • chapter 6 on dynamic programming in Algorithms (Vazirani)
  • the dynamic programming chapter in The Algorithm Design Manual
  • the dynamic programming chapter in the classic algorithms book, Introduction to Algorithms

If you want to test yourself, my picks for online judges are

  • Uva Dynamic programming problems
  • Timus Dynamic programming problems
  • Spoj Dynamic programming problems
  • TopCoder Dynamic programming problems

and of course

  • look at the Algorithmist's dynamic programming category

You can also check out the algorithms courses of good universities:

  • Aduni (Algorithms)
  • MIT (Introduction to Algorithms (chapter 15))

Finally, if you can't solve a problem, ask on SO; plenty of algorithm addicts are here.



Answer 4:

See below

  • http://www.topcoder.com/tc?d1=tutorials&d2=dynProg&module=Static

and that article references many more samples and further articles.

After you learn dynamic programming, you can improve your skill by solving UVa problems. There are lists of UVa dynamic programming problems on the UVa discussion board.

Wikipedia also has some good, simple examples of it.

Edit: for algorithm books about it, you can use:

  • Python Algorithms: Mastering Basic Algorithms in the Python Language: in this book you can see practical work with DP.
  • Introduction to Algorithms: this book describes algorithms in the simplest possible way.

You should also take a look at memoization in dynamic programming.



Answer 5:

I think Algebraic Dynamic Programming is worth mentioning. It's a quite inspiring presentation of the DP technique and is widely used in the bioinformatics community. Also, Bellman's principle of optimality is stated there in a very comprehensible way.

Traditionally, DP is taught by example: algorithms are cast in terms of recurrences between table entries that store solutions to intermediate problems; from this table, the overall solution is constructed via some case analysis.

ADP organizes a DP algorithm so that the decomposition of the problem into subproblems and the case analysis are completely separated from the intended optimization objective. This makes it possible to reuse and combine parts of DP algorithms across similar problems.

There are three loosely coupled parts in an ADP algorithm:

  • building the search space (which is stated in terms of tree grammars);
  • scoring each element of the search space;
  • an objective function that selects those elements of the search space we are interested in.

All these parts are then automatically fused together, yielding an efficient algorithm.
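
To make the separation concrete, here is a toy sketch in Python (my own illustration of the idea, not the real ADP framework, which is usually expressed with tree grammars, typically in Haskell): one knapsack-style decomposition is reused with two interchangeable "algebras", one maximizing value and one counting feasible choices.

from functools import lru_cache

def solve(weights, values, W, algebra):
    # One decomposition (the "search space"); the algebra supplies
    # the scoring and the objective function.
    empty, take, choice = algebra

    @lru_cache(maxsize=None)
    def go(i, w):
        if i == len(weights):
            return empty
        options = [go(i + 1, w)]  # skip item i
        if weights[i] <= w:
            options.append(take(values[i], go(i + 1, w - weights[i])))
        return choice(options)

    return go(0, W)

best_value = (0, lambda v, rest: v + rest, max)  # objective: maximize total value
count_sets = (1, lambda v, rest: rest, sum)      # objective: count feasible subsets

print(solve((3, 4, 5, 6), (4, 5, 6, 7), 10, best_value))  # 12
print(solve((3, 4, 5, 6), (4, 5, 6, 7), 10, count_sets))  # 10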



Answer 6:

This USACO article is a good starting point to understand the basics of DP and how it can give tremendous speed-ups. Then look at this TopCoder article, which also covers the basics but isn't written that well. This tutorial from CMU is also pretty good. Once you understand that, you will need to take the leap to 2D DP to solve the problem you refer to. Read through this TopCoder article up to and including the apples question (labelled intermediate).

You might also find watching this MIT video lecture useful, depending on how well you pick things up from videos.

Also be aware that you will need a solid grasp of recursion before you can successfully pick up DP. DP is hard! But the real hard part is seeing the solution. Once you understand the concept of DP (which the above should get you to) and you're given the sketch of a solution (e.g. my answer to your question), it really isn't that hard to apply, since DP solutions are typically very concise and not too far off from iterative versions of an easier-to-understand recursive solution.

You should also have a look at memoization, which some people find easier to understand and which is often just as efficient as DP. To explain briefly, memoization takes a recursive function and caches its results, to save re-computing the results for the same arguments in the future.
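
In Python, for example, memoization can be bolted onto a recursive function with the standard-library decorator functools.lru_cache, so the caching described above costs almost no extra code:

from functools import lru_cache

@lru_cache(maxsize=None)  # remember the result for every distinct argument
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # 23416728348467685, fast because each fib(i) runs only once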



Answer 7:

Only some problems can be solved with Dynamic Programming

Since no-one has mentioned it yet, the properties needed for a dynamic programming solution to be applicable are:

  • Overlapping subproblems. It must be possible to break the original problem down into subproblems in such a way that some subproblems occur more than once. The advantage of DP over plain recursion is that each of these subproblems will be solved only once, and the results saved and reused if necessary. In other words, DP algorithms trade memory for time.
  • Optimal substructure. It must be possible to calculate the optimal solution to a subproblem using only the optimal solutions to subproblems. Verifying that this property holds can require some careful thinking.

Example: All-Pairs Shortest Paths

As a typical example of a DP algorithm, consider the problem of finding the lengths of the shortest paths between all pairs of vertices in a graph using the Floyd-Warshall algorithm.

Suppose there are n vertices numbered 1 to n. Although we are interested in calculating a function d(a, b), the length of the shortest path between vertices a and b, it's difficult to find a way to calculate this efficiently from other values of the function d().

Let's introduce a third parameter c, and define d(a, b, c) to be the length of the shortest path between a and b that visits only vertices in the range 1 to c in between. (It need not visit all those vertices.) Although this seems like a pointless constraint to add, notice that we now have the following relationship:

d(a, b, c) = min(d(a, b, c-1), d(a, c, c-1) + d(c, b, c-1))

The 2 arguments to min() above show the 2 possible cases. The shortest way to get from a to b using only the vertices 1 to c either:

  1. Avoids c (in which case it's the same as the shortest path using only the first c-1 vertices), or
  2. Goes via c. In this case, this path must be the shortest path from a to c followed by the shortest path from c to b, with both paths constrained to visit only vertices in the range 1 to c-1 in between. We know that if this case (going via c) is shorter, then these 2 paths cannot visit any of the same vertices, because if they did it would be shorter still to skip all vertices (including c) between them, so case 1 would have been picked instead.

This formulation satisfies the optimal substructure property -- it is only necessary to know the optimal solutions to subproblems to find the optimal solution to a larger problem. (Not all problems have this important property -- e.g. if we wanted to find the longest paths between all pairs of vertices, this approach breaks down because the longest path from a to c may visit vertices that are also visited by the longest path from c to b.)

Knowing the above functional relationship, and the boundary condition that d(a, b, 0) is equal to the length of the edge between a and b (or infinity if no such edge exists), it's possible to calculate every value of d(a, b, c), starting from c=1 and working up to c=n. d(a, b, n) is the shortest distance between a and b that can visit any vertex in between -- the answer we are looking for.
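
A compact bottom-up implementation of this recurrence might look like the following Python sketch (the graph below is a made-up example, given as an adjacency matrix with math.inf for missing edges; vertices are 0-indexed here rather than numbered 1 to n):

import math

def floyd_warshall(d):
    # d[a][b]: length of the edge from a to b (math.inf if absent). Updated in place.
    n = len(d)
    for c in range(n):          # now allow vertex c as an intermediate stop
        for a in range(n):
            for b in range(n):
                # Either avoid c, or go a -> c -> b using vertices allowed so far.
                d[a][b] = min(d[a][b], d[a][c] + d[c][b])
    return d

INF = math.inf
graph = [[0,   3,   INF, 7],
         [8,   0,   2,   INF],
         [5,   INF, 0,   1],
         [2,   INF, INF, 0]]
print(floyd_warshall(graph))  # shortest-path lengths between all pairs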



Answer 8:

http://mat.gsia.cmu.edu/classes/dynamic/dynamic.html



Answer 9:

Almost all introductory algorithm books have some chapter for dynamic programming. I'd recommend:

  • Introduction to Algorithms by Cormen et al
  • Introduction to Algorithms: A Creative Approach by Udi Manber


Answer 10:

If you want to learn about algorithms, I have found MIT to have some quite excellent videos of lectures available.

For instance, 6.046J / 18.410J Introduction to Algorithms (SMA 5503) looks to be quite a good bet.

The course covers dynamic programming, among a lot of other useful algorithmic techniques. The book used is also, in my personal opinion, quite excellent and well worth buying for anyone serious about learning algorithms.

In addition, the course comes with a list of assignments and so on, so you'd get a chance to exercise the theory in practice as well.

Related questions:

  • Learning Algorithms and Data Structures Fundamentals
  • https://stackoverflow.com/questions/481260/whats-a-good-way-to-start-learning-about-data-structures-algorithms


Answer 11:

As part of a correspondence Mathematics MSc I did a course based on the book http://www.amazon.co.uk/Introduction-Programming-International-mathematics-computer/dp/0080250645/ref=sr_1_4?ie=UTF8&qid=1290713580&sr=8-4. It takes more of a mathematical angle than a programming angle, but if you can spare the time and effort, it is a very thorough introduction, and the course, run pretty much straight out of the book, worked well for me.

I also have an early version of the book "Algorithms" by Sedgewick, and there is a very readable short chapter on dynamic programming in it. He now seems to sell a bewildering variety of expensive books. Looking on Amazon, there seems to be a chapter of the same name at http://www.amazon.co.uk/gp/product/toc/0201361205/ref=dp_toc?ie=UTF8&n=266239



Answer 12:

Planning Algorithms, by Steven LaValle has a section about Dynamic Programming:

http://planning.cs.uiuc.edu/

See for instance section 2.3.1.



Answer 13:

MIT Open CourseWare 6.00 Introduction to Computer Science and Programming



Answer 14:

If you try dynamic programming to solve a problem, I think you will come to appreciate the concept behind it. In Google Code Jam, participants were once given a problem called "Welcome to Code Jam"; it revealed the use of dynamic programming in an excellent way.