Dynamic Programming Solution for Activity-selection

Published 2019-02-20 09:42

In Section 16.1, "An activity-selection problem", of Introduction to Algorithms, the dynamic-programming solution for this problem is given as

c[i, j] = 0 if S(i, j) is empty

c[i, j] = max over a(k) in S(i, j) of { c[i, k] + c[k, j] + 1 } if S(i, j) is not empty

where S(i, j) denotes the set of activities that start after activity a(i) finishes and that finish before activity a(j) starts, and c[i, j] denotes the size of an optimal solution for the set S(i, j).
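For concreteness, here is a minimal sketch of that textbook recurrence in Python. The function name, the (start, finish) tuple representation, and the sentinel trick (a fictitious a(0) that finishes before everything and a fictitious a(n+1) that starts after everything) are my own choices, not the book's code:

```python
def dp_activity_selection(activities):
    """Size of a maximum set of mutually compatible activities,
    computed with the c[i][j] recurrence from the question."""
    acts = sorted(activities, key=lambda a: a[1])  # sort by finish time
    # Sentinels: a(0) finishes before everything, a(n+1) starts after everything.
    acts = [(0, 0)] + acts + [(float('inf'), float('inf'))]
    n = len(acts)
    c = [[0] * n for _ in range(n)]
    # Solve subproblems in order of increasing interval length j - i.
    for length in range(2, n):
        for i in range(0, n - length):
            j = i + length
            for k in range(i + 1, j):
                # a(k) belongs to S(i, j): starts after a(i) finishes,
                # finishes before a(j) starts.
                if acts[k][0] >= acts[i][1] and acts[k][1] <= acts[j][0]:
                    c[i][j] = max(c[i][j], c[i][k] + c[k][j] + 1)
    return c[0][n - 1]
```

This runs in O(n^3) time and O(n^2) space, which is exactly the cost the greedy algorithm later avoids.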

However, I am thinking of another, simpler solution:

c[i] = max { c[i - 1], c[f(i)] + 1 }

where f(i) gives the activity that is compatible with a(i) (i.e. finishes before a(i) starts) and has the maximum finish time among such activities.

Will this work? If so, why does the author provide the more complex solution? If not, what am I missing?

2 Answers
贼婆χ
#2 · 2019-02-20 10:05

Will this work?

Yes, that will work too.

why does the author provide this more complex solution?

That wasn't their proposed solution, but part of the analysis of the problem. In the paragraph after the equation you cited, the authors say:

But we would be overlooking another important characteristic of the activity-selection problem that we can use to great advantage.

The final solution (Greedy-Activity-Selector) is similar to yours, but even simpler.

Their point, as I understand it, is that a DP solution can be built almost mechanically (as described in Section 15.3), without considering the specifics of the particular problem, but coming up with a better algorithm requires some insight into the problem beyond optimal substructure.

Your solution relies on Theorem 16.1, but once that theorem is proven, it doesn't make sense to create another DP algorithm: you already know enough about the problem to build a simpler greedy algorithm.
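That greedy idea — always keep the compatible activity with the earliest finish time, which Theorem 16.1 justifies — can be sketched in a few lines (an iterative sketch in the spirit of GREEDY-ACTIVITY-SELECTOR, not the book's exact pseudocode):

```python
def greedy_activity_selector(activities):
    """Return a maximum-size set of mutually compatible (start, finish) activities."""
    acts = sorted(activities, key=lambda a: a[1])  # sort by finish time
    selected = []
    last_finish = float('-inf')
    for start, finish in acts:
        if start >= last_finish:      # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish      # earliest possible finish among remaining choices
    return selected
```

No table at all: a sort plus one linear pass, O(n log n) overall.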

太酷不给撩
#3 · 2019-02-20 10:17

I think you are missing several details of designing the DP solution:

  1. What is the initial value?
  2. What is the base case?
  3. What happens if several activities compatible with a(i) have the same finish time?

When designing a DP solution, one of the required properties is optimal substructure.

The order in which you compute a state (e.g. c[i]) matters: a state can only be computed after all of its subproblems. As written, your solution does not guarantee this. When computing c[i] you must first compute c[j] with j = f(i), but if the activities are in arbitrary order you could have j > i (even j = i + 1), in which case computing c[j] would in turn require c[i]. Then c[i] depends on c[j] while c[j] depends on c[i], a circular dependency ==> not correct. (Sorting the activities by finish time guarantees f(i) < i and removes the cycle.)

Another example very similar to this question is Matrix chain multiplication.

You may want to have a look :)

Edit: after seeing your edit to the question, here's my response:

Assuming you can precompute f(i) in reasonable time (which you obviously can), your solution is correct, as it IS the greedy solution the other answer describes.

The reason it works is quite straightforward. Literally speaking: up to the i-th activity, you either choose activity i (that's the c[f(i)] + 1 part) or skip it (the c[i - 1] part).

You can try to construct a formal proof as well; the correctness of a greedy method can usually be proved by contradiction (roughly speaking, show why it is NOT possible to have a set larger than c[i - 1] if you do not choose activity i, and similarly for the case where you do choose it).

To answer your question about why the author demonstrates the DP solution: strictly speaking it's outside the programming context, but my take is that the author is trying to demonstrate two different ways to solve one problem, and to illustrate an idea: a problem that can be solved by a greedy method can also be solved by DP, but that is overkill.

The author then helps the reader recognize the difference between greedy and DP, since the two are quite similar to a new learner. That's why the book first gives the DP solution to show the pain, then the greedy solution, and lastly a paragraph "Greedy versus DP" on page 382.

So, TL;DR: your solution is correct, as it is basically the greedy method for this problem, and of course it is much easier than the DP solution given in the book — which is precisely the point the book wants to illustrate. A quote from the book at p. 382: "...One might be tempted to generate a dp solution to a problem when a greedy solution suffices, or one might mistakenly think that a greedy solution suffices...when a dp solution is required..."
