Question about big O and big Omega

Posted 2019-07-16 11:51

Question:

I think this is probably a beginner question about big-O notation. Say, for example, I have an algorithm that breaks apart an entire list recursively (O(n)) and then puts it back together (O(n)). I assume that this means the efficiency is O(n) + O(n). Does this simplify to 2O(n), O(2n), or O(n)? From what I know about this notation, it would be O(2n), and using the rules of asymptotic notation you can drop the 2, giving an efficiency of O(n).
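For concreteness, here is a rough sketch of the kind of algorithm I mean (the names and the list-handling details are made up purely for illustration):

    # Illustrative only: recursively take the list apart, one element
    # per call (n calls, O(1) work each -> O(n)), then reassemble it
    # in a single linear pass (O(n)). Total: O(n) + O(n).

    def break_apart(lst, i=0):
        if i == len(lst):
            return []
        pieces = break_apart(lst, i + 1)
        pieces.append(lst[i])          # amortized O(1) append
        return pieces

    def put_back_together(pieces):
        result = []
        for x in reversed(pieces):     # one O(n) pass
            result.append(x)
        return result

    print(put_back_together(break_apart([1, 2, 3])))  # [1, 2, 3]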

If we were trying to find a lower bound, though, can this rule still apply? If Ω(n) + Ω(n) = Ω(2n), can you still drop the 2? I would think not, since you would actually be lowering the lower bound (because n < 2n).

Answer 1:

"Does this simplify to 2O(n), O(2n), or O(n)?"

All of the above, but the first two are overly complex. O(n) would be the canonical notation.

2*N is proportional to N, so if the maximum cost is no more than proportional to 2*N for sufficiently large N (O(2*N)), then the maximum cost is also no more than proportional to N for sufficiently large N (O(N)).
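In symbols, a quick sketch using the usual witness constants (c and N0 are just names chosen for this derivation):

    f(N) <= c * (2N) = (2c) * N    for all N >= N0

so the constant 2c witnesses f(N) = O(N) whenever f(N) = O(2N); taking c/2 instead gives the reverse direction, making O(2N) and O(N) the same class.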

If we were trying to find a lower bound, though, can this rule still apply?

Most definitely yes.

2*N is proportional to N, so if the minimum cost is no less than proportional to 2*N for sufficiently large N (Ω(2*N)), then the minimum cost is also no less than proportional to N for sufficiently large N (Ω(N)).
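The same witness-constant sketch works with the inequality reversed:

    f(N) >= c * (2N) = (2c) * N    for all N >= N0

so f(N) = Ω(2N) implies f(N) = Ω(N) with constant 2c, and halving the constant gives the converse; dropping the 2 does not actually weaken the bound, it only renames the constant.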


For example, say you have an algorithm that takes 300 ms + 100*N ms to execute. Its running time is Θ(N), so a lower bound on it is Ω(N).

What if the algorithm took twice as long to execute? Even at 600 ms + 200*N ms, its running time is still Θ(N), and thus still Ω(N).
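A quick numeric check of this claim (a throwaway sketch; the two functions simply encode the running times above):

    # Hypothetical running times from the example above, in ms.
    def t1(n):
        return 300 + 100 * n

    def t2(n):
        return 600 + 200 * n

    # The per-element cost settles to a constant in both cases
    # (100 and 200), which is what Theta(N) growth looks like:
    # the 2x constant changes nothing about the asymptotic class.
    for n in (10, 1_000, 100_000):
        print(n, t1(n) / n, t2(n) / n)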


The most important thing to realise about Θ(N) and the like is that these notations are used to study how well something scales. That one algorithm takes twice as long as another says nothing about how well either scales, so it shouldn't be a surprise that both can have the same Ω() as the lower bound of their speed.



Answer 2:

It would simplify -- usually to O(n), indicating that the time taken to solve the problem grows proportionally to the problem size. In this case it may be more explicit to write 2O(n), though it denotes the same thing.



Answer 3:

I believe this follows from the definition of Big-O:

If a function f(n) satisfies f(n) <= c*g(n) for some positive constant c and all sufficiently large values of n, then we say f(n) = O(g(n)). In your example, g(n) = n.

So, for example, if it is possible to pick some constant c for which f(n) <= c*n holds for all sufficiently large n, then you can say that f(n) = O(n).
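As a quick sanity check, here is a throwaway script (the function f and the constant c = 4 are made-up example values, not from the question):

    # Hypothetical f(n) = 3n + 10. With c = 4, we have
    # 3n + 10 <= 4n whenever n >= 10, so f(n) = O(n).
    def f(n):
        return 3 * n + 10

    c = 4
    print(all(f(n) <= c * n for n in range(10, 10_000)))  # True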

Big Omega is defined similarly, with the inequality reversed: if f(n) >= c*g(n) for some positive constant c and all sufficiently large n, then f(n) = Ω(g(n)).



Answer 4:

It's been a while, but I think you're right in both cases.

UPDATE

Actually, it looks like Ω(n) + Ω(n) = Ω(n).
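A one-line justification, sketched with witness constants c1 and c2 (names chosen here for illustration): if f(n) >= c1*n and g(n) >= c2*n for all sufficiently large n, then

    f(n) + g(n) >= (c1 + c2) * n    for all sufficiently large n

and c1 + c2 is itself a positive constant, so f(n) + g(n) = Ω(n).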