Big O for a finite, fixed-size set of possible values

Published 2019-06-27 17:02

Question:

This question triggered some confusion and many comments about whether the algorithms proposed in the various answers are O(1) or O(n).

Let's use a simple example to illustrate the two points of view:

we want to find a long x such that a * x + b = 0, where a and b are known, non-zero longs.

  • An obvious O(1) algo is x = - b / a
  • A much slower algo would consist in testing every possible long value, which would be about 2^63 times slower on average.

Is the second algorithm O(1) or O(n)?
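
For concreteness, a minimal Java sketch of the two approaches might look like this (my own illustration; the class and method names are invented, and overflow subtleties in the brute-force check are glossed over):

    public class SolveLinear {

        // Direct O(1) approach: solve a*x + b = 0 with a single division.
        // Exact only when b is evenly divisible by a.
        static long solveDirect(long a, long b) {
            return -b / a;
        }

        // Brute-force approach: try every possible long value.
        // The loop body runs at most 2^64 times -- a huge, but fixed, bound.
        static long solveBruteForce(long a, long b) {
            long x = Long.MIN_VALUE;
            while (true) {
                if (a * x + b == 0) {
                    return x;
                }
                if (x == Long.MAX_VALUE) {
                    throw new ArithmeticException("no long solution to a*x + b = 0");
                }
                x++;
            }
        }
    }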

The arguments presented in the linked questions are:

  • it is O(n) because in the worst case you need to loop over all possible long values
  • it is O(1) because its complexity is of the form c × O(1), where c = 2^64 is a constant.

Although I understand the argument to say that it is O(1), it feels counter-intuitive.

PS: I added the java tag as the original question is in Java, but this question is language-agnostic.

Answer 1:

Complexity is only meaningful if there is a variable N, so the question makes no sense as posed. If the question were:

A much slower algo would consist in testing every possible value in a range of N values, which would be about N times slower on average.

Is the second algorithm O(1) or O(N)?

Then the answer would be: this algorithm is O(N).
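
A minimal sketch of that reformulated search (my own illustration; the method name and the from parameter are invented) makes the dependence on N explicit:

    // Linear scan over N candidate values starting at 'from'.
    // The loop executes at most N times, so the running time grows linearly with N: O(N).
    static long solveInRange(long a, long b, long from, long n) {
        for (long i = 0; i < n; i++) {
            long x = from + i;
            if (a * x + b == 0) {
                return x;
            }
        }
        throw new ArithmeticException("no solution in the given range");
    }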



Answer 2:

Big O describes how an algorithm's performance scales as the input size n grows; in other words, as you run the algorithm across more input data.

In this case the input data is a fixed size so both algorithms are O(1) albeit with different constant factors.

If you took "n" to mean the number of bits in the numbers (i.e. you removed the restriction that it's a 64-bit long), then you could analyze for a given bit size n how do the algorithms scale.

In this scenario, the first would still be O(1) (see Qnan's comment), but the second would now be O(2^n).
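
To make that reparameterization concrete, here is a hedged sketch (my own; the class and method names are invented) that uses BigInteger so the bit size n is no longer pinned at 64, treating each arithmetic operation as unit cost:

    import java.math.BigInteger;

    public class BitParameterized {

        // Still O(1) in n (counting each big-integer operation as one step).
        static BigInteger solveDirect(BigInteger a, BigInteger b) {
            return b.negate().divide(a);
        }

        // O(2^n): tries every n-bit candidate from 0 to 2^n - 1
        // (non-negative candidates only, for brevity).
        static BigInteger solveBruteForce(BigInteger a, BigInteger b, int n) {
            BigInteger limit = BigInteger.ONE.shiftLeft(n); // 2^n candidates
            for (BigInteger x = BigInteger.ZERO; x.compareTo(limit) < 0; x = x.add(BigInteger.ONE)) {
                if (a.multiply(x).add(b).signum() == 0) {
                    return x;
                }
            }
            return null; // no n-bit solution exists
        }
    }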

I highly recommend watching the early lectures from MIT's "Introduction to Algorithms" course. They are a great explanation of big O (and big Omega/Theta), although they do assume a good grasp of maths.



Answer 3:

Checking every possible input is O(2^N) in the number of bits in the solution. When you make the number of bits constant, both algorithms are O(1): you know exactly how many candidates you need to check.



Answer 4:

Fact: Every algorithm that you actually run on your computer is O(1), because the universe has finite computational power (there are finitely many atoms and finitely many seconds have passed since the Big Bang).

This is true, but not a very useful way to think about things. When we use big O in practice, we generally assume that the constants involved are small relative to the asymptotic terms, because otherwise only giving the asymptotic term doesn't tell you much about how the algorithm performs. This works great in practice because the constants are usually things like "do I use an array or a hash map", which is at most about a 30x difference, while the inputs are 10^6 or 10^9, so the difference between a quadratic and a linear algorithm matters more than constant factors. Discussions of big O that don't respect this convention (like algorithm #2) are pointless.



Answer 5:

Whatever the values of a and b are, the worst case is still checking 2^64 or 2^32 or 2^somevalue values. This algorithm's complexity is O(2^k), where k is the number of bits used to represent a long value, or O(1) if we measure it in terms of the values of a and b, since the number of candidates does not depend on them.