Why are O(2n^2) and O(100 n^2) the same as O(n^2) in algorithm analysis?

Posted 2020-07-27 01:54

I am new to the field of algorithm analysis. I read in the Stack Overflow question "What is a plain English explanation of "Big O" notation?" that O(2n^2) and O(100 n^2) are the same as O(n^2). I don't understand this, because if we take n = 4, the number of operations will be:

  • O(2 n^2) = 32 operations
  • O(100 n^2) = 1600 operations
  • O(n^2) = 16 operations

Can anyone explain why we are supposed to treat these different operation counts as equivalent?

3 Answers
chillily · 2020-07-27 02:30

Because it is a classification: it places algorithms into complexity classes such as O(1), O(n), O(n log n), O(n^2), O(n^3), O(n^n), etc. By definition, two algorithms are in the same complexity class if their running times differ by at most a constant factor as n goes to infinity (big-O notation is for comparing algorithmic complexity at large values of n).
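To see that constant-factor criterion concretely, here is a small Python check (my illustration, not part of the original answer): the ratio between 100 n^2 and n^2 is the same fixed constant at every n, while the ratio between n^3 and n^2 keeps growing, which is why n^3 lands in a different class.

# Constant-factor differences stay bounded; class differences do not.
for n in [10, 1000, 100000]:
    same_class = (100 * n**2) / n**2   # always 100.0, independent of n
    diff_class = n**3 / n**2           # equals n, grows without bound
    print(n, same_class, diff_class)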

放荡不羁爱自由 · 2020-07-27 02:37

Why this is true can be derived directly from the formal definition. More specifically, f(x) = O(g(x)) if and only if |f(x)| <= M|g(x)| for all x >= x0, for some constants M and x0. Here you're free to pick M as you wish: if M = 5 makes f(x) = O(100 n^2) true, then you can just pick M = 5*100 = 500 to make f(x) = O(n^2) true, since M * 100 n^2 = (100M) * n^2.
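As a concrete check of that definition (my own sketch, using the witnesses above): take a hypothetical function that performs exactly 5 * 100 * n^2 operations; M = 500 and x0 = 1 then bound it by M * n^2 at every n.

def f(n):
    # hypothetical operation count: exactly 5 * 100 * n^2 operations
    return 5 * 100 * n**2

# the definition's witnesses: |f(n)| <= M * |n^2| for all n >= x0
M, x0 = 500, 1
assert all(f(n) <= M * n**2 for n in range(x0, 10_000))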

Why this is useful is a bit of a different story.

Some concerns that would arise if constants mattered:

  • What operations are we measuring? Array accesses? Arithmetic operations? Multiplication only? Arithmetic with multiplication weighted twice as much as addition? You may want to compare algorithms (that have the same big-O complexity) using such a metric, when in fact there can be subtle differences in the number of operations that even the most experienced computer scientists can miss.
  • Let's say you can assign a reasonable weight to each operation. Now there has to be agreement on those weights across the board, otherwise you'll have some near-meaningless analyses of algorithms done by people using different weights (beyond the information big-O would have given you anyway).
  • The weights may be time-bound, as the speed of operations improves over time, and some operations may improve faster than others.
  • The weights may be environment-bound, as the speed of operations differs across environments. For example, a disk read is a lot slower than a memory read.

Big-O (which is part of asymptotic complexity) avoids all of these issues. You only check how many times some piece of code that takes a constant amount of time (i.e., time independent of the input size) is executed. As an example:

c = 0
for i = 1 to n
  for j = 1 to n
    for k = 1 to n
      x = input[i]*input[j]
      y = input[j]*input[k]
      z = input[i]*input[k]
      c += (x-y)*z

So there are 4 multiplications, 1 subtraction and 1 addition, each executed n^3 times, but here we just say that this code:

  x = input[i]*input[j]
  y = input[j]*input[k]
  z = input[i]*input[k]
  c += (x-y)*z

runs in constant time (it will always take the same amount of time, regardless of how many elements there are in the array) and will be executed O(n^3) times, thus the running time is O(n^3).
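For reference, here is the example as runnable Python (my translation of the pseudocode above; the array is called data here because input shadows a Python built-in):

def triple_product_sum(data):
    # Same computation as the pseudocode: the constant-time body runs
    # once per (i, j, k) triple, i.e. n^3 times, so the total is O(n^3).
    n = len(data)
    c = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = data[i] * data[j]
                y = data[j] * data[k]
                z = data[i] * data[k]
                c += (x - y) * z
    return c

print(triple_product_sum([1, 2, 3, 4]))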

SAY GOODBYE · 2020-07-27 02:48

Because O(f(n)) means that the function in question is bounded by some constant times f(n). If g is bounded by a multiple of 100 f(n), it is also bounded by a multiple of f(n). Specifically, f = O(2 n^2) is not a statement about a single value such as n = 4; it says that for all n, f(n) is not greater than C * 2n^2 for some fixed C, independent of n.
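Conversely (my illustration, not part of the original answer), that "fixed C, independent of n" requirement is exactly what keeps genuinely faster-growing functions out of the class: no single constant makes n^3 <= C * n^2 hold for all n.

# Whatever C you pick, the bound n^3 <= C * n^2 fails once n exceeds C,
# so n^3 is not O(n^2) even though constants are free.
for C in [10, 100, 1000]:
    n = C + 1
    print(C, n**3 <= C * n**2)   # prints False every time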
