Why are constants ignored in asymptotic analysis?
It's because of the linear speedup theorem for Turing machines.
If you show me a Turing machine that solves a problem of size n in f(n) steps, and specify a constant c > 0, I can make a Turing machine that solves the same problem in c f(n) steps (or one step, if c f(n) < 1). For example, by taking c = ½ my machine can solve the same problem in half as many steps. Or, by taking c = 1/1000000, my machine can solve the same problem in only a millionth as many steps!
This result makes constant factors uninteresting (theoretically speaking: obviously in practice they still have some interest).
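The speedup is easiest to see with a toy analogue (a Python sketch, not a real Turing machine): the second scan below packs two elements into each "step", the same trick as widening the tape alphabet, and halves the step count without changing the Theta(n) growth.

```python
def scan(xs):
    """Sum xs, counting one element handled per step."""
    steps = 0
    total = 0
    for x in xs:          # one element per step
        total += x
        steps += 1
    return total, steps

def scan_sped_up(xs):
    """Same sum, but each step handles two elements (a 'wider alphabet')."""
    steps = 0
    total = 0
    i = 0
    while i < len(xs):
        total += xs[i]
        if i + 1 < len(xs):
            total += xs[i + 1]
        i += 2
        steps += 1
    return total, steps

data = list(range(1000))
t1, s1 = scan(data)
t2, s2 = scan_sped_up(data)
assert t1 == t2           # same answer either way
print(s1, s2)             # 1000 steps vs 500: a constant factor of 2
```

Halving the steps, like the theorem's factor c, never moves the algorithm out of its O(n) class.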
Constant factors are ignored because running time and memory consumption (the two properties most often measured using the O-notation) are much harder to reason about when considering constant factors.
If we define U(f(n)) to be the set of all functions g for which there exists an N such that for all n > N: g(n) <= f(n) (i.e. the same as O, but without the constant factor), it is much harder to show that an algorithm's running time is in U(f(n)) than in O(f(n)).
For one thing, we'd need an exact unit for measuring running time. Using a CPU instruction as the basic unit would work, but that would depend on the exact implementation of the algorithm as well as on the processor architecture it runs on.
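To make the difference concrete, here is a small Python probe of the hypothetical U class defined above (a finite numeric check, not a proof): g(n) = 2n is in O(n), but fails the constant-free U(n) condition at every n.

```python
LIMIT = 10_000

def in_U(g, f, limit=LIMIT):
    # Finite probe of "there exists N such that g(n) <= f(n) for all n > N":
    # true here iff the last observed violation occurs before `limit`.
    last_violation = max((n for n in range(1, limit + 1) if g(n) > f(n)),
                         default=0)
    return last_violation < limit

def in_O(g, f, c, limit=LIMIT):
    # Same probe, but with an allowed constant factor c.
    last_violation = max((n for n in range(1, limit + 1) if g(n) > c * f(n)),
                         default=0)
    return last_violation < limit

g = lambda n: 2 * n
f = lambda n: n
print(in_U(g, f))        # False: 2n exceeds n at every n, no N works
print(in_O(g, f, c=2))   # True: 2n <= 2 * n for all n
```

The constant c is exactly the slack that makes membership in O easy to establish and robust to implementation details.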
It's similar for memory consumption: different implementations of the same algorithm will differ in their memory consumption (by a constant factor). Furthermore, if an implementation uses a lot of pointers, the same implementation will use about twice as much memory on a 64-bit machine as on a 32-bit machine. But saying things like "this algorithm's memory consumption, when implemented in this C code, is in O(23 * n) on a 32-bit Intel PC and in O(42 * n) on a 64-bit PC" is just not useful.
Ignoring constants allows us to reason about the properties of an algorithm in an implementation- and platform-independent manner.
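As a rough CPython illustration of such constant factors (exact byte counts vary by interpreter, version, and platform), both containers below hold the same n integers and both grow linearly, but with different per-element constants: the list stores one machine pointer per slot (the int objects themselves live elsewhere and are not counted by getsizeof), while the array stores 8 raw bytes per element.

```python
import sys
from array import array

n = 10_000
as_list = list(range(n))           # n pointer-sized slots; elements are separate objects
as_array = array('q', range(n))    # n packed 8-byte signed integers

# Both sizes have the shape c * n + overhead for some constant c; the
# constants differ between representations, and the pointer-based one
# also differs between 32-bit and 64-bit builds.
print(sys.getsizeof(as_list), sys.getsizeof(as_array))
```

Big-O deliberately discards exactly these per-element constants, which is what lets "O(n) memory" describe every variant at once.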
If you are talking about this
http://www.cs.cornell.edu/courses/cs312/2004fa/lectures/lecture16.htm
When you analyze the running time (or some other aspect) of an algorithm and find that it is something like n^2 + k, then, when you are figuring out the big-O performance, it makes no sense to look at k, because you want to know the running time as n gets large. For large n, n^2 is so much larger than k that you can safely ignore it.
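Plugging numbers into a running time of that shape makes the dominance visible (taking k = 1_000_000 as an example constant):

```python
k = 1_000_000
for n in (100, 1_000, 100_000):
    total = n ** 2 + k
    # fraction of the total contributed by the n^2 term
    print(n, total, round(n ** 2 / total, 4))
# By n = 100_000 the n^2 term already accounts for more than 99.9%
# of the total, so dropping k leaves the growth rate unchanged.
```

The same reasoning is what justifies keeping only the fastest-growing term in any polynomial running time.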