Can somebody give a simple program or algorithm whose time complexity is O(max(m, n))? I am trying to understand asymptotic notation. I followed some tutorials and understood what they explained, i.e. O(n) and O(n^2).
But now I want to understand the time complexity O(max(m, n)) and how it is calculated.
Please give a sample program or algorithm to demonstrate this.
A common theorem to prove when studying big-O notation for the first time is that
Θ(max{m, n}) = Θ(m + n)
This holds because max{m, n} ≤ m + n ≤ 2 · max{m, n}. In other words, any algorithm whose runtime is O(max{m, n}) also has runtime O(m + n), and vice versa, so any algorithm with this time complexity will fit the bill.
As a specific example of this, consider the Knuth-Morris-Pratt string-matching algorithm, which takes in two strings and returns whether the first string is a substring of the second. The runtime is Θ(m + n) = Θ(max{m, n}), meaning that the runtime is linear in the length of the longer of the two strings.
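To make this concrete, here is a minimal KMP sketch in Python (my own illustration, not any particular reference implementation), where `needle` has length m and `haystack` has length n:

```python
def kmp_search(needle, haystack):
    """Return True if needle occurs in haystack, using KMP.

    Runs in O(m + n) = O(max(m, n)) time, where m = len(needle)
    and n = len(haystack).
    """
    if not needle:
        return True
    # Build the failure function: fail[i] is the length of the longest
    # proper prefix of needle[:i+1] that is also a suffix of it.
    fail = [0] * len(needle)
    k = 0
    for i in range(1, len(needle)):
        while k > 0 and needle[i] != needle[k]:
            k = fail[k - 1]
        if needle[i] == needle[k]:
            k += 1
        fail[i] = k
    # Scan the haystack, never moving backwards in it.
    k = 0
    for ch in haystack:
        while k > 0 and ch != needle[k]:
            k = fail[k - 1]
        if ch == needle[k]:
            k += 1
        if k == len(needle):
            return True
    return False
```

The failure-function pass is linear in m and the scan is linear in n, which is where the m + n comes from.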
I apologize if this doesn't give something that intuitively has runtime max{m, n}, but mathematically this does work out.
Hope this helps!
The one I can think of is Python's itertools.zip_longest function (called izip_longest in Python 2). From the documentation:
Make an iterator that aggregates elements from each of the iterables.
If the iterables are of uneven length, missing values are filled-in
with fillvalue. Iteration continues until the longest iterable is
exhausted.
For example:
In [1]: from itertools import zip_longest
In [2]: list(zip_longest([1, 2, 3, 4, 5, 6, 7], ['a', 'b', 'c']))
Out[2]: [(1, 'a'), (2, 'b'), (3, 'c'), (4, None), (5, None), (6, None), (7, None)]
In [3]: list(zip_longest([1, 2], ['a', 'b', 'c']))
Out[3]: [(1, 'a'), (2, 'b'), (None, 'c')]
In [4]: list(zip_longest([1, 2, 3], ['a', 'b', 'c']))
Out[4]: [(1, 'a'), (2, 'b'), (3, 'c')]
It should be clear why this is intuitively an O(max(m, n)) operation rather than O(m + n), as far as I know: when m > n, increasing n (as long as it stays below m) doesn't increase the time required.
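A hand-rolled version over sequences (a sketch of the idea, not the library's actual implementation, which works on arbitrary iterables) makes the max(m, n) iteration count explicit:

```python
def my_zip_longest(a, b, fillvalue=None):
    """Yield pairs until the longer of the two sequences is exhausted.

    The loop runs exactly max(len(a), len(b)) times, so the cost is
    O(max(m, n)) -- which is also O(m + n).
    """
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else fillvalue
        y = b[i] if i < len(b) else fillvalue
        yield (x, y)
```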
The simplest example is

for i in range(max(m, n)):
    print('a')

From theory: O(max(m, n)) is just O(m + n).
A "real life" example of O(max(m, n)) may be an algorithm which, for two unsorted arrays of sizes m and n respectively, finds the biggest element of each.
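A minimal sketch of that algorithm (my own illustration, assuming both arrays are nonempty):

```python
def largest(arr):
    # One linear scan: O(len(arr)) comparisons.
    best = arr[0]
    for x in arr[1:]:
        if x > best:
            best = x
    return best

def biggest_of_both(a, b):
    # Scanning both arrays costs m + n comparisons in total,
    # which is Theta(max(m, n)) up to a constant factor of 2.
    return largest(a), largest(b)
```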
I think the best answer to your question is Robert Harvey's comment. In my opinion, a good example of an algorithm where that kind of bound is used is DFS (depth-first search).
I hope this clears your doubts:
DFS examines every vertex and every edge of a graph. Let n denote the number of vertices in the graph and m the number of edges.
Based on this observation, you can derive an upper bound of O(max(n, m)) for the time complexity of DFS.
Just notice that there are graphs for which you cannot bound the time complexity of DFS by O(n) alone; a complete graph is an example.
Also, there are graphs for which you cannot bound the time complexity of DFS by O(m) alone; a null graph (one with no edges) is an example.
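To make the bound concrete, here is a minimal iterative DFS sketch (my own illustration; the graph is given as an adjacency list, and disconnected graphs are handled by restarting from every unvisited vertex):

```python
def dfs(adj):
    """Depth-first search over a graph given as an adjacency list
    (a dict mapping each vertex to a list of neighbours).

    Every vertex is pushed onto the stack at most once and every edge
    is examined a constant number of times, so the running time is
    O(n + m) = O(max(n, m)).
    """
    visited = set()
    order = []
    for start in adj:               # covers disconnected graphs too
        if start in visited:
            continue
        stack = [start]
        visited.add(start)
        while stack:
            v = stack.pop()
            order.append(v)
            for w in adj[v]:        # each edge looked at O(1) times
                if w not in visited:
                    visited.add(w)
                    stack.append(w)
    return order
```

On a complete graph m dominates n, and on a null graph n dominates m, which is why neither O(n) nor O(m) alone suffices as a bound.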