I know this isn't strictly a programming question, but it is a computer science question so I'm hoping someone can help me.
I've been working on my Algorithms homework, figuring out the Big-Oh, Big-Omega, Theta, etc., of several algorithms. I'm proving them by finding their C and N_0 values and all is going well.
However, I've come to my last two problems in the set and I'm struggling to figure out how to do them (and Google isn't helping much).
I haven't had to figure out the Big-Oh/Omega of summations before.
My last two problems are:
- Show that Σ (i=1 to n) of i^2 is O(n^3)
and
- Show that Σ (i=1 to n) of ⌈log_2 i⌉ is Ω(n log n)
My question is: how do I show that?
For example, in the first one, intuitively I can't see how that summation of i^2 is O(n^3). The second one confuses me even more. Can someone explain how to show the Big-Oh and Big-Omega of these summations?
The simplest approach that jumps out to me is a proof by induction.
For the first one, essentially you need to show that

    sum(i=1 to n) i^2 <= k * n^3

for some constant k and all sufficiently large n. If we use the generalized principle of induction and take a base case of n=1 and k=2, we get

    1 < 2 * 1

Now of course take the inductive hypothesis; then we know that

    sum(i=1 to n) i^2 < k * n^3

and with a bit of fun math we get to

    sum(i=1 to n) i^2 + (n+1)^2 < k * n^3 + (n+1)^2
Now show

    k * n^3 + (n+1)^2 < k * (n+1)^3
    k * n^3 + n^2 + 2n + 1 < k * n^3 + k * (3n^2 + 3n + 1)
    k * n^3 < k * n^3 + (3k-1) * n^2 + (3k-2) * n + (k-1)

which holds, since with k = 2 every term added on the right is positive for n >= 1.
Therefore, our original result is correct.
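Not part of the proof, but if you want to convince yourself numerically, here's a quick Python sanity check of that k = 2 bound (a sketch of my own; the function name is just mine):

    # Sanity check (not a proof): sum(i=1..n) i^2 <= 2 * n^3, with k = 2 as above.
    def check_square_sum_bound(max_n=1000, k=2):
        total = 0
        for n in range(1, max_n + 1):
            total += n * n                   # running sum of i^2 up to n
            assert total <= k * n ** 3, n    # fails loudly if the bound breaks
        return True

    print(check_square_sum_bound())  # True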
For the second proof you need to show that

    sum(i=1 to n) log_2(i) >= k * n * log(n)

which I'll leave as an exercise for the reader ;). The main step, though, is

    sum(i=1 to n) log_2(i) + log_2(n+1) >= k * n * log(n) + k * log(n+1)

for some k, so clearly k is 1.

Probably, your CPU will multiply 32-bit integers in constant time. But big-Oh doesn't care about "less than four billion", so maybe you have to look at multiplication algorithms?
According to Wolfram, the "traditional" multiplication algorithm is O(n^2). Although n in this case is the number of digits, and thus really log(n) in the actual number. So I should be able to square the integers 1..n in time O(n log n). Summing is O(log(n^2)), which is obviously O(log(n)), for an overall complexity of O(n log n).
So I can quite understand your confusion.
Σ (i=1 to n) i^2 = n(n+1)(2n+1)/6, which is O(n^3).
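If you want to double-check that identity, a throwaway Python check (mine, not part of the answer) compares it against a brute-force sum:

    # Closed form n(n+1)(2n+1)/6 vs. brute-force summation, for small n.
    for n in range(1, 200):
        assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6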
Note that

    (n!)^2 = (1 * n) (2 * (n-1)) (3 * (n-2)) ... ((n-1) * 2) (n * 1)
           = Π (i=1 to n) i * (n+1-i)
           >= Π (i=1 to n) n
           = n^n.

[E.g., because for each i=1 to n, (i-1)(n-i) >= 0, which rearranges to i * (n+1-i) >= n. Compare with Graham/Knuth/Patashnik, section 4.4.]
Thus, n! >= n^(n/2), and therefore

    Σ (i=1 to n) log i = log Π (i=1 to n) i = log n! >= log n^(n/2) = (n/2) log n,

which is Ω(n log n).
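You can also watch that inequality hold numerically (a quick sketch of my own, taking logs base 2; the tolerance is just for floating point):

    import math

    # Check log2(n!) >= (n/2) * log2(n), i.e. n! >= n**(n/2), for small n.
    for n in range(1, 300):
        log_fact = sum(math.log2(i) for i in range(1, n + 1))  # = log2(n!)
        assert log_fact >= (n / 2) * math.log2(n) - 1e-9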
http://en.wikipedia.org/wiki/Big_O_notation
N repetitions of g(m) = O(f(m)) give

    O(N * f(m))

More usefully here: the sum for i=1..N of g(i) is

    O(N * f(N))

if g(n) = O(f(n)) and f is monotonic.

Definition: g(n) = O(f(n)) if some c, m exist so that for all n > m,

    g(n) <= c * f(n)

The sum is for i=1..N of g(i). If f is monotonic in i, this means every term is <= c * f(i) <= c * f(N). So the sum is less than

    N * c * f(N)

so the sum is O(N * f(N)) (witnessed by the same c, m that make g(n) = O(f(n))). Of course, log_2(x) and x^2 are both monotonic.
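As a concrete instance of that argument (my own choice of g, f, c for illustration):

    # g(n) = n^2 is O(f(n)) with f(n) = n^2, taking c = 1, m = 0.
    # The bound then says: sum(i=1..N) g(i) <= N * c * f(N) = N^3.
    def g(n): return n * n
    def f(n): return n * n

    c = 1
    for N in range(1, 500):
        assert sum(g(i) for i in range(1, N + 1)) <= N * c * f(N)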
My guess is that what the question statement means is that you're summing the results of some calculation whose running time is proportional to i^2 in the first case, and proportional to log_2(i) in the second. In both cases, the running time of the overall summation is dominated by the larger values of i, and thus the overall big-O evaluation of both will be N * O(f), where f is the function you're summing.
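To make that concrete, here's a hypothetical workload (a toy example of mine, not from the question) where term i costs about i^2 steps; counting the steps shows the total staying within N * N^2:

    # Term i costs ~i^2 "steps"; the total is dominated by the large i
    # and stays within N * N^2 = N^3.
    def steps_for_sum(N):
        steps = 0
        for i in range(1, N + 1):
            steps += i * i        # work proportional to i^2 per term
        return steps

    for N in (10, 100, 1000):
        print(N, steps_for_sum(N), N ** 3)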