The code:

    int temp = 0;
    for (int h = 1; h <= n; h = h * 2)
    {
        for (int i = 1; i <= n; i++)
        {
            for (int j = 1; j <= n; j++)
            {
                for (int k = 1; k < i; k++)
                {
                    temp++;
                }
            }
        }
    }
According to my calculations, big-O is lg n * n^4.
Kindly guide.
Edit: Thank you all for the replies. I understood where I was making the mistake. I really appreciate everyone who participated.
for (int h = 1; h <= n; h = h * 2) // h doubles each pass, hence executed about log2(n) times
{
    for (int i = 1; i <= n; i++) // increment is one, executed n times
    {
        for (int j = 1; j <= n; j++) // increment is one, executed n times
        {
            // executed i - 1 times; see below
            for (int k = 1; k < i; k++)
            {
                temp++;
            }
        }
    }
}
The k loop is executed i - 1 times. The maximum value of i is n, so you can treat it as being executed n times on every pass (an over-approximation):

complexity = log(n) * n * n * n = log(n) * n^3

(The base of the logarithm only changes a constant factor, so it does not matter asymptotically.)
Another way to look at it: let's exchange the i loop and the j loop.
for (int h = 1; h <= n; h = h * 2) // h doubles each pass, hence executed about log2(n) times
{
    for (int j = 1; j <= n; j++) // increment is one, executed n times
    {
        // see below
        for (int i = 1; i <= n; i++)
        {
            for (int k = 1; k < i; k++)
            {
                temp++;
            }
        }
    }
}
Now look at it: the total number of executions of the (i, k) pair of loops is 0 + 1 + ... + (n - 1) = n(n-1)/2 (the k loop runs i - 1 times, not i times, because of the strict k < i). So now what is the complexity?

h * j * (i, k) = log(n) * n * n(n-1)/2 = O(log(n) * n^3)
Just for kicks, I ran this code with various values of n:
n temp O?
----------- -------------- ---------
1 0
10 1800 1.8 x n^3
100 3465000 3.5 x n^3
1000 4995000000 5.0 x n^3
10000 6999300000000 7.0 x n^3 (and took a LONG time!)
conclusion: log(n) * n^3
Knightrider is basically right, but if you want correct numbers, you can't just say "I take the biggest value of i". Yes, it is correct in terms of f(n) = O(...), but you could also write f(n) = O(n^15), and that would be correct too.
The last loop is executed 0 times, then 1 time, then 2 times, and so on, up to n - 1 times (k runs from 1 to i - 1), which is 0 + 1 + 2 + ... + (n - 1), i.e. n(n-1)/2. Now you can multiply it out and get n(n-1)/2 = n^2/2 - n/2. Constant factors can be ignored in asymptotic analysis, so n^2/2 - n/2 = Theta(n^2 - n), which can be further simplified to n^2 - n = Theta(n^2).

So after all, the result is not changed, BUT you have to be sure. The final result is n^3 * log_2(n), as described by knightrider.
Analysing the algorithm's complexity using Sigma notation
For the sake of completeness: when analyzing the time complexity of nested and co-dependent loops such as in your algorithm, Sigma notation can be a good tool:

    sum_{h=1}^{⌈log_2(n)⌉} sum_{i=1}^{n} sum_{j=1}^{n} sum_{k=1}^{i-1} 1
        = ⌈log_2(n)⌉ * sum_{i=1}^{n} n * (i - 1)
        = ⌈log_2(n)⌉ * n * (n^2 - n)/2
        ≤ ⌈log_2(n)⌉ * n^3,

where ⌊x⌋ and ⌈x⌉ are the floor and ceiling functions. (Strictly, the h loop runs ⌊log_2(n)⌋ + 1 times, which equals ⌈log_2(n)⌉ whenever n is not an exact power of two.)
From the above, it's apparent that an upper bound on the asymptotic behaviour of the algorithm is O(n^3 log_2(n))
.
Using Sigma notation to estimate the actual number of iterations
Sigma notation analysis is, in addition to being a rigorous tool for Big-O(-Ω, -Θ) analysis, also useful if we're interested in counting or estimating the actual number of iterations of our algorithm.
We compare the estimated number of iterations (using the formula prior to the ≤ symbol above) with the actual number of iterations as presented in @JohnHascell's answer.
// our formula (pseudo-code / matlab-compliant)
numIt(n) = n*ceil(log2(n))*(n^2-n)/2;
// comparison with actual count:
---------------------------------------------------------
n actual # of iter. estimated # of iter.
(from @JohnHascell) (from formula above)
------------ ------------------- --------------------
1 0 0
10 1 800 1 800
100 3 465 000 3 465 000
1000 4 995 000 000 4 995 000 000
10000 6 999 300 000 000 6 999 300 000 000
---------------------------------------------------------
We see that the count from the formula conforms perfectly with the actual count; in this case, the estimation is in fact an actual count.