I'm new to algorithmic analysis and I just want to verify my understanding.
For example:
for (i = 0; i < n; i++) {
}
It's clear that there is 1 assignment, n comparisons, and n increments.
The T(n) function is: T(n) = t0 + t1*n + t2*n = t0 + (t1 + t2)*n = c0 + c1*n
So the complexity is: O(n)
But in these other cases I need some advice:
for (i = 0; i <= n; i++) {
}
T(n) = t0 + t1*(n+1) + t2*(n+1) ???
for (i = 0; i < n - 1; i++) {
}
T(n) = t0 + t1*(n-1) + t2*(n-1) ???
for (i = 1; i < n; i++) {
}
T(n) = t0 + t1*(n-1) + t2*(n-1) ???
I think one would say all those for loops are just O(n) and that's the only thing that matters, but I want to know if those T(n) definitions are OK.
Yes, they are all O(n), and yes, your equations for T are correct.
In general, while T is useful for getting familiar with complexity analysis, it's not used otherwise. Most people are concerned with the O of a problem. Furthermore, once you find a set of algorithms with minimal (or optimal) O, the problem of finding the best from that set rarely hinges on T. This is because things like, say, cache coherency usually matter far more than the absolute number of comparisons or additions for most practical problems.
If you take two loops:
for (i = 0; i < n; i++) {}
and
for (i = 0; i <= n; i++) {}
the bottom loop will execute one more time than the top one (it will execute when i == n, whereas the top loop will skip this). So when computing the run-time, these are different. The top one will execute the comparison/increment n times exactly (when i is {0, 1, ..., n-1}); the bottom will execute it n+1 times (when i is {0, 1, ..., n-1, n}).
However, in Big-O notation you're looking for asymptotic behavior, i.e. what happens when n is really large. In that case, you keep only the term in n that grows the fastest. When n is very large, the extra iteration of the loop above won't matter much at all, so both loops are O(n).
One key thing to keep in mind with Big-O notation is that it most definitely does not capture everything about an algorithm. It's a good first-pass check: an algorithm that is O(n) is almost always going to be better than one that is O(n^3). But within the space of algorithms with the same (or almost the same) exponent, there can be wildly different performance when implemented on real systems.
O(n) means that there is a constant M > 0 such that the number of operations is < M*n (for sufficiently large n).
So in your case it is O(n). For the second case we could say it is O(n-1), but it is easier to say it is O(n), because the two are exactly the same as n -> infinity:
(n-1)/n -> 1
If T(n) is the exact number of operations, then you can't simplify your result.