I have a counting algorithm for which I am trying to get a general big-O description. It is horribly nested and horribly exponential. Here it is:
1. For each T_i in T
2. For k = 1 to max_k
3. For each of 2^k*(n choose k) items
4. For each t in T_i
5. check if the item is in t...etc.
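For concreteness, the structure above could be sketched in Python roughly as follows. This is only an illustration under assumptions: I am taking "2^k * (n choose k) items" to mean each k-subset of n positions paired with one of 2^k sign/value vectors, and the membership check on line 5 is a stand-in.

```python
from itertools import combinations, product

def count_items(T, n, max_k):
    """Hypothetical sketch of the nested loops; names and the item
    representation are assumptions, not the actual algorithm."""
    total = 0
    for T_i in T:                              # line 1: each partition cell
        for k in range(1, max_k + 1):          # line 2: k = 1 .. max_k
            # line 3: 2^k * (n choose k) items, enumerated as
            # (k-subset of positions, one of 2^k sign vectors)
            for subset in combinations(range(n), k):
                for signs in product((0, 1), repeat=k):
                    item = (subset, signs)
                    for t in T_i:              # line 4: each t in T_i
                        if item in t:          # line 5: stand-in check
                            total += 1
    return total
```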
Here is a line-by-line estimate of the running time:
- This is a simple partitioning and I'm going to just give it a constant c1.
- max_k is a small number, always less than n, perhaps around 4 or 5. I will refer to it as k below.
- This loop always runs 2^k*(n choose k) times
- Treating line 1 as constant, this line fires at most 2^n times in total in the worst case, but generally only a fraction of that, so I will call it (2^n)/c2.
- This is the simple if-statement operation inside all these loops, so c3.
Multiplying all these together gives:
c1 * k * 2^k * (n choose k) * (2^n)/c2 * c3
Since I want a big-O representation, ignoring constants gives:
k * 2^k * (n choose k) * (2^n)
It is known that (n choose k) is bounded above by (n * e / k)^k, so:
O(k * 2^k * (n * e / k)^k * (2^n))
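To sanity-check that bound, and the intuition that 2^n dominates the k-dependent factors, here is a quick numeric check. The values n = 30 and max_k = 5 are assumed example sizes, not taken from the actual problem:

```python
import math

n, max_k = 30, 5
for k in range(1, max_k + 1):
    binom = math.comb(n, k)            # (n choose k)
    bound = (n * math.e / k) ** k      # the (n*e/k)^k upper bound
    assert binom <= bound              # the stated inequality holds
    # compare the k-dependent factor against 2^n:
    print(k, 2 ** k * binom, "vs 2^n =", 2 ** n)
```

For fixed small k, 2^k * (n choose k) grows only polynomially in n, while 2^n grows exponentially, which is consistent with treating the k-terms like lower-order factors.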
My question is: what can I ignore here? 2^n is certainly the dominating term, since n is always larger than k, and typically much larger. Can this be simplified to O(2^n)? Or O(2^terrible)? Or should I leave in the 2^k, as in O(2^k * 2^n)? (Or leave all the terms in?)
My understanding is that if k or max_k could compete with or surpass n, then those terms would be vital. But since they are always dominated, can they be discarded like the lower-order terms of polynomial running times? I suppose all the exponential running time mess is confusing me. Any advice is greatly appreciated.