Premise:
This Wikipedia page suggests that the computational complexity of "Schoolbook" long division is O(n^2).
Deduction:
Instead of taking two n-digit numbers, if I take one n-digit number and one m-digit number, then the complexity would be O(n*m).
Contradiction:
Suppose you divide 100000000 (n digits) by 1000 (m digits): you get 100000, which takes six steps to arrive at. Now, if you divide 100000000 (n digits) by 10000 (m digits), you get 10000, which takes only five steps.
Conclusion:
So, it seems that the order of computation should be something like O(n/m).
Question:
Who is wrong, me or Wikipedia, and where?
You are wrong, and it is because you are not counting operations on every digit. Instead, you're counting as if you can do a subtraction on an N-digit number in O(1). In general, you can't; it takes O(N).
Try it again with numbers like 12341234 and 43214321.
Big O is the limiting complexity for all cases, not for a particularly easy one.
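To make that concrete, here is a minimal sketch (my own illustration, not taken from Wikipedia or the answer above) that runs schoolbook long division on decimal digits and charges roughly m digit operations for each comparison or subtraction against the m-digit divisor. The helper name schoolbook_divide and the exact charging scheme are assumptions for illustration only.

    def schoolbook_divide(a: int, b: int):
        """Schoolbook long division of a by b, with a rough digit-operation count."""
        assert b > 0
        m = len(str(b))              # digits in the divisor
        ops = 0                      # digit-level operation counter
        remainder = 0
        quotient_digits = []
        for d in str(a):             # bring down each dividend digit in turn
            remainder = remainder * 10 + int(d)
            q = 0
            while remainder >= b:    # find the quotient digit by repeated subtraction
                remainder -= b
                q += 1
                ops += m             # subtracting an m-digit number touches ~m digits
            ops += m                 # the final comparison also scans ~m digits
            quotient_digits.append(q)
        quotient = int("".join(map(str, quotient_digits)))
        return quotient, remainder, ops

    for a, b in [(100000000, 1000), (100000000, 10000), (12341234, 4321)]:
        q, r, ops = schoolbook_divide(a, b)
        print(a, "/", b, "=", q, "rem", r, "~", ops, "digit ops")

On 100000000 / 1000 versus 100000000 / 10000 this already reports more digit work for the larger divisor, and on inputs without convenient trailing zeros (like 12341234 / 4321) the count grows roughly like n*m, not n/m.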
I think (haven't proved it, so not sure) that your deduction is a true statement, but it doesn't actually follow from your premise. If all you know is that dividing two n-digit numbers is O(n^2), all you can deduce about an n-digit and an m-digit number is that it is O(max(n,m)^2), not that it is O(n*m). That's because an n-digit number can also be considered an (n+1)-digit number with a leading 0, so you can pad the shorter operand up to max(n,m) digits and replace the operation with one whose complexity we already know.
For an example which is not O(nm): using long multiplication, calculating A^2 + B^2 is O(n^2) if A and B are both n-digit numbers. However, it is not O(nm) if A is n digits and B is m digits. To see this, fix B = 1, hence m = 1, and note that calculating A^2 + 1 by long multiplication certainly is not O(log(A))[*].
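A quick way to see this numerically (my own digit-counting sketch; long_mult_ops is a made-up helper): schoolbook multiplication of an n-digit number by an m-digit number uses about n*m single-digit multiplications, so A^2 + B^2 costs roughly n^2 + m^2, which is dominated by n^2 however small m is.

    def long_mult_ops(x: int, y: int) -> int:
        """Single-digit multiplications used by schoolbook long multiplication."""
        return len(str(x)) * len(str(y))

    A, B = 12345678, 1                                  # n = 8 digits, m = 1 digit
    cost = long_mult_ops(A, A) + long_mult_ops(B, B)    # cost of A^2 + B^2
    print(cost)                                         # 65: dominated by n^2, nowhere near O(n*m) = O(n)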
Your "contradiction" does not contradict either your premise or your deduction. Big-O notation is about asymptotic behaviour as something tends to infinity. The fact that f(3) = 12 for some function f tells you absolutely nothing about big-O limits for f. Even if f(n) = 12 for all odd n, that still tells you nothing about big-O estimates, because you don't know how fast the function grows on even numbers. The existence of fast special cases doesn't mean the function is fast.
[*] Actually, I've made an abuse of notation myself there. If a two-variable function f(n,m) is O(nm), it doesn't follow (as I've suggested) that f(n,1) is O(n). But it does follow that for sufficiently large m, f(n,m) is O(n), so replace 1 with "some large constant or other".
Well, consider this case: sorting the array [1, 2, 3, 4, ..., 10000000] takes exactly one step. Hardly the ~n log n steps you would expect from an optimal sorting algorithm.
The flaw in your logic is that Big-O is an asymptotic bound on all inputs. You can't take one easy input and deduce a contradiction from it.
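For instance, with an adaptive algorithm like insertion sort (a sketch of my own, counting comparisons rather than abstract "steps"), an already-sorted input costs about n comparisons while a reversed one costs about n^2/2; the cheap special case says nothing about the asymptotic bound.

    def insertion_sort_comparisons(values):
        """Insertion sort; returns the number of element comparisons performed."""
        a = list(values)
        comparisons = 0
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0:
                comparisons += 1         # one comparison of key against a[j]
                if a[j] > key:
                    a[j + 1] = a[j]      # shift the larger element right
                    j -= 1
                else:
                    break
            a[j + 1] = key
        return comparisons

    n = 1000
    print(insertion_sort_comparisons(range(n)))          # ~n comparisons on sorted input
    print(insertion_sort_comparisons(range(n, 0, -1)))   # ~n^2/2 comparisons on reversed input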
Well, the Cormen book says this:

Consider the ordinary "paper and pencil" algorithm for long division: dividing a by b, which yields a quotient q and remainder r. Show that this method requires O((1 + lg q) lg b) bit operations.
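One way to see where that bound comes from is the binary version of the same schoolbook algorithm. Here is my own rough sketch (not CLRS's solution): only the iterations that can emit a quotient bit do a comparison/subtraction, the running remainder stays below 2b and so has about lg b bits, and there are about 1 + lg q such iterations, giving O((1 + lg q) lg b) bit operations in total.

    def binary_long_division(a: int, b: int):
        """Schoolbook (restoring) binary long division of a by b.

        Returns (quotient, remainder, bit_ops), where bit_ops charges roughly
        lg b bit operations per comparison/subtraction against the divisor.
        Illustration only, not an exact cost model.
        """
        assert b > 0
        bit_ops = 0
        remainder = 0
        quotient = 0
        for i in reversed(range(a.bit_length())):       # scan dividend bits, high to low
            remainder = (remainder << 1) | ((a >> i) & 1)
            quotient <<= 1
            if remainder.bit_length() >= b.bit_length():
                bit_ops += b.bit_length()               # ~lg b bits compared/subtracted
                if remainder >= b:                      # can we emit a 1 bit here?
                    remainder -= b
                    quotient |= 1
        return quotient, remainder, bit_ops

    q, r, ops = binary_long_division(2342677, 39)
    print(q, r, ops)    # 60068 25, plus a bit-operation count of the expected order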
Yes, it is O(N^2), because of all the multiplication and subtraction (plus the trial division for each quotient digit) that has to be done; as the numbers grow, the running time increases in a quadratic pattern. For example, 4/2 takes one unit of time, while 2342677/39 takes much, much more.