I use an algorithm that performs division.
According to https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations, division has one of the following time complexities:
O(n log n log log n)
O(n log n 2^(O(log* n)))
O(n^2)
O(M(n))
So far I use this algorithm in Python, but I need to describe it independently of the platform. Which one of those time complexity definitions is the correct one today for Python (or similar language) users?
As mentioned, if you use the ALU or FPU for basic variable types, you can assume the division complexity is O(1), because the instruction overhead is usually comparable with the actual runtime of the division. If the HW platform used does not support division (some older MCUs, for example), then it must be computed in software (not as a single instruction) and this no longer applies. Also, if you have arbitrary precision variables (bignums), then the actual bit or digit width starts to matter and you are no longer in O(1).
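Here is a minimal sketch showing that effect in Python (my own illustration, not part of the original answer): Python integers are arbitrary precision, so `//` gets slower as the operands grow even though the code never changes. Absolute timings are machine-dependent.

```python
# Minimal sketch (assumption: CPython 3.x; timings are machine-dependent).
# Times x // y for operands of growing bit width to show that bignum division
# is not O(1), even though the source code stays the same.
import random
import timeit

for bits in (64, 1_000, 10_000, 100_000):
    x = random.getrandbits(bits) | (1 << (bits - 1))  # dividend, full width
    y = random.getrandbits(bits // 2) | 1             # divisor, ~half width, nonzero
    t = timeit.timeit(lambda: x // y, number=10)
    print(f"{bits:>7} bits: {t / 10:.3e} s per division")
```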
In that case, see #2.

#2. Most division algorithms use multiplication as a core function
The complexity of the division is then defined by the algorithm used and by the components it is built from. For example, if you have basic variables but have to compute the division in software (no HW divider support), then the operations used are still O(1), but the division built from them is not.

Let us take division by repeated subtraction as an example. If n is the bit width of the result, then this is O(2^n) for basic variables, because the quotient (and therefore the number of subtractions) can be up to 2^n. But if the variables are arbitrary precision, then the operations used are no longer O(1): the algorithm uses subtraction, comparison and increment, which are all O(n), so the division complexity becomes O(n*2^n) without any change in the algorithm or code... So you always have to know which complexity you are talking about:

O(2^n) operations (treating each operation as O(1))
O(n*2^n) bit operations (counting the O(n) cost of each operation on bignums)
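For reference, here is a minimal sketch of division by repeated subtraction (my own toy version for non-negative integers, not code from the original answer); the loop runs once per unit of the quotient, which is where the O(2^n) iteration count comes from.

```python
# Minimal sketch of division by repeated subtraction (toy version, non-negative
# integers only).  The loop body executes quotient-many times, and the quotient
# of an n-bit result can be up to 2^n, hence O(2^n) iterations.
def div_repeated_subtraction(a: int, b: int) -> tuple[int, int]:
    """Return (quotient, remainder) of a divided by b using only subtraction."""
    if b <= 0 or a < 0:
        raise ValueError("toy version handles a >= 0 and b > 0 only")
    q = 0
    while a >= b:   # comparison: O(n) on bignums
        a -= b      # subtraction: O(n) on bignums
        q += 1      # increment:   O(n) on bignums in the worst case
    return q, a

print(div_repeated_subtraction(1234, 7))  # (176, 2)
```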
This algorithm is not used in practice because it is painfully slow. Instead, more advanced approaches are used. Most division algorithms use multiplication as a core function, so Schönhage–Strassen and Karatsuba are relevant for division algorithms. See:
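As an illustration of the kind of multiplication such division algorithms are built on, here is a minimal Karatsuba sketch (my own toy version for non-negative integers, added for illustration; CPython's built-in `*` already switches to Karatsuba for large integers):

```python
# Minimal sketch of Karatsuba multiplication (toy version, non-negative integers).
# It replaces one n-bit multiplication by three multiplications of ~n/2 bits,
# giving O(n^1.585) instead of the schoolbook O(n^2).
def karatsuba(x: int, y: int) -> int:
    if x < 10 or y < 10:                     # small operands: use the builtin
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)  # split x into high/low m-bit halves
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    z0 = karatsuba(lo_x, lo_y)
    z2 = karatsuba(hi_x, hi_y)
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2
    return (z2 << (2 * m)) + (z1 << m) + z0

assert karatsuba(12345678901234567890, 98765432109876543210) == \
       12345678901234567890 * 98765432109876543210
```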
Now how to determine the complexity of a custom division?

Take the base complexity of your algorithm and multiply it by the slowest complexity of its core functions. In case the core functions are not used in each iteration, this can become very tricky... Do not forget to use the same meaning of n while combining/comparing complexities (for example, n as the bit width of the operands versus n as the value of an operand)!

If you do not have access to the source code of the algorithm used, then you can try to measure the division for a BIG set of numbers over a big enough range of n, and try to estimate the complexity from the graph of measured times... This is not reliable, because many things can screw it up, like multithreading, scheduling granularity, unknown n, etc.
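A minimal sketch of that measurement approach (my own illustration; it assumes the division under test is plain Python `//` and fits a simple power law, so treat the result as a rough hint only):

```python
# Minimal sketch: time division over a range of operand widths n and estimate
# the exponent k of a power law t ~ n^k from the slope of log(t) vs log(n).
# Assumptions: CPython `//` as the division under test; noisy, rough estimate.
import math
import random
import timeit

sizes = [2**k for k in range(10, 17)]                    # operand widths in bits
times = []
for n in sizes:
    x = random.getrandbits(2 * n) | (1 << (2 * n - 1))   # 2n-bit dividend
    y = random.getrandbits(n) | (1 << (n - 1))           # n-bit divisor
    times.append(timeit.timeit(lambda: x // y, number=20) / 20)

# Least-squares slope of log(t) against log(n) approximates the exponent k.
lx = [math.log(n) for n in sizes]
ly = [math.log(t) for t in times]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
k = sum((a - mx) * (b - my) for a, b in zip(lx, ly)) / sum((a - mx) ** 2 for a in lx)
print(f"estimated exponent: {k:.2f}")
```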