How do you calculate big O on a function with a hard limit?

Posted 2019-07-15 08:00

As part of a programming assignment I saw recently, students were asked to find the big O value of their function for solving a puzzle. I was bored, and decided to write the program myself. However, my solution uses a pattern I saw in the problem to skip large portions of the calculations.

Big O describes how the running time grows as n scales, but here, once n crosses the point where the pattern resets, the running time drops back to small values as well. My first thought was that the complexity is O(n log n % k), where k+1 is the point at which it resets. Another thought is that, because there is a hard limit, the value is O(1), since that is the big O of any constant. Is one of those right, and if not, how should the limit be represented?

As an example of the reset, the k value is 31336: at n = 31336 it takes 31336 steps, but at n = 31337 it takes 1.

The code is:

def Entry(a1, q):
    # The sequence repeats every 31336 terms, so only the last partial
    # cycle (q % 31336 terms) ever needs to be generated.
    F = [a1]
    lastnum = a1
    q1 = q % 31336
    rows = q // 31336  # number of complete cycles skipped
    for i in range(1, q1):
        lastnum = (lastnum * 31334) % 31337
        F.append(lastnum)

    F = MergeSort(F)
    print(lastnum * rows + F.index(lastnum) + 1)

MergeSort is a standard merge sort with O(n log n) complexity.
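The snippet above calls MergeSort without defining it. For completeness, here is a minimal sketch of what that helper could look like, assuming only what the question states: a standard top-down merge sort that returns a new sorted list. Only the name and the single-list signature come from the call above; everything else is an illustration.

def MergeSort(a):
    # Standard top-down merge sort: split, sort each half recursively, merge.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = MergeSort(a[:mid])
    right = MergeSort(a[mid:])
    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged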

3 Answers
爱情/是我丢掉的垃圾
#2 · 2019-07-15 08:25

It's O(1), and you can derive this from the definition of big O. If f(x) is the complexity of your solution, then

|f(x)| ≤ M · |g(x)|

with

g(x) = 1

for any M > 470040 (that's n log n for n = 31336) and all x > 0. And this implies, from the definition, that

f(x) = O(g(x)) = O(1)

冷血范
#3 · 2019-07-15 08:38

Well, an easy way that I use to think about big-O problems is to imagine n as so big it may as well be infinity. If you don't get particular about byte-level operations on very big numbers (because computing q % 31336 gets more expensive as q goes to infinity, so it is not actually a constant-time step), then your intuition is right about it being O(1).

Imagining q as close to infinity, you can see that q % 31336 is obviously between 0 and 31335, as you noted. This fact limits the number of array elements, which limits the sort time to be some constant amount (n * log(n) ==> 31335 * log(31335) * C, for some constant C). So it is constant time for the whole algorithm.
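One way to see that cap concretely is to count how many values the loop ever puts into F as q grows. This is just a sketch that mirrors the loop from the question; the helper name count_elements is made up for illustration.

def count_elements(q):
    # Mirrors the loop in Entry: the list starts with a1 and then gets
    # at most (q % 31336) - 1 appended terms, so its length never
    # exceeds 31335 no matter how large q becomes.
    q1 = q % 31336
    return 1 + max(0, q1 - 1)

for q in (10, 31335, 31336, 31337, 10**9, 10**18):
    print(q, count_elements(q))

However large q gets, the list handed to MergeSort never holds more than 31335 values, which is why the sort cost stays under the 31335 * log(31335) * C bound above.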

But, in the real world, multiplication, division, and modulus all scale with the size of their inputs. You can look up the Karatsuba algorithm if you are interested in figuring that out. I'll leave it as an exercise.
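If you want to see that effect without working through Karatsuba, a rough sketch is to time the same modulus on operands of growing bit length (this assumes CPython's arbitrary-precision integers and the standard random and timeit modules):

import random
import timeit

# Time q % 31336 for q of increasing bit length. Big-integer division is
# not a constant-time step, so the cost grows with the size of q.
for bits in (10**3, 10**4, 10**5, 10**6):
    q = random.getrandbits(bits)
    t = timeit.timeit(lambda: q % 31336, number=1000)
    print(bits, t)

The timings should grow roughly linearly with the bit length of q, which is exactly the cost the constant-time model above ignores.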

劫难
#4 · 2019-07-15 08:47

If there are a few different instances of this problem, each with its own k value, then the complexity of the method is not O(1), but instead O(k·ln k).
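Roughly: for a given k the loop performs at most k − 1 multiply-and-mod steps, which is O(k), and the sort then handles at most k elements, which is O(k log k); the sum O(k) + O(k log k) is dominated by the sort, so each instance costs O(k log k).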
