Python string search efficiency

Posted 2020-04-07 10:36

For very large strings (spanning multiple lines), is it faster to use Python's built-in string search or to split the large string (perhaps on \n) and iteratively search the smaller strings?

E.g., for very large strings:

for l in get_mother_of_all_strings().split('\n'):
    if 'target' in l:
        return True
return False

or

return 'target' in get_mother_of_all_strings()

6 Answers
三岁会撩人
#2 · 2020-04-07 11:15

The second way is faster. Splitting adds an extra O(n) pass just to find the delimiters, plus the cost of allocating a new string for every line and the list that holds them, and then a Python-level loop has to visit each sub-string to search it. Searching the big string directly is a single O(n) scan done in C.
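A rough way to see this (a sketch only; the haystack below is made up, since the actual string isn't given) is to time the split pass by itself against the direct search:

import timeit

# Hypothetical haystack; its size and the placement of 'target' are assumptions.
haystack = "some filler text\n" * 100_000 + "target\n"

# Splitting alone already costs a full pass plus one allocation per line,
# before any searching happens; the direct search is a single pass in C.
print("split only :", timeit.timeit(lambda: haystack.split("\n"), number=10))
print("search only:", timeit.timeit(lambda: "target" in haystack, number=10))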

Luminary・发光体
#3 · 2020-04-07 11:21

The second one is a lot faster; here are some measurements:

def get_mother_of_all_strings():
    return "abcdefg\nhijklmnopqr\nstuvwxyz\naatargetbb"

first: 2.00
second: 0.26
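The timing harness isn't shown above; a minimal sketch that could produce this kind of comparison (the repetition factor and run count below are assumptions, not the poster's exact setup) is:

import timeit

# The sample string repeated so the timings are measurable, with 'target' near the end
# (the repetition is an assumption for illustration).
text = "abcdefg\nhijklmnopqr\nstuvwxyz\n" * 100_000 + "aatargetbb"

def first():
    for l in text.split('\n'):
        if 'target' in l:
            return True
    return False

def second():
    return 'target' in text

print("first: ", timeit.timeit(first, number=100))
print("second:", timeit.timeit(second, number=100))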
疯言疯语
#4 · 2020-04-07 11:24

If you are only matching once to see whether the substring is in the string at all, both methods are about the same, and splitting into separate line-by-line searches only adds overhead; so the single large-string search is a bit faster.

If you have to do multiple matches, I would tokenize the string once, stuff the tokens into a set (or dictionary), and keep that in memory.

s = 'SOME REALLY LONG STRING'
substring = 'target'
tokens = set(s.split())      # build the token set once
found = substring in tokens  # each membership test is O(1) on average
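A hypothetical usage example: the set is built once, and every subsequent membership test is an average O(1) hash lookup instead of another scan of the whole string (the word list and queries below are made up for illustration):

s = "the quick brown fox jumps over the lazy dog " * 10_000
tokens = set(s.split())                    # one-time cost: split and hash every word

queries = ["fox", "cat", "dog", "unicorn"]
hits = {q: (q in tokens) for q in queries}  # each test is an O(1) set lookup
print(hits)  # {'fox': True, 'cat': False, 'dog': True, 'unicorn': False}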
等我变得足够好
#5 · 2020-04-07 11:26
>>> import timeit
>>> a = ...  # a really long string with multiple lines and target near the end
>>> timeit.timeit(stmt='["target" in x for x in a.split("\\n")]', setup='a = """%s"""' % a)
23.499058284211792
>>> timeit.timeit(stmt='"target" in a', setup='a = """%s"""' % a)
5.2557157624293325

So the large string is MUCH faster to search through than a split list of smaller ones.

家丑人穷心不美
#6 · 2020-04-07 11:31

A for loop in Python is slow, and splitting a large string is also slow, so searching the large string directly is much faster.

Deceive 欺骗
#7 · 2020-04-07 11:32

Certainly the second: I don't see any advantage in doing many searches over small strings instead of one search over the big string. You may skip some characters thanks to the shorter lines, but the split operation has its own costs (searching for \n, creating n different strings, creating the list), and the loop runs in Python.

The string __contains__ method is implemented in C and is therefore noticeably faster.
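For example, the in operator on a str dispatches to that C-level str.__contains__ slot:

text = "searching for the target in this line"

# "target" in text and text.__contains__("target") run the same C-level scan.
print("target" in text)             # True
print(text.__contains__("target"))  # True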

Also consider that the second method aborts as soon as the first match is found, while the first one splits the whole string before it even starts searching inside it.

This is rapidly proven with a simple benchmark:

import timeit

prepare = """
with open('bible.txt') as fh:
    text = fh.read()
"""

presplit_prepare = """
with open('bible.txt') as fh:
    text = fh.read()
lines = text.split('\\n')
"""

longsearch = """
'hello' in text
"""

splitsearch = """
for line in text.split('\\n'):
    if 'hello' in line:
        break
"""

presplitsearch = """
for line in lines:
    if 'hello' in line:
        break
"""


benchmark = timeit.Timer(longsearch, prepare)
print("IN on big string takes:", benchmark.timeit(1000), "seconds")

benchmark = timeit.Timer(splitsearch, prepare)
print("IN on split string takes:", benchmark.timeit(1000), "seconds")

benchmark = timeit.Timer(presplitsearch, presplit_prepare)
print("IN on pre-split string takes:", benchmark.timeit(1000), "seconds")

The result is:

IN on big string takes: 4.27126097679 seconds
IN on split string takes: 35.9622690678 seconds
IN on pre-split string takes: 11.815297842 seconds

The bible.txt file is actually the Bible; I found it here: http://patriot.net/~bmcgin/kjvpage.html (text version)
