For very large strings (spanning multiple lines), is it faster to use Python's built-in string search or to split the large string (perhaps on \n) and iteratively search the smaller strings?
E.g., for very large strings:
for l in get_mother_of_all_strings().split('\n'):
    if 'target' in l:
        return True
return False
or
return 'target' in get_mother_of_all_strings()
The second way is faster. Splitting adds an extra O(n) pass to find the delimiters, plus allocating memory for every sublist, and then you still have to iterate over the sublists in a Python-level loop and search each one, while searching the big string directly is a single O(n) scan.
The second one is a lot faster; a quick timing comparison, sketched below, bears this out.
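A rough sketch of such a measurement, assuming a synthetically built multi-line string; the data, sizes, and timing setup here are illustrative, not the original figures:

import time

# Synthetic stand-in for the real multi-line input.
big = ('some filler line of text\n' * 500_000) + 'the target line\n'

start = time.perf_counter()
found = any('target' in line for line in big.split('\n'))
print('split + loop :', time.perf_counter() - start, found)

start = time.perf_counter()
found = 'target' in big
print('whole string :', time.perf_counter() - start, found)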
If you are only matching once, to see whether the substring is in the string at all, both methods are about the same; splitting it into separate line-by-line searches just adds overhead, so the whole-string search is a bit faster.
If you have to do multiple matches, I would tokenize the string once, stuff the tokens into a dictionary or set, and keep that in memory.
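A minimal sketch of that idea, assuming simple whitespace tokenization and exact-word lookups are what you need (the stub function and sample words are illustrative):

def get_mother_of_all_strings():
    # Stand-in for the real data source from the question.
    return 'alpha beta target gamma\n' * 100_000

tokens = set(get_mother_of_all_strings().split())  # one pass to tokenize

# Every later lookup is an average-case O(1) membership test
# instead of another full scan of the big string.
print('target' in tokens)   # True
print('missing' in tokens)  # False

Building the set costs one pass over the string, but each subsequent match no longer rescans it.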
So the large string is MUCH faster to search through than a split list of smaller ones.
A for loop in Python is slow, and splitting a large string is also slow, so searching the large string directly is much faster.
Certainly the second; I don't see much difference between doing one search in a big string and many searches in small strings. You may skip some characters thanks to the shorter lines, but the split operation has its costs too (searching for '\n', creating n different strings, creating the list), and the loop is done in Python. The string __contains__ method is implemented in C and so is noticeably faster. Also consider that the second method aborts as soon as the first match is found, while the first one splits the whole string before it even starts searching inside it.
This is rapidly proven with a simple benchmark:
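A sketch of the kind of benchmark meant here, assuming the bible.txt mentioned below is saved locally; the timeit setup and the 'target' search word are illustrative choices rather than the original code:

import timeit

# Read the whole file as one big string; errors='ignore' in case of odd bytes.
with open('bible.txt', encoding='utf-8', errors='ignore') as f:
    text = f.read()

def split_and_search():
    # First approach: split into lines, then search each one.
    for line in text.split('\n'):
        if 'target' in line:
            return True
    return False

def search_whole():
    # Second approach: one pass over the whole string.
    return 'target' in text

print('split + loop :', timeit.timeit(split_and_search, number=100))
print('whole string :', timeit.timeit(search_whole, number=100))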
The result: searching the whole string wins by a clear margin.
The bible.txt file actually is the Bible; I found it here: http://patriot.net/~bmcgin/kjvpage.html (text version).