My C program makes a lot of strstr calls. The standard library strstr is already fast, but in my case the search string always has a length of 5 characters. I replaced it with a special version to gain some speed:
    int strstr5(const char *cs, const char *ct)
    {
        while (cs[4]) {
            if (cs[0] == ct[0] && cs[1] == ct[1] && cs[2] == ct[2] &&
                cs[3] == ct[3] && cs[4] == ct[4])
                return 1;
            cs++;
        }
        return 0;
    }
The function returns an integer because it's enough to know whether ct occurs in cs. My function is simple and faster than the standard strstr in this special case, but I'm interested to hear whether anybody has performance improvements that could be applied. Even small improvements are welcome.
Summary:
- cs has a length of >= 10, but otherwise it can vary. The length is known in advance (though not used in my function). The length of cs is usually 100 to 200.
- ct has a length of 5.
- The content of the strings can be anything.
Edit: Thank you for all answers and comments. I have to study and test ideas to see what works best. I will start with MAK's idea about suffix trie.
strstr's interface imposes some constraints that can be beaten. It takes null-terminated strings, and any competitor that first does a "strlen" of its target will lose. It takes no "state" argument, so set-up costs can't be amortized across many calls with (say) the same target or pattern. It is expected to work on a wide range of inputs, including very short targets/patterns and pathological data (consider searching for "ABABAC" in a string of "ABABABABAB...C"). libc is also now platform-dependent. In the x86-64 world, SSE2 is seven years old, and libc's strlen and strchr using SSE2 are 6-8 times faster than naive algorithms. On Intel platforms that support SSE4.2, strstr uses the PCMPESTRI instruction. But you can beat that, too.
Boyer-Moore (and Turbo B-M, Backward Oracle Matching, et al.) have set-up times that pretty much knock them out of the running, not even counting the null-terminated-string problem. Horspool is a restricted B-M that works well in practice but doesn't handle the edge cases well. The best I've found in that field is BNDM ("Backward Nondeterministic Directed-Acyclic-Word-Graph Matching"), whose implementation is smaller than its name :-)
Here are a couple of code snippets that might be of interest. Intelligent SSE2 beats naive SSE4.2 and handles the null-termination problem. A BNDM implementation shows one way of keeping set-up costs down. If you're familiar with Horspool, you'll notice the similarity, except that BNDM uses bitmasks instead of skip-offsets. I'm about to post how to solve the null-terminator problem (efficiently) for suffix algorithms like Horspool and BNDM.
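For concreteness, here is a bare-bones BNDM sketch; it is my own illustration under stated assumptions (pattern no longer than 32 bytes, both lengths known to the caller), not the snippet referred to above:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative BNDM sketch -- not the referenced snippet.
     * Assumes m <= 32 so the state fits in a 32-bit word. */
    const char *bndm_search(const char *txt, size_t n, const char *pat, size_t m)
    {
        uint32_t B[256] = {0};
        for (size_t i = 0; i < m; i++)          /* set-up: mark where each byte occurs */
            B[(unsigned char)pat[i]] |= 1u << (m - 1 - i);

        size_t pos = 0;
        while (pos + m <= n) {
            size_t j = m, last = m;
            uint32_t D = (m < 32) ? (1u << m) - 1 : ~0u;
            while (D != 0) {                    /* scan the window right to left */
                D &= B[(unsigned char)txt[pos + j - 1]];
                j--;
                if (D & (1u << (m - 1))) {
                    if (j > 0)
                        last = j;               /* pattern prefix seen: remember the shift */
                    else
                        return txt + pos;       /* whole pattern matched */
                }
                D <<= 1;
            }
            pos += last;                        /* skip ahead, like Horspool but via bitmasks */
        }
        return NULL;
    }

The bitmask table B plays the role of Horspool's skip table, and last records the longest pattern prefix seen in the current window, which determines how far the window can safely shift.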
A common attribute of all good solutions is splitting into different algorithms for different argument lengths. An example of this is Sanmayce's "Railgun" function.
Your code may access cs beyond the bounds of its allocation if cs is shorter than 4 characters.

A common optimisation for string search is to use the Boyer-Moore algorithm, where you start looking in cs from the end of what would be ct. See the linked page for a full description of the algorithm.

There are several fast string search algorithms. Try looking at Boyer-Moore (as already suggested by Greg Hewgill), Rabin-Karp and KMP algorithms.
If you need to search for many small patterns in the same large body of text, you can also try implementing a suffix tree or a suffix array. But these are IMHO somewhat harder to understand and implement correctly.
But beware: although these techniques are very fast, they only give you an appreciable speedup if the strings involved are very large. You might not see an appreciable speedup for strings shorter than, say, 1000 characters.
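As a concrete illustration of the Boyer-Moore family mentioned above, a minimal Horspool-style sketch (my own, under the assumption that both lengths are known, as the question says) could look like this:

    #include <stddef.h>
    #include <string.h>

    /* Horspool sketch -- illustrative only.  The shift table says how far the
     * window can jump based on the haystack byte aligned with the pattern's
     * last position, so most positions are skipped without being compared. */
    const char *horspool_search(const char *hay, size_t n,
                                const char *pat, size_t m)
    {
        size_t skip[256];
        for (size_t c = 0; c < 256; c++)
            skip[c] = m;                              /* default: shift the whole pattern */
        for (size_t i = 0; i + 1 < m; i++)
            skip[(unsigned char)pat[i]] = m - 1 - i;  /* distance from the last position */

        for (size_t pos = 0; pos + m <= n;
             pos += skip[(unsigned char)hay[pos + m - 1]]) {
            if (memcmp(hay + pos, pat, m) == 0)       /* verify the current window */
                return hay + pos;
        }
        return NULL;
    }

With a 5-byte pattern the typical shift is close to 5, but as noted above, the set-up and bookkeeping mostly pay off on longer texts.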
EDIT:
If you are searching the same text over and over again (i.e. the value of cs is always/often the same across calls), you will get a big speedup by using a suffix trie (basically a trie of suffixes). Since your text is as small as 100 or 200 characters, you can use the simpler O(n^2) method to build the trie and then do multiple fast searches on it. Each search would require only 5 comparisons instead of the usual 5*200.
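To make the suffix-trie suggestion concrete, here is a minimal sketch under my own assumptions (it is not MAK's code): because ct always has length 5, the trie can be capped at depth 5, so building it inserts the first 5 bytes of every suffix of cs, and each query is exactly 5 child lookups.

    #include <stdlib.h>

    /* Depth-5 suffix trie sketch -- illustrative only.  Allocation-failure
     * handling and freeing of nodes are omitted for brevity. */
    #define PAT_LEN 5

    typedef struct TrieNode {
        struct TrieNode *child[256];
    } TrieNode;

    static TrieNode *node_new(void) {
        return calloc(1, sizeof(TrieNode));           /* children start out NULL */
    }

    /* Build once per text: O(n * PAT_LEN) time and at most that many nodes. */
    TrieNode *trie_build(const char *cs, size_t n) {
        TrieNode *root = node_new();
        for (size_t i = 0; i + PAT_LEN <= n; i++) {   /* every 5-byte window of cs */
            TrieNode *p = root;
            for (size_t j = 0; j < PAT_LEN; j++) {
                unsigned char c = (unsigned char)cs[i + j];
                if (!p->child[c])
                    p->child[c] = node_new();
                p = p->child[c];
            }
        }
        return root;
    }

    /* Query: exactly PAT_LEN child lookups, independent of the text length. */
    int trie_contains(const TrieNode *root, const char *ct) {
        const TrieNode *p = root;
        for (size_t j = 0; j < PAT_LEN; j++) {
            p = p->child[(unsigned char)ct[j]];
            if (!p)
                return 0;
        }
        return 1;
    }

The memory cost is the main trade-off: each node here holds 256 pointers, which is fine for a 100-200 character text but would need a sparser representation for larger inputs.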
Edit 2:

As mentioned in caf's comment, C's strstr algorithm is implementation dependent. glibc uses a linear-time algorithm which should be more or less as fast in practice as any of the methods I've mentioned. While the OP's method is asymptotically slower (O(n*m) instead of O(n)), it is probably faster because both n and m (the lengths of the text and the pattern) are very small and it does not have to do any of the long preprocessing of the glibc version.

You won't beat a good implementation on a modern x86 computer.
Newer Intel processors have an instruction that takes 16 bytes of the string you are examining, up to 16 bytes of the search string, and in a single instruction returns the first byte position at which the search string could be (or reports that there is none). For example, if you search for "Hello" in the string "abcdefghijklmnHexyz", the first instruction will tell you that "Hello" might start at offset 14 (because, reading 16 bytes, the processor sees the bytes 'H', 'e' and then unknown data, which might be the start of "Hello"). The next instruction, starting at offset 14, then tells you that the string isn't there. And yes, it knows about trailing zero bytes.
That's two instructions to find that a five-character string is not present in a 19-character string. Try beating that with any special-case code. (Obviously this is built specifically for strstr, strcmp and similar functions.)
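For reference, a sketch of that approach using the PCMPISTRI intrinsic might look like the following; it is my own illustration (compile with -msse4.2), it assumes the haystack length is known, and it ignores the page-boundary care a production version needs for the unaligned 16-byte loads:

    #include <nmmintrin.h>   /* SSE4.2 string intrinsics */
    #include <stddef.h>
    #include <string.h>

    /* Illustrative sketch of the "equal ordered" substring search described
     * above -- not a library implementation.  Reading 16 bytes at hay + i is
     * assumed to be safe; real code must guard against unmapped pages. */
    const char *strstr5_sse42(const char *hay, size_t haylen, const char *needle)
    {
        char nbuf[16] = {0};                      /* zero-pad so the implicit  */
        memcpy(nbuf, needle, 5);                  /* needle length is 5        */
        __m128i n = _mm_loadu_si128((const __m128i *)nbuf);

        size_t i = 0;
        while (i + 5 <= haylen) {
            __m128i h = _mm_loadu_si128((const __m128i *)(hay + i));
            /* Index of the first position in this block where the needle could
             * begin (a partial match at the block's end counts); 16 = none. */
            int idx = _mm_cmpistri(n, h, _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED);
            if (idx == 16) {
                i += 16;                          /* no candidate in this block */
            } else if (i + idx + 5 <= haylen &&
                       memcmp(hay + i + idx, needle, 5) == 0) {
                return hay + i + idx;             /* candidate confirmed */
            } else {
                i += (size_t)idx + 1;             /* false or partial candidate */
            }
        }
        return NULL;
    }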
Reducing the number of comparisons will increase the speed of the search. Keep a running int of the string and compare it to a fixed int built from the search term. If that matches, compare the last character (a sketch of this idea is shown below).
Add checks for a short cs.
Edit:
Added fixes from comments. Thanks.
This could easily be adapted to use 64-bit values. You could store cs[4] and ct[4] in local variables instead of assuming the compiler will do that for you. You could add 4 to cs and ct before the loop and use cs[0] and ct[0] inside the loop.
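Here is one way the running-int idea could look in code; this is a hedged sketch of my own, not the answerer's code, assuming ct always has exactly 5 characters:

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch of the "running int" search described above. */
    int strstr5_rolling(const char *cs, const char *ct)
    {
        /* Pack the first four pattern bytes big-endian so new bytes shift in
         * at the low end of the window. */
        uint32_t want = ((uint32_t)(unsigned char)ct[0] << 24) |
                        ((uint32_t)(unsigned char)ct[1] << 16) |
                        ((uint32_t)(unsigned char)ct[2] <<  8) |
                         (uint32_t)(unsigned char)ct[3];
        uint32_t have = 0;
        size_t i;

        /* Prime the window with the first three bytes of cs. */
        for (i = 0; i < 3; i++) {
            if (!cs[i])
                return 0;                        /* cs too short to contain ct */
            have = (have << 8) | (unsigned char)cs[i];
        }
        /* Slide one byte at a time: one 32-bit compare, then the 5th character. */
        for (; cs[i]; i++) {
            have = (have << 8) | (unsigned char)cs[i];
            if (have == want && cs[i + 1] == ct[4])
                return 1;
        }
        return 0;
    }

Each step costs a shift, an OR and a single 32-bit compare before the cheap last-character check, and cs[i + 1] is only read after cs[i] has been confirmed non-zero, so the window never reads past the terminator.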