I need information about any standard Python package which can be used for "longest prefix match" on URLs. I have gone through two packages, PyTrie (http://packages.python.org/PyTrie/#pytrie.StringTrie) and trie (http://pypi.python.org/pypi/trie/0.1.1), but they don't seem to be useful for the longest-prefix-match task on URLs.
Example: if my set has these URLs, 1 -> http://www.google.com/mail, 2 -> http://www.google.com/document, 3 -> http://www.facebook.com, etc.
Now if I search for 'http://www.google.com/doc' then it should return 2 and search for 'http://www.face' should return 3.
I wanted to confirm if there is any standard python package which can help me in doing this or should I implement a Trie for prefix matching.
I am not looking for a regular-expression kind of solution, since it is not scalable as the number of URLs increases.
Thanks a lot.
The function below will return the index of the longest match. Other useful information can easily be extracted as well.
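The function itself is not reproduced here; a minimal sketch of the idea (a linear scan computing common-prefix lengths; the function name is illustrative, not from the original answer) could look like this:

```python
def longest_prefix_index(urls, search):
    """Return the index of the url sharing the longest common prefix
    with `search`, or -1 if no url shares any prefix at all."""
    best_index, best_length = -1, 0
    for i, url in enumerate(urls):
        # Count how many leading characters url and search share.
        n = 0
        for a, b in zip(url, search):
            if a != b:
                break
            n += 1
        if n > best_length:
            best_index, best_length = i, n
    return best_index

urls = ["http://www.google.com/mail",
        "http://www.google.com/document",
        "http://www.facebook.com"]

# Indices are 0-based here, so these correspond to the question's 2 and 3.
print(longest_prefix_index(urls, "http://www.google.com/doc"))  # -> 1
print(longest_prefix_index(urls, "http://www.face"))            # -> 2
```

Other useful information (the matched URL, the length of the match) can be returned from the same loop.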
If you are willing to trade RAM for time performance then SuffixTree might be useful. It has nice algorithmic properties; for example, it allows solving the longest common substring problem in linear time.

If you always search for a prefix rather than an arbitrary substring, then you could add a unique prefix while populating SubstringDict():

Such usage of SuffixTree seems suboptimal, but it is 20-150 times faster (without SubstringDict()'s construction time) than @StephenPaulger's solution [which is based on .startswith()] on the data I've tried, and it could be good enough.

To install SuffixTree, run:

This example is good for small url lists but does not scale well.
An implementation using the trie module.
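The original snippet is not shown here; as a stand-in, a small hand-rolled trie implementing the same longest-prefix lookup (class and method names are illustrative, this is not the trie module's API) might look like:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.value = None    # payload stored at the end of an inserted key

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key, value):
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.value = value

    def longest_prefix_match(self, query):
        """Follow `query` down the trie as far as possible, then return
        the value of any stored key below that point (i.e. a stored key
        sharing the longest prefix with `query`)."""
        node, depth = self.root, 0
        for ch in query:
            nxt = node.children.get(ch)
            if nxt is None:
                break
            node, depth = nxt, depth + 1
        if depth == 0:
            return None  # not even the first character matched
        # DFS from the deepest reachable node to find a stored value.
        stack = [node]
        while stack:
            n = stack.pop()
            if n.value is not None:
                return n.value
            stack.extend(n.children.values())
        return None

t = Trie()
t.insert("http://www.google.com/mail", 1)
t.insert("http://www.google.com/document", 2)
t.insert("http://www.facebook.com", 3)
print(t.longest_prefix_match("http://www.google.com/doc"))  # -> 2
print(t.longest_prefix_match("http://www.face"))            # -> 3
```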
Result:
or using PyTrie which gives the same result but the lists are ordered differently.
I'm beginning to think a radix tree / patricia tree would be better from a memory-usage point of view. This is what a radix tree would look like:
Whereas the trie looks more like:
Performance comparison

suffixtree vs. pytrie vs. trie vs. datrie vs. startswith-functions

Setup
The recorded time is the minimum time among 3 repetitions of 1000 searches. Trie construction time is included and spread among all searches. The search is performed on collections of hostnames from 1 to 1000000 items.
Three types of search string:

- non_existent_key - there is no match for the string
- rare_key - around 20 in a million
- frequent_key - number of occurrences is comparable to the collection size

Results
Maximum memory consumption for a million urls:

To reproduce the results, run the trie benchmark code.
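The benchmark code itself is not included here; a minimal harness matching the described setup (minimum over 3 repetitions, construction time included and spread over the searches; all names are illustrative) could be sketched as:

```python
import timeit

def bench(build, search, keys, query, searches=1000, repeat=3):
    """Per-search time: index construction is included and spread over
    `searches` lookups; the minimum over `repeat` repetitions is taken."""
    def run():
        index = build(keys)
        for _ in range(searches):
            search(index, query)
    return min(timeit.repeat(run, number=1, repeat=repeat)) / searches

# Illustrative baseline: the "index" is just the list itself and the
# search scans it with .startswith().
keys = ["http://www.google.com/mail", "http://www.facebook.com"]
per_search = bench(lambda ks: ks,
                   lambda ks, q: [k for k in ks if k.startswith(q)],
                   keys, "http://www.face")
print("seconds per search:", per_search)
```

A trie-based contestant would plug in its construction as `build` and its longest-prefix lookup as `search`.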
rare_key / non_existent_key case

If the number of urls is less than 10000 then datrie is the fastest; for N > 10000, suffixtree is faster, and startswith is significantly slower on average.
frequent_key

Up to N = 100000, datrie is the fastest (for a million urls the time is dominated by the trie construction time).

Most of the time is taken by finding the longest match among the found matches, so all functions behave similarly, as expected.

startswith - time performance is independent of the type of key.

trie and pytrie behave similarly to each other.

Performance without trie construction time
datrie - the fastest, decent memory consumption.

startswith is even more at a disadvantage here, because the other approaches are not penalized by the time it takes to build a trie.

datrie, pytrie, trie - almost O(1) (constant time) for rare/non-existent keys.

Fitting (approximating) polynomials of known functions for comparison (same log/log scale as in the figures):
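Such a fit can be sketched in pure Python: a straight-line least-squares fit in log/log space, whose slope approximates the exponent k in t ~ N**k. The data points below are made up for illustration, not taken from the benchmark.

```python
import math

def loglog_slope(ns, times):
    """Least-squares slope of log(time) vs. log(n): estimates the
    exponent k in t ~ n**k."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(t) for t in times]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Made-up timings that grow linearly with n: the slope comes out near 1,
# i.e. the fit identifies O(N) behaviour.
ns = [10, 100, 1000, 10000]
print(loglog_slope(ns, [n * 1e-6 for n in ns]))  # slope ~ 1
```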