I'm using PyParsing to parse some rather large text files with a C-like format (braces and semicolons and all that).
PyParsing works just great, but it is slow and consumes a very large amount of memory due to the size of my files.
Because of this, I wanted to try to implement an incremental parsing approach wherein I'd parse the top-level elements of the source file one by one. The scanString method of pyparsing seems like the obvious way to do this. However, I want to make sure that there is no invalid/unparseable text in between the sections parsed by scanString, and I can't figure out a good way to do this.
Here's a simplified example that shows the problem I'm having:
sample="""f1(1,2,3); f2_no_args( );
# comment out: foo(4,5,6);
bar(7,8);
this should be an error;
baz(9,10);
"""
from pyparsing import *
COMMENT=Suppress('#' + restOfLine())
SEMI,COMMA,LPAREN,RPAREN = map(Suppress,';,()')
ident = Word(alphas, alphanums+"_")
integer = Word(nums+"+-",nums)
statement = ident("fn") + LPAREN + Group(Optional(delimitedList(integer)))("arguments") + RPAREN + SEMI
p = statement.ignore(COMMENT)
for res, start, end in p.scanString(sample):
    print "***** (%d,%d)" % (start, end)
    print res.dump()
Output:
***** (0,10)
['f1', ['1', '2', '3']]
- arguments: ['1', '2', '3']
- fn: f1
***** (11,25)
['f2_no_args', []]
- arguments: []
- fn: f2_no_args
***** (53,62)
['bar', ['7', '8']]
- arguments: ['7', '8']
- fn: bar
***** (88,98)
['baz', ['9', '10']]
- arguments: ['9', '10']
- fn: baz
The ranges returned by scanString have gaps due to unparsed text between them: (0,10), (11,25), (53,62), (88,98). Two of these gaps contain only whitespace or comments, which should not trigger an error, but one of them (this should be an error;) contains unparseable text, which I want to catch.
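To make the problem concrete, here's a hand-rolled gap check built on scanString. The find_gaps helper and the IGNORABLE regex are things I sketched for this sample (the regex only knows about whitespace and # comments), and this is exactly the sort of per-grammar manual check I'd like to avoid:

```python
import re
from pyparsing import (Suppress, Word, Group, Optional, delimitedList,
                       alphas, alphanums, nums, restOfLine)

# Same grammar as above.
COMMENT = Suppress('#' + restOfLine)
SEMI, LPAREN, RPAREN = map(Suppress, ';()')
ident = Word(alphas, alphanums + "_")
integer = Word(nums + "+-", nums)
statement = (ident("fn") + LPAREN
             + Group(Optional(delimitedList(integer)))("arguments")
             + RPAREN + SEMI).ignore(COMMENT)

# Text is "ignorable" if it is nothing but whitespace and '#' comments.
# (This regex is specific to this sample grammar.)
IGNORABLE = re.compile(r'(?:\s|#[^\n]*)*\Z')

def find_gaps(parser, text):
    """Return (start, end, text) spans between scanString matches that
    contain something other than whitespace/comments."""
    gaps, last_end = [], 0
    for _tokens, start, end in parser.scanString(text):
        if not IGNORABLE.match(text[last_end:start]):
            gaps.append((last_end, start, text[last_end:start]))
        last_end = end
    if not IGNORABLE.match(text[last_end:]):
        gaps.append((last_end, len(text), text[last_end:]))
    return gaps

sample = """f1(1,2,3); f2_no_args( );
# comment out: foo(4,5,6);
bar(7,8);
this should be an error;
baz(9,10);
"""
bad_spans = find_gaps(statement, sample)
```

On the sample above, bad_spans ends up holding just the one span containing this should be an error; — but the check has to re-encode knowledge of what the grammar ignores, which is what I'm trying to avoid.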
Is there a way to use pyparsing to parse a file incrementally while still ensuring that the entire input could be parsed with the specified parser grammar?
I came up with what seems to be a pretty decent solution after a brief discussion on the PyParsing users' mailing list.
I modified the ParserElement.parseString method slightly to come up with parseConsumeString, which does about what I want. This version calls ParserElement._parse followed by ParserElement.preParse repeatedly. Here is code to monkey-patch ParserElement with the parseConsumeString method:

Notice that I also moved the call to ParserElement.resetCache into each loop iteration. Because it's impossible to backtrack out of each loop iteration, there's no need to retain the cache across iterations. This drastically reduces memory consumption when using PyParsing's packrat caching feature. In my tests with a 10 MiB input file, peak memory consumption went down from ~6 GiB to ~100 MiB, while running about 15-20% faster.
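Based on that description, a sketch of such a parseConsumeString might look like the following (written as a generator; _parse, preParse, and resetCache are real ParserElement members, but the loop structure here is my reconstruction from the summary above, not necessarily the exact code from the mailing list):

```python
from pyparsing import (ParserElement, Suppress, Word, Group, Optional,
                       delimitedList, alphas, alphanums, nums, restOfLine)

def parseConsumeString(self, instring):
    """Yield the tokens of one top-level match at a time, consuming the
    whole string; raise ParseException at the first unparseable text."""
    instring = instring.expandtabs()
    loc = 0
    while True:
        # No backtracking is possible out of a completed iteration, so
        # the packrat cache can safely be cleared each time through.
        ParserElement.resetCache()
        # preParse skips whitespace and any ignore expressions
        # (the COMMENT expression, in this grammar).
        loc = self.preParse(instring, loc)
        if loc >= len(instring):
            break
        loc, tokens = self._parse(instring, loc, callPreParse=False)
        yield tokens

ParserElement.parseConsumeString = parseConsumeString

# Demo with the grammar from the question:
COMMENT = Suppress('#' + restOfLine)
SEMI, LPAREN, RPAREN = map(Suppress, ';()')
ident = Word(alphas, alphanums + "_")
integer = Word(nums + "+-", nums)
statement = (ident("fn") + LPAREN
             + Group(Optional(delimitedList(integer)))("arguments")
             + RPAREN + SEMI).ignore(COMMENT)

names = [res.fn for res in
         statement.parseConsumeString("f1(1,2,3); # trailing comment\nbar(7,8);\n")]
```

In contrast to scanString, this raises a ParseException as soon as it reaches text that neither the grammar nor its ignore expressions can consume, so a line like this should be an error; is caught instead of being silently skipped.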