Are there any alternatives to the code below:
startFromLine = 141978  # or whatever line I need to jump to
urlsfile = open(filename, "rb", 0)
linesCounter = 1
for line in urlsfile:
    if linesCounter > startFromLine:
        DoSomethingWithThisLine(line)
    linesCounter += 1
if I'm processing a huge text file (~15MB) with lines of unknown but varying length, and need to jump to a particular line whose number I know in advance? I feel bad about processing the lines one by one when I know I could ignore at least the first half of the file. I'm looking for a more elegant solution, if there is one.
What generates the file you want to process? If it is something under your control, you could generate an index (recording which line starts at which byte offset) at the time the file is appended to. The index file can use a fixed record size (space-padded or zero-padded numbers), so it will definitely be smaller than the data file and can be read and searched quickly.
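A rough sketch of that idea, assuming the index lives next to the data file as filename + ".idx" and stores one zero-padded 10-digit byte offset per line (the index layout and record width are my own choices for illustration, and filename is reused from the question):

RECORD = 11  # 10 zero-padded digits plus a newline per index entry

# Build (or rebuild) the index: one fixed-size record per line of the data file.
with open(filename, "rb") as data, open(filename + ".idx", "wb") as idx:
    offset = 0
    for line in data:
        idx.write(("%010d\n" % offset).encode("ascii"))
        offset += len(line)

# Jump straight to line n (0-based) using the index, without scanning the data file.
def line_at(n):
    with open(filename + ".idx", "rb") as idx:
        idx.seek(n * RECORD)
        offset = int(idx.read(RECORD))
    with open(filename, "rb") as data:
        data.seek(offset)
        return data.readline()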
I have had the same problem (needing to retrieve a specific line from a huge file).

Of course, I can run through all the records in the file every time and stop when the counter reaches the target line, but that is not efficient when you want to obtain several specific rows. That left the main issue to be solved: how to jump directly to the required place in the file.

I came up with the following solution: first, I filled a dictionary with the start position of each line (the key is the line number, and the value is the cumulative length of the previous lines, i.e. the byte offset at which that line starts).

Then, the key step: t.seek(offset) - where offset is the value stored for that line number - moves the file pointer straight to the beginning of the target line, so the next readline() returns exactly the line you want.

Using this approach I saved a significant amount of time.
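A minimal sketch of that approach (the dictionary name line_offset is mine; filename and startFromLine are reused from the question, and the file is opened as t to match the description above):

# First pass: remember where each line starts (line number -> byte offset).
line_offset = {}
with open(filename, "rb") as t:
    offset = 0
    for number, line in enumerate(t):
        line_offset[number] = offset
        offset += len(line)

# Any later read can then jump straight to the target line (0-based here).
with open(filename, "rb") as t:
    t.seek(line_offset[startFromLine])   # seek to the start of the wanted line...
    target = t.readline()                # ...and readline() returns exactly that line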
You may use mmap to find the offsets of the lines; mmap seems to be the fastest way to process a file.

Example:
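One possible shape for such an example, using mmap's find() to record where every line starts (the offsets list name is mine; filename is reused from the question):

import mmap

f = open(filename, "rb")
m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

offsets = [0]                      # line 0 starts at byte 0
pos = m.find(b"\n")
while pos != -1:
    offsets.append(pos + 1)        # the next line starts just past this newline
    pos = m.find(b"\n", pos + 1)
# note: if the file ends with a newline, the last entry is simply the EOF offset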
Then use f.seek(offsets[line_number]) to move to the line you need, and f.readline() to read it.
You can't jump ahead without reading in the file at least once, since you don't know where the line breaks are. You could do something like:
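For instance, a sketch that pays for one full pass to record every line's starting offset (the line_offset list is a placeholder name; filename, startFromLine and DoSomethingWithThisLine are reused from the question, with lines counted from 0 here):

# One pass to build a list of line start offsets...
line_offset = []
offset = 0
with open(filename, "rb") as f:
    for line in f:
        line_offset.append(offset)
        offset += len(line)

    # ...then skip straight past the first startFromLine lines and process the rest.
    f.seek(line_offset[startFromLine])
    for line in f:
        DoSomethingWithThisLine(line)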
Do the lines themselves contain any index information? If the content of each line were something like "<line index>:Data", then the seek() approach could be used to do a binary search through the file, even if the amount of Data is variable: you'd seek to the midpoint of the file, read a line, check whether its index is higher or lower than the one you want, and so on.
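A hedged sketch of that binary search, assuming every line starts with a strictly increasing decimal index followed by a colon; the function name and the details of the bookkeeping are my own:

import os

def find_indexed_line(f, want, size):
    """Binary-search an open binary file whose lines look like b'<line index>:Data'."""
    lo, hi = 0, size
    while lo < hi:
        mid = (lo + hi) // 2
        f.seek(mid)
        f.readline()                  # discard the (possibly partial) line we landed in
        pos = f.tell()
        line = f.readline()
        if not line:                  # ran off the end: the answer lies at or before mid
            hi = mid
            continue
        index = int(line.split(b":", 1)[0])
        if index == want:
            return line
        elif index < want:
            lo = pos                  # wanted line starts somewhere after this one
        else:
            hi = mid                  # wanted line starts at or before mid
    # The very first line is never examined inside the loop, so check it separately.
    f.seek(0)
    first = f.readline()
    return first if first and int(first.split(b":", 1)[0]) == want else None

with open(filename, "rb") as f:
    line = find_indexed_line(f, 141978, os.path.getsize(filename))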
Otherwise, the best you can do is just readlines(). If you don't want to read all 15MB, you can use the sizehint argument to at least replace a lot of readline() calls with a smaller number of calls to readlines().
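For completeness, a sketch of that batching idea; it mirrors the loop from the question but asks readlines() for roughly 1 MB of lines per call (the batch size is an arbitrary choice of mine):

with open(filename, "rb") as urlsfile:
    linesCounter = 1
    while True:
        batch = urlsfile.readlines(1024 * 1024)   # sizehint: stop after ~1 MB of lines
        if not batch:
            break
        for line in batch:
            if linesCounter > startFromLine:
                DoSomethingWithThisLine(line)
            linesCounter += 1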
linecache:
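Presumably this refers to the standard library's linecache module: it still reads and caches the whole file the first time, but afterwards it hands back individual lines by 1-based number very cheaply. A minimal example (filename reused from the question):

import linecache

# getline() numbers lines from 1 and returns '' for out-of-range requests.
line = linecache.getline(filename, 141979)   # the first line the question's loop would process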