I'm grabbing a few columns from a tab-delimited file in Perl. The first line of the file is completely different from the other lines, so I'd like to skip that line as quickly and efficiently as possible.
This is what I have so far.
my $firstLine = 1;
while (<INFILE>) {
    if ($firstLine) {
        $firstLine = 0;
    }
    else {
        my @columns = split(/\t+/);
        print OUTFILE "$columns[0]\t\t$columns[1]\t$columns[2]\t$columns[3]\t$columns[11]\t$columns[12]\t$columns[15]\t$columns[20]\t$columns[21]\n";
    }
}
Is there a better way to do this, perhaps without $firstLine? Or is there a way to start reading INFILE from line 2 directly?
Thanks in advance!
You can just read it into a dummy variable the first time:
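(The answer's snippet isn't reproduced here; a minimal sketch of the idea, reusing the question's INFILE and OUTFILE handles: read the header into a throwaway variable before the main loop.)

my $dummy = <INFILE>;            # read and discard the header line
while (<INFILE>) {
    my @columns = split(/\t+/);
    # ... print the columns you need, as in the original loop ...
}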
Your code would probably be more elegant in this form:
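(The code block from this answer isn't preserved here; it presumably flattened the if/else by skipping the header with next, roughly like the sketch below.)

my $firstLine = 1;
while (<INFILE>) {
    if ($firstLine) {
        $firstLine = 0;
        next;                    # skip the header line
    }
    my @columns = split(/\t+/);
    # ... print the columns you need ...
}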
But it's still fine. @Guru's answer is better in terms of CPU cycles, but I/O usually consumes orders of magnitude more of them than a single if.
I had a similar question/issue. My solution was the following, for either unzipped or gzipped files:
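(The original code isn't shown here; a sketch of the approach as described, using a gunzip pipe for .gz input and a plain open otherwise. The $file name and the per-line processing are placeholders.)

my $file = shift @ARGV;
my $fh;
if ($file =~ /\.gz$/) {
    # pipe gzipped input through gunzip
    open($fh, '-|', 'gunzip', '-c', $file) or die "Cannot open $file: $!";
}
else {
    open($fh, '<', $file) or die "Cannot open $file: $!";
}
my $header = <$fh>;              # discard line 1
while (my $line = <$fh>) {
    # process the remaining lines here
}
close $fh;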
I don't know about benchmarking, but it works fine for me.
Best,
Sander
You can read a file into a filehandle and then use either an array or a while loop to iterate over the lines. For the while loop, @Guru has the solution for you. For an array, it would be as below:
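(The answer's code isn't included here; a sketch of what the array version looks like, with $infile as a placeholder filename.)

open(my $fh, '<', $infile) or die "Cannot open $infile: $!";
my @lines = <$fh>;               # slurp every line into an array
close $fh;
shift @lines;                    # drop the first (header) line
for my $line (@lines) {
    my @columns = split(/\t+/, $line);
    # ... print the columns you need ...
}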
Let's get some data on this. I benchmarked everybody's techniques...
Since this is I/O which can be affected by forces beyond the ability of Benchmark.pm to adjust for, I ran them several times and checked I got the same results.
/usr/share/dict/words is a 2.4 meg file with about 240k very short lines. Since we're not processing the lines, line length shouldn't matter.

I only did a tiny amount of work in each routine to emphasize the difference between the techniques. I wanted to do some work so as to produce a realistic upper limit on how much performance you're going to gain or lose by changing how you read files.
I did this on a laptop with an SSD, but it's still a laptop. As I/O speed increases, CPU time becomes more significant. Technique is even more important on a machine with fast I/O.
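(The benchmark script itself isn't reproduced here; a rough sketch of that kind of harness, using Benchmark's cmpthese and a token length() call as the per-line work. The subroutine names are mine, not the author's.)

use strict;
use warnings;
use Benchmark qw(cmpthese);

my $file = '/usr/share/dict/words';

cmpthese(-5, {
    # flag variable checked on every iteration, as in the question
    flag_variable => sub {
        open my $fh, '<', $file or die $!;
        my $first = 1;
        while (<$fh>) {
            if ($first) { $first = 0; next }
            my $len = length;
        }
    },
    # check the line-number variable $. on every iteration
    line_counter => sub {
        open my $fh, '<', $file or die $!;
        while (<$fh>) {
            next if $. == 1;
            my $len = length;
        }
    },
    # read the header once, outside the loop
    read_before_loop => sub {
        open my $fh, '<', $file or die $!;
        my $header = <$fh>;
        while (<$fh>) {
            my $len = length;
        }
    },
    # slurp the whole file into an array, then drop the header
    slurp_array => sub {
        open my $fh, '<', $file or die $!;
        my @lines = <$fh>;
        shift @lines;
        for (@lines) {
            my $len = length;
        }
    },
});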
Here's how many times each routine read the file per second.
I'm shocked to find that my @array = <$fh> is slowest by a huge margin. I would have thought it would be the fastest given all the work is happening inside the perl interpreter. However, it's the only one which allocates memory to hold all the lines, and that probably accounts for the performance lag.

Using $. is another surprise. Perhaps that's the cost of accessing a magic global, or perhaps it's doing a numeric comparison.

And, as predicted by algorithmic analysis, putting the header check code outside the loop is the fastest. But not by much. Probably not enough to worry about if you're using the next two fastest.
Using splice seems to be the easiest and cleanest way to me:
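(The code isn't shown here; a sketch assuming @file already holds the slurped lines, with $infile as a placeholder filename.)

open(my $fh, '<', $infile) or die "Cannot open $infile: $!";
my @file = <$fh>;                # slurp all lines
close $fh;
splice(@file, 0, 1);             # remove the first (header) line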
Done. Now your @file array doesn't have the first line anymore.