Best way to skip a header when reading in from a tab-delimited file in Perl

Asked 2019-03-13 10:55

I'm grabbing a few columns from a tab-delimited file in Perl. The first line of the file is completely different from the other lines, so I'd like to skip it as quickly and efficiently as possible.

This is what I have so far.

my $firstLine = 1;

while (<INFILE>){
    if($firstLine){
        $firstLine = 0;
    }
    else{
        my @columns = split (/\t+/);
        print OUTFILE "$columns[0]\t\t$columns[1]\t$columns[2]\t$columns[3]\t$columns[11]\t$columns[12]\t$columns[15]\t$columns[20]\t$columns[21]\n";
    }
}

Is there a better way to do this, perhaps without $firstLine? Or is there a way to start reading INFILE from line 2 directly?

Thanks in advance!

7 Answers
爱情/是我丢掉的垃圾 · 2019-03-13 11:32

You can just read the first line into a dummy variable before the loop:

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', 'a.txt' or die $!;

my $dummy = <$fh>;   # the first line is read and discarded here
while (<$fh>) {
    print;
}
close($fh);
别忘想泡老子 · 2019-03-13 11:35

Your code would probably be more elegant in this form:

my $first;
while (...) {
    $first++ or next; 

    # do whatever you want
}

But it's still fine. @Guru's answer is better in terms of CPU cycles, but I/O usually consumes orders of magnitude more of them than a single if.
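Concretely, with the question's read loop filled in, that looks like this (a sketch; INFILE and the tab split are taken from the question):

my $first;
while (<INFILE>) {
    $first++ or next;    # false only on the very first line, so the header is skipped

    my @columns = split /\t+/;
    # ... print the columns you want, as in the question ...
}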

甜甜的少女心 · 2019-03-13 11:39

I had a similar issue. My solution was the following, which works for either plain or gzipped files:

print STDERR "\nReading input file...\n";
if ($file =~ /\.gz$/) {
    open(IN, "gunzip -c $file | grep -v '##' |") or die " *** ERROR ***     Cannot open pipe to [ $file ]!\n";
} else {
    open(IN, "cat $file | grep -v '##' |") or die " *** ERROR ***     Cannot open [ $file ]!\n";
}

I don't know about benchmarking, but it works fine for me.

Best,

Sander
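
(A pure-Perl alternative, sketched here rather than taken from the answer above: it avoids the shell pipe by using IO::Uncompress::Gunzip, which ships with modern Perls, and filters '##' lines the way grep -v '##' does.)

#!/usr/bin/perl
use strict;
use warnings;
use IO::Uncompress::Gunzip qw($GunzipError);

my $file = shift @ARGV or die "usage: $0 FILE\n";

my $in;
if ($file =~ /\.gz$/) {
    # decompress transparently instead of piping through an external gunzip
    $in = IO::Uncompress::Gunzip->new($file)
        or die " *** ERROR ***     Cannot open [ $file ]: $GunzipError\n";
} else {
    open $in, '<', $file
        or die " *** ERROR ***     Cannot open [ $file ]: $!\n";
}

while (my $line = <$in>) {
    next if $line =~ /##/;   # same filter as grep -v '##'
    print $line;
}
close $in;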

Root(大扎) · 2019-03-13 11:40

You can read the file through a filehandle and then iterate over its lines with either a while loop or an array. For the while loop, @Guru has the solution for you; for an array, it would be as below:

#!/usr/bin/perl
use strict;
use warnings;

open (my $fh, '<', 'a.txt') or die "can't open the file: $!\n";
my @array = <$fh>;

my $dummy = shift (@array);   # this is where the header line ends up

foreach (@array)
{
   print $_;    # each line still has its trailing newline
}
close ($fh);
Bombasti · 2019-03-13 11:47

Let's get some data on this. I benchmarked everybody's techniques...

#!/usr/bin/env perl

sub flag_in_loop {
    my $file = shift;

    open my $fh, $file;

    my $first = 1;
    while(<$fh>) {
        if( $first ) {
            $first = 0;
        }
        else {
            my $line = $_;
        }
    }

    return;
}

sub strip_before_loop {
    my $file = shift;

    open my $fh, $file;

    my $header = <$fh>;
    while(<$fh>) {
        my $line = $_;
    }

    return;
}

sub line_number_in_loop {
    my $file = shift;

    open my $fh, $file;

    while(<$fh>) {
        next if $. < 2;

        my $line = $_;
    }

    return;
}

sub inc_in_loop {
    my $file = shift;

    open my $fh, $file;

    my $first;
    while(<$fh>) {
        $first++ or next;

        my $line = $_;
    }

    return;
}

sub slurp_to_array {
    my $file = shift;

    open my $fh, $file;

    my @array = <$fh>;
    shift @array;

    return;
}


my $Test_File = "/usr/share/dict/words";
print `wc $Test_File`;

use Benchmark;

timethese shift || -10, {
    flag_in_loop        => sub { flag_in_loop($Test_File); },
    strip_before_loop   => sub { strip_before_loop($Test_File); },
    line_number_in_loop => sub { line_number_in_loop($Test_File); },
    inc_in_loop         => sub { inc_in_loop($Test_File); },
    slurp_to_array      => sub { slurp_to_array($Test_File); },
};

Since this is I/O, which can be affected by forces beyond Benchmark.pm's ability to adjust for, I ran the benchmarks several times and checked that I got the same results.

/usr/share/dict/words is a 2.4 meg file with about 240k very short lines. Since we're not processing the lines, line length shouldn't matter.

I only did a tiny amount of work in each routine to emphasize the difference between the techniques. I wanted to do some work so as to produce a realistic upper limit on how much performance you're going to gain or lose by changing how you read files.

I did this on a laptop with an SSD, but it's still a laptop. As I/O speed increases, CPU time becomes more significant. Technique is even more important on a machine with fast I/O.

Here's how many times each routine read the file per second.

slurp_to_array:       4.5/s
line_number_in_loop: 13.0/s
inc_in_loop:         15.5/s
flag_in_loop:        15.8/s
strip_before_loop:   19.9/s

I'm shocked to find that my @array = <$fh> is slowest by a huge margin. I would have thought it would be the fastest given all the work is happening inside the perl interpreter. However, it's the only one which allocates memory to hold all the lines and that probably accounts for the performance lag.

Using $. is another surprise. Perhaps that's the cost of accessing a magic global, or perhaps it's doing a numeric comparison.

And, as predicted by algorithmic analysis, putting the header check code outside the loop is the fastest. But not by much. Probably not enough to worry about if you're using the next two fastest.
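
To tie this back to the question, here is a sketch of the winning technique applied to the original column-extraction code, with 'input.tsv' and 'output.tsv' as placeholder names for whatever INFILE and OUTFILE were opened on:

#!/usr/bin/perl
use strict;
use warnings;

open my $in,  '<', 'input.tsv'  or die "Cannot open input: $!";
open my $out, '>', 'output.tsv' or die "Cannot open output: $!";

my $header = <$in>;    # read and discard the header before the loop

while (<$in>) {
    my @columns = split /\t+/;
    print $out "$columns[0]\t\t$columns[1]\t$columns[2]\t$columns[3]\t$columns[11]\t$columns[12]\t$columns[15]\t$columns[20]\t$columns[21]\n";
}

close $in;
close $out;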

劳资没心,怎么记你 · 2019-03-13 11:51

Using splice seems to be the easiest and cleanest way to me:

open FILE, '<', $ARGV[0] or die "Cannot open $ARGV[0]: $!";
my @file = <FILE>;
splice(@file, 0, 1);

Done. Now your @file array doesn't have the first line anymore.
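
For the record, shift @file; (used in an earlier answer) removes the first element just the same; splice is simply the more general tool if you ever need to drop more than one leading line:

shift @file;            # equivalent to splice(@file, 0, 1)
splice(@file, 0, 3);    # splice generalizes: drop the first three lines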
