Fastest way to sum the file sizes by owner in a directory

Posted 2019-07-09 03:34

I'm using the command below (through an alias) to print the sum of all file sizes by owner in a directory:

ls -l $dir | awk ' NF>3 { file[$3]+=$5 } \
END { for( i in file) { ss=file[i]; \
if(ss >=1024*1024*1024 ) {size=ss/1024/1024/1024; unit="G"} else \
if(ss>=1024*1024) {size=ss/1024/1024; unit="M"} else {size=ss/1024; unit="K"}; \
format="%.2f%s"; res=sprintf(format,size,unit); \
printf "%-8s %12d\t%s\n",res,file[i],i }}' | sort -k2 -nr

but it doesn't always seem to be fast.

Is it possible to get the same output in some other way, but faster?

Tags: linux shell perl
6 Answers
三岁会撩人 · 2019-07-09 04:07

Parsing output from ls - bad idea.

How about using find instead?

  • start in directory ${dir}
    • limit to that directory level (-maxdepth 1)
    • limit to files (-type f)
    • print a line with user name and file size in bytes (-printf "%u %s\n")
  • run the results through a perl filter
    • split each line (-a)
    • add to a hash under key (field 0) the size (field 1)
    • at the end (END {...}) print out the hash contents, sorted by key, i.e. user name
$ find ${dir} -maxdepth 1 -type f -printf "%u %s\n" | \
     perl -ane '$s{$F[0]} += $F[1]; END { print "$_ $s{$_}\n" foreach (sort keys %s); }'
stefanb 263305714
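
If the question's original output format matters (human-readable size, raw byte count, owner, sorted by total size descending), the same find output can feed a slightly longer Perl filter. A minimal, unbenchmarked sketch, mirroring the question's printf format:

find ${dir} -maxdepth 1 -type f -printf "%u %s\n" | \
    perl -ane '$s{$F[0]} += $F[1];
        END {
            for (sort { $s{$b} <=> $s{$a} } keys %s) {
                ($v, $u) = ($s{$_} / 1024, "K");            # start in KB, like the awk version
                for $p ("M", "G") { last if $v < 1024; $v /= 1024; $u = $p }
                printf "%-8s %12d\t%s\n", sprintf("%.2f%s", $v, $u), $s{$_}, $_;
            }
        }'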

A solution using Perl:

#!/usr/bin/perl
use strict;
use warnings;
use autodie;

use File::Spec;

my %users;
foreach my $dir (@ARGV) {
    opendir(my $dh, $dir);

    # files in this directory
    while (my $entry = readdir($dh)) {
        my $file = File::Spec->catfile($dir, $entry);

        # only files
        if (-f $file) {
            my($uid, $size) = (stat($file))[4, 7];
            $users{$uid} += $size
        }
    }

    closedir($dh);
}

print "$_ $users{$_}\n" foreach (sort keys %users);

exit 0;

Test run:

$ perl dummy.pl .
1000 263618544

Interesting difference. The Perl solution discovers 3 more files in my test directory than the find solution. I have to ponder why that is...
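
One plausible explanation, not verified here: -f follows symlinks, so the readdir loop counts a symlink that points at a regular file, while find -type f skips the link itself. A quick, hypothetical way to list such entries:

find ${dir} -maxdepth 1 -type l | perl -nle 'print if -f'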

做自己的国王 · 2019-07-09 04:11

Get a listing, add up sizes, and sort it by owner (with Perl)

perl -wE'
    chdir (shift // "."); 
    for (glob ".* *") { 
        next if not -f;
        ($owner_id, $size) = (stat)[4,7]
            or do { warn "Trouble stat for: $_"; next };
        $rept{$owner_id} += $size 
    } 
    say (getpwuid($_)//$_, " => $rept{$_} bytes") for sort keys %rept
'

I didn't get to benchmark it, and it would be worth trying it against an approach that iterates over the directory with readdir, as opposed to glob-ing it (though I found glob much faster in a related problem).

I expect good runtimes in comparison with ls, which slows down dramatically once the file list in a single directory gets long. That slowdown comes from the system, so Perl is affected as well, but as far as I recall it handles it far better. However, I've seen a dramatic slowdown only once entries reach half a million or so, not a few thousand, so I am not sure why it runs slow on your system.
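
For the comparison suggested above, here is a readdir-based variant of the same one-liner (an untested sketch with the same single-directory, files-only scope):

perl -wE'
    $dir = shift // ".";
    opendir my $dh, $dir or die "Cannot open $dir: $!";
    while (defined( $e = readdir $dh )) {
        next if not -f "$dir/$e";                 # regular files only; this does the stat
        ($owner_id, $size) = (stat _)[4,7];       # reuse the stat buffer from the -f test
        $rept{$owner_id} += $size;
    }
    say getpwuid($_)//$_, " => $rept{$_} bytes" for sort keys %rept;
'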

If this need be recursive then use File::Find. For example

perl -MFile::Find -wE'
    $dir = shift // "."; 
    find( sub { 
        return if not -f;
        ($owner_id, $size) = (stat)[4,7] 
            or do { warn "Trouble stat for: $_"; return }; 
        $rept{$owner_id} += $size 
    }, $dir ); 
    say (getpwuid($_)//$_, " => $rept{$_} bytes") for keys %rept
'

This scans a directory with 2.4 GB of mostly small files, spread over a hierarchy of subdirectories, in a little over 2 seconds. The du -sh took around 5 seconds (the first time round).


It is reasonable to bring these two into one script

use warnings;
use strict;
use feature 'say';    
use File::Find;
use Getopt::Long;

my %rept;    
sub get_sizes {
    return if not -f; 
    my ($owner_id, $size) = (stat)[4,7] 
        or do { warn "Trouble stat for: $_"; return };
    $rept{$owner_id} += $size 
}

my ($dir, $recurse) = ('.', '');
GetOptions('recursive|r!' => \$recurse, 'directory|d=s' => \$dir)
    or die "Usage: $0 [--recursive] [--directory dirname]\n";

($recurse) 
    ? find( { wanted => \&get_sizes }, $dir )
    : find( { wanted => \&get_sizes, 
              preprocess => sub { return grep { -f } @_ } }, $dir );

say (getpwuid($_)//$_, " => $rept{$_} bytes") for keys %rept;

I find this to perform about the same as the one-dir-only code above, when run non-recursively (default as it stands).
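
A hypothetical invocation, assuming the script is saved as sizes_by_owner.pl (the -d and -r options come from the GetOptions spec above):

perl sizes_by_owner.pl -d "$dir"        # default: top-level files only
perl sizes_by_owner.pl -d "$dir" -r     # recurse into subdirectories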

Note that the File::Find::Rule interface has many conveniences but is slower in some important use cases, which clearly matters here. (That analysis should be redone since it's a few years old.)

走好不送 · 2019-07-09 04:18

Using datamash (and Stefan Becker's find code): -s sorts the input, -g 1 groups by the first field (the user name), and sum 2 totals the second field (the size in bytes); the tab from -printf matches datamash's default field separator.

find ${dir} -maxdepth 1 -type f -printf "%u\t%s\n" | datamash -sg 1 sum 2
Luminary・发光体 · 2019-07-09 04:22

Did I see some awk in the OP? Here is one in GNU awk using the filefuncs extension:

$ cat bar.awk
@load "filefuncs"
BEGIN {
    FS=":"                                     # passwd field sep
    passwd="/etc/passwd"                       # get usernames from passwd
    while ((getline < passwd)>0)
        users[$3]=$1
    close(passwd)                              # close passwd

    if(path=="")                               # set path with -v path=...
        path="."                               # default path is cwd
    pathlist[1]=path                           # path from the command line
                                               # you could have several paths
    fts(pathlist,FTS_PHYSICAL,filedata)        # don't follow symlinks (vs. FTS_LOGICAL)
    for(p in filedata)                         # p for paths
        for(f in filedata[p])                  # f for files
            if(filedata[p][f]["stat"]["type"]=="file")      # mind files only
                size[filedata[p][f]["stat"]["uid"]]+=filedata[p][f]["stat"]["size"]
    for(i in size)
        print (users[i]?users[i]:i),size[i]    # print username if found else uid
    exit
}

Sample outputs:

$ ls -l
total 3623
drwxr-xr-x 2 james james  3690496 Mar 21 21:32 100kfiles/
-rw-r--r-- 1 root  root         4 Mar 21 18:52 bar
-rw-r--r-- 1 james james      424 Mar 21 21:33 bar.awk
-rw-r--r-- 1 james james      546 Mar 21 21:19 bar.awk~
-rw-r--r-- 1 james james      315 Mar 21 19:14 foo.awk
-rw-r--r-- 1 james james      125 Mar 21 18:53 foo.awk~
$ awk -v path=. -f bar.awk
root 4
james 1410

Another:

$ time awk -v path=100kfiles -f bar.awk
root 4
james 342439926

real    0m1.289s
user    0m0.852s
sys     0m0.440s

Yet another test with a million empty files:

$ time awk -v path=../million_files -f bar.awk

real    0m5.057s
user    0m4.000s
sys     0m1.056s
闹够了就滚 · 2019-07-09 04:25

Another Perl one, which displays the total sizes sorted by user:

#!/usr/bin/perl
use warnings;
use strict;
use autodie;
use feature qw/say/;
use File::Spec;
use Fcntl qw/:mode/;

my $dir = shift;
my %users;

opendir(my $d, $dir);
while (my $file = readdir $d) {
  my $filename = File::Spec->catfile($dir, $file);
  my ($mode, $uid, $size) = (stat $filename)[2, 4, 7];
  $users{$uid} += $size if S_ISREG($mode);
}
closedir $d;

my @sizes = sort { $a->[0] cmp $b->[0] }
  map { [ getpwuid($_) // $_, $users{$_} ] } keys %users;
local $, = "\t";
say @$_ for @sizes;
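
If sorting by total size descending (as in the question's sort -k2 -nr) is preferred over sorting by user name, only the comparator needs to change; a small tweak of the code above:

my @sizes = sort { $b->[1] <=> $a->[1] }
  map { [ getpwuid($_) // $_, $users{$_} ] } keys %users;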
We Are One · 2019-07-09 04:25

Not sure why the question is tagged perl when awk is being used.

Here's a simple perl version:

#!/usr/bin/perl

chdir($ARGV[0]) or die("Usage: $0 dir\n");

map {
    if ( ! m/^[.][.]?$/o ) {
        ($s,$u) = (stat)[7,4];
        $h{$u} += $s;
    }
} glob ".* *";

map {
    $s = $h{$_};
    $u = !( $s      >>10) ? ""
       : !(($s>>=10)>>10) ? "k"
       : !(($s>>=10)>>10) ? "M"
       : !(($s>>=10)>>10) ? "G"
       :   ($s>>=10)      ? "T"
       :                    undef
       ;
    printf "%-8s %12d\t%s\n", $s.$u, $h{$_}, getpwuid($_)//$_;
} keys %h;

  • glob gets our file list
  • m// discards . and ..
  • stat the size and uid
  • accumulate sizes in %h
  • compute the unit by bitshifting (>>10 is integer divide by 1024)
  • map uid to username (// provides fallback)
  • print results (unsorted)
  • NOTE: unlike some other answers, this code doesn't recurse into subdirectories

To exclude symlinks, subdirectories, etc., change the if to appropriate -X file tests (e.g. -f $_, or !-d $_ and !-l $_). See the Perl docs on the _ filehandle optimisation for caching stat results; a sketch of that follows below.
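
A hypothetical version of the first map with that change applied: -f performs the stat, and stat _ reuses the cached result via the special _ filehandle instead of stat-ing the file a second time.

map {
    if ( -f $_ ) {                  # regular files only (follows symlinks); this does the stat()
        ($s,$u) = (stat _)[7,4];    # reuse the cached stat via the special _ filehandle
        $h{$u} += $s;
    }
} glob ".* *";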
