How to sort a 3G bytes access log file?

Published 2019-09-12 10:25

Hi all: I have a 3 GB Tomcat access log named urls, in which each line is a URL. I want to count the occurrences of each URL and sort the URLs by their counts. I did it this way:

awk '{print $0}' urls | sort | uniq -c | sort -nr >> output

But it is taking a really long time to finish: it has already been running for 30 minutes and is still going. The log file looks like this:

/open_api/borrow_business/get_apply_by_user
/open_api/borrow_business/get_apply_by_user
/open_api/borrow_business/get_apply_by_user
/open_api/borrow_business/get_apply_by_user
/loan/recent_apply_info?passportId=Y20151206000011745
/loan/recent_apply_info?passportId=Y20160331000000423
/open_api/borrow_business/get_apply_by_user
...

Is there any other way that I could process and sort a 3G bytes file? Thanks in advance!

Tags: shell sorting

2 Answers
干净又极端
#2 · 2019-09-12 10:40

I'm not sure why you're using awk at the moment: awk '{print $0}' just copies every line through unchanged, so it adds a pipeline stage without doing any work.

I would suggest using something like this:

awk '{ ++urls[$0] } END { for (i in urls) print urls[i], i }' urls | sort -nr

This builds up a count of each URL in an awk array and then sorts only the distinct URLs with their counts, so the expensive sort runs on the small set of distinct URLs instead of the full 3 GB of input.
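If the list of distinct URLs is itself large, GNU sort can also be tuned. A sketch combining the counting approach with GNU coreutils sort's --parallel and -S (buffer size) options; the thread count and buffer size here are illustrative assumptions, not recommendations:

```shell
# Count occurrences in awk, then sort the (much smaller) distinct list.
# --parallel and -S are GNU coreutils sort options; adjust 4 threads and
# the 1G buffer to your machine.
awk '{ ++urls[$0] } END { for (u in urls) print urls[u], u }' urls \
  | sort -nr --parallel=4 -S 1G > output
```

On non-GNU sort implementations (e.g. BSD/macOS), these options may be unavailable and can simply be dropped.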

forever°为你锁心
#3 · 2019-09-12 11:01

I generated a sample file of 3,200,000 lines, amounting to 3GB, using Perl like this:

perl -e 'for($i=0;$i<3200000;$i++){printf "%d, %s\n",int rand 1000, "0"x1000}' > BigBoy

I then tried sorting it in one step; then splitting it into 2 halves, sorting the halves in parallel, and merging the results; then the same with 4 parts; and finally with 8 parts.

This resulted, on my machine at least, in a very significant speedup.
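The idea in miniature, on a 20-line file (the names input, parts-, and merged are illustrative):

```shell
# Toy version of split/sort/merge.
seq 20 | shuf > input                # 20 numbers in random order
split -l 5 input parts-              # split into 4 chunks of 5 lines each
for f in parts-*; do
   sort -n "$f" > "sorted-$f" &      # sort each chunk in parallel
done
wait                                 # wait for all background sorts
sort -n -m sorted-parts-* > merged   # -m merges already-sorted files
```

The merge step with sort -m is cheap because each input is already sorted; only the per-chunk sorts do real work, and they run concurrently.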


Here is the script. The filename is hard-coded as BigBoy but can easily be changed; the number of parts to split the file into must be supplied as a parameter (it defaults to 1, i.e. a straightforward single sort).

#!/bin/bash -xv
################################################################################
# Sort large file by parts and merge result
#
# Generate sample large (3GB with 3,200,000 lines) file with:
# perl -e 'for($i=0;$i<3200000;$i++){printf "%d, %s\n",int rand 1000, "0"x1000}' > BigBoy
################################################################################
file=BigBoy
N=${1:-1}                                 # number of parts to split into
echo "$N"
if [ "$N" -eq 1 ]; then
   # Straightforward single-pass sort
   sort -n "$file" > "sorted.$N"
else
   rm sortedparts-* parts-* 2> /dev/null  # clean up any previous run
   tlines=$(wc -l < "$file")              # total lines in the file
   echo "$tlines"
   ((plines=tlines/N))                    # lines per part
   echo "$plines"
   split -l "$plines" "$file" parts-
   for f in parts-*; do
      sort -n "$f" > "sortedparts-$f" &   # sort each part in parallel
   done
   wait
   sort -n -m sortedparts-* > "sorted.$N" # -m merges already-sorted parts
fi

Needless to say, the resulting sorted files are identical :-)
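That claim is easy to check with cmp, which exits non-zero at the first differing byte (sorted.1 and sorted.4 are the files the script writes for 1-part and 4-part runs):

```shell
# Compare the single-pass result with the 4-part result, byte for byte.
cmp sorted.1 sorted.4 && echo "identical"
```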
