Calculate word occurrences from a file in bash

Published 2019-02-10 21:58

Question:

I'm sorry for the very noob question, but I'm kind of new to bash programming (started a few days ago). Basically, what I want to do is keep one file with the word-occurrence counts of another file.

I know I can do this:

sort | uniq -c | sort
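On a tiny throwaway file (words.txt is just an illustrative name), that pipeline behaves like this:

```shell
# build a small sample file
printf 'apple\nbanana\napple\ncherry\nbanana\napple\n' > words.txt

# sort groups identical words together, uniq -c counts each run,
# and the final sort -n orders the result by ascending count
sort words.txt | uniq -c | sort -n
# cherry appears once, banana twice, apple three times
```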

The thing is that after that I want to take a second file, calculate the occurrences again, and update the first one. Then I take a third file, and so on.

What I'm doing at the moment works without any problem (I'm using grep, sed and awk), but it is pretty slow.

I'm pretty sure there is a very efficient way with just a command or two, using uniq, but I can't figure it out.

Could you please lead me to the right way?

I'm also pasting the code I wrote:

#!/bin/bash
#   counts the word occurrences in a file and writes them to another file  #
#   the words are listed from the most frequent to the least frequent     #

touch .check                # used to check the occurrences. Temporary file
touch distribution.txt      # final file with all the occurrences calculated

page=$1                 # the file whose words I'm counting
occurrences=$2          # temporary file for the occurrences

# takes all the words from the file $page, one per line, lowercased
tr -cs A-Za-z\' '\n' < "$page" | tr A-Z a-z > .check

# loop to update the old file with the new information
# basically I check word by word and add each one to the old file as an update
while read -r word
do
    strlen=${#word}     # word's length
    # I use a blacklist to skip banned words (for example very small or
    # insignificant ones, like articles and prepositions)
    if ! grep -Fxq "$word" .blacklist && [ "$strlen" -gt 2 ]
    then
        # if the word was never found before, write it with 1 occurrence
        if [ "$(grep -c -i "^$word: " "$occurrences")" -eq 0 ]
        then
            echo "$word: 1" >> "$occurrences"
        # else increment its count
        else
            old=$(awk -v word="$word" -F": " '$1==word { print $2 }' "$occurrences")
            new=$((old + 1))
            sed -i "s/^$word: $old$/$word: $new/" "$occurrences"
        fi
    fi
done < .check

rm .check

# finally, order the words by occurrence count
awk -F": " '{print $2" "$1}' "$occurrences" | sort -rn | awk '{print $2": "$1}' > distribution.txt
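For comparison, the per-word loop above can usually be collapsed into a single pipeline; this is only a sketch, assuming a `.blacklist` file exists with one banned word per line and the same tokenization as the script:

```shell
#!/bin/bash
# one-pass version: tokenize, lowercase, drop short and blacklisted words,
# then count and sort -- no per-word grep/sed calls
page=$1

tr -cs A-Za-z\' '\n' < "$page" \
  | tr A-Z a-z \
  | awk 'length($0) > 2' \
  | grep -vFxf .blacklist \
  | sort | uniq -c | sort -rn \
  | awk '{print $2": "$1}' > distribution.txt
```

The speedup comes from doing one sort over all the words instead of launching grep, awk and sed once per word.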

Answer 1:

Well, I'm not sure I've fully understood what you are trying to do, but I would do it this way:

while read -r file
do
  tr -cs A-Za-z\' '\n' < "$file" | tr A-Z a-z | sort | uniq -c > "stat.$file"
done < file-list

Now you have statistics for all your files, and you simply aggregate them:

while read -r file
do
  cat "stat.$file"
done < file-list \
| sort -k2 \
| awk '{if ($2!=prev) {print s" "prev; s=0;}s+=$1;prev=$2;}END{print s" "prev;}'
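To see what that awk aggregation does in isolation, here is a tiny synthetic example (stat.a and stat.b are made-up file names):

```shell
# two per-file count lists in "count word" format, as produced by uniq -c
printf '2 apple\n1 pear\n' > stat.a
printf '3 apple\n1 plum\n' > stat.b

# concatenate, sort so equal words are adjacent, then sum counts per word
cat stat.a stat.b \
  | sort -k2 \
  | awk '{if ($2!=prev) {print s" "prev; s=0;} s+=$1; prev=$2;} END{print s" "prev;}'
```

Note that the awk script emits one spurious near-empty line for the very first record (s and prev are still unset there); it is harmless once the output goes through a numeric sort, as in the full command.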

Example of usage:

$ for i in ls bash cp; do man $i > $i.txt ; done
$ cat <<EOF > file-list
> ls.txt
> bash.txt
> cp.txt
> EOF

$ while read file; do
> cat $file | tr -cs A-Za-z\' '\n'| tr A-Z a-z | sort | uniq -c > stat.$file
> done < file-list

$ while read file
> do
>   cat stat.$file
> done < file-list \
> | sort -k2 \
> | awk '{if ($2!=prev) {print s" "prev; s=0;}s+=$1;prev=$2;}END{print s" "prev;}' | sort -rn | head

3875 the
1671 is
1137 to
1118 a
1072 of
793 if
744 and
533 command
514 in
507 shell