I have a command (cmd1) that greps through a log file to filter out a set of numbers. The numbers are in random order, so I use sort -gr to get a reverse sorted list of numbers. There may be duplicates within this sorted list. I need to find the count for each unique number in that list.
For example, if the output of cmd1 is:
100
100
100
99
99
26
25
24
24
I need another command that I can pipe the above output to, so that I get:
100 3
99 2
26 1
25 1
24 2
How about this?
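A sketch, assuming the input arrives already sorted as in the question: uniq -c counts adjacent duplicates, and awk swaps the two fields so the value comes before the count.

    # count adjacent duplicates, then print "value count" instead of "count value"
    cmd1 | uniq -c | awk '{print $2, $1}'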
The result is:
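(for the sample input above, using the sketch)

    100 3
    99 2
    26 1
    25 1
    24 2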
If order is not important, uniq -c works for GNU uniq 8.23 at least, and does exactly what you want (assuming sorted input).

Numerically sort the numbers in reverse, then count the duplicates, then swap the left and the right words. Align into columns.
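A sketch of that pipeline, with cmd1 standing for the grep from the question; uniq -c by itself prints the count before the value, so awk swaps the two fields and column -t aligns them:

    # reverse numeric sort (harmless if already sorted), count adjacent
    # duplicates, swap to "value count", and align the columns
    cmd1 | sort -gr | uniq -c | awk '{print $2, $1}' | column -t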
In Bash, we can use an associative array to count instances of each input value. Assume we have the command cmd1, e.g. something like the stand-in sketched below.
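Purely for illustration (the real cmd1 is the grep and sort -gr pipeline over the log file), a hypothetical stand-in that just emits the sample numbers:

    # hypothetical stand-in for cmd1, so the sketches below are runnable
    cmd1() { printf '%s\n' 100 100 100 99 99 26 25 24 24; }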
Then we can count values in the array variable a using the ++ mathematical operator on the relevant array entries:
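A minimal sketch of the counting loop, assuming bash 4 or later for declare -A:

    declare -A a                  # associative array: number -> count
    while read -r n; do
        (( a[$n]++ ))             # ++ on the entry for this number
    done < <(cmd1)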
We can print the resulting values:
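For example (the iteration order over the keys is unspecified):

    for n in "${!a[@]}"; do       # iterate over the unique numbers (the keys)
        printf '%s %s\n' "$n" "${a[$n]}"
    done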
If the order of output is important, we might need an external sort of the keys:
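For instance, reusing the loop above and piping it through sort (a sketch; sort -gr gives the reverse numeric order used in the question):

    for n in "${!a[@]}"; do
        printf '%s %s\n' "$n" "${a[$n]}"
    done | sort -gr               # reverse numeric order, largest first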