I have a server access log with timestamps of each HTTP request, and I'd like to obtain a count of the number of requests at each second. Using sed and cut -c, so far I've managed to cut the file down to just the timestamps, such as:
22-Sep-2008 20:00:21 +0000
22-Sep-2008 20:00:22 +0000
22-Sep-2008 20:00:22 +0000
22-Sep-2008 20:00:22 +0000
22-Sep-2008 20:00:24 +0000
22-Sep-2008 20:00:24 +0000
What I'd love to get is the number of times each unique timestamp appears in the file. For example, with the above input, I'd like output that looks like:
22-Sep-2008 20:00:21 +0000: 1
22-Sep-2008 20:00:22 +0000: 3
22-Sep-2008 20:00:24 +0000: 2
I've used sort -u to filter the list of timestamps down to a list of unique tokens, hoping that I could use grep like
grep -c -f <file containing patterns> <file>
but this just produces a single line with the grand total of matching lines.
I know this can be done in a single line, stringing a few utilities together ... but I can't think of which. Anyone know?
I think you're looking for uniq -c.
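A minimal sketch, assuming the timestamps are already in order as in your sample, since uniq only collapses adjacent duplicates (timestamps.txt is a placeholder for your file of extracted timestamps):
uniq -c timestamps.txt
This prints each unique line preceded by its count, so the count comes first rather than at the end.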
Using awk:
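A sketch of the usual approach, tallying each line in an associative array (again using timestamps.txt as a placeholder name):
awk '{ count[$0]++ } END { for (t in count) print count[t], t }' timestamps.txt
Unlike uniq, this doesn't require the input to be sorted.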
Just in case you want the output in the format you originally specified (with the number of occurrences at the end):
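A variant of the same one-liner that prints the timestamp first, then a colon and the count, matching your example output:
awk '{ count[$0]++ } END { for (t in count) print t ": " count[t] }' timestamps.txt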
Tom's solution works more generally.
My file was not sorted, so identical timestamps didn't follow each other, and uniq -c does not work: it emits a separate count for each run of adjacent duplicates. With the awk script, however, the counts come out right regardless of ordering.
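If you'd rather stick with uniq, sorting first makes the duplicates adjacent; a minimal sketch, again assuming the placeholder file timestamps.txt:
sort timestamps.txt | uniq -c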
Using AWK with associative arrays might be another solution to something like this.
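Much the same idea as the one-liners above, written out as a standalone script for readability; a sketch, where the filename count.awk is hypothetical:
#!/usr/bin/awk -f
# Tally every distinct input line in an associative array.
{ seen[$0]++ }
# After the last line, print each timestamp with its count.
END { for (t in seen) printf "%s: %d\n", t, seen[t] }
Run it as awk -f count.awk timestamps.txt.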
Maybe use xargs? I can't put it all together in my head on the spot here, but use xargs on your sort -u output so that for each unique second you can grep the original file and do a wc -l to get the number.
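A sketch of that idea, with grep -c standing in for grep | wc -l. Because each timestamp contains spaces, xargs needs -I so it splits its input on newlines rather than blanks; note this rescans the file once per unique timestamp, so it's slower than the awk approach (timestamps.txt is again a placeholder):
sort -u timestamps.txt | xargs -I{} sh -c 'echo "{}: $(grep -cF "{}" timestamps.txt)"'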