I have a text file with a large amount of tab-delimited data. I want to have a look at the data in such a way that I can see the unique values in a column. For example,
Red Ball 1 Sold
Blue Bat 5 OnSale
...............
So, it's like the first column has colors, so I want to know how many different unique values there are in that column, and I want to be able to do that for each column.
I need to do this on the Linux command line, so probably using a bash script, sed, awk or something like that.
Addendum: Thanks everyone for the help. Can I ask one more thing? What if I wanted a count of these unique values as well?
I guess I didn't put the second part clearly enough. What I want is a count of "each" of these unique values, not just how many unique values there are. For instance, in the first column I want to know how many Red, Blue, Green, etc. coloured objects there are.
This script outputs the number of unique values in each column of a given file. It assumes that the first line of the given file is a header line. There is no need to define the number of fields. Simply save the script as a bash file (.sh) and provide the tab-delimited file as a parameter to it.
Code
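The original script is not reproduced here; a minimal sketch with the behaviour described above might look like this (the logic, variable names and output format are my own, not the answer's):

```bash
#!/usr/bin/env bash
# Sketch only: for every column of a tab-delimited file with a header line,
# print the column name and the number of distinct values below the header.

file="$1"

# Number of tab-separated fields, taken from the header line.
ncols=$(head -n 1 "$file" | awk -F'\t' '{ print NF }')

for ((i = 1; i <= ncols; i++)); do
    name=$(head -n 1 "$file" | cut -f"$i")
    # Skip the header line, then count the distinct values in column i.
    count=$(tail -n +2 "$file" | cut -f"$i" | sort -u | wc -l)
    echo "$name: $count unique values"
done
```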
Execution Example:
bash> ./script.sh <path to tab-delimited file>
Output Example
You can make use of the cut, sort and uniq commands as follows: the pipeline below gets the unique values in field 1; replacing 1 with 2 will give you the unique values in field 2.
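Assuming the data lives in a file called data.tsv (the file name is only for illustration), the pipeline would look something like:

```bash
cut -f1 data.tsv | sort | uniq
```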
Avoiding UUOC (a useless use of cat) :)
EDIT:
To count the number of unique occurrences, you can add the wc command to the chain.
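Again assuming the file is called data.tsv (an illustrative name), the counting version of the pipeline might be:

```bash
cut -f1 data.tsv | sort | uniq | wc -l
```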
Here is a bash script that fully answers the (revised) original question. That is, given any .tsv file, it provides a synopsis for each of the columns in turn. Apart from bash itself, it only uses standard *ix/Mac tools: sed, tr, wc, cut, sort and uniq.
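The script itself is not shown here; a rough sketch of that idea, restricted to the tools listed above, could look like this (the per-column output format is my own guess):

```bash
#!/usr/bin/env bash
# Sketch only: for every column of a tab-delimited file, print a count of
# each distinct value appearing in that column.

file="$1"

# Number of columns = number of tabs on the first line + 1.
ncols=$(( $(sed -n '1p' "$file" | tr -cd '\t' | wc -c) + 1 ))

for ((i = 1; i <= ncols; i++)); do
    echo "=== column $i ==="
    cut -f"$i" "$file" | sort | uniq -c | sort -rn
done
```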
You can use awk, sort & uniq to do this; for example, to list all the unique values in the first column.
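One plausible form of that command (with data.tsv again standing in for your file name):

```bash
awk -F'\t' '{ print $1 }' data.tsv | sort | uniq
```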
As posted elsewhere, if you want to count the number of instances of something, you can pipe the unique list into wc -l.
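Building on the awk example above (again with an illustrative file name), that looks like:

```bash
awk -F'\t' '{ print $1 }' data.tsv | sort | uniq | wc -l
```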