unix - count of columns in file

Posted 2020-02-16 06:00

Question:

Given a file with data like this (i.e. stores.dat file)

sid|storeNo|latitude|longitude
2|1|-28.03720000|153.42921670
9|2|-33.85090000|151.03274200

What would be a command to output the number of column names?

i.e. in the example above it would be 4 (the number of pipe characters in the first line, plus 1).

I was thinking something like:

awk '{ FS = "|" } ; { print NF}' stores.dat

but it prints a count for every line instead of just the first, and for the first line it returns 1 instead of 4.
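The first line reports 1 because FS is assigned inside the action block, after the first record has already been split using the default whitespace separator; setting FS in a BEGIN block (or via -F, as the answers below do) avoids this:

awk 'BEGIN { FS = "|" } { print NF; exit }' stores.dat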

Answer 1:

awk -F'|' '{print NF; exit}' stores.dat 

Just quit right after the first line.
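For the sample file this gives:

$ awk -F'|' '{print NF; exit}' stores.dat
4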



Answer 2:

This is a workaround (for me, since I don't use awk very often):

Display the first row of the file containing the data, replace all pipes with newlines, and then count the lines:

$ head -1 stores.dat | tr '|' '\n' | wc -l
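The intermediate output that wc -l then counts looks like this:

$ head -1 stores.dat | tr '|' '\n'
sid
storeNo
latitude
longitude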


Answer 3:

Unless the fields themselves contain spaces, you should be able to use | wc -w on the first line, once the pipe delimiters have been translated to whitespace.

wc is "word count", which simply counts the whitespace-separated words in its input. If you send it only one line, it tells you the number of columns.
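For the sample file that could look like this, with tr swapping the pipes for spaces before wc -w counts the words:

head -1 stores.dat | tr '|' ' ' | wc -w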



Answer 4:

You could try

cat FILE | awk '{print NF}'
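As written, this uses awk's default whitespace separator and prints a count for every line; for the pipe-delimited sample you would pass the separator explicitly, e.g.:

cat stores.dat | awk -F'|' '{print NF}'

which prints 4 once per input line (add exit after the print to stop at the first line, as in Answer 1).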



Answer 5:

A Perl solution similar to Mat's awk solution:

perl -F'\|' -lane 'print $#F+1; exit' stores.dat

I've tested this on a file with 1000000 columns.
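Run against the sample file, it prints the field count of the first line and stops:

$ perl -F'\|' -lane 'print $#F+1; exit' stores.dat
4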


If the field separator is whitespace (one or more spaces or tabs) instead of a pipe:

perl -lane 'print $#F+1; exit' stores.dat


Answer 6:

If you have Python installed you could try:

python -c 'import sys;f=open(sys.argv[1]);print(len(f.readline().split("|")))' \
    stores.dat
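A variant (an assumption, not part of the original answer) using Python's csv module, which also copes with quoted fields that contain the delimiter:

python -c 'import csv,sys;print(len(next(csv.reader(open(sys.argv[1]),delimiter="|"))))' \
    stores.dat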


Answer 7:

This is usually what I use for counting the number of fields:

head -n 1 file.name | awk -F'|' '{print NF; exit}'


Answer 8:

Based on Cat Kerr's response. This command works on Solaris, though note it uses awk's default whitespace separator; for the pipe-delimited sample above you would still add -F'|':

awk '{print NF; exit}' stores.dat


Answer 9:

You may try the following, noting that it counts separators rather than fields, so it prints 3 for the sample file (add 1 for the number of columns):

head -1 stores.dat | grep -o \|  | wc -l
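To get the column count directly (a small sketch on top of that answer), add 1 with shell arithmetic:

echo $(( $(head -1 stores.dat | grep -o '|' | wc -l) + 1 ))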


Answer 10:

Select any row in the file (in the example below, the 2nd row) and count the number of columns, assuming the delimiter is a space:

sed -n 2p text_file.dat | tr ' ' '\n' | wc -l
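Adapted to the pipe-delimited stores.dat from the question, the same idea becomes:

sed -n 2p stores.dat | tr '|' '\n' | wc -l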


Answer 11:

A proper pure-bash way

Under bash, you could simply:

IFS=\| read -ra headline <stores.dat
echo ${#headline[@]}
4

This is a lot quicker since it spawns no subprocesses, and it is reusable because the headline array holds the full header line. For example:

printf " - %s\n" "${headline[@]}"
 - sid
 - storeNo
 - latitude
 - longitude

Note: this syntax correctly handles spaces and other special characters in column names.

Alternative: robustly checking for the maximum number of columns across all rows

What if some rows contain extra columns?

This command deletes every character except separators and newlines, then reports the length of the longest remaining line (wc -L is a GNU extension), i.e. the maximum number of separators on any row:

tr -dc $'\n|' <stores.dat |wc -L
3

There are at most 3 separators per line, hence 4 fields.
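To turn that maximum separator count into a field count directly, a small extension of the same idea (still relying on GNU wc -L):

echo $(( $(tr -dc $'\n|' <stores.dat | wc -L) + 1 ))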