Given a file with data like this (i.e. a stores.dat file):
sid|storeNo|latitude|longitude
2|1|-28.03720000|153.42921670
9|2|-33.85090000|151.03274200
What would be a command to output the number of column names?
i.e. in the example above it would be 4 (the number of pipe characters in the first line, plus 1).
I was thinking something like:
awk '{ FS = "|" } ; { print NF}' stores.dat
but it prints a count for every line instead of just the first, and for the first line it prints 1 instead of 4.
Proper pure bash way
Under bash, you could simply:
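A minimal sketch of such a command, assuming the header line is read into an array named headline, split on the pipe:

IFS='|' read -r -a headline < stores.dat
echo ${#headline[@]}    # prints 4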
This is a lot quicker, since it forks no subprocess, and it is reusable: the headline array holds the full header line, so you can pull out individual column names (see the sketch below). Note: this syntax correctly handles spaces and other special characters in the column names.
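For instance, assuming the headline array populated in the sketch above:

echo "${headline[1]}"    # prints the second column name: storeNo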
Alternative: robustly check the maximum number of columns across all rows
What if some rows contain extra columns?
This command searches for the line containing the most separators:
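A sketch of one way to do this, using tr to delete everything except pipes and newlines, then GNU wc -L (a GNU extension, not portable) to report the length of the longest remaining line, which equals the maximum separator count:

tr -dc '|\n' < stores.dat | wc -L    # prints 3 for the sample file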
There are at most 3 separators, hence 4 fields.
This is usually what I use for counting the number of fields:
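One common pattern matching this description (an assumption, as the original one-liner is not shown) is to turn the first line's separators into newlines and count the lines:

head -1 stores.dat | tr '|' '\n' | wc -l    # prints 4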
Perl solution similar to Mat's awk solution:
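A sketch of such a one-liner, autosplitting on the pipe with -F (the pipe must be escaped in the pattern) and exiting after the first line:

perl -F'\|' -lane 'print scalar @F; exit' stores.dat    # prints 4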
I've tested this on a file with 1000000 columns.
If the field separator is whitespace (one or more spaces or tabs) instead of a pipe:
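awk's default field separator already splits on runs of blanks and tabs, so presumably something like the following, where file stands in for a whitespace-delimited data file:

awk '{ print NF; exit }' file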
Based on Cat Kerr's response. This command works on Solaris.
You may try:
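Presumably an awk one-liner along these lines (the exact command is an assumption); on Solaris, nawk or /usr/xpg4/bin/awk is generally safer than the legacy /usr/bin/awk:

nawk -F'|' '{ print NF; exit }' stores.dat    # prints 4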
Just quit right after the first line.