I have a data file with many thousands of columns and rows. I want to delete the first column, which is in fact the row counter. I used this command in Linux:
cut -d " " -f 2- input.txt > output.txt
but nothing changed in my output. Does anybody know why it does not work and what I should do?
This is what my input file looks like:
col1 col2 col3 col4 ...
1 0 0 0 1
2 0 1 0 1
3 0 1 0 0
4 0 0 0 0
5 0 1 1 1
6 1 1 1 0
7 1 0 0 0
8 0 0 0 0
9 1 0 0 0
10 1 1 1 1
11 0 0 0 1
.
.
.
I want my output to look like this:
col1 col2 col3 col4 ...
0 0 0 1
0 1 0 1
0 1 0 0
0 0 0 0
0 1 1 1
1 1 1 0
1 0 0 0
0 0 0 0
1 0 0 0
1 1 1 1
0 0 0 1
.
.
.
I also tried the sed command:
sed '1d' input.file > output.file
But it deletes the first row, not the first column.
Could anybody guide me?
@Karafka I had CSV files, so I added the "," separator (you can replace it with yours). Then I used a loop to go over all the files inside the directory.
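A minimal sketch of such a loop, assuming the cut --complement form from the answer below; the cleaned_ output prefix is just illustrative:
for f in *.csv; do
    # drop the first (counter) column of each CSV file
    cut -d "," -f 1 --complement "$f" > "cleaned_$f"
done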
You can use the cut command with the --complement option; this will output all columns except the first one.
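A sketch of that command, assuming a single-space delimiter and the file names from your attempt:
cut -d " " -f 1 --complement input.txt > output.txt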
A more idiomatic use of cut is possible if your delimiter is tab ("\t").
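A sketch of that form (tab is cut's default delimiter, so -d can be omitted):
cut -f 2- input.txt > output.txt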
Or, simply with awk magic (this works for both the space and the tab delimiter): the first awk in the pipeline below deletes field 1 but leaves a delimiter behind, and the second awk removes that delimiter. The default output delimiter is space; if you want to change it to tab, add -vOFS="\t" to the second awk.
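A sketch of that two-step pipeline, again with the file names from your question:
awk '{$1=""}1' input.txt | awk '{$1=$1}1' > output.txt
and, with tab as the output delimiter:
awk '{$1=""}1' input.txt | awk -vOFS="\t" '{$1=$1}1' > output.txt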
UPDATED
Based on your updated input, the problem is the initial spaces, which cut treats as multiple columns. One way to address this is to remove them first before feeding the data to cut:
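For example, a sketch that strips the leading spaces with sed:
sed 's/^ *//' input.txt | cut -d " " -f 2- > output.txt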
Or use the awk alternative above, which will work in this case as well.