How to extract one column from multiple files and paste the columns into one output file

Posted 2020-04-05 07:10

I want to extract the 5th column from multiple files, named in numerical order, and paste those columns side by side, in sequence, into one output file.

The file names look like:

sample_problem1_part1.txt
sample_problem1_part2.txt

sample_problem2_part1.txt
sample_problem2_part2.txt

sample_problem3_part1.txt
sample_problem3_part2.txt
......

Each problem file (1,2,3...) has two parts (part1, part2). Each file has the same number of lines. The content looks like:

sample_problem1_part1.txt
1 1 20 20 1
1 7 21 21 2
3 1 22 22 3
1 5 23 23 4
6 1 24 24 5
2 9 25 25 6
1 0 26 26 7

sample_problem1_part2.txt
1 1 88 88 8
1 1 89 89 9
2 1 90 90 10
1 3 91 91 11
1 1 92 92 12
7 1 93 93 13
1 5 94 94 14

sample_problem2_part1.txt
1 4 330 30 a
3 4 331 31 b
1 4 332 32 c
2 4 333 33 d
1 4 334 34 e
1 4 335 35 f
9 4 336 36 g

The output should look like this (columns in the order problem1_part1, problem1_part2, problem2_part1, problem2_part2, problem3_part1, problem3_part2, etc.):

1 8 a ...
2 9 b ...
3 10 c ...
4 11 d ...
5 12 e ...
6 13 f ...
7 14 g ...

I was using:

 paste sample_problem1_part1.txt sample_problem1_part2.txt > \
     sample_problem1_partall.txt
 paste sample_problem2_part1.txt sample_problem2_part2.txt > \
     sample_problem2_partall.txt
 paste sample_problem3_part1.txt sample_problem3_part2.txt > \
     sample_problem3_partall.txt

And then:

for i in $(find . -name "sample_problem*_partall.txt")
do
    l=$(echo "$i" | sed 's/sample/extracted_col/')
    awk '{print $5, $10}' "$i" > "$l"
done

And:

paste extracted_col_problem1_partall.txt \
      extracted_col_problem2_partall.txt \
      extracted_col_problem3_partall.txt > \
    extracted_col_problemall_partall.txt

It works fine for a few files, but the approach becomes unmanageable when the number of files is large (over 4,000). Could anyone suggest a simpler solution that can handle many files? Thanks!

5 Answers
beautiful° · 2020-04-05 07:36

Here's one way using awk and a sorted glob of files:

awk '{ a[FNR] = (a[FNR] ? a[FNR] FS : "") $5 } END { for(i=1;i<=FNR;i++) print a[i] }' $(ls -1v *)

Results:

1 8 a
2 9 b
3 10 c
4 11 d
5 12 e
6 13 f
7 14 g

Explanation:

  • For each line of each input file:

    • Use the file's line number (FNR) as an array key, with column 5 as the value.

    • (a[FNR] ? a[FNR] FS : "") is a ternary expression that builds up the array's value as a record. It asks whether the file's line number already has an entry in the array. If so, take the existing value followed by the default field separator, then append the fifth column. If not, prepend nothing; the entry simply becomes the fifth column.

  • At the end of the script:

    • Use a C-style loop to iterate through the array, printing each of the array's values. (A usage sketch on the question's files follows below.)
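
A usage sketch against the question's files (assumptions: GNU ls, whose -v flag gives a natural numeric sort so problem10 sorts after problem9; file names containing no whitespace; and extracted_col_all.txt as a placeholder output name):

awk '{ a[FNR] = (a[FNR] ? a[FNR] FS : "") $5 }
     END { for (i = 1; i <= FNR; i++) print a[i] }' \
    $(ls -1v sample_problem*_part*.txt) > extracted_col_all.txt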
手持菜刀,她持情操 · 2020-04-05 07:47

You can feed each awk extraction to paste via process substitution and redirect the result to a new file:

paste <(awk '{print $3}' file1) <(awk '{print $3}' file2) <(awk '{print $3}' file3) > file.txt
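
Adapted to the question's data, a minimal sketch (only the output name extracted_col_all.txt is invented; note that each <(...) must be written out by hand, so this suits only a handful of files):

paste <(awk '{print $5}' sample_problem1_part1.txt) \
      <(awk '{print $5}' sample_problem1_part2.txt) \
      <(awk '{print $5}' sample_problem2_part1.txt) > extracted_col_all.txt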

冷血范 · 2020-04-05 07:50

For only ~4000 files, you should be able to do:

 find . -name 'sample_problem*_part*.txt' | xargs paste

If find is giving names in the wrong order, pipe it to sort:

 find . -name 'sample_problem*_part*.txt' | sort ... | xargs paste
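
A fuller sketch along these lines (assumptions: GNU sort's -V flag for natural version ordering; file names without whitespace; and exactly five columns per input file, as in the samples):

find . -name 'sample_problem*_part*.txt' | sort -V |
    xargs paste |
    awk '{ out = $5; for (i = 10; i <= NF; i += 5) out = out OFS $i; print out }'

Two caveats: paste keeps every input file open at once, so thousands of files can hit the per-process open-file limit, and xargs may split a very long file list across several paste invocations, which would stack the batches vertically instead of side by side.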
成全新的幸福 · 2020-04-05 07:54

# print filenames in sorted order
find -name 'sample*.txt' | sort |
# extract the 5th column from each file and print it on a single line
xargs -n1 -I{} sh -c '{ cut -s -d " " -f 5 "$0" | tr "\n" " "; echo; }' {} |
# transpose; '?' is quoted so the shell doesn't treat it as a glob
python3 transpose.py '?'

where transpose.py:

#!/usr/bin/env python3
"""Write lines from stdin as columns to stdout."""
import sys
from itertools import zip_longest

# optional first argument: the placeholder for missing values
missing_value = sys.argv[1] if len(sys.argv) > 1 else '-'
for row in zip_longest(*[line.split() for line in sys.stdin],
                       fillvalue=missing_value):
    print(" ".join(row))

Output

1 8 a
2 9 b
3 10 c
4 11 d
5 ? e
6 ? f
? ? g

This assumes the first and second files have fewer lines than the third (missing values are replaced by '?').
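
As a quick standalone check of transpose.py (hypothetical two-line input, not taken from the question's files):

$ printf '1 2 3\n4 5\n' | python3 transpose.py '?'
1 4
2 5
3 ?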

【Aperson】 · 2020-04-05 07:56

Try this one. My script assumes that every file has the same number of lines.

# get number of lines
lines=$(wc -l < sample_problem1_part1.txt)

for ((i=1; i<=lines; i++)); do
  for file in sample_problem*; do
    # take line $i and delete everything up to the last space,
    # leaving only the last column, then print it
    # echo -n means that no newline is appended
    echo -n "$(sed -n "${i}s%.* %%p" "$file") "
  done
  echo
done

This works. For 4,800 files, each 7 lines long, it took 2 minutes 57.865 seconds on an AMD Athlon(tm) X2 Dual Core Processor BE-2400.

PS: The time for my script increases linearly with the number of lines; it would take a very long time to merge files with 1,000 lines each. You should consider learning awk and using the script from the first answer. I tested it: for 4,800 files, each with 1,000 lines, it took only 65 seconds!
