Eliminate partially duplicate lines by column and keep the last one

Posted 2019-01-17 15:32

I have a file that looks like this:

2011-03-21 name001 line1
2011-03-21 name002 line2
2011-03-21 name003 line3
2011-03-22 name002 line4
2011-03-22 name001 line5

For each name, I only want its last appearance, so I expect the result to be:

2011-03-21 name003 line3
2011-03-22 name002 line4
2011-03-22 name001 line5

Could someone give me a solution with bash/awk/sed?

4 Answers
何必那么认真
#2 · 2019-01-17 16:01

This gets unique lines by the second field, taken from the end of the file, i.e. the last occurrence of each name (as in your expected result):

tac temp.txt | sort -k2,2 -r -u
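
For reference (my own check, assuming GNU sort, which outputs the first line of each equal key that it receives — here the last occurrence in the original file, thanks to tac), the sample data should come out as:

    2011-03-21 name003 line3
    2011-03-22 name002 line4
    2011-03-22 name001 line5

Note that -r sorts the names in descending order, which only coincidentally matches the original line order here, and which duplicate survives with -u is not guaranteed by POSIX, so this relies on GNU sort behaviour.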
别忘想泡老子
#3 · 2019-01-17 16:04

EDIT: Here's a version that actually answers the question.

sort -k 2 filename | while read f1 f2 f3; do if [ ! "$f2" = "$lf2" ]; then echo "$f1 $f2 $f3"; lf2="$f2"; fi; done
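
If the last appearance of each name is what matters, here is a minimal sketch in the same spirit (my own variation, not the original answer; `filename` is a placeholder, and `sort -s` is the stable-sort flag of GNU/BSD sort, so lines with the same name keep their original relative order). It remembers each group's last line and prints it when the name changes:

    sort -s -k2,2 filename | {
      prev=""; last=""
      while read -r f1 f2 f3; do
        # name changed: emit the remembered last line of the previous group
        if [ -n "$prev" ] && [ "$f2" != "$prev" ]; then
          printf '%s\n' "$last"
        fi
        prev="$f2"; last="$f1 $f2 $f3"
      done
      # emit the final group
      [ -n "$last" ] && printf '%s\n' "$last"
    }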
做个烂人
#4 · 2019-01-17 16:11
awk '{a[$2]=$0} END {for (i in a) print a[i]}' file

If order of appearance is important:

  • Based on first appearance:

    awk '!a[$2] {b[++i]=$2} {a[$2]=$0} END {for (j=1; j<=i; j++) print a[b[j]]}' file
    
  • Based on last appearance:

    tac file | awk '!a[$2] {b[++i]=$2} {a[$2]=$0} END {for (j=1; j<=i; j++) print a[b[j]]}'
    
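As a quick check (my own note, assuming the sample file above is named file): the first one-liner keeps the last line stored for each name, so it should output exactly these three lines, though for (i in a) leaves their order unspecified:

    2011-03-21 name003 line3
    2011-03-22 name002 line4
    2011-03-22 name001 line5
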
Anthone
#5 · 2019-01-17 16:13
sort < bar > foo
uniq  < foo > bar

bar now has no duplicated lines
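
As an aside (my addition, not part of the answer): the same whole-line de-duplication can normally be done in one step; keep in mind that it only collapses lines that are identical in full, not lines that merely share the name column:

    sort -u bar > foo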
