Removing duplicate blocks of lines from a file

Posted 2019-07-04 07:46

I have a file structured like this:

>ID1
data about ID1....
................
................

>ID2
data about ID2....
................
................
................
................
>ID3
data about ID3....
................
................
...............

>ID1
data about ID1....
................
>ID5
data about ID5....
................
................

I want to remove duplicate ID blocks; in the example above, the duplicate is ID1. Note that only the ID line is the same; the data after it can differ. I want to keep the first occurrence of each block and remove all later ones. How can I do this with a shell script?
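
So, if I understand correctly, the desired output for the sample above would be:

>ID1
data about ID1....
................
................

>ID2
data about ID2....
................
................
................
................
>ID3
data about ID3....
................
................
...............

>ID5
data about ID5....
................
................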

Tags: bash shell
1 answer
迷人小祖宗
#2 · 2019-07-04 08:44

In awk:

awk '/^>/{p=!($0 in a);a[$0]}p' file1
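
How it works: on every header line (one starting with >), p is set to 1 only if that exact line has not been seen before, and the line is recorded in the array a. The bare pattern p then prints the current line whenever p is 1, so the header and all data lines that follow it are printed or suppressed together until the next header flips p again. The same logic written out with comments (a sketch, assuming any POSIX-compatible awk):

awk '
    /^>/ {                # header line: begins with ">"
        p = !($0 in a)    # p becomes 1 only if this header is new
        a[$0]             # record the header (the stored value is unused)
    }
    p                     # when p is 1, print the current line
' file1

Note that duplicates are detected on the entire header line, so >ID1 and a hypothetical >ID1 extra would count as different IDs.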