How to merge two FASTA files and remove the duplicates

Published 2019-08-04 23:02

I want to merge two FASTA files and remove the duplicate entries.

Here is an example:

>Symbiotaphrina_buchneri|DQ248313|SH1641879.08FU|reps|k__Fungi;p__Ascomycota;c__Xylonomycetes;o__Symbiotaphrinales;f__Symbiotaphrinaceae;g__Symbiotaphrina;s__Symbiotaphrina_buchneri
ACGATTTTGACCCTTCGGGGTCGATCTCCAACCCTTTGTCTACCTTCCTTGTTGCTTTGGCGGGCCGATGTTCGTTCTCGCGAACGACACCGCTGGCCTGACGGCTGGTGCGCGCCCGCCAGAGTCCACCAAAACTCTGATTCAAACCTACAGTCTGAGTATATATTATATTAAAACTTTCAACAACGGATCTCTTGGTTCTGGCATCGATGAAGAACGCAGCGAAATGCGATAAGTAATGTGAATTGCAGAATTCAGTGAATCATCGAATCTTTGAACGCACATTGCGCCCCTTGGTATTCCGAGGGGCATGCCTGTTCGAGCGTCATTTCACCACTCAAGCTCAGCTTGGTATTGGGTCATCGTCTGGTCACACAGGCGTGCCTGAAAATCAGTGGCGGTGCCCATCCGGCTTCAAGCATAGTAATTTCTATCTTGCTTTGGAAGTCTCCGGAGGGTTACACCGGCCAACAACCCCAATTTTCTATG
>Dactylonectria_anthuriicola|JF735302|SH1546329.08FU|refs|k__Fungi;p__Ascomycota;c__Sordariomycetes;o__Hypocreales;f__Nectriaceae;g__Dactylonectria;s__Dactylonectria_anthuriicola
CCGAGTTTTCAACTCCCAAACCCCTGTGAACATACCATTTTGTTGCCTCGGCGGTGCCTGTTCCGACAGCCCGCCAGAGGACCCCAAACCCAAATTTCCTTGAGTGAGTCTTCTGAGTAACCGATTAAATAAATCAAAACTTTCAACAACGGATCTCTTGGTTCTGGCATCGATGAAGAACGCAGCGAAATGCGATAAGTAATGTGAATTGCAGAATTCAGTGAATCATCGAATCTTTGAACGCACATTGCGCCCGCCAGTATTCTGGCGGGCATGCCTGTTCGAGCGTCATTTCAACCCTCAAGCCCCCGGGCTTGGTGTTGGGGATCGGCGAGCCTCTGCGCCCGCCGTCCCCTAAATTGAGTGGCGGTCACGTTGTAACTTCCTCTGCGTAGTAGCACACTTAGCACTGGGAAACAGCGCGGCCACGCCGTAAAACCCCCAACTTTGAACG
>Ilyonectria_robusta|JF735264|SH1546327.08FU|refs|k__Fungi;p__Ascomycota;c__Sordariomycetes;o__Hypocreales;f__Nectriaceae;g__Ilyonectria;s__Ilyonectria_robusta
CCGAGTTTACAACTCCCAAACCCCTGTGAACATACCATATTGTTGCCTCGGCGGTGTCTGTTTCGGCAGCCCGCCAGAGGACCCAAACCCTAGATTACATTAAAGCATTTTCTGAGTCAATGATTAAATCAATCAAAACTTTCAACAACGGATCTCTTGGTTCTGGCATCGATGAAGAACGCAGCGAAATGCGATAAGTAATGTGAATTGCAGAATTCAGTGAATCATCGAATCTTTGAACGCACATTGCGCCCGCCAGTATTCTGGCGGGCATGCCTGTCCGAGCGTCATTTCAACCCTCAAGCCCCCGGGCTTGGTGTTGGAGATCGGCGAGCCCCCCGGGGCGCGCCGTCTCCCAAATATAGTGGCGGTCCCGCTGTAGCTTCCTCTGCGTAGTAGCACACCTCGCACTGGGAAACAGCGTGGCCACGCCGTAAAACCCCCCACTTCTGAAAG
>Symbiotaphrina_buchneri|DQ248313|SH1641879.08FU|reps|k__Fungi;p__Ascomycota;c__Xylonomycetes;o__Symbiotaphrinales;f__Symbiotaphrinaceae;g__Symbiotaphrina;s__Symbiotaphrina_buchneri
ACGATTTTGACCCTTCGGGGTCGATCTCCAACCCTTTGTCTACCTTCCTTGTTGCTTTGGCGGGCCGATGTTCGTTCTCGCGAACGACACCGCTGGCCTGACGGCTGGTGCGCGCCCGCCAGAGTCCACCAAAACTCTGATTCAAACCTACAGTCTGAGTATATATTATATTAAAACTTTCAACAACGGATCTCTTGGTTCTGGCATCGATGAAGAACGCAGCGAAATGCGATAAGTAATGTGAATTGCAGAATTCAGTGAATCATCGAATCTTTGAACGCACATTGCGCCCCTTGGTATTCCGAGGGGCATGCCTGTTCGAGCGTCATTTCACCACTCAAGCTCAGCTTGGTATTGGGTCATCGTCTGGTCACACAGGCGTGCCTGAAAATCAGTGGCGGTGCCCATCCGGCTTCAAGCATAGTAATTTCTATCTTGCTTTGGAAGTCTCCGGAGGGTTACACCGGCCAACAACCCCAATTTTCTATG

I have tried:

$ cat Unite/sh_general_release_dynamic_02.02.2019.fasta \
  Unite_61635/sh_general_release_dynamic_s_02.02.2019.fasta \
  > mergeUnite/MergeUnite.temp.fasta

After merging the files, I used fastx_collapser to collapse the duplicates. However, fastx_collapser discards the original headers, so the taxonomy information is lost and the records become:

>1-234
ATCG........ 

The expected output should be:

>Symbiotaphrina_buchneri|DQ248313|SH1641879.08FU|reps|k__Fungi;p__Ascomycota;c__Xylonomycetes;o__Symbiotaphrinales;f__Symbiotaphrinaceae;g__Symbiotaphrina;s__Symbiotaphrina_buchneri
ACGATTTTGACCCTTCGGGGTCGATCTCCAACCCTTTGTCTACCTTCCTTGTTGCTTTGGCGGGCCGATGTTCGTTCTCGCGAACGACACCGCTGGCCTGACGGCTGGTGCGCGCCCGCCAGAGTCCACCAAAACTCTGATTCAAACCTACAGTCTGAGTATATATTATATTAAAACTTTCAACAACGGATCTCTTGGTTCTGGCATCGATGAAGAACGCAGCGAAATGCGATAAGTAATGTGAATTGCAGAATTCAGTGAATCATCGAATCTTTGAACGCACATTGCGCCCCTTGGTATTCCGAGGGGCATGCCTGTTCGAGCGTCATTTCACCACTCAAGCTCAGCTTGGTATTGGGTCATCGTCTGGTCACACAGGCGTGCCTGAAAATCAGTGGCGGTGCCCATCCGGCTTCAAGCATAGTAATTTCTATCTTGCTTTGGAAGTCTCCGGAGGGTTACACCGGCCAACAACCCCAATTTTCTATG
>Dactylonectria_anthuriicola|JF735302|SH1546329.08FU|refs|k__Fungi;p__Ascomycota;c__Sordariomycetes;o__Hypocreales;f__Nectriaceae;g__Dactylonectria;s__Dactylonectria_anthuriicola
CCGAGTTTTCAACTCCCAAACCCCTGTGAACATACCATTTTGTTGCCTCGGCGGTGCCTGTTCCGACAGCCCGCCAGAGGACCCCAAACCCAAATTTCCTTGAGTGAGTCTTCTGAGTAACCGATTAAATAAATCAAAACTTTCAACAACGGATCTCTTGGTTCTGGCATCGATGAAGAACGCAGCGAAATGCGATAAGTAATGTGAATTGCAGAATTCAGTGAATCATCGAATCTTTGAACGCACATTGCGCCCGCCAGTATTCTGGCGGGCATGCCTGTTCGAGCGTCATTTCAACCCTCAAGCCCCCGGGCTTGGTGTTGGGGATCGGCGAGCCTCTGCGCCCGCCGTCCCCTAAATTGAGTGGCGGTCACGTTGTAACTTCCTCTGCGTAGTAGCACACTTAGCACTGGGAAACAGCGCGGCCACGCCGTAAAACCCCCAACTTTGAACG
>Ilyonectria_robusta|JF735264|SH1546327.08FU|refs|k__Fungi;p__Ascomycota;c__Sordariomycetes;o__Hypocreales;f__Nectriaceae;g__Ilyonectria;s__Ilyonectria_robusta
CCGAGTTTACAACTCCCAAACCCCTGTGAACATACCATATTGTTGCCTCGGCGGTGTCTGTTTCGGCAGCCCGCCAGAGGACCCAAACCCTAGATTACATTAAAGCATTTTCTGAGTCAATGATTAAATCAATCAAAACTTTCAACAACGGATCTCTTGGTTCTGGCATCGATGAAGAACGCAGCGAAATGCGATAAGTAATGTGAATTGCAGAATTCAGTGAATCATCGAATCTTTGAACGCACATTGCGCCCGCCAGTATTCTGGCGGGCATGCCTGTCCGAGCGTCATTTCAACCCTCAAGCCCCCGGGCTTGGTGTTGGAGATCGGCGAGCCCCCCGGGGCGCGCCGTCTCCCAAATATAGTGGCGGTCCCGCTGTAGCTTCCTCTGCGTAGTAGCACACCTCGCACTGGGAAACAGCGTGGCCACGCCGTAAAACCCCCCACTTCTGAAAG

Is there another method to do this without losing taxonomy information?

Tags: cat fasta
1 Answer
冷血范
Answered 2019-08-04 23:28

The following awk one-liners will remove the duplicate records. I can see three ways to detect duplicates:

Sequence name identical:

The short version would be:

$ awk '/^>/{p=seen[$0]++}!p' file1.fasta file2.fasta file3.fasta ...
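
Since awk accepts multiple input files, you do not even need the cat step: you can feed both files to awk and redirect straight into the merged, de-duplicated file. A minimal sketch, assuming the same paths as in your question (the output name MergeUnite.fasta is just an example):

$ awk '/^>/{p=seen[$0]++}!p' \
    Unite/sh_general_release_dynamic_02.02.2019.fasta \
    Unite_61635/sh_general_release_dynamic_s_02.02.2019.fasta \
    > mergeUnite/MergeUnite.fasta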

However, the following version is a bit clearer and lets any user quickly adapt it to their needs:

$ awk 'BEGIN{RS=">"; FS="\n"; ORS=""}
       (FNR==1){next}
       { name=$1; seq=$0; gsub(/(^[^\n]*|)\n/,"",seq) }
       !(seen[name]++){ print ">" $0 }' file1.fasta file2.fasta file3.fasta ...

Here we introduce the variable name, which holds the sequence name, and the variable seq, which holds the sequence itself. Multi-line sequences are joined into a single line inside seq.
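
To see what ends up in name and seq, here is a small self-contained test with a wrapped, duplicated record (a sketch assuming GNU awk; rec1, ACGT and TTAA are made-up values for illustration):

$ printf '>rec1\nACGT\nTTAA\n>rec1\nACGT\nTTAA\n' |
  awk 'BEGIN{RS=">"; FS="\n"}
       (FNR==1){next}
       { name=$1; seq=$0; gsub(/(^[^\n]*|)\n/,"",seq)
         printf "name=%s seq=%s\n", name, seq }'
name=rec1 seq=ACGTTTAA
name=rec1 seq=ACGTTTAA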

As mentioned, this is easily adapted to other criteria for detecting duplicates, e.g.:

First part of sequence name identical:

$ awk 'BEGIN{RS=">"; FS="\n"; ORS=""}
       (FNR==1){next}
       { name=$1; seq=$0; gsub(/(^[^\n]*|)\n/,"",seq) }
       { key=substr(name,1,index(name,"|")-1) }
       !(seen[key]++){ print ">" $0 }' file1.fasta file2.fasta file3.fasta ...
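
As a quick check with a (shortened) header from your question, key then reduces to the part before the first pipe:

$ printf '>Symbiotaphrina_buchneri|DQ248313|SH1641879.08FU\nACGT\n' |
  awk 'BEGIN{RS=">"; FS="\n"} (FNR==1){next}
       { name=$1; print substr(name,1,index(name,"|")-1) }'
Symbiotaphrina_buchneri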

Sequence identical:

$ awk 'BEGIN{RS=">"; FS="\n"; ORS=""}
       (FNR==1){next}
       { name=$1; seq=$0; gsub(/(^[^\n]*|)\n/,"",seq) }
       !(seen[seq]++){ print ">" $0 }' file1.fasta file2.fasta file3.fasta ...
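
A small variant that is not in the original one-liners: if your files could mix upper- and lower-case bases, you can normalise the sequence before using it as the key (toupper is standard awk):

$ awk 'BEGIN{RS=">"; FS="\n"; ORS=""}
       (FNR==1){next}
       { name=$1; seq=$0; gsub(/(^[^\n]*|)\n/,"",seq) }
       !(seen[toupper(seq)]++){ print ">" $0 }' file1.fasta file2.fasta file3.fasta ...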

Sequence name and sequence identical:

$ awk 'BEGIN{RS=">"; FS="\n"; ORS=""}
       (FNR==1){next}
       { name=$1; seq=$0; gsub(/(^[^\n]*|)\n/,"",seq) }
       !(seen[name,seq]++){ print ">" $0 }' file1.fasta file2.fasta file3.fasta ...
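
The comma in seen[name,seq] builds a composite key (awk joins the two values with SUBSEP), so a record is only dropped when both its name and its sequence have been seen together before; this is the most conservative of the four variants.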

Of course, some parts could be cleaned up: you do not always need name to find duplicates (see "Sequence identical") and you do not always need seq (see "Sequence name identical"), so those parts of the code can be removed. I kept everything in place, without cleanup, to show the general method.

Note: the above builds on the approach from Remove line if field is duplicate.
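
As a quick sanity check you can compare the number of headers before and after de-duplication, e.g. with the merged file from your question and the de-duplicated output from the example above:

$ grep -c '^>' mergeUnite/MergeUnite.temp.fasta mergeUnite/MergeUnite.fasta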
