remove duplicate lines with similar prefix

Posted 2020-03-26 08:40

I need to remove lines from a file that are prefixes of other lines, keeping only the unique (longest) ones.

From this,

abc/def/ghi/
abc/def/ghi/jkl/one/
abc/def/ghi/jkl/two/
123/456/
123/456/789/
xyz/

to this

abc/def/ghi/jkl/one/
abc/def/ghi/jkl/two/
123/456/789/
xyz/

Appreciate any suggestions,

4 Answers
你好瞎i
Answer #2 · 2020-03-26 08:59

This answer applies if reordering the output is allowed.

sort -r file | awk 'a!~"^"$0{a=$0;print}'
  1. sort -r file : sorts the lines in reverse order, so that longer lines are placed before the shorter lines that share the same prefix

  2. awk 'a!~"^"$0{a=$0;print}' : parses the sorted output, where a holds the previously printed line and $0 holds the current line

    • a!~"^"$0 checks, for each line, whether the current line is not a prefix of the previously printed line.
    • if $0 is not such a prefix, we print it and save it in a (to be compared with the next line)

The first line is always printed, because a starts out empty and an empty string cannot match "^"$0.
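
For reference, running this on the sample file from the question should give the expected lines, just in a different order (note that $0 is used as a regular expression here, so lines containing regex metacharacters such as . or * could cause false matches):

$ sort -r file | awk 'a!~"^"$0{a=$0;print}'
xyz/
abc/def/ghi/jkl/two/
abc/def/ghi/jkl/one/
123/456/789/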

Viruses.
Answer #3 · 2020-03-26 09:03

Step 1: This solution assumes that reordering the output is allowed. Under that assumption, it is faster to reverse-sort the input file before processing: after the reverse sort, we only need to compare two consecutive lines in each loop, instead of searching the whole file or all the "known prefixes". I understand the task as: a line should be removed if it is a prefix of any other line. Here is an example of removing prefixes from a file when reordering is allowed:

#!/bin/bash

f=sample.txt                                 # sample data

p=''                                         # previous line = empty

sort -r "$f" | \
  while IFS= read -r s || [[ -n "$s" ]]; do  # reverse sort, then read string (line)
    [[ "$s" = "${p:0:${#s}}" ]] || \
      printf "%s\n" "$s"                     # if s is not prefix of p, then print it
    p="$s"
  done

Explanation: ${p:0:${#s}} takes the first ${#s} (the length of s) characters of the string p.
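
A minimal illustration of that expansion (the values here are chosen just for demonstration):

$ p='abc/def/ghi/jkl/one/'; s='abc/def/ghi/'
$ echo "${p:0:${#s}}"
abc/def/ghi/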

Test:

$ cat sample.txt 
abc/def/ghi/
abc/def/ghi/jkl/one/
abc/def/ghi/jkl/two/
abc/def/ghi/jkl/one/one
abc/def/ghi/jkl/two/two
123/456/
123/456/789/
xyz/

$ ./remove-prefix.sh 
xyz/
abc/def/ghi/jkl/two/two
abc/def/ghi/jkl/one/one
123/456/789/

Step 2: If you really need to keep the order, then this script is an example of removing all prefixes, reordering is not allowed:

#!/bin/bash

f=sample.txt
p=''

cat -n "$f" | \
  sed 's:\t:|:' | \
  sort -r -t'|' -k2 | \
  while IFS='|' read -r i s || [[ -n "$s" ]]; do
    [[ "$s" = "${p:0:${#s}}" ]] || printf "%s|%s\n" "$i" "$s"
    p="$s"
  done | \
  sort -n -t'|' -k1 | \
  sed 's:^.*|::'

Explanations:

  1. cat -n: number all lines
  2. sed 's:\t:|:': use '|' as the delimiter -- change it to another character if your data may contain '|'
  3. sort -r -t'|' -k2: reverse sort with delimiter '|' on key 2, the original line (a sample of the intermediate stream is shown after this list)
  4. while ... done: the same filtering as in the solution of step 1
  5. sort -n -t'|' -k1: numeric sort on key 1 to restore the original order
  6. sed 's:^.*|::': remove the line numbers
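
To make the intermediate stream concrete, the first three stages alone (numbering, switching the delimiter, reverse sorting) should produce something like this on the sample.txt from step 1 (the exact spacing of cat -n may vary):

$ cat -n sample.txt | sed 's:\t:|:' | sort -r -t'|' -k2
     8|xyz/
     5|abc/def/ghi/jkl/two/two
     3|abc/def/ghi/jkl/two/
     4|abc/def/ghi/jkl/one/one
     2|abc/def/ghi/jkl/one/
     1|abc/def/ghi/
     7|123/456/789/
     6|123/456/

The while loop then keeps lines 8, 5, 4, and 7, and the final sort -n puts them back into the original order.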

Test:

$ ./remove-prefix.sh 
abc/def/ghi/jkl/one/one
abc/def/ghi/jkl/two/two
123/456/789/
xyz/

Notes: In both solutions, the most costly operations are the calls to sort. The solution in step 1 calls sort once; the solution in step 2 calls it twice. All the other operations (cat, sed, while, string comparisons, ...) are nowhere near as expensive.

In the step 2 solution, cat + sed + while + sed is roughly "equivalent" to scanning the file four times (which, thanks to the pipes, can in theory happen in parallel).

在下西门庆
Answer #4 · 2020-03-26 09:05

The following awk does what is requested; it reads the file twice.

  • In the first pass, it builds up all possible proper prefixes of each line.
  • In the second pass, it checks whether each line is one of those prefixes; if not, the line is printed.

The code is:

awk -F'/' '(NR==FNR){s="";for(i=1;i<=NF-2;i++){s=s$i"/";a[s]};next}
           {if (! ($0 in a) ) {print $0}}' <file> <file>
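
For reference, against the six-line input from the question this should print the surviving lines in their original order (here file stands for the input filename):

$ awk -F'/' '(NR==FNR){s="";for(i=1;i<=NF-2;i++){s=s$i"/";a[s]};next}
             {if (! ($0 in a) ) {print $0}}' file file
abc/def/ghi/jkl/one/
abc/def/ghi/jkl/two/
123/456/789/
xyz/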

You can also do it while reading the file a single time, but then the whole file is stored in memory:

awk -F'/' '{s="";for(i=1;i<=NF-2;i++){s=s$i"/";a[s]}; b[NR]=$0; next}
           END {for(i=1;i<=NR;i++){if (! (b[i] in a) ) {print b[i]}}}' <file>

Similar to Allan's solution, but using grep -c:

while read -r line; do (( $(grep -c "$line" <file>) == 1 )) && echo "$line"; done < <file>

Take into account that this construct reads the file (N+1) times, where N is the number of lines.

虎瘦雄心在
Answer #5 · 2020-03-26 09:13

A quick and dirty way of doing it is the following:

$ while read -r elem; do echo -n "$elem "; grep "$elem" file | wc -l; done <file | awk '$2==1{print $1}'
abc/def/ghi/jkl/one/
abc/def/ghi/jkl/two/
123/456/789/
xyz/

where you read the input file and print each element together with the number of times it appears in the file; awk then keeps only the lines that appear exactly once.
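
One caveat: grep treats each line as a regular expression and matches it anywhere in the other lines. A slightly safer sketch (my variation, not part of the answer above) uses fixed-string matching, so regex metacharacters in the paths cannot cause false matches; it should print the same four lines:

$ while read -r elem; do (( $(grep -cF -- "$elem" file) == 1 )) && echo "$elem"; done < file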
