Question:
I have a plain text file with words which are separated by commas, for example:
word1, word2, word3, word2, word4, word5, word3, word6, word7, word3
I want to delete the duplicates so it becomes:
word1, word2, word3, word4, word5, word6, word7
Any ideas? I think egrep can help me, but I'm not sure how to use it exactly...
Answer 1:
Assuming that the words are one per line, and the file is already sorted:
uniq filename
If the file's not sorted:
sort filename | uniq
If they're not one per line, and you don't mind them being one per line:
tr -s '[:space:]' '\n' < filename | sort | uniq
That doesn't remove punctuation, though, so maybe you want:
tr -s '[:space:][:punct:]' '\n' < filename | sort | uniq
But that removes the hyphen from hyphenated words. "man tr" for more options.
Answer 2:
ruby -pl -i.bak -e '$_ = $_.split(", ").uniq.join(", ")' filename
?
I'll admit the two kinds of quotations are ugly.
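To preview the result without modifying the file, the same one-liner can be run without the in-place flag:
ruby -pl -e '$_ = $_.split(", ").uniq.join(", ")' filename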
Answer 3:
Creating a unique list is pretty easy thanks to uniq, although most Unix commands like one entry per line instead of a comma-separated list, so we have to start by converting it to that:
$ sed 's/, /\n/g' filename | sort | uniq
word1
word2
word3
word4
word5
word6
word7
The harder part is putting this on one line again with commas as separators and not terminators. I used a perl one-liner to do this, but if someone has something more idiomatic, please edit me. :)
$ sed 's/, /\n/g' filename | sort | uniq | perl -e '@a = <>; chomp @a; print((join ", ", @a), "\n")'
word1, word2, word3, word4, word5, word6, word7
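One possible alternative for the joining step, in the same spirit (a sketch only, assuming GNU sed as above plus paste from coreutils; not necessarily more idiomatic than the Perl version):
sed 's/, /\n/g' filename | sort -u | paste -s -d, - | sed 's/,/, /g'
Here paste -s -d, - glues the unique words back into a single comma-separated line, and the final sed re-inserts the space after each comma.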
Answer 4:
Here's an awk script that will leave each line intact, only removing the duplicate words:
BEGIN {
    FS = ", "
}
{
    # keep only the first occurrence of each word, preserving the original order
    out = ""
    for (i = 1; i <= NF; i++) {
        if (!($i in used)) {
            used[$i] = 1
            out = (out == "" ? $i : out ", " $i)
        }
    }
    print out
    # clear the seen-words table so each line is handled independently
    split("", used)
}
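To try it out (the script file name here is just a placeholder):
awk -f remove_dups.awk filename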
Answer 5:
I had the very same problem today: a word list with 238,000 words, but about 40,000 of those were duplicates. I already had them on individual lines by doing
cat filename | tr " " "\n" | sort
To remove the duplicates I simply did
cat filename | uniq > newfilename
It worked perfectly with no errors, and now my file is down from 1.45 MB to 1.01 MB.
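For the record, the two steps can also be combined into a single pipeline so that uniq sees the sorted output directly:
tr " " "\n" < filename | sort | uniq > newfilename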
Answer 6:
I'd think you'll want to replace the spaces with newlines, use the uniq command to find unique lines, then replace the newlines with spaces again.
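A rough sketch of that idea (note that the commas stay attached to the words, and that a sort is needed so uniq sees duplicates next to each other):
tr ' ' '\n' < filename | sort | uniq | tr '\n' ' '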
Answer 7:
I presumed you wanted the words to be unique on a single line, rather than throughout the file. If this is the case, then the Perl script below will do the trick.
while (<DATA>)
{
    chomp;
    my %seen = ();
    my @words = split(m!,\s*!);
    @words = grep { $seen{$_} ? 0 : ($seen{$_} = 1) } @words;
    print join(", ", @words), "\n";
}
__DATA__
word1, word2, word3, word2, word4, word5, word3, word6, word7, word3
If you want uniqueness over the whole file, you can just move the %seen hash outside the while () {} loop.
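To run it against a real file instead of the __DATA__ block (the script name below is just a placeholder), change <DATA> to the plain diamond operator <> and invoke:
perl unique_words.pl filename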
Answer 8:
Came across this thread while trying to solve much the same problem. I had concatenated several files containing passwords, so naturally there were a lot of doubles. Also, many non-standard characters. I didn't really need them sorted, but it seemed that was gonna be necessary for uniq.
I tried:
sort /Users/me/Documents/file.txt | uniq -u
sort: string comparison failed: Illegal byte sequence
sort: Set LC_ALL='C' to work around the problem.
sort: The strings compared were `t\203tonnement' and `t\203tonner'
Tried:
sort -u /Users/me/Documents/file.txt >> /Users/me/Documents/file2.txt
sort: string comparison failed: Illegal byte sequence
sort: Set LC_ALL='C' to work around the problem.
sort: The strings compared were `t\203tonnement' and `t\203tonner'.
And I even tried passing it through cat first, just so I could see if we were getting proper input.
cat /Users/me/Documents/file.txt | sort | uniq -u > /Users/me/Documents/file2.txt
sort: string comparison failed: Illegal byte sequence
sort: Set LC_ALL='C' to work around the problem.
sort: The strings compared were `zon\351s' and `zoologie'.
I'm not sure what's happening. The strings "t\203tonnement" and "t\203tonner" aren't found in the file, though "t/203" and "tonnement" are found, but on separate, non-adjoining lines. Same with "zon\351s".
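For what it's worth, the workaround that sort itself suggests would look like this (a sketch; LC_ALL=C makes the comparison byte-by-byte, which sidesteps the illegal-byte-sequence error but also changes how non-ASCII characters are ordered):
LC_ALL=C sort -u /Users/me/Documents/file.txt > /Users/me/Documents/file2.txt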
What finally worked for me was:
awk '!x[$0]++' /Users/me/Documents/file.txt > /Users/me/Documents/file2.txt
It also preserved words whose only difference was case, which is what I wanted. I didn't need the list sorted, so it was fine that it wasn't.
Answer 9:
And don't forget the -c option for the uniq utility if you're interested in getting a count of the words as well.
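For example, on the one-word-per-line form used in the answers above:
sort filename | uniq -c
prints each distinct word prefixed with the number of times it appeared.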
Answer 10:
Open the file with vim (vim filename) and run the sort command with the unique flag (:sort u).