How can I find the unique lines and remove all duplicates from a file? My input file is:
1
1
2
3
5
5
7
7
I would like the result to be:
2
3
sort file | uniq
will not do the job, as it shows every value once.
This worked for me for a similar one. Use this if the input is not sorted; you can remove the sort if it is already sorted:
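A sketch of such a pipeline, with file as a placeholder name for the input:
sort file | uniq -u
Here sort makes duplicate lines adjacent, and uniq -u then drops every line that occurs more than once.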
uniq -u has been driving me crazy because it did not work.
So instead of that, if you have Python (most Linux distros and servers already have it):
Assuming you have the data file in notUnique.txt
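A minimal sketch of that idea (counting every line and printing the ones that occur exactly once):

from collections import Counter

# Count every line in the file; notUnique.txt is the name used above.
with open('notUnique.txt') as f:
    counts = Counter(line.rstrip('\n') for line in f)

# Print only the lines that occur exactly once.
for line, count in counts.items():
    if count == 1:
        print(line)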
Note that, due to empty lines, the final output may contain '' or only-space strings. You can remove those later. Or just get away with copying from the terminal ;)
Just FYI, from the uniq man page:
"Note: 'uniq' does not detect repeated lines unless they are adjacent. You may want to sort the input first, or use 'sort -u' without 'uniq'. Also, comparisons honor the rules specified by 'LC_COLLATE'."
One of the correct ways to invoke it:
sort notUnique.txt | uniq
Example run:
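For instance, on the question's sample input this prints one copy of every value:

$ sort notUnique.txt | uniq
1
2
3
5
7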
Spaces might be printed, so be prepared!
uniq -u
should do fine if your file is/can be sorted; if you can't sort the file for some reason, you can use awk:
awk '{a[$0]++} END {for (i in a) if (a[i] < 2) print i}'
Use as follows:
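(A sketch; file here is a placeholder for your input file.)
# "file" is a placeholder for your input file
awk '{a[$0]++} END {for (i in a) if (a[i] < 2) print i}' file
The awk version counts occurrences of each line in an array and prints only the lines seen once, so it works on unsorted input too.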
uniq -u < file
will do the job.

uniq
has the option you need:
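From the man page:

-u, --unique
       only print unique lines

For the question's input (which is already sorted), running

uniq -u file

should print exactly the desired

2
3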