This question already has an answer here: Removing duplicates in lists (43 answers)
I want to get the unique values from the following list:
[u'nowplaying', u'PBS', u'PBS', u'nowplaying', u'job', u'debate', u'thenandnow']
The output which I require is:
[u'nowplaying', u'PBS', u'job', u'debate', u'thenandnow']
This code works:
output = []
for x in trends:
    if x not in output:
        output.append(x)
print output
Is there a better solution I should use?
My solution checks the contents for uniqueness while preserving the original order; see the sketch below the edit note.
Edit: this could probably be made more efficient by using dictionary keys to check for existence instead of looping over everything collected so far for each line, so I wouldn't use my solution for large sets.
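A minimal sketch of that idea, using a set for the existence check (the function name and exact code here are illustrative, not the original answer's):

def unique_preserving_order(items):
    # Track what has already been seen in a set, so each membership test
    # is O(1) instead of scanning the output list for every element.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

trends = [u'nowplaying', u'PBS', u'PBS', u'nowplaying', u'job', u'debate', u'thenandnow']
print(unique_preserving_order(trends))
# ['nowplaying', 'PBS', 'job', 'debate', 'thenandnow']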
Try this function; it's similar to your code, but with a dynamic range.
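A sketch of what such a function might look like; the name and exact body are assumptions, not the original answer's code:

def unique(values):
    # Same idea as the loop in the question, wrapped in a function so the
    # range is sized to whatever list is passed in.
    output = []
    for i in range(len(values)):
        if values[i] not in output:
            output.append(values[i])
    return output

print(unique([u'nowplaying', u'PBS', u'PBS', u'nowplaying', u'job', u'debate', u'thenandnow']))
# ['nowplaying', 'PBS', 'job', 'debate', 'thenandnow']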
First thing: the example you gave is not a valid list.
Supposing the above is the example list, you can use the following recipe from the examples in the itertools documentation, which returns the unique values while preserving the order, as you seem to require. The iterable here is example_list.
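For reference, this is roughly the unique_everseen recipe from the itertools documentation (Python 3 spelling; on Python 2 the import is itertools.ifilterfalse):

from itertools import filterfalse

def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element

example_list = [u'nowplaying', u'PBS', u'PBS', u'nowplaying', u'job', u'debate', u'thenandnow']
print(list(unique_everseen(example_list)))
# ['nowplaying', 'PBS', 'job', 'debate', 'thenandnow']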
In addition to the previous answers, which say you can convert your list to a set, you can do it this way too; the output will contain the unique values, though the order will not be preserved.
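A minimal sketch of that set conversion (assuming the usual set()/list() round trip):

trends = [u'nowplaying', u'PBS', u'PBS', u'nowplaying', u'job', u'debate', u'thenandnow']
output = list(set(trends))
print(output)
# e.g. ['PBS', 'thenandnow', 'nowplaying', 'debate', 'job'] -- the set's ordering is arbitrary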
Another simpler answer could be (without using sets) to build an output list by keeping each item only if it has not already appeared earlier in the list; see the sketch below. If you do not mind losing the order, the shortest option is to overwrite the list with its set conversion:

trends = list(set(trends))
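A sketch of the set-free version, using a slice to check only the items seen so far (this exact one-liner is an assumption, not the original answer's code):

trends = [u'nowplaying', u'PBS', u'PBS', u'nowplaying', u'job', u'debate', u'thenandnow']
# Keep each item only if it has not appeared earlier in the list.
output = [x for i, x in enumerate(trends) if x not in trends[:i]]
print(output)
# ['nowplaying', 'PBS', 'job', 'debate', 'thenandnow']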