I have a list of nested lists of tuples, namely:
[[[('p', 'u'), ('r', 'w')], [('t', 'q')]], [[('p', 'u'), ('r', 'w')], [('v', 'q')]], [[('p', 'u'), ('r', 'w')], [('t', 's')]], [[('p', 'u'), ('r', 'w')], [('v', 's')]], [[('p', 'w'), ('r', 'u')], [('t', 'q')]], [[('p', 'w'), ('r', 'u')], [('v', 'q')]], [[('p', 'w'), ('r', 'u')], [('t', 's')]], [[('p', 'w'), ('r', 'u')], [('v', 's')]], [[('r', 'u'), ('p', 'w')], [('t', 'q')]], [[('r', 'u'), ('p', 'w')], [('v', 'q')]], [[('r', 'u'), ('p', 'w')], [('t', 's')]], [[('r', 'u'), ('p', 'w')], [('v', 's')]], **[[('r', 'w'), ('p', 'u')], [('t', 'q')]]**, [[('r', 'w'), ('p', 'u')], [('v', 'q')]], [[('r', 'w'), ('p', 'u')], [('t', 's')]], [[('r', 'w'), ('p', 'u')], [('v', 's')]]]
Now, for example, the element [[('p','u'),('r','w')], [('t','q')]]
is the same as [[('r','w'),('p','u')], [('t','q')]]
, which is marked in bold in the list.
So the list has 16 elements, and every element appears twice.
Now I want to delete the duplicates so that only the first eight elements are left.
So, naively, I tried
[[list(y) for y in set([tuple(set(x)) for x in doublegammas1])]]
But here Python raises:
TypeError: unhashable type: 'list'
So my question:
How can I extend the list comprehension so that it works for such a nested (multidimensional) list?
A mutable object (such as a list or a set) cannot be a member of a set. You can use a frozenset, which is immutable.
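A minimal sketch, assuming the input list is bound to the name doublegammas1 as in the question: for every element, build a hashable representation in which the order of the pairs inside each inner list no longer matters, and collect those representations in a set.

```python
# Each inner list of pairs becomes a frozenset (hashable, order-insensitive),
# and each element becomes a tuple of those frozensets so it can be put in a set.
unique = {tuple(frozenset(inner) for inner in item) for item in doublegammas1}
print(unique)
```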
Output:
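Something like the following (eight entries in total; the print order of a set and of the frozensets inside it is arbitrary):

```python
{(frozenset({('p', 'u'), ('r', 'w')}), frozenset({('t', 'q')})),
 (frozenset({('p', 'u'), ('r', 'w')}), frozenset({('v', 'q')})),
 (frozenset({('p', 'w'), ('r', 'u')}), frozenset({('t', 's')})),
 ...
 (frozenset({('p', 'w'), ('r', 'u')}), frozenset({('v', 's')}))}
```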
To convert back to the original structure of nested lists:
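A sketch, reusing the `unique` set from above; note that turning a frozenset back into a list does not preserve the original order of the pairs:

```python
# Turn each tuple of frozensets back into a list of lists of tuples.
result = [[list(inner) for inner in item] for item in unique]
print(result)
```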
Output:
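Roughly the following (eight elements; both the order of the elements and the order of the pairs inside each inner list may differ from the input, because sets are unordered):

```python
[[[('p', 'u'), ('r', 'w')], [('t', 'q')]],
 [[('p', 'u'), ('r', 'w')], [('v', 'q')]],
 [[('r', 'u'), ('p', 'w')], [('t', 's')]],
 ...
 [[('r', 'u'), ('p', 'w')], [('v', 's')]]]
```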
UPDATE:
A solution that removes the duplicate items in-place from the input data.
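One way to sketch this, again assuming the list is named doublegammas1: walk the list with an index and delete every element whose order-insensitive key has already been seen, so the input list itself is modified.

```python
seen = set()
i = 0
while i < len(doublegammas1):
    # Hashable, order-insensitive key for the current element.
    key = tuple(frozenset(inner) for inner in doublegammas1[i])
    if key in seen:
        del doublegammas1[i]  # duplicate: remove it; i now points at the next item
    else:
        seen.add(key)
        i += 1
# doublegammas1 now contains only the eight first occurrences, in their original order.
```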
Lists aren't hashable, but tuples are. You then need to take a `set` of these tuples. Inside these tuples, however, you want to disregard order, and a tuple of sets is not hashable either, so instead you need to use tuples of `frozenset` objects. You can then apply this via the `itertools` `unique_everseen` recipe, which is also available in third-party libraries as `toolz.unique` or `more_itertools.unique_everseen`.

Input data: the 16-element list from the question, assumed below to be bound to the name doublegammas1.
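A sketch of the whole approach; the helper name orderless_key is made up here, and the unique_everseen generator is the recipe from the itertools documentation:

```python
from itertools import filterfalse

def unique_everseen(iterable, key=None):
    """List unique elements, preserving order. Remember all elements ever seen."""
    # Recipe from the itertools documentation.
    seen = set()
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen.add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen.add(k)
                yield element

def orderless_key(item):
    # Map e.g. [[('p', 'u'), ('r', 'w')], [('t', 'q')]] to a hashable key
    # in which the order of the pairs inside each inner list does not matter.
    return tuple(frozenset(inner) for inner in item)

unique = list(unique_everseen(doublegammas1, key=orderless_key))
```

Unlike the set-based approach, this keeps the surviving elements as ordinary nested lists and preserves their original order; it simply drops every later element whose key has already been seen.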