Pythonically eliminating duplicates from a list of hashable elements


I just thought of doing

mylist = list(set(mylist))

to remove all duplicate entries from mylist. However, chaining built-ins always feels a little hacky. I am wondering: what is the most Pythonic/zen way of eliminating duplicates from a list?

While searching, I found the above construct given as an answer to the "eliminate duplicates" problem here on Stack Overflow. Nobody objected that it is a bad idea, but that only implies an answer, and explicit is better than implicit.

Is the above construct the way of eliminating duplicates from a list (of hashable elements)?

If not, what is?

1 Answer

What makes something Pythonic if not being succinct and explicit? And that is exactly what your code is:

uniq = list(set(dups))

Converting the list to a set removes the duplicates, since a set can only contain unique values; turning it back into a list completes the job. Chaining built-ins to accomplish a goal isn't hacky. It is pithy. Succinct. Elegant. It doesn't depend on any modules or libraries. Each operation is clear in what it does. The intent is easily discernible. Truly, this is the right solution.
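
One caveat worth noting, with a minimal sketch (the example values here are mine, chosen for illustration): sets are unordered, so the result may come back in a different order than the input. If the first-seen order matters, dict.fromkeys, whose keys are insertion-ordered since Python 3.7, is a similarly succinct alternative.

dups = [3, 1, 2, 3, 1]

# set() keeps one copy of each value but discards the original order
uniq = list(set(dups))
print(sorted(uniq))  # [1, 2, 3] (sorted only to make the output deterministic)

# dict keys are also unique, and dicts preserve insertion order (Python 3.7+)
uniq_ordered = list(dict.fromkeys(dups))
print(uniq_ordered)  # [3, 1, 2]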