How could I go about detecting (returning true/false) whether an ArrayList contains more than one of the same element in Java?
Many thanks, Terry
Edit: Forgot to mention that I am not looking to compare "Blocks" with each other but their integer values. Each "Block" has an int, and this is what makes them different. I find the int of a particular Block by calling a method named "getNum" (e.g. table1[0][2].getNum()).
The best way to handle this issue is to use a HashSet. Copy the list into a HashSet and back into a result ArrayList, then just print the result ArrayList and see the result without duplicates :)
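A minimal sketch of that approach, assuming a plain list of Integers (the class and variable names are illustrative, not from the original answer):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupWithHashSet {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 1, 3, 2, 4));
        // A HashSet keeps only one copy of each element.
        Set<Integer> set = new HashSet<>(list);
        List<Integer> result = new ArrayList<>(set);
        System.out.println(result); // duplicates gone (order not preserved)
    }
}
```

If the original ordering matters, a LinkedHashSet preserves insertion order.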
Note: this has a major performance hit, though, because removing items near the start of an ArrayList forces every later element to shift. To address this, we have two options: 1) iterate in reverse order and remove elements, or 2) use a LinkedList instead of an ArrayList. Interview questions are often slanted toward removing duplicates from a List without using any other collection, and the sketch below is the answer to that version. In the real world, though, if I had to achieve this, I would put the elements from the List into a Set, simple!
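Since the snippet being discussed is not shown above, here is a hedged reconstruction of the no-extra-collection approach, already using the reverse-order iteration from option 1:

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveDuplicatesInPlace {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 1, 3, 2, 4));
        for (int i = 0; i < list.size(); i++) {
            // Walk backwards so removals do not disturb the indices still to visit.
            for (int j = list.size() - 1; j > i; j--) {
                if (list.get(j).equals(list.get(i))) {
                    list.remove(j); // each remove is O(n) on an ArrayList
                }
            }
        }
        System.out.println(list); // [1, 2, 3, 4]
    }
}
```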
To find the duplicates in a List, use the following code: it will give you the set that contains the duplicated elements.
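The referenced code is not reproduced here, but the usual technique relies on Set.add returning false when an element is already present; a sketch:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FindDuplicates {
    // Returns every element that occurs more than once in the input.
    static <T> Set<T> findDuplicates(List<T> input) {
        Set<T> seen = new HashSet<>();
        Set<T> duplicates = new HashSet<>();
        for (T element : input) {
            if (!seen.add(element)) { // Set.add returns false if already present
                duplicates.add(element);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        System.out.println(findDuplicates(List.of(1, 2, 1, 3, 2))); // [1, 2]
    }
}
```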
Simplest: dump the whole collection into a Set (using the HashSet(Collection) constructor or Set.addAll), then see if the Set has the same size as the ArrayList.
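A sketch of that size comparison (the helper name is made up for illustration):

```java
import java.util.HashSet;
import java.util.List;

public class HasDuplicates {
    // True if the list contains at least one repeated element.
    static <T> boolean hasDuplicates(List<T> list) {
        // The HashSet(Collection) constructor silently drops duplicates,
        // so a smaller set means the list had repeats.
        return new HashSet<>(list).size() != list.size();
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicates(List.of(1, 2, 3)));    // false
        System.out.println(hasDuplicates(List.of(1, 2, 2, 3))); // true
    }
}
```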
Update: If I'm understanding your question correctly, you have a 2d array of Block, as in
Block[][] table;
and you want to detect if any row of them has duplicates?
In that case, I could do the following, assuming that Block implements "equals" and "hashCode" correctly:
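A sketch of that check; the Block class below is a hypothetical stand-in built around the getNum() value mentioned in the question, since the real class is not shown:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical Block whose equals/hashCode are based on its int value.
class Block {
    private final int num;
    Block(int num) { this.num = num; }
    public int getNum() { return num; }

    @Override public boolean equals(Object o) {
        return o instanceof Block && ((Block) o).getNum() == num;
    }
    @Override public int hashCode() { return Integer.hashCode(num); }
}

public class TableCheck {
    // True if any row of the table contains two equal Blocks.
    static boolean anyRowHasDuplicates(Block[][] table) {
        for (Block[] row : table) {
            Set<Block> set = new HashSet<>(Arrays.asList(row));
            if (set.size() < row.length) {
                return true;
            }
        }
        return false;
    }
}
```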
I'm not 100% sure of the syntax for that, so it might be safer to write it with explicit index loops, as sketched below.
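A drop-in variant of anyRowHasDuplicates with explicit index loops (again a sketch, not the answer's original code):

```java
static boolean anyRowHasDuplicates(Block[][] table) {
    for (int i = 0; i < table.length; i++) {
        Set<Block> set = new HashSet<>();
        for (int j = 0; j < table[i].length; j++) {
            set.add(table[i][j]); // duplicates collapse inside the set
        }
        if (set.size() < table[i].length) {
            return true;
        }
    }
    return false;
}
```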
If your elements are somehow Comparable (whether the order has any real meaning is irrelevant; it just needs to be consistent with your definition of equality), the fastest duplicate-removal solution is to sort the list (O(n log n)) and then do a single pass looking for repeated elements, that is, equal elements that follow each other (the pass is O(n)).
The overall complexity is O(n log n), which is roughly the same as what you would get with a Set (O(n log n)), but with a much smaller constant. This is because the constant in sort/dedup results from the cost of comparing elements, whereas the cost with the set is most likely to result from a hash computation plus one (possibly several) hash comparisons. That assumes a hash-based Set implementation; a tree-based one is going to give you O(n log²(n)), which is even worse.
As I understand it, however, you do not need to remove duplicates, but merely test for their existence. So you could hand-code a merge- or heap-sort algorithm on your array that simply exits, returning true (i.e. "there is a dup"), as soon as your comparator returns 0, and otherwise completes the sort; then traverse the sorted array testing for repeats. In a merge or heap sort, indeed, once the sort is completed you will have compared every duplicate pair, unless both elements were already in their final positions (which is unlikely). Thus, a tweaked sort algorithm should yield a huge performance improvement (I would have to prove that, but I guess the tweaked algorithm should run well under O(n log n) on uniformly random data).
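For reference, the plain (non-tweaked) sort-then-scan test described above could look like this; copying the list keeps the caller's ordering untouched:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SortedScan {
    // O(n log n) sort, then an O(n) pass over adjacent pairs.
    static <T extends Comparable<T>> boolean hasDuplicates(List<T> list) {
        List<T> sorted = new ArrayList<>(list);
        Collections.sort(sorted);
        for (int i = 1; i < sorted.size(); i++) {
            if (sorted.get(i).compareTo(sorted.get(i - 1)) == 0) {
                return true; // equal neighbours after sorting mean a duplicate
            }
        }
        return false;
    }
}
```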
If you are looking to avoid having duplicates at all, then you should just cut out the intermediate step of detecting them and use a Set from the start.