I have a set of objects in a Vector from which I'd like to select a random subset (e.g. 100 items coming back; pick 5 randomly). In my first (very hasty) pass I came up with an extremely simple and perhaps overly clever solution:
Vector itemsVector = getItems();
Collections.shuffle(itemsVector);
itemsVector.setSize(5);
While this has the advantage of being nice and simple, I suspect it's not going to scale very well: Collections.shuffle() must be at least O(n). My less clever alternative is:
Vector itemsVector = getItems();
Random rand = new Random(System.currentTimeMillis()); // would make this static to the class
List subsetList = new ArrayList(5);
for (int i = 0; i < 5; i++) {
// be sure to use Vector.remove() or you may get the same item twice
subsetList.add(itemsVector.remove(rand.nextInt(itemsVector.size())));
}
Any suggestions on better ways to draw out a random subset from a Collection?
Jon Bentley discusses this in either 'Programming Pearls' or 'More Programming Pearls'. You need to be careful with your N-of-M selection process, but I think the code shown works correctly. Rather than randomly shuffling all the items, you can shuffle only the first N positions, which is a useful saving when N << M.
Knuth also discusses these algorithms - I believe that would be Vol 3 "Sorting and Searching", but my set is packed pending a move of house so I can't formally check that.
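A minimal Java sketch of that partial shuffle, assuming the items arrive as a generic List<T> (the class and method names are illustrative, not from either book): each of the first n slots gets a randomly chosen element swapped in from the not-yet-fixed tail, and the prefix is the sample.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public final class PartialShuffle {

    // Shuffle only the first n positions of a copy of the input, then
    // return that prefix: O(n) swaps instead of O(m) for a full shuffle.
    public static <T> List<T> pickRandomSubset(List<T> items, int n, Random rand) {
        List<T> copy = new ArrayList<>(items);
        for (int i = 0; i < n; i++) {
            // pick an index from the not-yet-fixed tail [i, size)
            int j = i + rand.nextInt(copy.size() - i);
            Collections.swap(copy, i, j);
        }
        return new ArrayList<>(copy.subList(0, n));
    }
}

Copying the input is itself O(m); if mutating the caller's list is acceptable, the copy can be skipped and only the O(n) swaps remain.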
Your second solution of using Random to pick elements seems sound; however (see the sketch after these two points):
Depending on how sensitive your data is, I suggest using some sort of hashing method to scramble the random number seed. For a good case study, see How We Learned to Cheat at Online Poker (but this link is 404 as of 2015-12-18); alternative copies can be found via a Google search on the article title in double quotes.
Vector is synchronized. If possible, use ArrayList instead to improve performance.
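A short sketch combining both points, assuming the picker can work on a plain List; SecureRandom stands in here for "scramble the seed", since it seeds itself from the platform's entropy source rather than the clock:

import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public final class SubsetPicker {

    // SecureRandom is seeded from the platform's entropy source,
    // so the sequence is much harder to predict than a clock-seeded Random.
    private static final Random RAND = new SecureRandom();

    // Same sampling-without-replacement idea as the question's second snippet,
    // but on an unsynchronized ArrayList copy instead of mutating the Vector.
    public static <T> List<T> pick(List<T> items, int n) {
        List<T> pool = new ArrayList<>(items);
        List<T> subset = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            subset.add(pool.remove(RAND.nextInt(pool.size())));
        }
        return subset;
    }
}

If a cryptographically strong generator is overkill, hashing a few entropy sources into a long and passing that to new Random(seed) is a lighter-weight way to scramble the seed.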
I wrote an efficient implementation of this a few weeks back. It's in C# but the translation to Java is trivial (essentially the same code). The plus side is that it's also completely unbiased (which some of the existing answers aren't) - a way to test that is here.
It's based on a Durstenfeld implementation of the Fisher-Yates shuffle.
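The linked bias test isn't reproduced in the thread, but the usual idea can be sketched in Java: sample many times and check that every element is picked at roughly the expected n/m rate (a necessary, though not sufficient, condition for an unbiased sampler). The 5-from-100 numbers mirror the question, and pickRandomSubset is the partial-shuffle sketch from earlier in the thread.

import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public final class BiasCheck {

    public static void main(String[] args) {
        int m = 100, n = 5, trials = 1_000_000;
        List<Integer> items = IntStream.range(0, m).boxed().collect(Collectors.toList());
        long[] counts = new long[m];
        Random rand = new Random();

        for (int t = 0; t < trials; t++) {
            for (Integer picked : PartialShuffle.pickRandomSubset(items, n, rand)) {
                counts[picked]++;
            }
        }

        // Each element should be chosen in roughly n/m of all trials.
        double expected = (double) trials * n / m;
        for (int i = 0; i < m; i++) {
            double deviation = (counts[i] - expected) / expected;
            System.out.printf("item %3d: %7d picks (%+.2f%% from expected)%n",
                    i, counts[i], deviation * 100);
        }
    }
}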
I'd personally opt for your initial implementation: it's very concise, and performance testing will show how well it scales. I've implemented a very similar block of code in a heavily exercised method and it scaled sufficiently. The particular code relied on arrays containing more than 10,000 items as well.
@Jonathan,
I believe this is the solution you're talking about:
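In Java, a rough sketch of the selection-sampling technique that code uses (Knuth's Algorithm S; the names here are illustrative, not Bentley's): walk the list once and take each element with probability needed/remaining.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public final class SelectionSampling {

    // Single pass over the list; each element is taken with probability
    // (still needed) / (still remaining), so exactly n items come out and
    // every n-subset is equally likely. The result keeps the original order.
    public static <T> List<T> sample(List<T> items, int n, Random rand) {
        List<T> result = new ArrayList<>(n);
        int remaining = items.size();
        int needed = n;
        for (T item : items) {
            if (rand.nextInt(remaining) < needed) {
                result.add(item);
                needed--;
            }
            remaining--;
        }
        return result;
    }
}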
It's on page 127 of Programming Pearls by Jon Bentley and is based off of Knuth's implementation.
EDIT: I just saw a further modification on page 129:
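A rough Java sketch of that modification, again with illustrative names: build an index array, shuffle only its first m slots, then sort that prefix if the chosen indices should come back in order.

import java.util.Arrays;
import java.util.Random;

public final class PartialShuffleIndices {

    // Shuffle only the first m positions of the index array 0..n-1,
    // then sort that prefix so the chosen indices come back in order.
    public static int[] sampleIndices(int m, int n, Random rand) {
        int[] x = new int[n];
        for (int i = 0; i < n; i++) {
            x[i] = i;
        }
        for (int i = 0; i < m; i++) {
            int j = i + rand.nextInt(n - i);   // j in [i, n)
            int tmp = x[i]; x[i] = x[j]; x[j] = tmp;
        }
        int[] chosen = Arrays.copyOf(x, m);
        Arrays.sort(chosen);
        return chosen;
    }
}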
This is based on the idea that "...we need shuffle only the first m elements of the array..."