I'm using JUnit to run a primitive benchmark like this:
    @Test
    public void testTime() throws Exception {
        LoadoutBase<?> loadout = new LoadoutStandard("AS7-D-DC");
        final int iterations = 1000000;

        long start = System.nanoTime();
        int sum = 0;
        for (int i = 0; i < iterations; ++i) {
            Iterator<Item> it = loadout.iterator();
            while (it.hasNext()) {
                sum += it.next().getNumCriticalSlots();
            }
        }
        long end = System.nanoTime();
        long time_it = end - start;

        start = System.nanoTime();
        sum = 0;
        for (int i = 0; i < iterations; ++i) {
            for (Item item : loadout.getAllItems()) {
                sum += item.getNumCriticalSlots();
            }
        }
        end = System.nanoTime();
        long time_arrays = end - start;

        System.out.println("it: " + time_it + " array: " + time_arrays
                + " diff: " + (double) time_it / time_arrays);
    }
If I set iterations = 1000000, then I get

    it: 792771504 array: 1109215387 diff: 0.7147137637029551

very consistently, but if I set iterations = 10000, then I get

    it: 32365742 array: 28902811 diff: 1.1198129482976587

with very wild fluctuations; the diff value is anywhere from 0.7 to 1.2.
Could anyone shed some light on what might be happening here? Which method should I choose?
Edit:
What I am really benchmarking is the behind-the-scenes work. getAllItems creates a new List<Item> and populates it by copying the items from 16 internal sublists using addAll. The Iterator approach doesn't construct this temporary list; instead it keeps track of which of the 16 sublists it is currently iterating over and has some logic to make the 16 sublists look like one continuous list.
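For illustration only, a minimal sketch of the two access patterns as described above might look like this; the class name LoadoutSketch and the field name parts for the 16 sublists are assumptions, not the real implementation:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Iterator;
    import java.util.List;
    import java.util.NoSuchElementException;

    // Hypothetical sketch; 'parts' stands in for the 16 internal sublists
    // and Item is the type from the question.
    class LoadoutSketch implements Iterable<Item> {
        private final List<List<Item>> parts = new ArrayList<>();

        // getAllItems(): builds a fresh temporary list on every call.
        public List<Item> getAllItems() {
            List<Item> result = new ArrayList<>();
            for (List<Item> part : parts) {
                result.addAll(part); // copies references from each sublist
            }
            return result;
        }

        // iterator(): walks the sublists in place, no temporary list.
        @Override
        public Iterator<Item> iterator() {
            return new Iterator<Item>() {
                private int partIndex = 0;
                private Iterator<Item> current = parts.isEmpty()
                        ? Collections.<Item>emptyIterator()
                        : parts.get(0).iterator();

                @Override
                public boolean hasNext() {
                    // Hop to the next non-empty sublist when the current one runs out.
                    while (!current.hasNext() && partIndex < parts.size() - 1) {
                        current = parts.get(++partIndex).iterator();
                    }
                    return current.hasNext();
                }

                @Override
                public Item next() {
                    if (!hasNext()) {
                        throw new NoSuchElementException();
                    }
                    return current.next();
                }
            };
        }
    }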
After reading the links from Luiggi Mendoza, I refactored my test code along those lines.
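A minimal sketch of what such a refactor might look like, assuming it follows the advice below (each approach moved into its own method inside the same test class, with a warm-up pass before timing; the method names are illustrative):

    // Illustrative only: each approach in its own method, measured after a warm-up.
    private volatile int sink; // consumed result, so the JIT can't discard the loops

    private long timeIterator(LoadoutBase<?> loadout, int iterations) {
        long start = System.nanoTime();
        int sum = 0;
        for (int i = 0; i < iterations; ++i) {
            Iterator<Item> it = loadout.iterator();
            while (it.hasNext()) {
                sum += it.next().getNumCriticalSlots();
            }
        }
        long elapsed = System.nanoTime() - start;
        sink = sum;
        return elapsed;
    }

    private long timeGetAllItems(LoadoutBase<?> loadout, int iterations) {
        long start = System.nanoTime();
        int sum = 0;
        for (int i = 0; i < iterations; ++i) {
            for (Item item : loadout.getAllItems()) {
                sum += item.getNumCriticalSlots();
            }
        }
        long elapsed = System.nanoTime() - start;
        sink = sum;
        return elapsed;
    }

    @Test
    public void testTime() {
        LoadoutBase<?> loadout = new LoadoutStandard("AS7-D-DC");
        final int iterations = 1000000;

        // Warm-up: give the JIT a chance to compile both paths before timing.
        timeIterator(loadout, iterations);
        timeGetAllItems(loadout, iterations);

        long time_it = timeIterator(loadout, iterations);
        long time_arrays = timeGetAllItems(loadout, iterations);
        System.out.println("it: " + time_it + " array: " + time_arrays
                + " diff: " + (double) time_it / time_arrays);
    }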
Now I get consistent results saying that the iterator approach takes about 0.7 times the execution time of the getAllItems approach. The results are consistent even if I change the order of the tests, so I trust them for this test. Thanks.
Since you want to test the difference between using an Iterator and using the enhanced for loop (which uses an Iterator behind the scenes for you), you're doing it wrong. First, because the JIT has enough time to improve the results of the second approach but not the first, and for several other reasons explained here: How do I write a correct micro-benchmark in Java?. You could see very different results (again, as a result of the JIT) by doing these:

- Wrapping everything in an outer for loop that covers all this code.
- Reordering the tests, allowing the enhanced for loop approach to be executed before.

The best way to obtain real results for your test would be to split the approaches into different methods and measure each one separately, after the JVM warm-up phase (and minding the other pitfalls covered in the previous Q&A). Also, I recommend that you not reinvent the wheel: use a proper benchmark framework based on JUnit.
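As one illustration of letting a harness handle warm-up and measurement instead of hand-rolled timing, here is a sketch using JMH, a widely used Java benchmark harness (my example choice, not necessarily one of the frameworks the answer refers to), reusing the types from the question:

    import java.util.Iterator;
    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @State(Scope.Thread)
    public class LoadoutIterationBenchmark {
        private LoadoutBase<?> loadout;

        @Setup
        public void setUp() {
            loadout = new LoadoutStandard("AS7-D-DC");
        }

        // Returning the sum keeps the JIT from dead-code-eliminating the loop.
        @Benchmark
        public int iteratorApproach() {
            int sum = 0;
            Iterator<Item> it = loadout.iterator();
            while (it.hasNext()) {
                sum += it.next().getNumCriticalSlots();
            }
            return sum;
        }

        @Benchmark
        public int getAllItemsApproach() {
            int sum = 0;
            for (Item item : loadout.getAllItems()) {
                sum += item.getNumCriticalSlots();
            }
            return sum;
        }
    }

JMH runs each @Benchmark method in its own forked JVM with configurable warm-up iterations, which addresses exactly the ordering and JIT effects described above.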