Here is some sample code:
public class TestIO {
    public static void main(String[] str) {
        TestIO t = new TestIO();
        t.fOne();
        t.fTwo();
        t.fOne();
        t.fTwo();
    }

    public void fOne() {
        long t1, t2;
        t1 = System.nanoTime();
        int i = 10;
        int j = 10;
        int k = j * i;
        System.out.println(k);
        t2 = System.nanoTime();
        System.out.println("Time taken by 'fOne' ... " + (t2 - t1));
    }

    public void fTwo() {
        long t1, t2;
        t1 = System.nanoTime();
        int i = 10;
        int j = 10;
        int k = j * i;
        System.out.println(k);
        t2 = System.nanoTime();
        System.out.println("Time taken by 'fTwo' ... " + (t2 - t1));
    }
}
This gives the following output:

100
Time taken by 'fOne' ... 390273
100
Time taken by 'fTwo' ... 118451
100
Time taken by 'fOne' ... 53359
100
Time taken by 'fTwo' ... 115936
Press any key to continue . . .
Why does executing the same method take significantly more time the first time than on consecutive calls?

I tried passing -XX:CompileThreshold=1000000 on the command line, but there was no difference.
Well, the most probable answer is initialization. JIT is for sure not the right answer, as it takes a lot more cycles before it starts to optimize. But on the very first call there can be one-time costs such as class loading, the execution of static initializers, and the lazy setup of the streams behind System.out.
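A minimal sketch of how to keep that one-time initialization out of the measurement, assuming the goal is just to compare a first and a second timed call (the class name and the warm-up line are mine, not from the question):

public class WarmInit {
    public static void main(String[] args) {
        // Touch System.out (and load the relevant classes) once, so that
        // one-time initialization is not charged to the first timed call.
        System.out.println("warming up");

        long t1 = System.nanoTime();
        System.out.println(10 * 10);
        long t2 = System.nanoTime();
        System.out.println("first timed call ... " + (t2 - t1));

        t1 = System.nanoTime();
        System.out.println(10 * 10);
        t2 = System.nanoTime();
        System.out.println("second timed call ... " + (t2 - t1));
    }
}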
As has been suggested, JIT could be the culprit, but so could I/O wait time as well as resource wait time if other processes on the machine were using resources at that moment.
The moral of this story is that microbenchmarking is a hard problem, especially in Java. I don't know why you're doing this, but if you're trying to choose between two approaches to a problem, don't measure them this way. Use the strategy design pattern, run your entire program with each of the two approaches, and measure the whole system. That evens out little bumps in processing time over the long run and gives you a much more realistic view of how much the performance of your entire app is bottlenecked at that point (hint: it's probably less than you think).
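A minimal sketch of what that could look like; the strategy interface and the two implementations here are hypothetical placeholders for whatever alternatives you are actually comparing:

// Hypothetical strategy interface for the two approaches being compared.
interface ComputeStrategy {
    int compute(int a, int b);
}

class MultiplyStrategy implements ComputeStrategy {
    public int compute(int a, int b) { return a * b; }
}

class AddRepeatedlyStrategy implements ComputeStrategy {
    public int compute(int a, int b) {
        int result = 0;
        for (int n = 0; n < b; n++) result += a;
        return result;
    }
}

public class WholeProgramTiming {
    // Run the whole workload with one strategy and time the entire run,
    // rather than timing a single call.
    static long runWorkload(ComputeStrategy strategy) {
        long start = System.nanoTime();
        long sink = 0; // keep results live so they are not optimized away
        for (int n = 0; n < 10_000_000; n++) {
            sink += strategy.compute(10, 10);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("sink=" + sink); // use the result
        return elapsed;
    }

    public static void main(String[] args) {
        System.out.println("multiply: " + runWorkload(new MultiplyStrategy()) + " ns");
        System.out.println("add:      " + runWorkload(new AddRepeatedlyStrategy()) + " ns");
    }
}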
The most likely culprit is the JIT (just-in-time) HotSpot engine. Basically, the first time a piece of code is executed it gets compiled to machine code, which the JVM "remembers" and reuses on future executions.
In addition to JITting, other factors could be class loading, CPU and OS caches that are still cold on the first call, and other processes competing for the machine at that moment.
If you want to get good benchmarks, you should run a warm-up phase first, repeat the measurement many times, discard the first few runs, and aggregate the rest (for example by averaging).
There are benchmarking libraries on several platforms that will help you do this stuff; they can also calculate standard deviations and other statistics.
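On the JVM, JMH is one such harness. A minimal sketch, assuming a standard JMH setup (the benchmark class and method names are mine; running it via the usual JMH runner or Maven archetype is omitted):

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Warmup;

public class MultiplyBenchmark {

    // JMH runs the warm-up iterations first, then the measured iterations,
    // and reports averages, error bounds, and other statistics.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @Warmup(iterations = 5)
    @Measurement(iterations = 5)
    @Fork(1)
    public int multiply() {
        int i = 10;
        int j = 10;
        return j * i; // return the result so the JIT cannot discard it as dead code
    }
}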
The code tested is quite trivial. The most expensive action taken is the call to System.out.println, so what you are really measuring is how fast the debug output is written. This varies widely, and may even depend on the position of the console window on the screen, whether it needs to scroll, its size, and so on.
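One way to avoid that (a hypothetical variation on the code from the question) is to keep the I/O out of the timed region and print only after the measurement:

public class TestNoIO {
    public static void main(String[] args) {
        TestNoIO t = new TestNoIO();
        t.fOne();
        t.fOne();
    }

    public void fOne() {
        long t1 = System.nanoTime();
        int i = 10;
        int j = 10;
        int k = j * i;
        long t2 = System.nanoTime();
        // Printing happens outside the measured interval, so console speed
        // no longer pollutes the timing.
        System.out.println(k);
        System.out.println("Time taken by 'fOne' ... " + (t2 - t1));
    }
}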
The JIT/HotSpot compiler incrementally optimizes often-used codepaths.
The processor optimizes for expected codepaths. Paths used more often execute faster.
Your sample size is way too small. Such microbenchmarks usually need a warm-up phase; you can see how extensively this should be done in write-ups like "Java is really fast at doing nothing".
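A minimal hand-rolled version of such a warm-up, with arbitrary iteration counts of my own choosing:

public class WarmupBenchmark {

    static int work() {
        int i = 10;
        int j = 10;
        return j * i;
    }

    public static void main(String[] args) {
        long sink = 0;

        // Warm-up phase: run the code enough times for class loading,
        // caches, and the JIT to settle before anything is measured.
        for (int n = 0; n < 1_000_000; n++) {
            sink += work();
        }

        // Measurement phase: time a large number of calls and report the average.
        int runs = 1_000_000;
        long start = System.nanoTime();
        for (int n = 0; n < runs; n++) {
            sink += work();
        }
        long elapsed = System.nanoTime() - start;

        System.out.println("sink=" + sink); // keep the result live so it is not optimized away
        System.out.println("average ns per call: " + (double) elapsed / runs);
    }
}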
I think it is because, by the second call, the generated code had already been optimized during the first run.