Why do two consecutive calls to the same method yield different execution times?

Posted 2019-07-09 01:41

Here is a sample code:

public class TestIO {
    public static void main(String[] str) {
        TestIO t = new TestIO();
        t.fOne();
        t.fTwo();
        t.fOne();
        t.fTwo();
    }

    public void fOne() {
        long t1, t2;
        t1 = System.nanoTime();
        int i = 10;
        int j = 10;
        int k = j * i;
        System.out.println(k);
        t2 = System.nanoTime();
        System.out.println("Time taken by 'fOne' ... " + (t2 - t1));
    }

    public void fTwo() {
        long t1, t2;
        t1 = System.nanoTime();
        int i = 10;
        int j = 10;
        int k = j * i;
        System.out.println(k);
        t2 = System.nanoTime();
        System.out.println("Time taken by 'fTwo' ... " + (t2 - t1));
    }
}

This gives the following output:

    100
    Time taken by 'fOne' ... 390273
    100
    Time taken by 'fTwo' ... 118451
    100
    Time taken by 'fOne' ... 53359
    100
    Time taken by 'fTwo' ... 115936
    Press any key to continue . . .

Why does it take more time (significantly more) to execute the same method for the first time than the consecutive calls?

I tried giving -XX:CompileThreshold=1000000 to the command line, but there was no difference.

8 answers
We Are One
#2 · 2019-07-09 02:22

Well, the most probable answer is initialization. The JIT is almost certainly not the right answer, as it takes many more invocations before it starts to optimize. But on the very first call there can be:

  • class lookup (the result is cached, so no second lookup is needed)
  • class loading (once loaded, a class stays in memory)
  • loading additional code from native libraries (native code is cached)
  • finally, loading the code to be executed into the CPU's L1 cache. That is the most probable cause of the speedup in your case, and at the same time a reason why the benchmark (being a microbenchmark) does not say much. If your code is small enough, the second invocation of a loop can run entirely from inside the CPU, which is fast. In the real world this does not happen, because programs are bigger and L1 cache reuse is nowhere near as effective.
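The class-lookup caching mentioned in the first two bullets can be observed directly: repeated lookups of the same class return the identical Class object from the JVM's loaded-class cache. A minimal sketch (the timing numbers will vary by machine, and the first lookup may already be warm if the class was loaded earlier):

```java
// Minimal sketch: the first Class.forName may trigger loading; repeat
// lookups hit the JVM's loaded-class cache and return the same object.
public class ClassCacheDemo {
    public static boolean sameClassObject(String name) throws ClassNotFoundException {
        Class<?> first = Class.forName(name);   // may trigger loading on first use
        Class<?> second = Class.forName(name);  // served from the cache
        return first == second;                 // reference identity, not just equals()
    }

    public static void main(String[] args) throws Exception {
        long t1 = System.nanoTime();
        Class.forName("java.util.ArrayList");   // first lookup
        long t2 = System.nanoTime();
        Class.forName("java.util.ArrayList");   // second lookup: cached
        long t3 = System.nanoTime();
        System.out.println("first lookup:  " + (t2 - t1) + " ns");
        System.out.println("second lookup: " + (t3 - t2) + " ns");
        System.out.println("same object:   " + sameClassObject("java.util.ArrayList"));
    }
}
```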
查看更多
smile是对你的礼貌
#3 · 2019-07-09 02:24

As has been suggested, the JIT could be the culprit, but so could I/O wait time, as well as resource wait time if other processes on the machine were using resources at that moment.

The moral of this story is that microbenchmarking is a hard problem, especially for Java. I don't know why you're doing this, but if you're trying to choose between two approaches to a problem, don't measure them this way. Use the strategy design pattern, run your entire program with each of the two approaches, and measure the whole system. That evens out little bumps in processing time over the long run and gives you a much more realistic view of how much the performance of your entire app is bottlenecked at that point (hint: it's probably less than you think).
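The strategy-pattern measurement suggested above could look something like this. It's only a sketch: the `Workload` interface and both implementations are made-up names for illustration, and the point is that you time the whole job, not a micro-slice of it:

```java
// Sketch: compare two approaches by swapping strategies and timing the
// entire workload, instead of timing individual method calls.
// Workload, LoopSum, and FormulaSum are hypothetical names.
public class StrategyBench {
    interface Workload {
        long run(int n);  // the whole task, not a micro-slice
    }

    // Approach A: accumulate with a loop.
    static class LoopSum implements Workload {
        public long run(int n) {
            long sum = 0;
            for (int i = 1; i <= n; i++) sum += i;
            return sum;
        }
    }

    // Approach B: closed-form formula n*(n+1)/2.
    static class FormulaSum implements Workload {
        public long run(int n) {
            return (long) n * (n + 1) / 2;
        }
    }

    // Time one complete run; small per-call bumps matter less when the
    // measured region is the whole job.
    static long timeWhole(Workload w, int n) {
        long t1 = System.nanoTime();
        w.run(n);
        return System.nanoTime() - t1;
    }

    public static void main(String[] args) {
        int n = 10_000_000;
        System.out.println("loop:    " + timeWhole(new LoopSum(), n) + " ns");
        System.out.println("formula: " + timeWhole(new FormulaSum(), n) + " ns");
    }
}
```

Because both strategies implement the same interface, the rest of the program stays unchanged while you swap approaches.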

爷的心禁止访问
#4 · 2019-07-09 02:33

The most likely culprit is the JIT (just-in-time) HotSpot engine. Basically, the first time code is executed, the compiled machine code is "remembered" by the JVM and then reused on future executions.

Evening l夕情丶
#5 · 2019-07-09 02:34

In addition to JITting, other factors could be:

  • The process's output stream blocking when you call System.out.println
  • Your process getting scheduled out by another process
  • The garbage collector doing some work on a background thread

If you want to get good benchmarks, you should

  • Run the code you're benchmarking a large number of times, several thousand at least, and calculate the average time.
  • Ignore the times of the first several calls (due to JITting, etc.)
  • Disable the GC if you can; this may not be an option if your code generates a lot of objects.
  • Take the logging (println calls) out of the code being benchmarked.

There are benchmarking libraries on several platforms that will help you do this stuff; they can also calculate standard deviations and other statistics.
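The checklist above can be sketched as a small hand-rolled harness: warm up first, average over many iterations, and keep the println out of the timed region. This is a minimal illustration, not a substitute for a real benchmarking library:

```java
// Hand-rolled microbenchmark harness following the checklist above:
// warm up, run many iterations, keep I/O outside the timed region.
public class TinyBench {
    // Volatile sink keeps the result observable so the work is not
    // trivially optimized away.
    static volatile int sink;

    static void workUnderTest() {
        int i = 10, j = 10;
        sink = i * j;   // same arithmetic as fOne/fTwo, minus the println
    }

    // Average time per call in nanoseconds, after discarding warm-up runs.
    static double averageNanos(int warmup, int measured) {
        for (int n = 0; n < warmup; n++) workUnderTest();   // timings discarded
        long t1 = System.nanoTime();
        for (int n = 0; n < measured; n++) workUnderTest();
        long t2 = System.nanoTime();
        return (double) (t2 - t1) / measured;
    }

    public static void main(String[] args) {
        double avg = averageNanos(100_000, 1_000_000);
        System.out.println("avg per call: " + avg + " ns");  // logging after measurement
    }
}
```

For anything serious, a benchmarking library handles warm-up, dead-code elimination, and statistics far more robustly than a loop like this.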

等我变得足够好
#6 · 2019-07-09 02:36
  1. The code tested is quite trivial. The most expensive action taken is

     System.out.println(k);
    

     so what you are measuring is how fast the debug output is written. That varies widely and may even depend on the position of the console window on the screen, whether it needs to scroll, its size, and so on.

  2. JIT/Hotspot incrementally optimizes often-used codepaths.

  3. The processor optimizes for expected codepaths. Paths used more often execute faster.

  4. Your sample size is way too small. Such microbenchmarks usually run a warm-up phase first; without one, what you mostly measure is startup effects, and Java is really fast at doing nothing.

趁早两清
#7 · 2019-07-09 02:38

I think it is because, by the second run, the generated code had already been optimized after the first run.
