HotSpot JIT optimization and “de-optimization”

Published 2020-07-22 16:20

Question:

I have a big application that I'm trying to optimize. To do so, I profile/benchmark small pieces of it by running them millions of times in a loop and measuring their processing time.

Obviously HotSpot's JIT kicks in, and I can actually see when that happens: things clearly run much faster after the "warm-up" period.

However, after reaching its fastest execution speed and holding it for some time, the code slows down to a less impressive speed and stays there.

What's executed in the loop doesn't actually change much, so I can hardly see why something like escape analysis would force a "de-optimization" of the code.

Basically, I get the feeling the JIT reaches the best performance, then settles for something slower, deciding that it's "enough".

Is there any way to tell it "it's not enough, I really want that code to run as fast as possible"? I know it can, because it already did. How can I force it to do so?
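Since the post doesn't include the benchmark loop itself, here is a minimal sketch of the kind of self-timing harness described above (the class name and workload are hypothetical stand-ins for the real application code). Hand-rolled loops like this are prone to JIT artifacts such as dead-code elimination, which is why the result must be kept live:

```java
// Minimal warm-up benchmark sketch (assumed harness, not from the original post).
// Runs the workload in timed batches so a change in speed after JIT warm-up is visible.
public class WarmupBench {
    // Hypothetical workload standing in for the real application code:
    // a simple linear-congruential hash, deterministic per seed.
    static long work(long seed) {
        long h = seed;
        for (int i = 0; i < 1000; i++) {
            h = h * 6364136223846793005L + 1442695040888963407L;
        }
        return h;
    }

    public static void main(String[] args) {
        long sink = 0; // consume results so the JIT cannot eliminate the loop
        for (int batch = 0; batch < 50; batch++) {
            long start = System.nanoTime();
            for (int i = 0; i < 100_000; i++) {
                sink += work(i);
            }
            long elapsed = System.nanoTime() - start;
            System.out.printf("batch %2d: %8d us%n", batch, elapsed / 1_000);
        }
        System.out.println("sink=" + sink); // keep the accumulated result live
    }
}
```

For serious measurements, a dedicated harness such as JMH handles warm-up iterations, dead-code elimination, and on-stack replacement pitfalls for you.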

Answer 1:

It's impossible to trace this with a "product" (i.e., normally released) build. I would put the code through YourKit and see what's happening; HotSpot will try to apply the best possible optimization at all times.

It will only slow down if your code forces a de-optimization in the JIT. In the profiler you should be able to see the likely causes, such as a large number of allocations or a huge number of exceptions.



Answer 2:

You should use VisualVM, which comes with the JDK, to find memory leaks and CPU usage problems. It's pretty good for profiling and has many plugins. You can also have Eclipse start VisualVM automatically when an application runs; NetBeans probably has something similar. It also doesn't impact performance much, provided you use sampling rather than instrumented profiling.

I used to prefer JRockit Mission Control (with the JRockit VM) over this, but VisualVM does the job these days.