I am doing some Java performance comparisons between my classes, and am wondering whether there is some sort of Java performance framework that makes writing performance-measurement code easier.
I.e., what I am doing now is trying to measure what effect making a method synchronized, as in PseudoRandomUsingSynch.nextInt(), has compared to using an AtomicInteger as my "synchronizer".
So I am trying to measure how long it takes to generate random integers using 3 threads accessing a synchronized method, looping, say, 10,000 times.
I am sure there is a much better way doing this. Can you please enlighten me? :)
public static void main( String [] args ) throws InterruptedException, ExecutionException {
    PseudoRandomUsingSynch rand1 = new PseudoRandomUsingSynch((int) System.currentTimeMillis());
    int n = 3;
    ExecutorService execService = Executors.newFixedThreadPool(n);
    long timeBefore = System.currentTimeMillis();
    for (int idx = 0; idx < 100000; ++idx) {
        // submit three tasks per iteration and wait for each result
        Future<Integer> future = execService.submit(rand1);
        Future<Integer> future1 = execService.submit(rand1);
        Future<Integer> future2 = execService.submit(rand1);
        int random1 = future.get();
        int random2 = future1.get();
        int random3 = future2.get();
    }
    long timeAfter = System.currentTimeMillis();
    long elapsed = timeAfter - timeBefore;
    System.out.println("elapsed:" + elapsed);
}
The class:
import java.security.SecureRandom;
import java.util.concurrent.Callable;

public class PseudoRandomUsingSynch implements Callable<Integer> {

    private int seed;

    public PseudoRandomUsingSynch(int s) { seed = s; }

    // the synchronized method under test
    public synchronized int nextInt(int n) {
        byte[] s = DonsUtil.intToByteArray(seed);
        SecureRandom secureRandom = new SecureRandom(s);
        return secureRandom.nextInt() % n;
    }

    @Override
    public Integer call() throws Exception {
        return nextInt((int) System.currentTimeMillis());
    }
}
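For reference, the AtomicInteger counterpart I have in mind is roughly the following sketch (the seed-update step is only illustrative):

import java.security.SecureRandom;
import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicInteger;

public class PseudoRandomUsingAtomic implements Callable<Integer> {

    private final AtomicInteger seed;

    public PseudoRandomUsingAtomic(int s) { seed = new AtomicInteger(s); }

    // lock-free: advance the seed with a CAS loop instead of holding a monitor
    public int nextInt(int n) {
        int prev, next;
        do {
            prev = seed.get();
            next = prev + 1;   // illustrative seed update; any deterministic step would do
        } while (!seed.compareAndSet(prev, next));
        SecureRandom secureRandom = new SecureRandom(DonsUtil.intToByteArray(next));
        return secureRandom.nextInt() % n;
    }

    @Override
    public Integer call() throws Exception {
        return nextInt((int) System.currentTimeMillis());
    }
}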
Regards
More micro-benchmarking advice: micro-benchmarks rarely tell you what you really need to know, which is how fast a real application is going to run.
In your case, I imagine you are trying to figure out whether your application will perform better using an Atomic object than using synchronized, or vice versa. And the real answer is that it most likely depends on factors that a micro-benchmark cannot measure: things like the probability of contention, how long locks are held, the number of threads and processors, and the amount of extra algorithmic work needed to make an atomic update a viable solution.
EDIT - in response to this question.
In theory yes. Once you have implemented the entire application, it is possible to instrument it to measure these things. But that doesn't give you your answer either, because there isn't a predictive model you can plug these numbers into to give the answer. And besides, you've already implemented the application by then.
But my point was not that measuring these factors allows you to predict performance. (It doesn't!) Rather, it was that a micro-benchmark does not allow you to predict performance either.
In reality, the best approach is to implement the application according to your intuition, and then use profiling as the basis for figuring out where the real performance problems are.
The OpenJDK guys have developed a benchmarking tool called JMH:
http://openjdk.java.net/projects/code-tools/jmh/
It provides a framework that is quite easy to set up, and there are a couple of samples showing how to use it.
http://hg.openjdk.java.net/code-tools/jmh/file/tip/jmh-samples/src/main/java/org/openjdk/jmh/samples/
Nothing can prevent you from writing a wrong benchmark, but they did a great job of eliminating the non-obvious mistakes (such as false sharing between threads, dead-code elimination, and so on).
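For this particular comparison, a JMH benchmark could look roughly like the sketch below. The class, method names, and the LCG step are mine (stand-ins for your real generator); returning the result lets JMH keep the work alive, and @Threads(3) runs each benchmark with three contending threads.

import java.util.concurrent.atomic.AtomicInteger;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Threads;

@State(Scope.Benchmark)
public class SynchVsAtomicBenchmark {

    private int syncSeed = 42;
    private final AtomicInteger atomicSeed = new AtomicInteger(42);

    @Benchmark
    @Threads(3)                          // three threads contend on the monitor
    public synchronized int synchronizedNext() {
        syncSeed = syncSeed * 1103515245 + 12345;   // stand-in generator step
        return syncSeed;                 // return the value so JMH prevents dead-code elimination
    }

    @Benchmark
    @Threads(3)                          // three threads contend on the CAS loop
    public int atomicNext() {
        int prev, next;
        do {
            prev = atomicSeed.get();
            next = prev * 1103515245 + 12345;
        } while (!atomicSeed.compareAndSet(prev, next));
        return next;
    }
}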
You probably want to move the loop into the task. As it is, you just start all the threads and almost immediately you're back to single-threaded execution, because each submitted call finishes so quickly. For example:
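A sketch of what I mean, reusing rand1, n and execService from the question (plus java.util.List/ArrayList):

// Each task runs the whole loop itself, so the three threads really do
// contend with each other instead of finishing one quick call at a time.
Callable<Long> task = new Callable<Long>() {
    public Long call() {
        long sum = 0;
        for (int i = 0; i < 100000; i++) {
            sum += rand1.nextInt(100);   // keep the result so the work can't be optimised away
        }
        return sum;
    }
};
long timeBefore = System.nanoTime();
List<Future<Long>> futures = new ArrayList<Future<Long>>();
for (int t = 0; t < n; t++) {
    futures.add(execService.submit(task));
}
for (Future<Long> f : futures) {
    f.get();                             // wait for all three threads to finish
}
System.out.println("elapsed ns: " + (System.nanoTime() - timeBefore));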
Usual microbenchmarking advice: allow for some warm-up. As well as the average, the deviation is interesting. Use System.nanoTime instead of System.currentTimeMillis.

Specific to this problem is how much the threads fight. With a large number of contending threads, CAS loops can perform wasted work. Creating a SecureRandom is probably expensive, and so might System.currentTimeMillis be, to a lesser extent. I believe SecureRandom should already be thread safe, if used correctly.

These guys designed a good JVM measurement methodology so you won't fool yourself with bogus numbers, and then published it as a Python script so you can re-use their smarts:
Statistically Rigorous Java Performance Evaluation (pdf paper)
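For a hand-rolled harness, the warm-up / nanoTime / deviation advice above might translate into a sketch like this (run counts and the method name are purely illustrative):

// Warm up first, then time several runs with System.nanoTime and
// report mean and standard deviation of the per-run times.
static void measure(Runnable task) {
    final int warmupRuns = 20, measuredRuns = 30;
    for (int i = 0; i < warmupRuns; i++) {
        task.run();                      // let the JIT compile the hot code first
    }
    double[] millis = new double[measuredRuns];
    for (int i = 0; i < measuredRuns; i++) {
        long start = System.nanoTime();
        task.run();
        millis[i] = (System.nanoTime() - start) / 1_000_000.0;
    }
    double mean = 0;
    for (double m : millis) mean += m;
    mean /= measuredRuns;
    double var = 0;
    for (double m : millis) var += (m - mean) * (m - mean);
    double stdDev = Math.sqrt(var / (measuredRuns - 1));
    System.out.printf("mean %.3f ms, stddev %.3f ms%n", mean, stdDev);
}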
Ignoring the question of whether a microbenchmark is useful in your case (Stephen C's points are very valid), I would point out:
Firstly, don't listen to people who say 'it's not that hard'. Yes, microbenchmarking on a virtual machine with JIT compilation is difficult. It's actually really difficult to get meaningful and useful figures out of a microbenchmark, and anyone who claims it's not hard is either a supergenius or doing it wrong. :)
Secondly, yes, there are a few such frameworks around. One worth looking at (though it's at a very early pre-release stage) is Caliper, by Kevin Bourrillion and Jesse Wilson of Google. It looks really impressive from a few early looks at it.
In short, you are thus looking for a "Java unit performance testing tool"?
Use JUnitPerf.
Update: in case it's not clear yet: it also supports concurrent (multithreaded) testing; the "LoadTest" chapter of the aforementioned link includes a code sample.
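A minimal sketch along those lines (from memory of the com.clarkware.junitperf API; ExampleTestCase stands in for your own JUnit test case):

import com.clarkware.junitperf.LoadTest;
import junit.framework.Test;

public class ExampleLoadTest {
    public static Test suite() {
        // run the same test case with 10 concurrent users
        Test testCase = new ExampleTestCase("testOneSecondResponse");
        return new LoadTest(testCase, 10);
    }
}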