We run the 32-bit Sun Java 5 JVM on 64-bit Linux 2.6 servers, but apparently this limits our maximum memory per process to 2GB. So it's been proposed that we upgrade to the 64-bit JVM to remove that restriction. We currently run multiple JVMs (Tomcat instances) on each server in order to stay under the 2GB limit, but we'd like to consolidate them in the interest of simplifying deployment.
If you've done this, can you share your experiences, please? Are you running 64-bit JVMs in production? Would you recommend staying on Java 5, or would it be OK to move to Java 6 and 64 bits simultaneously? Should we expect performance changes, for better or worse? Are there particular areas on which we should focus our regression testing?
Thanks for any tips!
We use a 64-bit JVM with heaps of around 40 GB. In our application, a lot of data is cached, resulting in a large "old" generation. The default garbage-collection settings did not work well and needed some painful tuning in production. Lesson: make sure you have adequate load-testing infrastructure before you scale up like this. That said, once we got the kinks worked out, GC performance has been great.
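As a concrete starting point for that kind of tuning, the usual approach on Sun JVMs of this era is the concurrent collector with explicitly sized generations. The flags and values below are illustrative only, not a recipe (and not necessarily exactly what we used); every workload needs its own load-test pass:

    java -Xms40g -Xmx40g \
         -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
         -XX:NewSize=2g -XX:MaxNewSize=2g \
         -XX:CMSInitiatingOccupancyFraction=70 \
         -XX:+UseCMSInitiatingOccupancyOnly \
         -jar app.jar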
At the Kepler Science Operations Center we have about 50 machines with 32-64 GB of RAM each. The JVM heaps are typically 7-20 GB. We are using Java 6, and the OS runs a Linux 2.6 kernel.
When we migrated to 64-bit I expected there would be some issues with running the 64-bit JVM, but really there have not been any. Out-of-memory conditions are more difficult to debug since the heap dumps are so much larger. The Java Service Wrapper needed some modifications to support the larger heap sizes.
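I don't remember the exact changes, but the usual workaround with the Wrapper is to disable its own heap handling and pass -Xmx through as a raw JVM argument (property names from Tanuki's wrapper.conf; behavior may vary by Wrapper version, and the size shown is illustrative):

    # wrapper.conf
    # 0 tells the Wrapper to omit its own -Xmx entirely
    wrapper.java.maxmemory=0
    # pass the heap size straight to the JVM instead
    wrapper.java.additional.1=-Xmx16g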
There are some sites on the web claiming GC does not scale well past 2 GB, but I've not seen any problems. Finally, we are doing throughput-intensive rather than interactive (latency-sensitive) computing. I've never looked at latency differences; my guess is that worst-case GC latency will be longer with the larger heap sizes.
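If you do care about latency, the JVM can report its own worst-case pauses. On the Sun 5/6 JVMs, GC logging flags along these lines will show per-collection pause times (exact flag availability varies by update level):

    java -Xmx20g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -XX:+PrintGCApplicationStoppedTime -Xloggc:gc.log -jar app.jar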
After migrating from 32-bit JDK 5 to 64-bit JDK 6 (on a Windows server), we got a leak in the "perm gen space" memory pool. After playing with JVM parameters it was resolved. Hope you will be luckier than we were.
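For anyone hitting the same thing: the parameters that usually matter for perm gen on JVMs of that vintage are its sizing and the class-unloading flags. Something along these lines (values are illustrative, the CMS flags only apply if you run the concurrent collector, and I can't say these are exactly what we used):

    java -XX:PermSize=128m -XX:MaxPermSize=256m \
         -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled \
         -jar app.jar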
I can confirm Sean's experience. We are running pure-Java, computationally intensive web services (a home-cooked Jetty integration, nowadays with more than 1k servlet threads and >6 GB of loaded data in memory), and all our applications scaled very well to a 64-bit JVM when we migrated two years ago. I would advise using the latest Sun JVM, as substantial improvements to GC overhead have been made in the last few releases. I did not have any issues with Tanukisoftware's Wrapper either.
If you use numactl --hardware you can see the size of the memory banks in your server. I have found that GC doesn't scale well when the heap uses more than one memory bank. This is more a hardware than a software issue IMHO, but it can affect your GC times all the same.
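To see the layout and then confine the JVM to a single bank, something like this works, assuming node 0 can hold your whole heap (node numbers and sizes are machine-specific, and the heap size is illustrative):

    # per-node memory sizes
    numactl --hardware
    # current policy and visible nodes
    numactl --show
    # run the JVM with CPU and memory pinned to node 0
    numactl --cpunodebind=0 --membind=0 java -Xmx12g -jar app.jar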
Any JNI code you have written that assumes it's running in 32 bits will need to be retested. For problems you may run into porting C code from 32 to 64 bits, see this link. It's not JNI-specific but still applies: http://www.ibm.com/developerworks/library/l-port64.html
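The classic trap is a wrapper class that stashes a native pointer in a Java int, which truncates silently on 64 bits. A minimal sketch (hypothetical class, not from any real library):

    // Hypothetical JNI wrapper showing the 32-to-64-bit pointer trap.
    public class NativeBuffer {
        // BROKEN on a 64-bit JVM: native pointers are 8 bytes,
        // so storing one in an int drops the upper half.
        // private int handle;

        // Portable: a Java long is 64 bits on every platform.
        private long handle;

        private native long alloc(int size);   // returns the native pointer
        private native void free(long handle); // releases it

        public NativeBuffer(int size) { handle = alloc(size); }
        public void dispose()         { free(handle); handle = 0; }
    }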