I asked the same question in a different way and the question was closed: https://stackoverflow.com/questions/7231460/java-64-bit-or-32-bit
This is my second try at getting objective answers.
We were contemplating moving our product to 64-bit Java for those customers who are pushing the boundaries of the 32-bit server JVM on Solaris (SPARC) and Linux (RHEL 5.x). Our director asked "some years ago, 64-bit wasn't quite there. How about now?"
For those customers not pushing the 4 GB boundary, will using a 64-bit JVM have adverse effects in terms of performance? If yes, how much? We create a lot of objects. (We don't want to support 32-bit and 64-bit JVMs at the same time; it's preferably an either/or situation.)
For those pushing the 4 GB boundary, can we expect the JVM to be as stable as the 32-bit one?
Will performance be an issue? If yes, how much? We create a lot of objects.
What GC tuning techniques are new?
Profilers: are they quite there for profiling 64-bit JVM apps?
UPDATE: To those commenters and those who closed my earlier question: I believe I NOW understand your angst. At the same time, I believe some of you made (untrue) assumptions that I was lazy or a suit with no clue. I'll post my findings after my investigation. Thanks to all those who gave me real pointers.
Since you can pretty much do the testing yourself, I'm assuming you are expecting "real life" answers as to the experience of using a 32/64 bit JVM.
I'm part of a team which creates financial applications for banks. We use 32-bit JVMs for development on Windows, and almost all our server applications run on RHEL with a 64-bit JVM. Performance has never been a problem for us since we use decent machines (our regular server machines are 32-core AMD boxes with at least 32 GiB of RAM).
As expected, the heap size goes up due to the difference in pointer sizes, but since the limit is no longer 4 GiB, that doesn't bother us either. Our application also creates a lot of objects. We have no problem debugging our application with either VisualVM or command-line tools. Regarding GC settings, we use the defaults and only change them when we can measure that it's actually the GC causing problems. Allocating a heap much larger than the application needs (8 GiB) is common for us, and you then see a single big GC sweep around once a day, which is pretty good.
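For what it's worth, here is a minimal sketch (only the standard java.lang.management API is assumed, nothing vendor-specific) for confirming at runtime which heap limits the JVM actually picked up:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Minimal sketch: print the configured and used heap so you can confirm
// the -Xms/-Xmx values the JVM actually applied. Works the same on
// 32-bit and 64-bit JVMs; an "init" or "max" of -1 means undefined.
public class HeapCheck {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("init=%d MiB, used=%d MiB, committed=%d MiB, max=%d MiB%n",
                heap.getInit() >> 20, heap.getUsed() >> 20,
                heap.getCommitted() >> 20, heap.getMax() >> 20);
    }
}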
I've heard that JDK 7 has a new GC algorithm, the G1 garbage collector, which is better suited to server applications, so you should definitely give it a try.
All that being said, your best bet is to measure as much as possible (32-bit vs. 64-bit, on 32-bit and 64-bit machines respectively) and then decide. FYI, our organization is pushing forward with 64-bit JVMs across the board, given that most stock server hardware these days is 64-bit and 4 GiB is pretty restrictive for a modern server application (at least in our domain).
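As a small aid when doing that measurement, here is a hedged sketch for labelling each benchmark run with the data model actually in use (note: sun.arch.data.model is a HotSpot-specific property and may be missing on other VMs, hence the fallback):
// Sketch: report the JVM data model so benchmark results are labelled correctly.
// "sun.arch.data.model" is HotSpot-specific; os.arch is the portable fallback.
public class BitnessCheck {
    public static void main(String[] args) {
        String model = System.getProperty("sun.arch.data.model", "unknown");
        String arch = System.getProperty("os.arch");
        System.out.println("Data model: " + model + "-bit, os.arch=" + arch);
    }
}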
We were contemplating moving our product to 64-bit Java for those customers who are pushing the boundaries of the 32-bit server JVM on Solaris (SPARC) and Linux (RHEL 5.x). Our director asked "some years ago, 64-bit wasn't quite there. How about now?"
Sun has been doing 64-bit for much longer than Windows. Solaris 2.5 (1995, the same year Windows 95 was released) wasn't as reliable in 64-bit as it could have been. Many people still on SunOS (32-bit) didn't see the point, and few machines had enough memory for it to matter. Solaris 2.6 (1997) saw the first significant migration to the 64-bit platform. I didn't use Java seriously until 1999 (on Solaris), and by that point 64-bit was already established in my mind.
1) For those customers not pushing the 4 GB boundary, will using a 64-bit JVM have adverse effects in terms of performance? If yes, how much?
The 64-bit JVM has registers that are twice the size, and twice as many of them. If you use long a lot you can see a dramatic improvement; for typical applications, however, the difference is 5-10% either way.
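A crude way to see this for yourself is a long-heavy loop run on both JVMs. This is only a sketch, not a rigorous benchmark (a proper benchmark harness with warm-up would be needed for trustworthy numbers):
// Crude sketch of a long-heavy loop; run the same class on a 32-bit and a
// 64-bit JVM and compare wall-clock time. Not a rigorous benchmark.
public class LongLoop {
    public static void main(String[] args) {
        long acc = 1;
        long start = System.nanoTime();
        for (long i = 1; i < 200000000L; i++) {
            acc = acc * 6364136223846793005L + i;   // 64-bit multiply and add
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        System.out.println("acc=" + acc + " took " + elapsedMs + " ms");
    }
}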
we create a lot of objects.
IMHO, if this isn't already recognised as a problem, performance hasn't been much of an issue for you. Use any profiler; there are two reports, CPU and memory usage. Often examining the memory profile makes more of a performance difference (see below).
(we don't want to support 32-bit and 64-bit JVMs at the same time; it's preferably an either/or situation)
I can't say there is much difference. What do you imagine the overhead of supporting both would be? The code is exactly the same; from your point of view it might increase testing slightly. It's not much different from supporting two versions of Java 6.
2) For those pushing the 4 GB boundary, can we expect the JVM to be as stable as the 32-bit one?
Having used the 64-bit version since 1999, I can't remember an occasion where using the 32-bit version would have made things better (only worse, due to limited memory).
Will performance be an issue? If yes, how much? We create a lot of objects.
If performance is an issue, discard fewer objects.
What GC tuning techniques are new?
You can set the maximum memory size higher. That's about it. As long as you stay below 32 GB there won't be a noticeable increase in memory usage, as the JVM uses 32-bit references.
One thing I do is set the Eden size to 8 GB for a new application and reduce it if it's not needed. This can dramatically reduce GC times (to as low as once per day ;). This wouldn't be an option with a 32-bit JVM.
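If you go down that route, the standard management beans let you confirm the pool sizes and watch how rarely collections actually happen. A sketch; pool and collector names ("PS Eden Space", "ParNew", etc.) vary between collectors, and a max of -1 means undefined:
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Sketch: print memory pool maximums (eden, survivor, old gen, perm) and
// GC counts/times. Treat the output as informational; names differ per collector.
public class GcStats {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-20s max=%d MiB%n",
                    pool.getName(), pool.getUsage().getMax() >> 20);
        }
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-20s collections=%d, time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}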
Profilers: are they quite there for profiling 64-bit JVM apps?
VisualVM is pure Java and AFAIK works exactly the same. YourKit uses a native library, and you might need to ensure you are using the right version (it normally sets this up for you, but if you mess with your environment you may need to know that there are two versions of the agent).
If you are worried about performance, don't create so many objects. You might be surprised how much slower freely creating objects makes real-world applications. It can slow an application down by 2x to 10x or more. When I optimise code, the first thing I do is reduce the discarding of objects, and I expect at least a threefold performance improvement.
By comparison, using 64-bit vs 32-bit is likely to make a 5%-10% difference. It can be faster or slower, and both are just as likely. In terms of bloating your memory, use the latest JVMs and that is unlikely to be noticeable. This is because the 64-bit JVM uses 32-bit references by default when you use less than 32 GB of memory. The header overhead is still slightly higher, but objects are not much bigger in 64-bit when -XX:+UseCompressedOops is on (the default for the latest releases).
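If you want to confirm whether compressed oops are actually in effect on your build, here is a small sketch using the HotSpot-specific diagnostic bean (an assumption: this bean and flag exist only on Sun/Oracle HotSpot, not on other VMs):
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Sketch: query the UseCompressedOops flag of the running JVM. HotSpot-only.
public class OopsCheck {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        System.out.println(bean.getVMOption("UseCompressedOops"));
    }
}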
Java: All about 64-bit programming
Test the size of common objects using 32-bit vs 64-bit JVMs (a rough way to do this is sketched after this list): Java: Getting the size of an Object
An extreme example of doing the same thing, creating lots of objects and using reflection vs creating none and using dynamically generated code, with a 1000x performance improvement: Avoiding Java Serialization to increase performance
Using off-heap memory can massively reduce your GC times: Collections Library for millions of elements
Using off-heap memory can allow your application to use much more memory instead of passing data to another application: Should server applications limit themselves to 4 GB?
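As a rough, self-contained way to do the object-size comparison mentioned above (the linked article may use a more precise method), here is a hedged sketch estimating per-object cost from allocation deltas; the Box class is just a made-up example, and you should run with -Xms equal to -Xmx to reduce noise:
// Crude sketch: estimate the average size of a small object by measuring heap
// usage before and after allocating many of them. Compare the result on 32-bit
// and 64-bit JVMs, with and without -XX:+UseCompressedOops.
public class RoughObjectSize {
    static final class Box { int value; }   // hypothetical tiny object

    public static void main(String[] args) {
        final int count = 1000000;
        Object[] keep = new Object[count];  // allocated up front so the delta measures only the Box instances
        Runtime rt = Runtime.getRuntime();
        System.gc();
        long before = rt.totalMemory() - rt.freeMemory();
        for (int i = 0; i < count; i++) {
            keep[i] = new Box();
        }
        System.gc();
        long after = rt.totalMemory() - rt.freeMemory();
        System.out.println("approx bytes per object: " + (after - before) / count);
        System.out.println("kept " + keep.length + " objects"); // keep the array reachable
    }
}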
There are so many aspects of your question that are dependent on your application that "real world" experiences with other people's applications are unlikely to tell you anything worthwhile with any degree of certainty.
My advice would be to stop wasting your time (and ours) asking what MIGHT happen, and just see what DOES happen when you use a 64-bit JVM instead of a 32-bit one. You are going to have to do the testing anyway ...
Our company is planning to move to a 64-bit JVM in order to get away from the 2 GB maximum heap size limit. Google gave me very mixed results about 64-bit JVM performance.
Has anyone tried moving to 64-bit Java? If so, please share your experience.
In a nutshell: 64-bit JVMs will consume more memory for object references and a few other types (generally not significant), consume more memory per thread (often significant on high-volume sites) and enable you to have larger heaps (generally only important if you have many long-lived objects)
Longer Answers/Comments:
The comment that Java is 32-bit by design is misleading. Java memory addressing is either 32- or 64-bit, but the VM spec ensures that most fields (e.g. int, long, double, etc.) are the same size regardless.
Also, the GC tuning comments, while pertinent for the number of objects, may not be relevant; GC can be quick on JVMs with large heaps (I've worked with heaps up to 15 GB, with very quick GC). It depends more on how you play with the generational collector schemes and what your object usage pattern is. While in the past people have spent lots of energy tuning parameters, it's very workload-dependent, and modern (Java 5+) JVMs are very good at self-tuning; unless you have lots of data, you're more likely to harm yourself than help with aggressive JVM tuning.
As mentioned, on x86 architectures the 64-bit EM64T or x64 processors also include new instructions for doing things like atomic writes, or other options which may also impact high-performance applications.
If you need the larger heap, then questions of performance are rather moot, aren't they? Or do you have a plan for horizontal scaling?
The main problem that I've heard of with 64-bit apps is that a full garbage collection can take a very long time (because it's based on the number of live objects). So you want to carefully tune the GC parameters to avoid full collections (I've heard one anecdote about a company that had a 64 GB heap and tuned their GC so that they'd never do a full GC; they'd simply shut down once a week).
Other than that, recognize that Java is 32-bit by design, so you're not likely to see any huge performance increase from moving data 64 bits at a time. And you're still limited to 32-bit array indices.
Works fine for us. Why don't you simply try setting it up and run your load test suite under a profiler like jvisualvm?
We've written directly for 64-bit and I can see no adverse behavior...
Naively taking 32-bit JVM workloads and putting them on 64-bit produces a performance and space hit, in my experience.
However, most of the major JVM vendors have now implemented a good technology that essentially compresses some of the heap; it's called compressed references or compressed oops, for 64-bit JVMs that aren't "big" (i.e., in the 4-30 GB range).
This makes a big difference and should make a 32-to-64-bit transition much lower impact.
Reference for the IBM JVM: link text
I'm playing around with writing some simple Spring-based web apps and deploying them to Tomcat. Almost immediately, I run into the need to customize Tomcat's JVM settings with -XX:MaxPermSize (and -Xmx and -Xms); without this, the server easily runs out of PermGen space.
Why is this such an issue for Java VMs compared to other garbage-collected languages? Comparing counts of "tune X memory usage" for X in Java, Ruby, Perl and Python shows that Java easily has an order of magnitude more hits in Google than the other languages combined.
I'd also be interested in references to technical papers/blog-posts/etc explaining design choices behind JVM GC implementations, across different JVMs or compared to other interpreted language VMs (e.g. comparing Sun or IBM JVM to Parrot). Are there technical reasons why JVM users still have to deal with non-auto-tuning heap/permgen sizes?
The title of your question is misleading (not on purpose, I know): PermSize issues (and there are a lot of them; I was one of the first to diagnose a Tomcat/Sun PermGen issue years ago, when there wasn't any knowledge of the issue yet) are not specific to Java but to the Sun VM.
If you use a VM that doesn't have a permanent generation (like, say, an IBM VM, if I'm not mistaken) you cannot have PermGen issues.
So it is not a "Java" problem, but a Sun VM implementation problem.
Java gives you a bit more control over memory -- strike one for people wanting to apply that control, vs Ruby, Perl, and Python, which give you less control there. Java's typical implementation is also very memory-hungry (because it has a more advanced garbage-collection approach) compared with the typical implementations of the dynamic languages... but if you look at JRuby or Jython you'll find it's not a language issue (when these different languages use the same underlying VM, memory issues are pretty much equalized). I don't know of a widespread "Perl on JVM" implementation, but if there is one I'm willing to bet it wouldn't be measurably different in terms of footprint from JRuby or Jython!
Python/Perl/Ruby allocate their memory with malloc() or an optimization thereof. The limit to the heap space is determined by the operating system rather than the VM, so there's no need for options like -Xmxn. Also, the garbage collection is simpler, based mostly on reference counting. So there's a lot less to fine-tune.
Furthermore, dynamic languages tend to be implemented with bytecode interpreters rather than JIT compilers, so they aren't used for performance-critical code anyway.
The essence of #WizardOfOdds' and #Alex-Martelli's answers appears to be correct: Java has an advanced set of GC options, and sometimes you need to tune them. However, I'm still not entirely clear on why you might design a JVM with or without a permanent generation. I have found a bunch of useful links about garbage collection in Java, though not necessarily in comparison to other languages with GC. Briefly:
The Sun GC evolves very slowly due to the fact that it is deployed everywhere and people may rely on quirks in its implementation.
Sun has detailed white papers on GC design and options, such as Tuning Garbage Collection with the 5.0 Java[tm] Virtual Machine.
There is a new GC in the wings, called the G1 GC. Alex Miller has a good summary of relevant blog posts and a link to the technical paper. But it still has a permanent generation (and doesn't necessarily do a great job with it).
Jon Masamitsu has (had?) an interesting blog at Sun covering various details of garbage collection.
Happy to update this answer with more details if anyone has them.
This is because Tomcat is running in the Java Virtual Machine, while other languages are either compiled or interpreted and run against your actual machine. When you set -Xmx and -Xms you are telling the JVM to behave like a computer with an amount of RAM somewhere in the set range.
I think the reason so many people run into this is that the default values are relatively low, and people end up hitting the default ceiling pretty quickly (instead of waiting until they run out of actual RAM, as they would with other languages).
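To see the ceilings your particular JVM actually chose (the defaults vary by JVM version and machine RAM), here is a small sketch; the PermGen pool lookup is an assumption that only applies to VMs that actually have such a pool:
// Sketch: print the heap ceiling the JVM chose (default or from -Xmx), and
// the PermGen pool maximum where one exists. Runtime cannot see PermGen,
// so the memory pool beans are used for that part.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class DefaultLimits {
    public static void main(String[] args) {
        System.out.println("max heap = " + (Runtime.getRuntime().maxMemory() >> 20) + " MiB");
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().toLowerCase().contains("perm")) {
                System.out.println(pool.getName() + " max = "
                        + (pool.getUsage().getMax() >> 20) + " MiB");
            }
        }
    }
}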
We have a low latency trading system (feed handlers, analytics, order entry) written in Java. It uses TCP and UDP extensively, it does not use Infiniband or other non-standard networking.
Can anyone comment on the tradeoffs of various OSes or OS configurations to deploy this system? While throughput is obviously important to keep up with modern price feeds, latency is our #1 priority.
Solaris seems like a natural candidate since they created Java; should I use Sparc or x64 processors?
I've heard good things about RHEL and SLERT; are those the right versions of Linux to use in our benchmarking?
Has anyone tested Windows against the above OSes? Or is it assumed to not keep up?
I'd like to leave the Java vs C++ debate for a different thread.
Vendors love this kind of benchmark. You have code, right?
IBM, Sun/Oracle, HP will all love to run your app on their gear to demonstrate their advantages.
Make them do this. If you have code, make the vendors run a demonstration on their gear to show which is best for your needs.
It's easy, painless, free, and factual. The final decision will be easy and obvious. And you will know how to install and tune to maximize performance.
What I hate doing is predicting this kind of thing before the code is written. Too many customers have asked for a H/W and OS recommendation before we've finished identifying all the use cases. Asking for that kind of precognition is simple craziness.
But you have code. You can produce test cases that exercise your code. That's perfect.
For a trading environment, in addition to low latency you are probably concerned about consistency as well, so focusing on reducing the impact of GC pauses as much as possible may well give you more benefit than different OS choices.
The G1 garbage collector in recent versions of Sun's HotSpot VM improves stop-the-world pauses a lot, in a similar way to the JRockit VM.
For real performance guarantees, though, Azul Systems' version of the HotSpot compiler on their Java appliance delivers the lowest guaranteed pauses available; it also scales to a massive size: hundreds of GB of heap and hundreds of cores.
I'd discount Java Real-Time: although you'd get guarantees of response, you'd sacrifice throughput to get those guarantees.
However, if you're planning on using your trading system in an environment where every microsecond counts, you're really going to have to live with the lack of consistency you will get from the current generation of VMs: none of them (except real-time) guarantees low-microsecond GC pauses. Of course, at this level you're going to run into the same issues from OS activity (process pre-emption, interrupt handling, page faults, etc.). In this case, one of the real-time variants of Linux is going to help you.
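One crude way to compare candidate OS/JVM combinations before committing is a jitter probe: a tight loop that records the worst gap between consecutive timestamps, which lumps together GC pauses, scheduler preemption and page faults. This is a sketch only, not a substitute for benchmarking your real system:
// Crude jitter probe: spin for ~10 seconds and record the worst observed gap
// between consecutive System.nanoTime() calls. GC pauses, preemption and
// page faults all show up as large gaps. Run it alongside your real load.
public class JitterProbe {
    public static void main(String[] args) {
        long worstNs = 0;
        long prev = System.nanoTime();
        long end = prev + 10000000000L; // run for about 10 seconds
        while (prev < end) {
            long now = System.nanoTime();
            long gap = now - prev;
            if (gap > worstNs) {
                worstNs = gap;
            }
            prev = now;
        }
        System.out.println("worst gap: " + (worstNs / 1000) + " us");
    }
}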
I wouldn't rule out Windows just because it's Windows. My experience over the last few years has been that the Windows versions of the Sun JVM were usually the most mature performance-wise, in contrast to Linux or Solaris x86 on the same hardware. The JVM for Solaris SPARC may be good too, but I guess with Windows on x86 you'll get more power for less money.
I would strongly recommend that you look into an operating system you already have experience with. Solaris is a strange beast if you only know Linux, for example.
Also, I would strongly recommend using a platform actually supported by Sun, as this will make it much easier to get professional assistance when you REALLY, REALLY need it.
http://java.sun.com/javase/6/webnotes/install/system-configurations.html
I'd probably worry about garbage collection causing latency well before the operating system; have you looked into tuning that at all?
If I were willing to spend the time to trial different OSs, I'd try Solaris 10 and NetBSD, and probably a Linux variant for good measure.
I'd experiment with 32- vs. 64-bit architectures; 64-bit will give you a larger heap address space... but will take longer to address each bit of memory.
I'm assuming you've profiled your application and know where the bottlenecks are; by the comment about GC, you've done that. In that case, your application shouldn't be CPU-bound, and chip architecture shouldn't be a primary concern.
I don't think managed code environments and real-time processing go together very well. If you really care about latency, remove the layer imposed by the managed code. This is not a Java vs C++ argument, but a Java/C#/... vs C/C++/FORTRAN/... argument, and I believe that is a valid design discussion to have.
And yes, I do mean FORTRAN, we run a number of near real-time systems with a FORTRAN foundation.
One way to manage latency is to have several JVMs dividing the work, with smaller heaps, so that a stop-the-world garbage collection isn't as time-consuming when it happens and affects fewer processes.
Another approach is to load up a cluster of JVMs with enough memory and allocate the processes so that there won't be a stop-the-world garbage collection during the hours you care about latency (if this isn't a 24/7 app), and restart the JVMs in off hours.
You should also look at other JVM implementations as a possibility (such as JRockit). Of course, whether any of them are appropriate depends entirely on your specific application.
If any of the above matters to your approach, it will affect the choice of OS. For example, if you go with another JVM implementation, that might limit OS choices, and if you go with clustering or otherwise run several JVMs for the application, that might require better underlying OS tools to manage effectively, further influencing the OS choice.
The choice of operating system or configuration is almost irrelevant given the availability of faster network fabrics.
Look at 10GigE with TOE NICs, or the faster solution of 4X QDR (40 Gb/s) InfiniBand, but with IPoIB presenting a standard Ethernet interface and routing.
For Java SE there are several JVMs available for running in production on x86:
IBM J9
Oracle JRockit - http://www.oracle.com/technology/products/jrockit/index.html
Apache Harmony - http://harmony.apache.org/
The one in OS X (if on a Mac), which appears to be Sun's with Aqua Swing.
OpenJDK
plus some custom offerings for running on a server:
Azul - http://www.azulsystems.com/
Google App Engine Java - http://code.google.com/intl/da/appengine/docs/java/overview.html
Other platforms:
Sun Solaris JVM - better scalability than x86?
(edit) GNU compiler for Java - http://gcc.gnu.org/java/ - can compile to native code on multiple platforms.
The Sun JVM has a distinct advantage with the jvisualvm program, which allows runtime inspection of running code. Are there any technical advantages of any other JVM that might make it a better choice for development and/or production?
In other words, is there a killer facility or scenario that would make any investment of time/effort/money worth it in another JVM?
(Please also suggest additional JVMs if they would be a good choice.)
JRockit comes with JRockit Mission Control, which is a tools suite you can use to monitor the JVM and your application. You can download it here, it's free to use for development.
Mission Control has a lot of features that VisualVM is missing, for instance an online memory leak detector, a latency analyzer, Eclipse integration, JMX, logging to file, etc. If you want to compare VisualVM with Mission Control, here are the release notes and the documentation for the latest version.
IBM J9
This is the kind of sales pitch you can read or hear about J9:
IBM has released an SDK for Java 6. Product binaries are available for Linux on x86 and 64-bit AMD, and AIX for PPC in 32- and 64-bit. In addition to supporting the Java SE 6 Platform specification, the new SDK also focuses on: data sharing between Java Virtual Machines, enhanced diagnostics information, operating system stack backtraces, an updated jdmpview tool, platform stability, and performance.
Some would say that the IBM SDK has some advantages beyond speed: that the use and expansion of PermGen space is much better than in the Sun SDK or GCJ (not a big deal for client applications, but heavy-lifting J2EE servers, especially portal servers, can really give the Sun JDK heartburn). But, according to this paper comparing Sun vs. IBM JVM GC, it turns out that memory performance depends mostly on the application and not so much on the VM.
So, while it's true that the IBM JVM is well known for its troubleshooting features (more advanced than Sun's), I'm not convinced by the differences at the GC level.
And Sun's JVM has a big advantage over IBM's, at least on Solaris: DTrace providers. Actually, I've mainly been working with WebLogic on Solaris, so Sun's JVM has always been the natural choice.
Oracle JRockit
I did some benchmarks of BEA/Oracle JRockit some years ago and it was indeed a fast VM, and it supported bigger heaps than Sun's VM at the time. But it had some stability problems, which is not really good for production. Things might have changed since then, though.
Apache Harmony
I might be wrong but, to me, Harmony is made of code donations from IBM (benefit: the community does the maintenance), and I don't really see why I should consider Harmony rather than IBM J9.
Apple's JDK
I never had to use a Mac for production so I can't really answer. I just remember that Apple needed some time to bundle Java 6, and I don't know why. This is maybe not rational, but it makes me suspicious.
OpenJDK
I know that some vendors are offering production support for OpenJDK (e.g. Red Hat with RHEL 5.3+, see this blog entry), so it might be an option for platforms not supported by Sun. However, unless someone can tell me what makes OpenJDK work better than Sun's, I think I'll install the Sun JVM on supported platforms.
So to me, the choice is actually: Sun's JVM, unless I have to run some WebSphere stuff, in which case I'd choose IBM J9. But to be honest, I've never faced a situation that I couldn't solve on Sun's JVM and that could have justified (temporarily) swapping to IBM's, so I can't actually tell whether the troubleshooting features are that nice. But I admit that I may suffer from a lack of knowledge of IBM's JVM.
Some applications, such as financial and computational-science workloads, would benefit greatly from hardware implementations of decimal floating point. IBM has rolled out a series of processors (POWER6, the z9 and z10 mainframes) which implement the IEEE 754-2008 decimal floating-point standard. The latest IBM JDK can use hardware acceleration for BigDecimals.
To allow developers to easily take advantage of the dedicated DFP hardware, IBM Developer Kit for Java 6 has built-in support for 64-bit DFP through the BigDecimal class library. The JVM seamlessly uses the DFP hardware when available to remove the need for computationally expensive software-based decimal arithmetic, thus improving application performance.
It's a very fringe case, but if you have a z10 mainframe handy and want to use its decimal floating point unit in Java, then the IBM JVM would be a better choice than the Sun JVM.
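The nice part is that the application code itself doesn't change; the same BigDecimal arithmetic is simply accelerated when the JVM and hardware support DFP. A trivial, hedged sketch of the kind of code that could benefit (whether a given operation is actually offloaded depends on the IBM JDK version and the hardware):
import java.math.BigDecimal;
import java.math.MathContext;

// Sketch: ordinary BigDecimal arithmetic. On an IBM JVM running on DFP-capable
// hardware (POWER6, z9/z10) this kind of code can use the decimal floating-point
// unit; on other JVMs it falls back to software decimal arithmetic.
public class DecimalInterest {
    public static void main(String[] args) {
        BigDecimal principal = new BigDecimal("1000000.00");
        BigDecimal dailyRate = new BigDecimal("0.000137");
        BigDecimal balance = principal;
        for (int day = 0; day < 365; day++) {
            balance = balance.multiply(BigDecimal.ONE.add(dailyRate), MathContext.DECIMAL64);
        }
        System.out.println("balance after a year: "
                + balance.setScale(2, BigDecimal.ROUND_HALF_EVEN));
    }
}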
-- Flaviu Cipcigan
The typical feature/scenario that you should look at is performance and speed.
No matter what the white papers say, ultimately you need to benchmark and decide for yourself.
I am biased towards IBM, because I worked there a few years ago. I didn't personally deal with JVM development, but I remember that the emphasis of the JVM development group was on the following points:
Proprietary garbage-collection optimizations. This includes not only faster GC, but also more configuration options, like GC for server and client. This was before Sun offered similar options.
Much faster (x10) native performance through the JNI interface. This was particularly important at the time when Eclipse/WSAD began to gain traction and SWT was heavily used. If your app uses JNI a lot, then I think it's worthwhile for you to benchmark the IBM JDK against the Sun JDK.
Stability and reliability. I think this is only relevant if you buy commercial support from IBM, like an SLA for a service tier (WebSphere and DB2, clustered environment, etc.). In this case, IBM will guarantee the stability of their offering only if you use their JVM.
Regarding OpenJDK, I recommend that you look at this history of OpenJDK. My understanding is that OpenJDK 7 will be almost identical to Sun's JDK 7, so the performance is very likely to be identical. The primary differences will be licensing and small components like Web Start.
Oracle's JVM is useful if you want to run Java code from within your database (via stored procedures). It includes some optimizations that help the DB run faster in this scenario.
As others have said, Sun has been catching up with its competitors. I think that in the 1.4 days the differences were much more noticeable, but not so much today. Regarding jvisualvm, other vendors also offer similar tools, so I don't think that is an issue.
Finally, there is one other metric (albeit a bit controversial) to indicate how serious those vendors are about their VMs: the number of related patents they file. It might be useful if you need to convince your boss, or if you like to read patents :)
Patent search: ibm and java - 4559 patents.
Patent search: oracle and java - 323.
Not strictly a JVM, but still a Java implementation: gcj. It has the advantage of supporting many processors, so if you target one of the embedded processors, gcj may be your only choice. Plus, since it is a true ahead-of-time compiler (not just a JIT), you save the overhead of JIT compilation (both in memory and cycles) on the embedded target.
Back in the Java 1.4 days my team used the IBM JVM for a high-volume, message-based system running on Linux. Why did we do this? Because we benchmarked the different JVMs! JRockit was actually the fastest, but it would occasionally crash, so not so great for production.
As always with these things, measure it!
A few years back (JDK1.4), different JVMs had different advantages:
The IBM JVM was able to do heap dumps (programmatically, on signals, or on OOM), and the heaproot utility was very useful for tracking memory leaks (less intrusive than profilers). No other JVM had this.
JRockit had many useful options that the Sun JVM didn't have, such as parallel collection. It was also (much) faster (and more stable) than the Sun VM.
Today, the Sun JVM has these features, but I'm sure there are others.
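For reference, triggering a heap dump programmatically on today's HotSpot is a one-liner through the diagnostic bean; a hedged sketch (this bean is HotSpot-specific, and IBM's JVM exposes its own dump APIs instead):
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Sketch: write an hprof heap dump from within the application (HotSpot only).
// The boolean 'true' means dump only live objects, which forces a GC first.
public class DumpHeap {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap("app-heap.hprof", true);
        System.out.println("dumped to app-heap.hprof");
    }
}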
Speed could be one. Garbage collection strategies another.
If you're using WebLogic, some bugs in the Sun JVM may lead to bugs in WebLogic. These bugs are more likely to be solved faster in JRockit.
From what I've been told, the main difference between Sun's JVM and IBM's JVM is in the actual garbage collectors: IBM's garbage collector(s?) are much more configurable than Sun's and are made with only the business world in mind. Additionally, IBM's JVM can tell you a lot more than "I just crashed, here's my heap dump" in error situations, which is obviously important in the business world, the main living space of IBM's JVM.
So assuming I haven't been lied to, I'd say that IBM's JVM should be used when doing memory-intensive things in business software which relies on aggressive or otherwise highly tunable garbage collection.
In my own experience and at face value, I see simplicity in the IBM GC design. Sun's various new GCs are no doubt excellent and offer a host of tuning options at minute levels, but even in some of the most active web apps I know, which handle heavy/aggressive new-object creation and keep a lot in the heap for caching, I rarely see GC exceed 1%, even when trying to keep the footprint low. Sure, we could probably tune it better, but there's a diminishing return.
I have had much more of a challenge in the exact same applications running IBM's JDK, in particular having issues with pinned clusters and having to tune -Xk.
Now I could mention about a dozen items that both IBM and Sun should implement for the killer JVM, but that's not the scope of your question, I presume :)
Incremental garbage collection and a very small runtime size for real-time embedded systems is the only thing that would really matter enough to warrant chancing less stability or platform support. There was some JVM with this in mind, but I forget its name now.