I noticed Sun is providing a 64-bit version of Java. Does it perform better than the 32-bit version?
Almost always, 64-bit will be slower.
To quote Sun from the HotSpot FAQ:
The performance difference comparing an application running on a 64-bit platform versus a 32-bit platform on SPARC is on the order of 10-20% degradation when you move to a 64-bit VM. On AMD64 and EM64T platforms this difference ranges from 0-15% depending on the amount of pointer accessing your application performs.
There are more details at the link.
Define your workload and what "perform" means to you.
This is sort of a running annoyance to me, as a performance geek of long standing. Whether a particular change "performs better" depends, first and foremost, on the workload, i.e., what you're asking the program to do.
64-bit Java will often perform better on things with heavy computation loads. Java programs, classically, have heavy I/O loads and heavy network loads; for those, 64-bit vs 32-bit may not matter much, but the operating system usually does.
64-bit performs better if you need much more than 1.2 GB. On some platforms you can get up to 3 GB, but if you want, say, 4 GB to 384 GB, 64-bit is your only option.
I believe Azul supports a 384 GB JVM, does anyone know if you can go higher?
I know that this question is quite old and the voted answers were probably correct at the time when they were written. But living in 2018 now, things have changed.
I just had an issue with a Java client application running on 64-bit Windows 10 on a 32-bit Java 8 JVM. It was reading 174 MB of data from an HttpsURLConnection's InputStream in 26 s, which is awfully slow. The server and network were proven not to be the cause of this.
Thinking "Hey, there can't be a huge difference between a 32-bit and a 64-bit JRE", it took me some time before I tried having the very same code executed by a 64-bit JVM. Fortunately, in the end I did: it read the very same 174 MB in 5 s!
I don't know if I could make it even faster, but the key take-away is this:
jre1.8.0_172 32Bit : 6.692MB/s
jre1.8.0_172 64Bit : 34.8MB/s
for the very same jar file being executed on Windows 10 64Bit.
I have no idea what the reason for this could be, but I can answer this question with "Yes, 64-bit Java is better than 32-bit Java". See also the numbers in the answer to my question about this issue.
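For anyone who wants to reproduce this kind of comparison, here is a minimal, hedged sketch of the measurement described above. The URL is a placeholder and the buffer size is arbitrary; run the same class once on a 32-bit JVM and once on a 64-bit JVM and compare the reported throughput.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DownloadThroughput {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: point this at whatever large resource you want to test.
            URL url = new URL("https://example.com/large-file");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            byte[] buf = new byte[64 * 1024];
            long total = 0;
            long start = System.nanoTime();
            try (InputStream in = conn.getInputStream()) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n;   // count bytes actually read
                }
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("%d bytes in %.1f s = %.2f MB/s%n",
                    total, seconds, total / 1e6 / seconds);
        }
    }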
On most CPU architectures, 32-bit is faster than 64-bit, all else being equal. 64-bit pointers require twice as much bandwidth to transfer as 32-bit pointers. However, the x64 instruction set architecture adds a bit of sanity over x86, so it ends up being faster. The amount of handling of long types is usually small.
Of course it also depends on the implementation of Java. As well as the compiler, you might find differences in the runtime; for instance, NIO assumes 64-bit pointers. Also note that Sun previously shipped the faster server HotSpot implementation only for x64. This meant that if you specified -d64, you would also switch from client to server HotSpot, IIRC.
Some improvements: on 64-bit, operations on doubles compute about as fast as operations on floats do on 32-bit, and likewise for operations on long compared to int.
So if you are running code with tons of longs you might see a real improvement.
My experience differs from the other answers.
64-bit Java may be faster than 32-bit; at least in my tests it always was! The pointer argument is not valid when less than 4 GB is used, because then the 64-bit VM will also use short (compressed) pointers internally. You do, however, get the faster instruction set of the 64-bit CPUs!
I tested this with Windows 7 and JDK 1.8.0_144, but maybe the real reason is different internal JVM settings: when you use the 64-bit JVM it starts in "server" mode, while the 32-bit VM starts in "client" mode.
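If you want to check which mode and pointer model your own JVM picked, a small sketch like the one below can help. Note the HotSpotDiagnosticMXBean lookup is HotSpot-specific, so treat it as an assumption that it exists on your VM; the catch block covers the case where it doesn't.

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class VmModeCheck {
        public static void main(String[] args) {
            // "Server" or "Client" usually appears in the VM name.
            System.out.println("java.vm.name = " + System.getProperty("java.vm.name"));
            System.out.println("data model   = " + System.getProperty("sun.arch.data.model") + " bit");
            try {
                HotSpotDiagnosticMXBean hs =
                        ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
                System.out.println("UseCompressedOops = "
                        + hs.getVMOption("UseCompressedOops").getValue());
            } catch (Exception e) {
                System.out.println("UseCompressedOops flag not available on this JVM");
            }
        }
    }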
Yes, especially if your code is built to target a 64 bit platform.
Related
I am testing my Java application and I noticed that the 64-bit Java version uses much more memory than running the application on a 32-bit Java version.
The servers I tested were a 64-bit Windows 7 and a 64-bit Solaris, and the same behavior occurred in both cases. By the way, the application runs with default VM parameters and the Java version used was 8u65.
As my servers are 64-bit, the natural choice would be 64-bit Java, but is there any reason for this difference? In what cases is the 32-bit version better than the 64-bit one?
Memory allocated in both:
32-bit: 74 MB
64-bit: 249 MB
It is correct that a 64-bit memory model takes up more memory.
Besides that, I just wanted to mention a Solaris gotcha, so this is not really a full answer to your question, but what follows can fully explain the difference from 74 MB to 249 MB that you are seeing.
It is correct that there's no longer a 32-bit version of Java for Solaris, as is also the case for Mac OS X. Beware that for Java 7 on Solaris you would always get the 32-bit Java (even if you had installed the 64-bit Java) unless you explicitly requested the 64-bit one with the -d64 flag. So be sure not to compare apples and oranges here. A lot of people on Solaris thought that they had been running the 64-bit Java because they had installed it, unaware that it had to be explicitly requested.
For Java 8 on Solaris there's no point in specifying -d64, as there's only a 64-bit version.
Therefore, simply as a consequence of this, the default values for memory settings have changed. What I'm saying is that solely as a consequence of this (and not because of the discussion about memory pointers) it will seem as if your Java 8 on Solaris is taking up more memory from the OS. This is in fact a mirage.
Here's a recap from a 16 GB system (the values will change depending on your amount of installed RAM):
Java 7 on Solaris
With Java 7 on Solaris without further command line options you would get a 32-bit memory model (-d32 is implied even with 64-bit version installed) and default values as follows:
memory model: 32 bit
-Xms default : 64 MB
-Xmx default : 1 GB
If you explicitly used -d64 you would get:
memory model: 64 bit
-Xms default : 256 MB
-Xmx default : 4 GB
Java 8 on Solaris
With Java 8 on Solaris without further command line options you would get a 64-bit memory model (-d64 is implied, -d32 is now illegal) and default values as follows:
memory model: 64 bit
-Xms default : 256 MB
-Xmx default : 4 GB
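If you want to verify which defaults your own JVM actually picked, a small sketch like this, using the standard MemoryMXBean, prints the effective initial and maximum heap; run it on each platform/JVM without any -Xms/-Xmx flags and compare.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapDefaults {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.println("os.arch = " + System.getProperty("os.arch"));
            System.out.printf("initial heap (~ -Xms): %d MB%n", heap.getInit() / (1024 * 1024));
            System.out.printf("max heap     (~ -Xmx): %d MB%n", heap.getMax() / (1024 * 1024));
        }
    }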
As for your comment that you've read that "SPARC is on the order of 10-20% degradation when you move to a 64-bit VM": I doubt it. I can see that you've read it here, but that document applies to Java 1.4 and potentially Java 5. A lot has happened since then.
This is normal behavior for Java (as well as Microsoft .NET), mostly because of their pointer model, and also their garbage collection model.
An object variable is actually a pointer to an object on the heap. In 64-bit versions, this pointer requires twice as much space. So, the pointers stored in containers will require more memory, and so will the pointers that are held by the garbage collector to allow collection. Since objects are mostly made up of pointers to other objects, the difference between 32-bit and 64-bit adds up very fast.
Added to that, the garbage collector has to keep track of all of these objects efficiently, and in 64-bit versions, the collector tends to use a larger minimum allocation size so it doesn't have to keep track of as many slices of memory. This makes sense because objects are bigger anyway. I believe the minimum sizes are typically 16 bytes in 32-bit mode and 32 bytes in 64-bit mode, but those are entirely up to the specific virtual machine you are using, so they will vary.
For example, if you have an object that only requires 12 bytes of heap memory, and you are running on a virtual machine with a 32-byte minimum allocation size, it will use 32 bytes, with 20 of those bytes wasted. If you allocate the same object on a machine with a 16-byte minimum size, it will use 16 bytes, with 4 wasted. The alternative to this is to waste a lot more memory in keeping track of those blocks, so this is actually the best approach, and will keep your application's performance and resource utilization balanced.
Another thing to keep in mind is that the Java runtime allocates blocks of memory from the operating system for its heap, then the program can allocate memory out of those blocks. The runtime tries to stay ahead of your program's memory needs, so it will allocate more than is needed and let your program grow into it. With a higher minimum allocation size, the 64-bit runtime will allocate bigger blocks for its heap, so you will have more unused memory than with a 32-bit runtime.
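As a rough, hedged illustration of how that per-object cost can be observed, the sketch below allocates a batch of tiny objects and samples the used heap before and after. The number it prints includes the ArrayList's own reference slots and will vary with the JVM, GC, and whether compressed pointers are in use, so treat it as a ballpark figure, not a measurement of the minimum allocation size itself.

    import java.util.ArrayList;
    import java.util.List;

    public class ObjectCostEstimate {
        // A tiny object: one reference field plus the per-object header.
        static final class Node { Object ref; }

        static long usedHeap() {
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        public static void main(String[] args) {
            final int n = 1_000_000;
            List<Node> keep = new ArrayList<>(n);   // keeps the objects reachable
            System.gc();
            long before = usedHeap();
            for (int i = 0; i < n; i++) {
                keep.add(new Node());
            }
            System.gc();
            long after = usedHeap();
            System.out.printf("~%d bytes per Node on this JVM (very approximate)%n",
                    (after - before) / n);
            System.out.println("still holding " + keep.size() + " objects");
        }
    }

Running the same class on a 32-bit and a 64-bit JVM (with and without compressed pointers) should make the size difference described above visible.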
If memory consumption is a serious constraint for your particular application, you might consider native-compiled C++ (using actual C++ standards, not legacy C with pointers to objects!). Native C++ typically requires 1/5 of the memory of Java to accomplish the same thing, which is the reason that native code tends to be more popular on mobile devices (C++ and Objective C). Of course, C++ has its own issues, so unless you have a desperate need to reduce memory consumption, it is probably best to accept this as normal behavior and keep using 64-bit Java.
Years ago, I tried 64-bit JDK but it was really buggy.
How stable would you say it is now? Would you recommend it? Should I install 64-bit JDK + eclipse or stick to 32-bit? Also, are there any advantages of 64-bit over 32-bit other than bypassing the 4 GB memory limit?
Only begin to bother with that if you want to build an application that will use a lot of memory (namely, a heap larger than 2GB).
Allow me to quote Red Hat:
The real advantage of the 64-bit JVM is that heap sizes much larger than 2GB can be used. Large page memory with the 64-bit JVM give further optimizations. The following graph shows the results running from 4GB heap sizes, in two gigabyte increments, up to 20GB heap.
That, in a pretty graph (not reproduced here): no big difference.
See more (more graphs, yay!) in: Java Virtual Machine Tuning
I think the answer is pretty simple.
Answer the question: "Do I need more than 4GB of RAM?".
A 64-bit JVM is as stable as a 32-bit JVM; there is no difference there. However, a Java application running in a 64-bit JVM will consume more RAM than in a 32-bit JVM, because all internal data structures need more memory.
My Eclipse is running in a 64-bit JVM.
Are you going to deploy to a 32-bit or a 64-bit environment? If you're tied to a particular environment in production, then your development environment should use the same environment type.
If you're platform agnostic, go with x64 and don't even think about it. The platform is mature and stable. It gives you tremendous room to scale up as you can just add memory and make your heaps bigger.
Nobody wants to tell a CEO, "Sorry, we chose x86 and can't scale up like we hoped. It's a month-long project to retest and replatform everything for x64."
The only differences between 32-bit and 64-bit builds of any program are the sizes of machine words, the amount of addressable memory, and the operating system ABI in use. With Java, the language specification means that the differences in machine word size and OS ABI should not matter at all unless you're using native code as well. (Native code must be built for the same word size as the JVM that will load it; you can't mix 32-bit and 64-bit builds in the same process without very exotic coding indeed, and you shouldn't be doing that with Java anyway.)
The 64-bitter uses 64-bit pointers. If you have 4GB+ RAM, and are running Java programs that keep 4GB+ of data structures in memory, the 64-bitter will accommodate that. The big fat pointers can point to any byte in a 4GB+ memory space.
But if your programs use less memory and you run the 64-bit JVM, pointers will still occupy 64 bits (8 bytes) each. This will cause data structures to be bigger, which will eat up memory unnecessarily.
I just compiled an MQTT client in both the 32-bit JDK (jdk-8u281-windows-i586) and the 64-bit JDK (jdk-8u281-windows-x64). The class files produced had matching MD5 checksums.
FYI, it's perfectly safe to have multiple JDKs on your system. But if the version you use is important, you should be comfortable with setting your system path and JAVA_HOME to ensure the correct version is used.
Recently I've been doing some benchmarking of the write performance of my company's database product, and I've found that simply switching to a 64bit JVM gives a consistent 20-30% performance increase.
I'm not allowed to go into much detail about our product, but basically it's a column-oriented DB, optimised for storing logs. The benchmark involves feeding it a few gigabytes of raw logs and timing how long it takes to analyse them and store them as structured data in the DB. The processing is very heavy on both CPU and I/O, although it's hard to say in what ratio.
A few notes about the setup:
Processor: Xeon E5640 2.66GHz (4 core) x 2
RAM: 24GB
Disk: 7200rpm, no RAID
OS: RHEL 6 64bit
Filesystem: Ext4
JVMs: 1.6.0_21 (32bit), 1.6.0_23 (64bit)
Max heap size (-Xmx): 512 MB (for both 32bit and 64bit JVMs)
Constants for both JVMs:
Same OS (64bit RHEL)
Same hardware (64bit CPU)
Max heap size fixed to 512 MB (so the speed increase is not due to the 64bit JVM using a larger heap)
For simplicity I've turned off all multithreading options in our product, so pretty much all processing is happening in a single-threaded manner. (When I turned on multi-threading, of course the system got faster, but the ratio between 32bit and 64bit performance stayed about the same.)
So, my question is... Why would I see a 20-30% speed improvement when using a 64bit JVM? Has anybody seen similar results before?
My intuition up until now has been as follows:
64bit pointers are bigger, so the L1 and L2 caches overflow more easily, so performance on the 64bit JVM is worse.
The JVM uses some fancy pointer compression tricks to alleviate the above problem as much as possible. Details on the Sun site here.
The JVM is allowed to use more registers when running in 64bit mode, which speeds things up slightly.
Given the above three points, I would expect 64bit performance to be slightly slower than, or approximately equal to, the 32bit JVM.
Any ideas? Thanks in advance.
Edit: Clarified some points about the benchmark environment.
From: http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#64bit_performance
"Generally, the benefits of being able to address larger amounts of memory come with a small performance loss in 64-bit VMs versus running the same application on a 32-bit VM. This is due to the fact that every native pointer in the system takes up 8 bytes instead of 4. The loading of this extra data has an impact on memory usage which translates to slightly slower execution depending on how many pointers get loaded during the execution of your Java program. The good news is that with AMD64 and EM64T platforms running in 64-bit mode, the Java VM gets some additional registers which it can use to generate more efficient native instruction sequences. These extra registers increase performance to the point where there is often no performance loss at all when comparing 32 to 64-bit execution speed.
The performance difference comparing an application running on a 64-bit platform versus a 32-bit platform on SPARC is on the order of 10-20% degradation when you move to a 64-bit VM. On AMD64 and EM64T platforms this difference ranges from 0-15% depending on the amount of pointer accessing your application performs."
Without knowing your hardware, I'm just taking some wild stabs:
Your specific CPU may be using microcode to 'emulate' some x86 instructions -- most notably the x87 ISA
x64 uses SSE math instead of x87 math; I've noticed a 10-20% speedup of some math-heavy C++ apps in this case. Math differences could be the real killer if you're using strictfp.
Memory. 64 bits gives you much more address space. Maybe the GC is a little less aggressive in 64-bit mode because you have extra RAM.
Is your OS in 64-bit mode and running a 32-bit JVM via some wrapper utility?
The 64-bit instruction set has 8 more registers; this should make the code faster overall.
But since processors nowadays mostly wait for memory or disk, I suppose that either the memory subsystem or the disk I/O might be more efficient in 64-bit mode.
My best guess, based on a quick Google for 32- vs 64-bit performance charts, is that 64-bit I/O is more efficient. I suppose you do a lot of I/O...
If memcpy is involved when moving the data, it's probably more efficient to copy longs than ints.
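If you want to poke at that guess yourself, here is a crude sketch that copies the same number of bytes once as int[] and once as long[]. Naive timing like this is easily skewed by JIT warm-up and memory layout, so treat the output as a hint rather than a proof.

    public class CopyWidthDemo {
        public static void main(String[] args) {
            final int bytes = 32 * 1024 * 1024;   // 32 MB of payload per array
            int[] intSrc = new int[bytes / 4], intDst = new int[bytes / 4];
            long[] longSrc = new long[bytes / 8], longDst = new long[bytes / 8];

            long t0 = System.nanoTime();
            System.arraycopy(intSrc, 0, intDst, 0, intSrc.length);
            long t1 = System.nanoTime();
            System.arraycopy(longSrc, 0, longDst, 0, longSrc.length);
            long t2 = System.nanoTime();

            System.out.printf("int[]  copy of %d MB: %.1f ms%n", bytes >> 20, (t1 - t0) / 1e6);
            System.out.printf("long[] copy of %d MB: %.1f ms%n", bytes >> 20, (t2 - t1) / 1e6);
        }
    }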
Realize that the 64-bit JVM is not magic pixie dust that makes Java apps go faster. The 64-bit JVM allows heaps >> 4 GB and, as such, only makes sense for applications which can take advantage of huge memory on systems which have it.
Generally there is either a slight improvement (due to certain hardware optimizations on certain platforms) or minor degradation (due to increased pointer size). Generally speaking there will be a need for fewer GCs, but when they do occur they will likely be longer.
In-memory databases or search engines that can use the increased memory for caching objects, and thus avoid IPC or disk accesses, will see the biggest application-level improvements. In addition, a 64-bit JVM will allow you to run many, many more threads than a 32-bit one, because there's more address space for things like thread stacks, etc. The maximum number of threads is generally ~1,000 for a 32-bit JVM but ~100,000 with a 64-bit JVM.
Some drawbacks though:
Additional issues with the 64-bit JVM are that certain client-oriented features like Java Plug-in and Java Web Start are not supported. Also, any native code would need to be compatible (e.g. JNI for things like Type II JDBC drivers). This is a bonus for pure-Java developers, as pure apps should just run out of the box.
More on this Thread at Java.net
I have a rather memory-hungry Java application.
On my 32-bit systems with Windows XP Professional the application runs just fine if I give it -Xmx1280m. Anything below that ends up in a java.lang.OutOfMemoryError: Java heap space exception.
If I run the same application on a 64-bit Windows XP Professional (everything else exactly the same), it requires -Xmx1400m to prevent the OutOfMemory condition.
To my understanding, if I have a C program and I compile it for 32-bit and for 64-bit, the 64-bit version will need more memory because pointers are wider and so on.
In my java example however the virtual machine (Sun) is the same and the bytecode is the same.
Why does it need more memory on the 64 bit machine?
Probably because the virtual machine implementation differs between 32/64 bit architecture in such a way that it consumes more memory (wider types, different GC).
The bytecode is irrelevant when it passes on the tasks to the underlying system. I'm not sure that "Java" and "memory-efficient" are two terms I would put together anyway :P
Even though your bytecode is the same, the JVM converts that to machine code, so it has all the same reasons as C to require a larger memory footprint.
It's the same reason you already listed for the C program. The 64-bit system uses larger memory addresses, causing it to be "leakier" (I believe that's the term I've heard used to describe it).
How does the JVM handle a primitive "long", which is 64bits, on a 32bit processor?
Can it utilise multiple cores in parallel when on a multi-core 32-bit machine?
How much slower are 64bit operations on a 32bit machine?
It may use multiple cores to run different threads, but it does not use them in parallel for 64 bit calculations. A 64 bit long is basically stored as two 32 bit ints. In order to add them, two additions are needed, keeping track of the carry bit. Multiplication is kind of like multiplying two two-digit numbers, except each digit is in base 2^32 instead of base 10. So on for other arithmetic operations.
Edit about speed: I can only guess about the speed difference. An addition requires two adds instead of one, and a multiplication would (I think) require four multiplies instead of one. However, I suspect that if everything can be kept in registers then the actual time for the computation would be dwarfed by the time required to go to memory twice for the read and twice for the write, so my guess is about twice as long for most operations. I imagine that it would depend on the processor, the particular JVM implementation, the phase of the moon, etc. Unless you are doing heavy number crunching, I wouldn't worry about it. Most programs spend most of their time waiting for IO to/from the disk or network.
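To make the carry idea concrete, here is a small sketch, written in Java purely for illustration and not what the JVM literally emits, that composes a 64-bit addition from two 32-bit additions plus a carry:

    public class Long32Demo {
        // Add the low 32-bit words, detect the carry with an unsigned comparison,
        // then add the high words plus that carry.
        static long add64(int aHi, int aLo, int bHi, int bLo) {
            int lo = aLo + bLo;
            int carry = Integer.compareUnsigned(lo, aLo) < 0 ? 1 : 0;
            int hi = aHi + bHi + carry;
            return ((long) hi << 32) | (lo & 0xFFFFFFFFL);
        }

        public static void main(String[] args) {
            long a = 0x1_FFFF_FFFFL;   // a value whose low word will overflow
            long b = 1L;
            long viaHalves = add64((int) (a >>> 32), (int) a, (int) (b >>> 32), (int) b);
            System.out.println(viaHalves == a + b);   // prints true
        }
    }

Multiplication and the other arithmetic operations decompose similarly, just with more partial results to combine.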
From TalkingTree, and the Java HotSpot FAQ:
Generally, the benefits of being able to address larger amounts of memory come with a small performance loss in 64-bit VMs versus running the same application on a 32-bit VM. This is due to the fact that every native pointer in the system takes up 8 bytes instead of 4. The loading of this extra data has an impact on memory usage which translates to slightly slower execution depending on how many pointers get loaded during the execution of your Java program.
The good news is that with AMD64 and EM64T platforms running in 64-bit mode, the Java VM gets some additional registers which it can use to generate more efficient native instruction sequences. These extra registers increase performance to the point where there is often no performance loss at all when comparing 32 to 64-bit execution speed.
The performance difference comparing an application running on a 64-bit platform versus a 32-bit platform on SPARC is on the order of 10-20% degradation when you move to a 64-bit VM. On AMD64 and EM64T platforms this difference ranges from 0-15% depending on the amount of pointer accessing your application performs.