Java algorithm performance difference between Linux and Windows - java

We observe a very significant performance difference for a Java algorithm between Linux (Ubuntu) and Windows. The algorithm runs on the JADE multi-agent system; in the system each agent has its own process. The agents perform computationally intensive operations and communicate with each other. The difference in performance is 3x-4x in favor of Windows. The test machine is an i7-3770 quad core (8 logical cores). We have checked with Java 7 and 8, in both 32-bit and 64-bit versions. Under Linux, all cores carry the same load: about 50% of the time is spent in userspace and 50% in the kernel. Under Windows, 80% is spent in user mode and 20% in the kernel.
The PNG files contain the Java profile information sorted by various columns from Windows and Linux. (jdk8-u25-x64)
Recently we doubled Linux performance using the “chrt” command: “chrt -f 99 java …”, which enables SCHED_FIFO scheduling for the Java process.
We suspect that Linux needlessly switches between tasks too many times.
Is there a way to increase java performance on Linux to match Windows performance?
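A minimal way to test the task-switching suspicion outside of JADE is a two-thread ping-pong micro-benchmark (a sketch; the class name and round count are arbitrary). Every exchange forces a handoff between threads, so the measured round-trip time is dominated by scheduler behavior rather than by the algorithm; comparing it under Linux (with and without chrt) and Windows would isolate context-switch cost:

```java
import java.util.concurrent.SynchronousQueue;

// Two-thread ping-pong: each round trip requires two blocking handoffs,
// so throughput is almost entirely a function of OS scheduling cost.
public class PingPong {
    public static void main(String[] args) throws InterruptedException {
        final SynchronousQueue<Integer> ping = new SynchronousQueue<>();
        final SynchronousQueue<Integer> pong = new SynchronousQueue<>();
        final int rounds = 50_000;

        Thread echo = new Thread(() -> {
            try {
                for (int i = 0; i < rounds; i++) {
                    pong.put(ping.take());   // reply to each message
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        echo.start();

        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            ping.put(i);
            pong.take();
        }
        long elapsed = System.nanoTime() - start;
        echo.join();
        System.out.printf("%d round trips in %d ms%n", rounds, elapsed / 1_000_000);
    }
}
```

Running it once normally and once under `chrt -f 99` would show how much of the gap is scheduler policy rather than the application itself.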

Related

Why is multithreaded Java Program not faster on 'super' Linux server vs laptop Win7?

Intro
I have been working on a piece of software which I am now testing to measure the benefit of concurrency. I am testing the same software on two different systems:
System 1: 2 x Intel(R) Xeon(R) CPU E5-2665 @ 2.40GHz with a total of 16 cores, 64GB of RAM, running Scientific Linux 6.1 and Java SE Runtime Environment (build 1.7.0_11-b21).
System 2: Lenovo ThinkPad T410 with an Intel i5 processor @ 2.67GHz with 4 cores, 4GB of RAM, running Windows 7 64-bit and Java SE Runtime Environment (build 1.7.0_11-b21).
Details: The program simulates patients with type 1 diabetes. It does some import (reading from CSV), some numerical computation (Dopri54 + Newton), and some export (writing to CSV).
I have exclusive rights to the server, so there should be no noise at all.
Results
These are my results:
Now as you can see, system 1 is merely as fast as system 2 despite being a much more powerful machine. I have no idea why this is the case, and I am confident that the software is the same. The number of threads goes from 10 to 100.
Question:
Why do the two runs have similar execution times despite system 1 being significantly more powerful than system 2?
UPDATE!
Now, I thought a bit about what you said about it being an I/O or memory issue. If I could reduce the file size, it should speed up the program, right? I managed to reduce the import file size by a factor of 5, but there was no performance improvement at all. Do you still think it is the same problem?
As you write .csv files, it is possible that the bottleneck is not your computation power, but the writing rate of your hard disk.
Almost certainly this means that either CPU time is not the bottleneck for this application, or that something about it is making it resistant to effective parallelization, or both.
For example if reading the data from disk is actually the limiting factor then faster disks are what matters, not faster processors.
If it's running out of memory then that will be a bigger bottleneck.
If it takes more time to spawn each thread than to do the actual processing inside the thread, parallelism will not help either.
etc.
In this sort of optimization work, metrics are king. You need real, hard numbers for how long things take and where in your program you are losing that time. Only then can you see where to focus your efforts and whether they are effective.
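As a sketch of such metrics, even coarse phase timing tells you immediately whether import, computation, or export dominates the run (importCsv, simulate, and exportCsv here are hypothetical placeholders for the program's real phases):

```java
// Coarse phase timing: wrap each major phase of the run and print its
// duration. Replace the stand-in methods with the real import, the
// Dopri54 + Newton computation, and the CSV export.
public class PhaseTimer {
    static long timeMs(Runnable phase) {
        long start = System.nanoTime();
        phase.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        System.out.println("import ms:  " + timeMs(PhaseTimer::importCsv));
        System.out.println("compute ms: " + timeMs(PhaseTimer::simulate));
        System.out.println("export ms:  " + timeMs(PhaseTimer::exportCsv));
    }

    // Stand-ins so the sketch compiles; substitute the real phases.
    static void importCsv() {}
    static void simulate() {}
    static void exportCsv() {}
}
```

If the compute phase is small relative to I/O, faster CPUs cannot help, which matches the suspicion above.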

Isabelle: performance issues with version Isabelle2013-2

I have performance issues with Isabelle (i.e., the recent version Isabelle2013-2).
I use Isabelle/jEdit, based on the new interface.
I had some trouble with performance before, but now it is worse, as I sometimes have to wait up to 10 seconds for my input to be processed. The performance issues get worse over time, to the point where I have to restart Isabelle after an hour or so.
My suspicion is that I can configure Isabelle better or apply some tricks that improve the performance.
Hardware:
a recent CPU (an Intel i7 quad core, a mobile laptop chip), 16GB RAM, and a fast SSD hard disk.
Software:
64bit arch linux (kernel 3.12.5-1-ARCH)
no 32bit compatibility libraries
my java version is:
java version "1.7.0_45"
OpenJDK Runtime Environment (IcedTea 2.4.3) (ArchLinux build 7.u45_2.4.3-1-x86_64)
My theory file is 125KB; the whole theory I am working on is in a single file, and at the moment I would really prefer to keep it that way.
Symptoms:
Isabelle displays only about 900MB in the lower right corner of the UI. I have 16GB of RAM; should I configure Java to use more? Sometimes a single process consumes 600% of the CPU, i.e., 6 of the cores that the Linux kernel sees.
Tricks I use:
One trick is to insert *) on a line below the code I am working on. This causes a syntax error, so the code below it is not checked. The second trick: in the timing panel, I commented out all proofs that took longer than 0.2 seconds and replaced them with sorry.
The recent two Isabelle versions are really great improvements!
Any suggestions or tricks to how I can improve the performance of Isabelle?
A few general hints on performance tuning:
One needs to distinguish Isabelle/ML (i.e. the underlying Poly/ML runtime) versus Isabelle/Scala (i.e. the underlying JVM).
Isabelle/ML: Intel CPUs like i7 have hyperthreading, which virtually doubles the number of cores. On smaller mobile machines it is usually better to restrict the nominal number of cores to half of that. See the "threads" option in Isabelle/jEdit / Plugin Options / Isabelle / General. When running on batteries you might even go further below.
Isabelle/ML: Using x86 (32-bit) Poly/ML generally improves performance. This is only an issue on Linux, because that platform often lacks the x86 libraries that other platforms provide routinely. There is rarely any benefit in falling back on bulky x86_64: Poly/ML 5.5.x is very good at working within the constant space of 32-bit mode.
Isabelle/Scala: JVM performance can be improved by using native x86_64 (which is the default) and providing generous stack and heap parameters.
The main Isabelle application bundle bootstraps the JVM with some options that are hard-wired in a certain place, but can nonetheless be edited:
Linux: Isabelle2013-2/Isabelle2013-2.run
Windows: Isabelle2013-2/Isabelle2013-2.ini
Mac OS X: Isabelle2013-2.app/Contents/Info.plist
For example, the maximum heap size can be changed from -Xmx1024m to -Xmx4096m.
The isabelle jedit command-line tool is configured via the Isabelle settings environment. See also $ISABELLE_HOME/src/Tools/etc/settings for some examples of JEDIT_JAVA_OPTIONS, which can be copied to $ISABELLE_HOME_USER/etc/settings and adapted accordingly. It is also possible to monitor JVM performance via jconsole to get an idea if that is actually a source of problems.
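For example, a hypothetical entry in $ISABELLE_HOME_USER/etc/settings could look like this (the exact flag values are assumptions; adapt them from the bundled examples mentioned above):

```shell
# Hypothetical example: raise the JVM heap and stack for Isabelle/jEdit.
# Copy the JEDIT_JAVA_OPTIONS examples from
# $ISABELLE_HOME/src/Tools/etc/settings and adjust the sizes.
JEDIT_JAVA_OPTIONS="-Xms512m -Xmx4096m -Xss8m"
```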
Isabelle/Scala: Isabelle bundles a certain JVM, which is assumed here by default. This variable elimination of Java versions is important to regain some sanity --- otherwise you never know what you get. Are you sure that your OpenJDK is actually used here? It is unlikely, unless you have edited some Isabelle settings.
A further source of performance problems on Linux is graphics. Java/AWT is known to be much slower on X11 than on Windows and Mac OS X. Using the quasi-native GTK look-and-feel on Linux degrades graphics performance even further.

java performance Windows vs Linux

I have developed a Java application that normally runs on Linux. It's a POJO application with Swing. Performance is reasonably good.
Now I have tried to run it on Windows XP with 2GB RAM, on a machine of similar or greater power, and performance is much worse. I observe that it uses 100% CPU.
For example:
A process that creates a very heavy window with many components: Linux 5 seconds, Windows 12.
A process that runs a heavy query against a PostgreSQL DB (the server and the JDBC driver are the same): 23 seconds on Linux, 43 on Windows.
I also tried a virtualized Windows machine with similar specs, and the result was significantly better!
Is it normal? What parameters can I assign to improve performance?
Unless you are comparing Linux and Windows XP on the same machine, it is very hard to say what the difference is. It could be that while the CPU is faster, the graphics card and disk subsystem are slower.
Java passes all of this I/O and graphics activity to the underlying OS, and the only thing you can do differently is to do less work or work more efficiently. This is likely to make both systems faster, as there is nothing particular to one OS that you can tune.
Try running Java Visual VM (which is distributed as part of the JDK): attach to your application, then use the CPU Profiler to determine precisely where all that CPU time is going.
There may be subtle differences in the behavior of JRE parts (Swing comes to mind), where the JRE responds very unforgivingly to a bad practice (like doing things from the wrong thread in Swing).
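As a sketch of the Swing practice in question: all component creation and updates belong on the Event Dispatch Thread, scheduled for example via SwingUtilities.invokeLater. Updating components from other threads is exactly the kind of bad practice that different platforms may tolerate very differently:

```java
import java.awt.GraphicsEnvironment;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class EdtExample {
    public static void main(String[] args) {
        // Skip the UI demo when no display is available.
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("headless environment, skipping UI demo");
            return;
        }
        // All Swing component creation and updates must happen on the
        // Event Dispatch Thread; invokeLater schedules the work there.
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("demo");
            frame.add(new JLabel("built on the EDT"));
            frame.pack();
            frame.setVisible(true);
        });
    }
}
```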
Since you have no clues, I would try profiling the same use case in both environments and see if any significant differences turn up where the time is spent. This will hopefully reveal a hint.
Edit: Also ensure that you do not run Windows with the brakes on (i.e., antivirus and other 'useful' software that can kill system performance).

eclipse performance arm vs intel

I'm using an Intel Core 2 Duo T5550 with 3 GB RAM and an SSD for Java development under 64-bit Ubuntu. Everything is tweaked, but it's still slow. I mean switching between windows and other simple actions, even at startup, especially when a few big projects are open.
I heard that ARM has Jazelle and Thumb on newer processors, which execute Java bytecode directly, and that it's fast.
If I switched to such a machine, would Eclipse (Java) work faster?
Edit:
Thanks for the answers. I know that a Core i7 is at least 4 times faster for Java (just have a look at http://infoscreens.org/benchmark_en.html), but I thought that ARM chips, which run at 2x2GHz and execute Java directly, would be faster (for Java only).
I have Oracle Java; I also used JRockit, but it was strangely crashing during debugging.
I think I'll buy an i7 desktop in the near future. Thanks :)
A Core 2 Duo machine with 3 GB of RAM should have no problem running Eclipse. An ARM chip running a standard desktop-oriented OS and JVM is going to be extremely slow, far slower than your Core 2 Duo machine. Regarding those new ARM instructions: for them to be useful, there needs to be a JVM that can work with them. If one exists, it is going to be of a specialized sort, likely designed for mobile device operating systems.
One common problem that Linux users have with Eclipse is that OpenJDK that comes with Linux distributions just doesn't perform as well as Oracle/Sun JDK. If you haven't installed Oracle JDK, I recommend installing it for use with Eclipse. Your performance problem may just go away.
If it doesn't, and you are still considering buying a new machine, an i3/i5/i7 machine would be a far better choice for a development platform than anything ARM that exists today or is likely to exist in the near future.
Oh, and one more thing... Eclipse has native components (the SWT UI and file I/O), and there isn't a build available for any ARM architecture.
My guess is that you are running low on memory, not just for the application but to cache disk access. Having more memory, regardless of your processor, is likely to fix your problem.
When your system is running slow, is it waiting on I/O or consuming CPU? E.g., have a look at top.
BTW: I use IntelliJ CE with about 15,000 classes open and it works fine on a machine with 24 GB. ;)
An ARM CPU is not as powerful as an x86 CPU, so no. Also, I doubt Eclipse will run on an ARM machine.

Running JAVA on Windows Intel vs Solaris Sparc (T1000)

Hi, I'm trying to test my Java app on Solaris SPARC and I'm getting some weird behavior. I'm not looking for flame wars; I'm just curious to know what is happening or what is wrong...
I'm running the same JAR on Intel and on the T1000, and while on the Windows machine I'm able to get 100% CPU utilisation (Performance Monitor), on the Solaris machine I can only get 25% (prstat).
The application is a custom server app I wrote that uses netty as the network framework.
On the Windows machine I'm able to reach just above 200 requests/responses a second, including full business logic and access to outside third parties, while on the Solaris machine I get about 150 requests/responses at only 25% CPU.
One can only imagine how many more requests/responses I could get out of the SPARC if I could make it use its full power.
The servers are...
Windows 2003 SP2 x64, 8GB, 2.39GHz Intel 4-core
Solaris 10.5 64-bit, 8GB, 1GHz 6-core
Both use JDK 1.6u21.
Any ideas?
The T1000 uses a multi-core CPU, which means the CPU can run multiple threads simultaneously. If the CPU is at 100% utilization, all cores are running at 100%. If your application uses fewer threads than the number of cores, it cannot use all the cores, and therefore cannot reach 100% CPU.
Without any code, it's hard to help out. Some ideas:
Profile the Java app on both systems, and see where the difference is. You might be surprised: because the T1 CPU lacks out-of-order execution, you might see performance lacking in strange areas.
As Erick Robertson says, try bumping up the number of threads to the number of virtual cores reported via prstat, NOT the number of regular cores. The T1000 uses UltraSparc T1 processors, which make heavy use of thread-level parallelism.
Also, note that you're using the latest-gen Intel processors and old Sun ones. I highly recommend reading Developing and Tuning Applications on UltraSPARC T1 Chip Multithreading Systems and Maximizing Application Performance on Chip Multithreading (CMT) Architectures, both by Sun.
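The thread-count advice above can be sketched as follows: size the worker pool to the number of virtual cores the OS reports, which Runtime.availableProcessors() returns directly (the worker task here is a placeholder; on an UltraSPARC T1, this count includes the hardware threads per core, not just the physical cores):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolSizing {
    public static void main(String[] args) throws InterruptedException {
        // One worker per hardware thread the OS reports (prstat's view).
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("worker " + id + " running"));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

With fewer workers than hardware threads, a CMT chip like the T1 is guaranteed to sit partly idle, which would explain the 25% ceiling.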
This is quite an old question now, but we ran across similar issues.
An important fact to notice is that the Sun T1000 is based on the UltraSPARC T1 processor, which has only a single FPU shared by all 8 cores.
So if your application does a lot of (or even some) floating-point calculation, this may become an issue, as the FPU will become the bottleneck.
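A sketch of how one might probe for such an FPU bottleneck: time a pure floating-point loop with one thread and then with several at once. On a chip with one shared FPU, the multi-thread run should take close to N times the single-thread time instead of about the same (class and method names here are illustrative, and JIT warm-up effects are ignored):

```java
import java.util.ArrayList;
import java.util.List;

public class FpuProbe {
    // Pure floating-point loop; x = sqrt(x + 1) converges to the
    // golden ratio, so the JIT cannot eliminate the work trivially.
    static double burn(int iterations) {
        double x = 1.0;
        for (int i = 0; i < iterations; i++) {
            x = Math.sqrt(x + 1.0);
        }
        return x;
    }

    static long timeThreads(int nThreads, int iterations) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int t = 0; t < nThreads; t++) {
            threads.add(new Thread(() -> burn(iterations)));
        }
        long start = System.nanoTime();
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        int n = 20_000_000;
        // With a single shared FPU, 4 threads should take close to 4x
        // the 1-thread time; with per-core FPUs, roughly the same time.
        System.out.println("1 thread:  " + timeThreads(1, n) + " ms");
        System.out.println("4 threads: " + timeThreads(4, n) + " ms");
    }
}
```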
