Isabelle: performance issues with version Isabelle2013-2 - java

I have performance issues with Isabelle (the recent version, Isabelle2013-2).
I use Isabelle/jEdit, i.e. the new interface.
I had some trouble with performance before, but now it is worse: I sometimes have to wait up to 10 seconds before I can enter text again. The performance issues get worse over time, to the point where I have to restart Isabelle after an hour or so.
My suspicion is that I can configure Isabelle better or apply some tricks that improve the performance.
Hardware:
recent CPU: an Intel i7 quad-core (mobile laptop chip), 16GB RAM, fast SSD.
Software:
64-bit Arch Linux (kernel 3.12.5-1-ARCH)
no 32-bit compatibility libraries
My Java version is:
java version "1.7.0_45"
OpenJDK Runtime Environment (IcedTea 2.4.3) (ArchLinux build 7.u45_2.4.3-1-x86_64)
My theory file is 125KB in size; the whole theory I am working on is in a single file, and at the moment I would really like to keep it that way.
Symptoms:
Isabelle displays only about 900MB in the lower right corner of the UI. I have 16GB of RAM; should I configure Java to use more? Sometimes a single process consumes 600% CPU, i.e. six of the cores the Linux kernel sees.
Tricks I use:
One trick is to insert *) on a line below the code I am working on. This causes a syntax error, so the code below it is not checked. The second trick: in the timing panel, I found all proofs that took longer than 0.2 seconds, commented them out, and replaced them with sorry.
The recent two Isabelle versions are really great improvements!
Any suggestions or tricks for how I can improve the performance of Isabelle?

A few general hints on performance tuning:
One needs to distinguish Isabelle/ML (i.e. the underlying Poly/ML runtime) versus Isabelle/Scala (i.e. the underlying JVM).
Isabelle/ML: Intel CPUs like the i7 have hyperthreading, which virtually doubles the number of cores. On smaller mobile machines it is usually better to restrict the nominal number of cores to half of that. See the "threads" option in Isabelle/jEdit / Plugin Options / Isabelle / General. When running on battery you might reduce it even further.
Isabelle/ML: Using x86 (32-bit) Poly/ML generally improves performance. This is only relevant on Linux, because that platform often lacks the x86 libraries that other platforms provide routinely. There is rarely any benefit in falling back on bulky x86_64: Poly/ML 5.5.x is very good at working within the compact address space of 32-bit mode.
Isabelle/Scala: JVM performance can be improved by using native x86_64 (which is the default) and providing generous stack and heap parameters.
The main Isabelle application bundle bootstraps the JVM with some options that are hard-wired in a certain place, which can be edited nonetheless:
Linux: Isabelle2013-2/Isabelle2013-2.run
Windows: Isabelle2013-2/Isabelle2013-2.ini
Mac OS X: Isabelle2013-2.app/Contents/Info.plist
For example, the maximum heap size can be changed from -Xmx1024m to -Xmx4096m.
The isabelle jedit command-line tool is configured via the Isabelle settings environment. See also $ISABELLE_HOME/src/Tools/etc/settings for some examples of JEDIT_JAVA_OPTIONS, which can be copied to $ISABELLE_HOME_USER/etc/settings and adapted accordingly. It is also possible to monitor JVM performance via jconsole to get an idea if that is actually a source of problems.
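For example, an entry in $ISABELLE_HOME_USER/etc/settings might look like this (the values here are illustrative, not recommendations; the bundled settings file shows the actual defaults):

    # more generous JVM heap and stack for the Isabelle/jEdit front-end
    JEDIT_JAVA_OPTIONS="-Xms512m -Xmx4096m -Xss4m"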
Isabelle/Scala: Isabelle bundles a specific JVM, which is used by default. Eliminating the variability of Java versions is important to regain some sanity; otherwise you never know what you get. Are you sure that your OpenJDK is actually being used here? That is unlikely, unless you have edited some Isabelle settings.
A further source of performance problems on Linux is graphics. Java/AWT is known to be much slower on X11 than on Windows and Mac OS X. Using the quasi-native GTK look-and-feel on Linux degrades graphics performance even further.
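If the GTK look-and-feel is the suspect, one illustrative workaround (an assumption on my part, not an official Isabelle option, and jEdit's own look-and-feel setting may override it) is to request the cross-platform Metal look-and-feel via the standard swing.defaultlaf system property:

    # appended to JEDIT_JAVA_OPTIONS in $ISABELLE_HOME_USER/etc/settings
    JEDIT_JAVA_OPTIONS="$JEDIT_JAVA_OPTIONS -Dswing.defaultlaf=javax.swing.plaf.metal.MetalLookAndFeel"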

Related

java performance Windows vs Linux

I have developed a Java application that normally runs on Linux. It's a POJO application with Swing. Performance is reasonably good.
Now I have tried to run it on Windows XP with 2GB RAM, on a machine with similar or greater power, and performance is much worse. I observe that it uses 100% CPU.
For example:
A process that creates a very heavy window with many components: Linux 5 seconds, Windows 12.
A process that runs a heavy query against a PostgreSQL DB (the server and the JDBC driver are the same): Linux 23 seconds, Windows 43.
I also tried a virtualized Windows machine with similar specs, and the result was significantly better!
Is it normal? What parameters can I assign to improve performance?
Unless you are comparing Linux and Windows XP on the same machine, it is very hard to say what the difference is. It could be that while the CPU is faster, the graphics card and disk subsystem are slower.
Java passes all of this IO and graphics activity to the underlying OS, and the only thing you can do differently is to do less work or work more efficiently. This is likely to make both systems faster, as there is nothing particular to one OS that you can tune.
Try running Java VisualVM (which is distributed as part of the JDK): attach to your application, then use the CPU profiler to determine precisely where all that CPU time is going.
There may be subtle differences in the behavior of JRE parts (Swing comes to mind), where the JRE responds very unforgivingly to a bad practice (like doing things from the wrong thread in Swing).
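As a concrete illustration of that Swing pitfall (a minimal sketch of my own, not taken from the questioner's application): components must only be touched on the Event Dispatch Thread, and violations often show up as sluggishness or CPU burn rather than a clean error:

    import javax.swing.JLabel;
    import javax.swing.SwingUtilities;

    public class EdtExample {
        public static void main(String[] args) {
            final JLabel status = new JLabel("working...");
            // Wrong: mutating the component directly from this (non-EDT) thread:
            // status.setText("done");
            // Right: hand the update over to the Event Dispatch Thread:
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    status.setText("done");
                }
            });
        }
    }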
Since you have no clues, I would try profiling the same use case in both environments and see whether any significant differences turn up in where the time is spent. This will hopefully reveal a hint.
Edit: And ensure that you do not run Windows with the brakes on (a.k.a. antivirus and other 'useful' software that can kill system performance).

eclipse performance arm vs intel

I'm using an Intel Core 2 Duo T5550 with 3 GB RAM and an SSD, for Java development under 64-bit Ubuntu. Everything is tweaked, but it's still slow. I mean switching between windows and other simple actions, and even startup, especially when you open a few big projects.
I heard that ARM has Jazelle and Thumb on newer processors, which execute Java bytecode directly, and that it's fast.
If I switched to such a machine, would Eclipse (Java) work faster?
Edit:
Thanks for the answers. I know that a Core i7 is at least 4 times faster for Java (just have a look at http://infoscreens.org/benchmark_en.html), but I thought that ARM chips, which are 2x2GHz and execute Java directly, would be faster (for Java only).
I have Oracle Java; I also used JRockit, but it was strangely crashing during debugging.
I think I'll buy an i7 desktop in the near future. Thanks :)
A Core 2 Duo machine with 3 GB of RAM should have no problem running Eclipse. An ARM chip running a standard desktop-oriented OS and JVM is going to be extremely slow, far slower than your Core 2 Duo machine. Regarding those new ARM instructions: for them to be useful, there needs to be a JVM that can work with them. If one exists, it is going to be of a specialized sort, likely designed for mobile device operating systems.
One common problem Linux users have with Eclipse is that the OpenJDK that comes with Linux distributions just doesn't perform as well as the Oracle/Sun JDK. If you haven't installed the Oracle JDK, I recommend installing it for use with Eclipse. Your performance problem may just go away.
If it doesn't and you are still considering buying a new machine, an i3/i5/i7 machine would be a far, far better choice for a development platform than anything ARM that exists today or is likely to exist in the near future.
Oh, and one more thing... Eclipse has native components (the SWT UI and file I/O) and there isn't a build available for any ARM architecture.
My guess is you are running low on memory, not just for the application but to cache disk access. Having more memory, regardless of your processor, is likely to fix your problem.
When your system is running slow, is it waiting on IO or consuming CPU? E.g., have a look at top.
BTW: I use IntelliJ CE with about 15,000 classes open and it works fine on a machine with 24 GB. ;)
An ARM CPU is not as powerful as an x86 CPU, so no. Also, I doubt Eclipse will run on an ARM machine.

Why is the 64bit JVM faster than the 32bit one?

Recently I've been doing some benchmarking of the write performance of my company's database product, and I've found that simply switching to a 64bit JVM gives a consistent 20-30% performance increase.
I'm not allowed to go into much detail about our product, but basically it's a column-oriented DB, optimised for storing logs. The benchmark involves feeding it a few gigabytes of raw logs and timing how long it takes to analyse them and store them as structured data in the DB. The processing is very heavy on both CPU and I/O, although it's hard to say in what ratio.
A few notes about the setup:
Processor: Xeon E5640 2.66GHz (4 core) x 2
RAM: 24GB
Disk: 7200rpm, no RAID
OS: RHEL 6 64bit
Filesystem: Ext4
JVMs: 1.6.0_21 (32bit), 1.6.0_23 (64bit)
Max heap size (-Xmx): 512 MB (for both 32bit and 64bit JVMs)
Constants for both JVMs:
Same OS (64bit RHEL)
Same hardware (64bit CPU)
Max heap size fixed to 512 MB (so the speed increase is not due to the 64bit JVM using a larger heap)
For simplicity I've turned off all multithreading options in our product, so pretty much all processing is happening in a single-threaded manner. (When I turned on multi-threading, of course the system got faster, but the ratio between 32bit and 64bit performance stayed about the same.)
So, my question is... Why would I see a 20-30% speed improvement when using a 64bit JVM? Has anybody seen similar results before?
My intuition up until now has been as follows:
64bit pointers are bigger, so the L1 and L2 caches overflow more easily, so performance on the 64bit JVM is worse.
The JVM uses some fancy pointer compression tricks to alleviate the above problem as much as possible. Details on the Sun site here.
The JVM is allowed to use more registers when running in 64bit mode, which speeds things up slightly.
Given the above three points, I would expect 64-bit performance to be slightly slower than, or approximately equal to, that of the 32-bit JVM.
Any ideas? Thanks in advance.
Edit: Clarified some points about the benchmark environment.
From: http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#64bit_performance
"Generally, the benefits of being able to address larger amounts of memory come with a small performance loss in 64-bit VMs versus running the same application on a 32-bit VM. This is due to the fact that every native pointer in the system takes up 8 bytes instead of 4. The loading of this extra data has an impact on memory usage which translates to slightly slower execution depending on how many pointers get loaded during the execution of your Java program. The good news is that with AMD64 and EM64T platforms running in 64-bit mode, the Java VM gets some additional registers which it can use to generate more efficient native instruction sequences. These extra registers increase performance to the point where there is often no performance loss at all when comparing 32 to 64-bit execution speed.
The performance difference comparing an application running on a 64-bit platform versus a 32-bit platform on SPARC is on the order of 10-20% degradation when you move to a 64-bit VM. On AMD64 and EM64T platforms this difference ranges from 0-15% depending on the amount of pointer accessing your application performs."
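The pointer-compression point from the question can be probed directly. If I recall correctly, compressed ordinary object pointers became the HotSpot default in 6u23, which is exactly your 64-bit JVM's version, so one cheap experiment (a hedged sketch; benchmark.jar stands in for your benchmark entry point) is to toggle the flag and see whether the gap moves:

    # 64-bit JVM with compressed 32-bit object pointers (the 6u23 default for small heaps)
    java -Xmx512m -XX:+UseCompressedOops -jar benchmark.jar
    # 64-bit JVM with full 8-byte pointers
    java -Xmx512m -XX:-UseCompressedOops -jar benchmark.jar

If the 20-30% advantage shrinks with compressed oops off, pointer width is part of the story; if not, look at the extra registers or the I/O path instead.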
Without knowing your hardware, I'm just taking some wild stabs:
Your specific CPU may be using microcode to 'emulate' some x86 instructions, most notably the x87 ISA.
x64 uses SSE math instead of x87 math; I've noticed a 10-20% speedup in some math-heavy C++ apps from this. Math differences could be the real killer if you're using strictfp.
Memory: 64 bits gives you much more address space. Maybe the GC is a little less aggressive in 64-bit mode because you have extra RAM.
Is your OS in 64-bit mode, running a 32-bit JVM via some wrapper utility?
The 64-bit instruction set has 8 more registers; this should make the code faster overall.
But since processors nowadays mostly wait for memory or disk, I suppose that either the memory subsystem or the disk I/O might be more efficient in 64-bit mode.
My best guess, based on a quick Google for 32- vs 64-bit performance charts, is that 64-bit I/O is more efficient. I suppose you do a lot of I/O...
If memcpy is involved when moving the data, it's probably more efficient to copy longs than ints.
Realize that the 64-bit JVM is not magic pixie dust that makes Java apps go faster. The 64-bit JVM allows heaps >> 4 GB and, as such, only makes sense for applications which can take advantage of huge memory on systems which have it.
Generally there is either a slight improvement (due to certain hardware optimizations on certain platforms) or a minor degradation (due to the increased pointer size). Generally speaking there will be a need for fewer GCs, but when they do occur they will likely be longer.
In-memory databases or search engines that can use the increased memory for caching objects, and thus avoid IPC or disk accesses, will see the biggest application-level improvements. In addition, a 64-bit JVM also allows you to run many, many more threads than a 32-bit one, because there is more address space for things like thread stacks. The maximum number of threads for a 32-bit JVM is generally ~1,000, but ~100,000 threads are possible with a 64-bit JVM.
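To see that thread limit directly, here is a throwaway probe (my own sketch, not from the original answer) that creates sleeping daemon threads until the JVM gives up:

    // Probe: count how many threads this JVM can create before it
    // runs out of address space for thread stacks.
    public class ThreadProbe {
        public static void main(String[] args) {
            int count = 0;
            try {
                while (true) {
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(Long.MAX_VALUE);
                            } catch (InterruptedException e) {
                                // just let the thread exit
                            }
                        }
                    });
                    t.setDaemon(true); // let the JVM exit when main finishes
                    t.start();
                    count++;
                }
            } catch (OutOfMemoryError e) {
                // thread creation fails with "unable to create new native thread"
                System.out.println("created " + count + " threads before OOM");
            }
        }
    }

On a 32-bit JVM the count is roughly the free address space divided by the stack size (tunable with -Xss); on a 64-bit JVM, OS limits usually bite first.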
Some drawbacks though: additional issues with the 64-bit JVM are that certain client-oriented features like Java Plug-in and Java Web Start are not supported, and any native code also needs to be compatible (e.g. JNI for things like Type II JDBC drivers). This is a bonus for pure-Java developers, as pure apps should just run out of the box.
More on this Thread at Java.net

Running JAVA on Windows Intel vs Solaris Sparc (T1000)

Hi, I'm trying to test my Java app on Solaris SPARC and I'm getting some weird behavior. I'm not looking for flame wars; I'm just curious to know what is happening or what is wrong...
I'm running the same JAR on Intel and on the T1000, and while on the Windows machine I'm able to get 100% CPU utilisation (Performance Monitor), on the Solaris machine I can only get 25% (prstat).
The application is a custom server app I wrote that uses netty as the network framework.
On the Windows machine I'm able to reach just above 200 requests/responses a second, including full business logic and access to outside 3rd parties, while on the Solaris machine I get about 150 requests/responses at only 25% CPU.
One can only imagine how many more requests/responses I could get out of the SPARC if I could make it use its full power.
The servers are...
Windows 2003 SP2 64-bit, 8GB, 2.39GHz 4-core Intel
Solaris 10.5 64-bit, 8GB, 1GHz 6-core
Both use JDK 1.6u21.
Any ideas?
The T1000 uses a multi-core CPU, which means that the CPU can run multiple threads simultaneously. If the CPU is at 100% utilization, all cores are running at 100%. If your application uses fewer threads than the number of cores, then it cannot use all the cores, and therefore cannot use 100% of the CPU.
Without any code, it's hard to help out. Some ideas:
Profile the Java app on both systems, and see where the difference is. You might be surprised. Because the T1 CPU lacks out-of-order execution, you might see performance lacking in strange areas.
As Erick Robertson says, try bumping up the number of threads to the number of virtual cores reported via prstat, NOT the number of regular cores (see the sketch after these suggestions). The T1000 uses UltraSPARC T1 processors, which make heavy use of thread-level parallelism.
Also, note that you're using the latest-gen Intel processors and old Sun ones. I highly recommend reading Developing and Tuning Applications on UltraSPARC T1 Chip Multithreading Systems and Maximizing Application Performance on Chip Multithreading (CMT) Architectures, both by Sun.
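Following that advice, a minimal sketch (assuming an Executors-based server; the class and pool here are illustrative) that sizes workers from the hardware threads the JVM reports, rather than the physical core count:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolSizing {
        public static void main(String[] args) {
            // On an UltraSPARC T1 this reports hardware threads (up to 32,
            // i.e. 4 per core), which is what prstat shows as virtual cores.
            int hwThreads = Runtime.getRuntime().availableProcessors();
            ExecutorService workers = Executors.newFixedThreadPool(hwThreads);
            System.out.println("sized worker pool to " + hwThreads + " threads");
            workers.shutdown();
        }
    }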
This is quite an old question now, but we ran across similar issues.
An important fact to notice is that the Sun T1000 is based on the UltraSPARC T1 processor, which has only a single FPU shared by 8 cores.
So if your application does a lot of, or even some, floating-point calculation, this might become an issue, as the FPU becomes the bottleneck.

Java memory consumption, "top" and HP-Ux

We ship Java applications that run on Linux, AIX and HP-UX (PA-RISC). We seem to struggle to get acceptable levels of performance on HP-UX from applications that work just fine in the other two environments. This is true of both execution time and memory consumption.
Although I've yet to find a definitive article on "why", I believe that measuring memory consumption using "top" is a crude approach, due to things like shared code giving misleading results. However, it's about all we have to go on with a customer site where memory consumption on HP-UX has become an issue. It only became an issue this time when we moved from Java 1.4 to Java 1.5 (on HP-UX 11.23 PA-RISC). By "an issue", I mean that the machine ceased to create new processes because we had exhausted all 16GB of physical memory.
By measuring "before" and "after" total free memory, we are trying to gauge how much is consumed by a Java application. I wrote a quick app that stores 10,000 random 64-bit strings in an ArrayList and used this approach to measure consumption on Linux and HP-UX under Java 1.4 and Java 1.5.
The results:
HP Java 1.4 ~60MB
HP Java 1.5 ~150MB
Linux Java 1.4 ~24MB
Linux Java 1.5 ~16MB
Can anyone explain why these results might arise? Is this some idiosyncrasy of the way "top" measures free memory? Does Java 1.5 on HP really consume 2.5 times more memory than Java 1.4?
Thanks.
The JVMs might just have different default parameters. The heap will grow to the size you have configured it to allow. The default on the Sun VM is a certain percentage of the RAM in the machine; that is to say, Java will, by default, use more memory on a machine that has more memory.
I'd be really surprised if the HP-UX VM hadn't had lots of tuning for this sort of thing by HP. I'd suggest you fiddle with the parameters on both: figure out the smallest max heap size you can use without hurting performance or throughput.
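As a hedged illustration of that experiment (app.jar stands in for your application), step the max heap down while watching GC output for signs of thrashing:

    # try successively smaller max heaps and watch the GC log
    java -Xms64m -Xmx256m -verbose:gc -jar app.jar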
I don't have an HP box right now to test my hypothesis. However, if I were you, I would use a profiler like JConsole (comes with the JDK) or YourKit to measure what is happening.
However, it appears that you started measuring after you saw something amiss, so I'm not discounting that it's happening; just pointing you at what I'd have done in the same situation.
First, it's not clear what you measured by the "10,000 random 64-bit strings" test. You are supposed to start the application, measure its bootstrap memory footprint, and then run your test. It could easily be that Java 1.5 acquires more heap right after start (due to heap-manager settings, for instance).
Second, we do run Java apps under 1.4, 1.5 and 1.6 under HP-UX, and they don't demonstrate any special memory requirements. We have Itanium hardware, though.
Third, why do you use top? Why not just print Runtime.getRuntime().totalMemory()?
Fourth, by adding values to an ArrayList you create memory fragmentation. ArrayList has to double its internal storage now and then. Depending on GC settings and the ArrayList.ensureCapacity() implementation, the amount of non-collected memory may differ dramatically between 1.4 and 1.5.
Essentially, instead of figuring out the cause of the problem, you have run a random test that gives you no useful information. You should run a profiler on the application to figure out where the memory goes.
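To make the third point concrete, a minimal sketch of the in-process measurement (class and variable names are mine; gc() is only a hint to the VM, so treat the numbers as approximate):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class HeapFootprint {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            rt.gc(); // only a hint; the baseline is approximate
            long before = rt.totalMemory() - rt.freeMemory();

            List<String> strings = new ArrayList<String>(10000);
            Random rnd = new Random();
            for (int i = 0; i < 10000; i++) {
                // a random 64-bit value rendered as a string
                strings.add(Long.toBinaryString(rnd.nextLong()));
            }

            rt.gc();
            long after = rt.totalMemory() - rt.freeMemory();
            System.out.println("approx. retained: " + (after - before) / 1024 + " KB");
            System.out.println("held " + strings.size() + " strings"); // keep list live
        }
    }

Printed from inside the process, this sidesteps the shared-code accounting that makes "top" misleading across platforms.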
You might also want to look at the problem you are trying to solve... I don't imagine there are many problems that eat 16GB of memory that aren't due for a good round of optimization.
Are you launching multiple VMs? Are you reading large datasets into memory, and not discarding them quickly enough? etc etc etc.
