I have a Java program that implements a typical machine learning algorithm, updating the values of some parameters according to some equations:
for (int iter=0; iter<1000; iter++) {
// 1. Create many temporary variables and do some computations
// 2. Update the value for the parameters
}
The computations that update the parameters are rather complex, and I have to create many temporary objects, but none of them are referenced outside the loop. The code in the loop is CPU-intensive and does not access the disk. The program loads a relatively large training dataset, so I granted 10G of memory (-Xmx10G) to the JVM, which is much more than it needs (peak at ~6G according to "top" or Windows' task manager).
I tested it on several Linux machines (CentOS 6, 24G memory) and a Windows machine (Win7, 12G), all with Sun HotSpot JDK/JRE 1.8 installed. I did not specify any JVM parameters other than -Xmx. All of the machines are dedicated to my program.
On Windows, my program runs well: each iteration takes a very similar amount of time. However, the running time on all of the CentOS machines is weird.
It initially runs properly, but slows down dramatically (~10 times slower) at the 7th/8th iteration, and then keeps slowing down by ~10% per iteration ever after.
I suspect it might be caused by Java's garbage collector, so I used jconsole to monitor my program. Minor GC happens very frequently on both machines, which is expected because the program creates many temporary variables in the loop. Furthermore, I used the "jstat -gcutil $pid$ 1s" command and captured the statistics:
CentOS: https://www.dropbox.com/s/ioz7ai6i1h57eoo/jstat.png?dl=0
Windows: https://www.dropbox.com/s/3uxb7ltbx9kpm9l/jstat-winpng.png?dl=0
[Edited] However, the statistics on the two kinds of machines differ a lot:
"S1" on Windows jumps quickly between 0 and 50, while it stays at "0.00" on CentOS.
"E" on Windows changes very rapidly from 0 to 100. Since I print the stats every second, the screenshot does not show it climbing to 100. On CentOS, however, "E" increases rather slowly towards 100, then drops back to 0, and increases again.
So it seems the weird behaviour of my program is due to the Java GC? I am new to Java performance monitoring and do not have a good idea of how to optimize the GC parameter settings. Do you have any suggestions? Thank you very much!
I'm sorry to post this as an answer, but I don't have enough score to comment.
If you think it's a GC-related issue, I'd change it to the Garbage-First (G1) collector: -XX:+UseG1GC.
I found this brief explanation about it:
http://blog.takipi.com/garbage-collectors-serial-vs-parallel-vs-cms-vs-the-g1-and-whats-new-in-java-8/
Can you run your software under a profiler? Try JProfiler, VisualVM or even the NetBeans profiler. It may help you a lot.
I noticed that you have your own encapsulation of a vector and a matrix. Maybe you are spending a lot more memory than necessary with that too, but I don't think that is the problem.
Sorry again about not contributing as a comment (it would be more appropriate).
I would consider declaring the variables outside the loop, so memory allocation is done once and GC inside the loop is all but eliminated.
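Roughly what I mean, as a sketch (assuming the temporaries are plain double arrays; your own vector/matrix classes could be reused the same way if you give them a reset method):

import java.util.Arrays;

public class ReuseBuffersSketch {
    public static void main(String[] args) {
        int n = 1000000;
        double[] params = new double[n];

        // The working buffer is allocated once, outside the loop.
        double[] gradient = new double[n];

        for (int iter = 0; iter < 100; iter++) {
            Arrays.fill(gradient, 0.0);              // reset instead of re-allocating

            // Stand-in for the real computation that fills the temporaries.
            for (int i = 0; i < n; i++) {
                gradient[i] = 2.0 * params[i] + 1e-6 * i;
            }

            // Update the parameters in place.
            for (int i = 0; i < n; i++) {
                params[i] -= 0.01 * gradient[i];
            }
        }
        System.out.println(params[0]);
    }
}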
Giving Java (or any garbage-collected language) too much memory can have an adverse effect on performance. The live (referenced) objects become increasingly sparse in memory, resulting in more frequent fetches from main memory. Note that in the examples you've shown us, the faster Windows machine is doing more quick and full GCs than Linux, even though GC cycles (especially full GCs) are usually bad for performance.
If running the training set does not take an exceptionally long time, then try benchmarking at different memory allocations.
A more radical solution, but one which should have a big impact, is to eliminate (or reduce as much as possible) object creation within the loop by recycling objects from pools.
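A minimal sketch of such a pool, assuming the hot temporaries are fixed-size double arrays (the pool class here is made up for illustration, not taken from any library):

import java.util.ArrayDeque;

public class ArrayPoolSketch {
    private final ArrayDeque<double[]> free = new ArrayDeque<>();
    private final int size;

    ArrayPoolSketch(int size) { this.size = size; }

    double[] borrow() {
        double[] a = free.poll();
        return (a != null) ? a : new double[size];   // allocate only when the pool is empty
    }

    void release(double[] a) {
        free.push(a);                                // caller must not touch 'a' after this
    }

    public static void main(String[] args) {
        ArrayPoolSketch pool = new ArrayPoolSketch(1000000);
        for (int iter = 0; iter < 100; iter++) {
            double[] scratch = pool.borrow();        // reused after the first iteration
            scratch[0] = iter;                       // stand-in for the real per-iteration work
            pool.release(scratch);                   // recycle instead of letting it become garbage
        }
        System.out.println("done");
    }
}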
First, it is a common best practice to declare variables outside of loops to avoid garbage collection.
As 'Wagner Tsuchiya' said, try running a profiler if you have doubts about the GC.
If you want some tips on GC tuning, I found a nice blog post.
You could try calling System.gc() every couple of iterations to see if performance goes up or down. This may help you narrow it down to one of the diagnoses in the previous answers.
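For example (purely as a diagnostic; explicit System.gc() calls are normally discouraged in production code):

public class ExplicitGcProbe {
    public static void main(String[] args) {
        for (int iter = 0; iter < 1000; iter++) {
            // ... the normal per-iteration work goes here ...

            if (iter % 5 == 0) {                     // every few iterations
                long t0 = System.nanoTime();
                System.gc();                         // request a full collection
                long ms = (System.nanoTime() - t0) / 1000000;
                System.out.println("iter " + iter + ": explicit GC took " + ms + " ms");
            }
        }
    }
}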
If the GC time is hundreds of milliseconds, as shown in your screenshot, then GC is likely not the issue here. I suggest you look into lock contention and possibly IO using a profiler (NetBeans is great). I know you stated your program does very little IO, but with profiling (much like debugging) you have to drop all your assumptions and go step by step.
In my experience, Java needs enough memory and 2+ CPUs. Otherwise, CPU usage can become very high when GC starts running.
Related
I'm programming in Java using Eclipse, and after running the JVM for a couple of hours my program slows to a trickle. What is normally printed (or executed) in fractions of a second now takes minutes or hours.
I'm aware this is usually caused by a memory leak in the program. However, I'm under the impression that a memory leak slows the PC because the majority of the CPU power is used for garbage collection. When I take a look at the task manager, I only see 22-25% of the CPU being used at the moment (it has remained steady for the last couple of hours) and approx. 35% of the memory free on my machine.
Could the slowdown of my program be caused by something other than a memory leak, or is it for sure a memory leak (which means I now need to take a hard look to track down the source of the leak)? And if so, why would CPU usage be relatively low?
Thanks
Sometimes this happens when you have circular relationships between your objects or entities. The JVM tries to read or bind the data by looping through the same set of objects, which drastically affects the performance of the JVM and can even crash the application. As in a previous answer, you can use jconsole to check when this happens and take action. Hope you get the idea; maybe this is not the case, but it is what came to mind when I read your question.
cheers!!!
Well, first of all, a memory leak (or any other malfunction) doesn't affect your PC or any other part of your computer unless you are referencing some external resource that is choking. To answer your question: generically speaking, while there is a possibility that the slowdown of your program is caused by the CPU, in your case, since your program/process slows down gradually, there is most likely a memory leak in your code.
You could use any profiler / JVisualVM to monitor the memory usage and the objects' state to nail down the issue.
You may be aware that a modern computer system has more than one CPU core. A single-threaded program will use only a single core, which is consistent with the task manager reporting an overall CPU usage of 25% (1 core fully loaded, 3 cores idle = 25% of total CPU capacity used).
Garbage collection can cause slowdowns, but usually only does so if the JVM is memory-constrained. To verify whether it is garbage collection, you can use jconsole or jvisualvm (which are part of the JDK) to see how much CPU time is spent doing garbage collection.
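If you would rather check this from inside the program than with a GUI tool, the standard management beans report cumulative GC counts and times; a minimal sketch:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeProbe {
    public static void main(String[] args) {
        // ... run the workload first, then print the cumulative GC statistics ...
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": "
                    + gc.getCollectionCount() + " collections, "
                    + gc.getCollectionTime() + " ms total");
        }
    }
}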
To investigate why your program is slow, using a profiler is usually the most efficient approach.
I don't think we can say anything definite about this issue. You need to check the behaviour of your program using jconsole or jvisualvm, which are part of your JDK.
I am working on an application whose purpose is to compute reports as fast as possible.
My application uses a big amount of memory: more than 100 GB.
Since our last release, I have noticed a big performance slowdown. My investigation shows that, during the computation, I get many garbage collections lasting between 40 and 60 seconds!!!
(JMC tells me that they are SerialOld collections, but I don't know exactly what that means.) And, of course, while the JVM is garbage collecting, the application is completely frozen.
I am now investigating the origin of these garbage collections... and it is very hard work.
I suspect that, if these garbage collections are so long, it is because they spend a lot of time in finalize methods (I know that, among all the libraries we integrate from other teams, some use finalizers).
However, I don't know how to confirm (or refute) this hypothesis, or how to find which finalizer is time-consuming.
I am looking for a good tool, or even a good methodology.
Here is data collected via JVisualVM
As you can see, I always have many "Pending Finalizers" when I have a long Old Gen collection.
What is surprising is that when I am using JVisualVM, the above graph scrolls regularly from right to left. When the Old Gen collection is triggered, the scrolling stops (up to here it looks normal; this is stop-the-world). However, when the scrolling suddenly restarts, it does so not at the end of the Old Gen collection but at the end of the Pending Finalizers.
This leads me to think that the finalizers were blocking the JVM.
Does anyone have an explanation for this?
Thank you very much
Philippe
My application uses a big amount of memory: more than 100 GB.
JMC tells me that they are SerialOld collections, but I don't know exactly what that means.
If you are using the serial collector on a 100 GB heap, then long pauses are to be expected, because the serial collector is single-threaded and one core can only chomp through so much memory per unit of time.
Simply choosing any one of the multi-threaded collectors should yield lower pause times.
However, I don't know how to confirm (or refute) this hypothesis, or how to find which finalizer is time-consuming.
Generally: gather more data. For GC-related things you need to enable GC logging (on Java 8, e.g. -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps); for time spent in Java code (be it your application or third-party libraries) you need a profiler.
Here is what I would do to investigate your finalizer theory.
Start the JVM with your favourite Java profiler attached.
Leave it running long enough to get a full heap.
Start recording in the profiler.
Trigger a garbage collection.
Stop recording.
Now you can use the profiler information to figure out which (if any) finalize methods are using a large amount of time.
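Separately, if you want to confirm that finalizers are backing up at all, the platform MemoryMXBean exposes the number of objects waiting for finalization. A small sketch (run something like it in a watcher thread inside the application; the one-second interval is arbitrary):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FinalizerBacklogProbe {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        while (true) {
            // A persistently large value suggests finalize() cannot keep up with allocation.
            System.out.println("objects pending finalization: "
                    + mem.getObjectPendingFinalizationCount());
            Thread.sleep(1000);
        }
    }
}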
However, I suspect that the real problem will be a memory leak, and that your JVM is getting to the point where the heap is filling up with unreclaimable objects. That would explain the frequent "SerialOld" garbage collections.
Alternatively, this could just be a big-heap problem. 100 GB is ... big.
I've got a strange problem in my Clojure app.
I'm using http-kit to write a websocket-based chat application.
Clients are rendered using React as a single-page app; the first thing they do when they navigate to the home page (after signing in) is create a websocket to receive things like real-time updates and any chat messages. You can see the site here: www.csgoteamfinder.com
The problem I have is that after some indeterminate amount of time, which might be 30 minutes after a restart or even 48 hours, the JVM running the chat server suddenly starts consuming all the CPU. When I inspect it with NR (New Relic), I can see that all that time is being used by the garbage collector; at this stage I have no idea what it's doing.
I've taken a number of screenshots where you can see the effect.
You can see a number of spikes; those spikes correspond to large increases in CPU usage because of the garbage collector. To free up the CPU I usually have to restart the JVM. I have been relying on receiving a CPU alert from NR in my Slack account to make sure I jump on these quickly... but I really need to get to the root of the problem.
My initial thought was that I was holding onto the socket reference when the client closed it at their end, but this is not the case. I've been looking at the socket count periodically and it is fairly stable.
Any ideas of where to start?
Kind regards, Jason.
It's hard to imagine what could have caused such an issue, but the first thing I would do is take a heap dump at the time of the crash. This can be enabled with the -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path_to_your_heap_dump> JVM args. As a general practice, don't set the heap size larger than the physical memory available on your server machine. In some rare cases the JVM is unable to dump the heap because the process is doomed; in such cases you can use gcore (if you're on Linux, not sure about Windows).
Once you grab the heap dump, analyse it with MAT (the Eclipse Memory Analyzer). I have debugged such applications, and this worked perfectly to pin down memory-related issues. MAT allows you to dissect the heap dump in depth, so you're sure to find the cause of your memory issue, unless it is simply that you allocated too small a heap.
If your program is spending a lot of CPU time in garbage collection, that means that your heap is getting full. Usually this means one of two things:
You need to allocate more heap to your program (via -Xmx).
Your program is leaking memory.
Try the former first. Allocate an insane amount of memory to your program (16GB or more, in your case, based on the graphs I'm looking at). See if you still have the same symptoms.
If the symptoms go away, then your program just needed more memory. Otherwise, you have a memory leak. In that case, you need to do some memory profiling. On the JVM, the way this is usually done is to use jmap to generate a heap dump, then use a heap dump analyser (such as jhat or VisualVM) to look at it.
(Fair disclosure: I'm the creator of a jhat fork called fasthat.)
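If attaching jmap to the production process is awkward, a heap dump can also be triggered from inside the JVM through the HotSpot diagnostic bean; a sketch (the com.sun.management API is HotSpot-specific, and the file name here is just an example):

import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumpTrigger {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // 'true' means dump only live (reachable) objects, like 'jmap -dump:live'.
        bean.dumpHeap("chat-server-heap.hprof", true);
        System.out.println("heap dump written");
    }
}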
Most likely your tenured space is filling up, triggering a full collection. During a full collection the GC uses all the CPUs, sometimes for seconds at a time.
To diagnose why this is happening, you need to look at your rate of promotion (how much data moves from the young generation to the tenured space).
I would look at increasing the young generation size (e.g. via -Xmn or -XX:NewRatio) to decrease the rate of promotion. You could also look at using CMS (-XX:+UseConcMarkSweepGC), as it has shorter pause times (though it uses more CPU).
Things to try in order:
Reduce the heap size
Count the number of objects of each class, and see if the numbers make sense
Do you have big byte[] that lives past generation 1?
Change or tune GC algorithm
Use high-availability, i.e. more than one JVM
Switch to Erlang
You have triggered a global GC. GC time grows faster than linearly with the amount of memory, so reducing the heap space will trigger the global GC more often but make each one faster.
You can also experiment with changing the GC algorithm. We had a system where the global GC went down from 200s (happening 1-2 times per 24 hours) to 12s. Yes, the system was at a complete standstill for 3 minutes, and no, the users were not happy :-) You could try -XX:+UseConcMarkSweepGC.
http://www.fasterj.com/articles/oraclecollectors1.shtml
You will always have stops like this with the JVM and similar runtimes; it is more a question of how often you get them and how fast the global GC is. You should make a heap dump and get the count of the different objects of each class. Most likely, you will see that you have millions of instances of one of them, and that somehow you are keeping pointers to them unnecessarily, in an ever-growing cache or sessions or similar.
http://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks001.html#CIHCAEIH
You can also start using a high-availability solution with at least 2 nodes, so that when one node is busy doing GC, the other node handles the full load for a while. Hopefully, you will not get a global GC on both systems at the same time.
Big binary objects like byte[] and similar are a real problem. Do you have those?
At some point these need to be compacted by the global GC, and this is a slow operation. Many data-processing, JVM-based solutions actually avoid storing all data as plain POJOs on the heap, and implement their own heaps in order to overcome this problem.
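A minimal sketch of that idea, using a direct ByteBuffer to keep a large binary blob outside the garbage-collected heap (whether this helps depends entirely on how the data is accessed):

import java.nio.ByteBuffer;

public class OffHeapBlobSketch {
    public static void main(String[] args) {
        // 256 MB allocated outside the Java heap, so the collector never copies or compacts it.
        ByteBuffer blob = ByteBuffer.allocateDirect(256 * 1024 * 1024);

        blob.putLong(0, 42L);                 // write at an absolute offset
        System.out.println(blob.getLong(0));  // read it back
    }
}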
Another solution is to switch from the JVM to Erlang. Erlang is near real-time, and it gets by without the concept of a global GC over the whole heap: Erlang has many small heaps. You can read a little about it at
https://hamidreza-s.github.io/erlang%20garbage%20collection%20memory%20layout%20soft%20realtime/2015/08/24/erlang-garbage-collection-details-and-why-it-matters.html
Erlang is slower than the JVM, since it copies data, but its performance is much more predictable. It is difficult to have both. I have a websocket-based Erlang solution, and it really works well.
So you have run into a problem that is expected and normal for the JVM, Microsoft's CLR and similar runtimes. It will get worse and more common over the next couple of years as heap sizes grow.
I ran into this observation when I was trying to give a rough estimate of my code's runtime: the CPU usage stayed at around 10% for the first part of the code, which involved traversing 2 strings and tossing things into a HashSet. When the code reached the second part (2 nested for loops, plus a gigantic array allocation), the CPU usage skyrocketed to 50%.
I don't think I did anything related to multithreading. Also, if I slightly modify the first part (something really simple, like adding another for loop), the CPU usage varies a lot too. Why is this happening? I'm a bit curious about what is actually consuming the CPU.
There isn't necessarily a direct correspondence between the code being run and the CPU usage, because the JVM's CPU usage fluctuates with what is in the heap at that point in time as it manages garbage collection. If it suddenly went up to 50% and then back down, this may be a coincidence, with garbage collection kicking in at the same time.
You can get more detailed information by running jvisualvm.exe, which is located in your JDK's bin folder. I'm not sure how much CPU usage info is available out of the box, but there are at least plugins that will give you CPU usage info. It is also a graphical way to see heap usage, as per david99world's answer.
http://visualvm.java.net/index.html
Short version is in the title.
Long version:
I am working on a program for scientific optimization using Java. The workload of the program can be divided into parallel and serial phases -- parallel phases meaning that highly parallelizable work is being performed. To speed up the program (it runs for hours/days) I create a number of threads equal to the number of CPU cores on the machine I'm using -- typically 4 or 8 -- and divide the work between them. I then start these threads and join() them before proceeding to a serial phase.
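For illustration, the parallel phase boils down to something like this (heavily simplified; the real work items are of course not a dummy square-root sum):

public class ParallelPhaseSketch {
    public static void main(String[] args) throws InterruptedException {
        final int cores = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[cores];
        final double[] partial = new double[cores];

        for (int t = 0; t < cores; t++) {
            final int id = t;
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    // Each worker handles its own slice; no shared mutable state.
                    double sum = 0;
                    for (int i = id; i < 50000000; i += cores) {
                        sum += Math.sqrt(i);          // stand-in for the real computation
                    }
                    partial[id] = sum;
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();                                 // the serial phase starts after this
        }

        double total = 0;
        for (double p : partial) total += p;
        System.out.println(total);
    }
}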
So far so good. What's bothering me is that the CPU utilization and speedup of the parallel phases is nowhere near the "theoretical maximum" -- e.g. if I have 4 cores, I expect to see somewhere between 350-400% "utilization" (as reported by top) but instead it bounces around between 180 and about 310. Using only a single thread, I get 100% CPU utilization.
The only reasons I know of for threads not to run at full speed are:
-blocking due to I/O
-blocking due to synchronization
No I/O whatsoever is going on in my parallel threads, nor any synchronization -- the only data structures shared by the threads are read-only, and are either basic types or (non-concurrent) collections. So I'm looking for other explanations. One possibility would be that several threads are repeatedly blocking for garbage collection, but that would only seem to make sense in a situation with memory pressure, and I am allocating well above the required maximum heap space.
Any suggestions would be appreciated.
Update: Just in case anyone is curious, after some more investigation I tweaked the code for general performance and am seeing better utilization, even though nothing I changed had to do with synchronization. However, some of the changes should have resulted in fewer new heap allocations; in particular, I got rid of some use of iterators and temporary boxed numbers. (The CERN "Colt" library for high-performance Java computing was useful here: it provides collections like IntArrayList, DoubleArrayList, etc. for primitive types.) So I think garbage collection was probably the culprit.
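To illustrate the kind of change (a made-up example, not my actual code): replacing a boxed collection and its iterators with a plain primitive array removes a short-lived object per element.

import java.util.ArrayList;
import java.util.List;

public class BoxingSketch {
    public static void main(String[] args) {
        int n = 1000000;

        // Before: every element is a boxed Double, and each for-each loop creates an Iterator;
        // all of these are short-lived objects the young-generation collector has to clean up.
        List<Double> boxed = new ArrayList<Double>();
        for (int i = 0; i < n; i++) boxed.add((double) i);
        double s1 = 0;
        for (Double d : boxed) s1 += d;

        // After: a plain double[] holds the same data with no per-element objects
        // (Colt's DoubleArrayList gives the same effect when the size has to grow).
        double[] plain = new double[n];
        for (int i = 0; i < n; i++) plain[i] = i;
        double s2 = 0;
        for (double d : plain) s2 += d;

        System.out.println(s1 + " " + s2);
    }
}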
All graphics operations in Swing run on a single thread. If your threads are rendering to the screen, they will effectively be contending for access to that thread.
If you are running on Windows, all graphics operations run on a single thread no matter what. Other operating systems have similar limitations.
It's actually fairly difficult to get the granularity of threaded workers right; it's easy to make them too big or too small, which will typically give you less than 100% usage of all cores.
If you're not rendering much GUI, the most likely culprit is that you're contending more than you think for some shared resource. This is easily seen with profiler tools like JProfiler. Some VMs, like BEA's JRockit, can even tell you this straight out of the box.
This is one of those places where you don't want to act on guesswork. Get a profiler!
First of all, GC does not happen only "in a situation with memory pressure", but at any time the JVM sees fit (unpredictably, as far as I know).
Second, if your threads allocate memory on the heap (you mention they use collections, so I guess they do), you can never be sure whether that memory is currently in RAM or on a virtual memory page (the OS decides), and thus access to "memory" may generate blocking I/O!
Finally, as suggested in a prior answer, you may find it useful to check what happens by using a profiler (or even JMX monitoring might give some hints there).
I believe it will be difficult to get further hints on your problem unless you provide more concrete (code) information.
Firstly, I assume you're not doing any other significant work on the box. If you are, that's clearly going to mess with things.
It does sound very odd if you're really not sharing anything. Can you give us more of an idea of what the code is really doing?
What happens if you run n copies of the program as different Java processes, with each only using a single thread? If that uses each CPU completely, then at least we know that it can't be a problem with the OS. Speaking of the OS, which one is this running on, and which JVM? If you can try different JVMs and different OSes, the results might give you a hint as to what's wrong.
Also an important point: which hardware do you use?
E.g. 4-8 cores could mean you are working on one of Sun's Niagara CPUs, which despite having 4-8 cores have fewer FPUs. When computing scientific stuff, it could happen that the FPU is the bottleneck.
You are trying to use the full CPU capability for your calculations, but the OS itself uses resources as well. So be aware that the OS will block some of your execution in order to satisfy its own needs.
You are doing synchronization at some level.
Perhaps only in the memory allocation system, including garbage collection. While JVM vendors have worked to keep blocking in these areas to a minimum, they can't reduce it to zero. Perhaps something about your application is pushing on a weak point in this area.
The accepted wisdom is "don't build your own memory-reclaiming pool, let the GC work for you". This is true most of the time, but not in at least one piece of code I maintain (proven with profiling). Perhaps you need to rework your object allocation in some major way.
Try the latency analyzer that comes with JRockit Mission Control. It will show you what the CPU is doing when it's not doing anything useful: whether the application is waiting for file I/O, TLA fetches, object allocations, thread suspension, JVM locks, GC pauses, etc. You can also see transitions, e.g. when one thread wakes up another. The overhead is negligible, 1% or so.
See this blog for more info. The tool is free to use for development and you can download it here