I have a program and a strong suspicion that memory swapping is occurring. The reason I believe that is the case is that the program just hangs from time to time. When I started logging, things became even more confusing, as the program started hanging in different places, with different methods being executed.
At this time I'm using 32-bit IBM Java with 2 GB dedicated to the program, so I'm right on the edge with memory. A change to x64 is possible, but before that:
Question 1: Can I programmatically detect memory swapping at runtime? Or how could I at least give myself some hints (via logging) that swapping is occurring?
As of now I don't have memory usage logs available to me. However, if -Xmx is 2 GB, that's just RAM; if memory swapping occurs, would it even appear that I don't have enough memory?
Question 2: As I think about it now, can I log the start of garbage collection? Can I detect it at runtime?
EDIT: the program in question exports very large amounts of data from a database.
EDIT 2: Can I programmatically forbid memory swapping for a given JVM?
Can I programmatically detect memory swapping at runtime?
You can monitor in the OS how much swap is being used, or how much is being written to the swap partitions. How you do this depends on your OS.
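For example, on Linux you could poll /proc/meminfo from a small background program and log whenever swap usage grows. A minimal sketch, assuming a Linux host (the file and field names are Linux-specific, and this reports system-wide swap, not just your JVM's):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SwapMonitor {
        // Returns swap used in kB by parsing /proc/meminfo (Linux only).
        static long swapUsedKb() throws IOException {
            long total = 0, free = 0;
            for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                if (line.startsWith("SwapTotal:")) total = parseKb(line);
                if (line.startsWith("SwapFree:"))  free  = parseKb(line);
            }
            return total - free;
        }

        private static long parseKb(String line) {
            // Lines look like "SwapTotal:     2097148 kB"
            return Long.parseLong(line.replaceAll("\\D+", ""));
        }

        public static void main(String[] args) throws Exception {
            long last = swapUsedKb();
            while (true) {
                Thread.sleep(10_000); // polling interval is arbitrary
                long now = swapUsedKb();
                if (now > last) {
                    System.err.println("Swap usage grew: " + last + " kB -> " + now + " kB");
                }
                last = now;
            }
        }
    }

On Windows you would have to query the equivalent performance counters instead, e.g. via a native call or an external tool.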
if memory swapping occurs, would it even appear that I don't have enough memory?
If you had enough memory, no swapping would occur. Note: even if no swapping occurs, that doesn't mean more memory wouldn't help. Most likely you need more than just your application's memory; disk caching, for example, also matters.
can I log the start of garbage collection?
You can from another process; however, you can't run anything inside the JVM itself while a stop-the-world pause is occurring.
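That said, from inside the same JVM you can at least log each collection right after it finishes, by subscribing to the GC notifications emitted by the GarbageCollectorMXBeans. A sketch, assuming a HotSpot-based JDK 8+ (com.sun.management is not part of standard Java SE, so the asker's IBM JVM may not provide it):

    import com.sun.management.GarbageCollectionNotificationInfo;
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import javax.management.NotificationEmitter;
    import javax.management.openmbean.CompositeData;

    public class GcLogger {
        public static void install() {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                NotificationEmitter emitter = (NotificationEmitter) gc;
                emitter.addNotificationListener((notification, handback) -> {
                    if (GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                            .equals(notification.getType())) {
                        GarbageCollectionNotificationInfo info =
                                GarbageCollectionNotificationInfo.from(
                                        (CompositeData) notification.getUserData());
                        // Logged after the pause ends, since nothing runs during it
                        System.err.println(info.getGcName() + " (" + info.getGcAction()
                                + ") took " + info.getGcInfo().getDuration() + " ms");
                    }
                }, null, null);
            }
        }
    }

A more portable alternative is simply starting the JVM with -verbose:gc, which both HotSpot and IBM JVMs support.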
A change to x64 is possible, but before that:
Java 5.0 was the last version where I would have said 32-bit might be best, and that was ten years ago.
the program exports very large amounts of data from a database.
1 GB costs less than a cup of coffee these days. Even tens of GB is not much to worry about. Hundreds of GB is getting big, and if you have a few TB you have an interesting problem. Some of my clients have 3 TB machines, and they have very large amounts of data, e.g. hundreds of TB.
I gave an old computer which hadn't been used for years to my oldest daughter, and she gave it to my 8-year-old daughter. It has 24 GB of memory, and she uses it mainly to watch YouTube videos.
Can I programmatically forbid memory swapping for a given JVM?
You can lock the process into memory using JNI, but by the time a machine is swapping out your heap space it is on the edge of dying anyway.
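For illustration, on Linux you could lock the process's pages with mlockall(2). Rather than hand-written JNI, here is a sketch using the JNA library; it assumes JNA 5.x is on the classpath, uses the Linux values of the MCL_* constants, and needs a sufficient RLIMIT_MEMLOCK (or root) to succeed:

    import com.sun.jna.Library;
    import com.sun.jna.Native;

    public class LockMemory {
        interface CLib extends Library {
            CLib INSTANCE = Native.load("c", CLib.class);
            int mlockall(int flags); // locks the process's pages in RAM
        }

        // Values from Linux <sys/mman.h>
        static final int MCL_CURRENT = 1; // lock pages mapped now
        static final int MCL_FUTURE  = 2; // lock pages mapped later

        public static void main(String[] args) {
            int rc = CLib.INSTANCE.mlockall(MCL_CURRENT | MCL_FUTURE);
            if (rc != 0) {
                System.err.println("mlockall failed; check ulimit -l and privileges");
            }
        }
    }

Note that locking a 2 GB heap into RAM just moves the pressure elsewhere; if the machine doesn't have the memory, something else will suffer instead.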
Related
I'm programming in Java using Eclipse, and after the JVM has been running for a couple of hours, my program tends to slow to a trickle. What's normally printed (or executed) in a fraction of a second takes a couple of minutes or hours.
I'm aware this is usually caused by a memory leak in the program. However, I'm under the impression that a memory leak slows the PC because it uses the majority of CPU power for garbage collection. When I take a look at Task Manager I see only 22-25% of CPU being used at the moment (it has remained steady for the last couple of hours) and approx. 35% of memory free on my machine.
Could the slowing down of my program be caused by something other than a memory leak, or is it for sure a memory leak (which means I now need to take a hard look to track down the source of the leak)? And if so, why would CPU usage be relatively low?
Thanks
Sometimes this happens when you have cyclic relationships between your objects or entities. The JVM tries to read or bind the data by looping through the same set of objects, and this drastically affects its performance; much of the time it even crashes the application. As in the previous answer, you can use jconsole to check when this happens and take action. Hope you get the idea; maybe this is not your case, but it's what came to mind when I read your question.
cheers!!!
Well, first of all, a memory leak or any other malfunction in your program doesn't affect your PC or any other part of your computer, unless you are referencing some external resource which is choking. To answer your question: generally speaking, while there is a possibility that the slowdown is caused by the CPU, in your case, since your program slows down gradually, there is most likely a memory leak in your code.
You could use any profiler, or jvisualvm, to monitor memory usage and object state to nail down the issue.
You may be aware that a modern computer system has more than one CPU core. A single-threaded program will use only a single core, which is consistent with Task Manager reporting an overall CPU usage of 25% (1 core fully loaded, 3 cores idle = 25% of total CPU capacity used).
Garbage collection can cause slowdowns, but usually only does so if the JVM is memory constrained. To verify whether it is garbage collection, you can use jconsole or jvisualvm (which are part of the JDK) to see how much CPU time was spent doing garbage collection.
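If you would rather log this from inside the program, the standard java.lang.management API exposes the same numbers. A minimal polling sketch:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcTimeProbe {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                long totalMs = 0, totalCount = 0;
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    totalMs += gc.getCollectionTime();     // cumulative ms in this collector
                    totalCount += gc.getCollectionCount(); // cumulative number of collections
                }
                System.err.println("GC so far: " + totalCount + " collections, " + totalMs + " ms");
                Thread.sleep(5_000);
            }
        }
    }

If the reported GC time climbs steeply while your program is in its slow phase, garbage collection is the likely culprit.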
To investigate why your program is slow, using a profiler is usually the most efficient approach.
I don't think we can say anything straightforward about this issue. You need to check the behaviour of your program using jconsole or jvisualvm, which are part of your JDK.
I've got a strange problem in my Clojure app.
I'm using http-kit to write a websocket-based chat application.
Clients are rendered using React as a single-page app; the first thing they do when they navigate to the home page (after signing in) is create a websocket to receive things like real-time updates and chat messages. You can see the site here: www.csgoteamfinder.com
The problem I have is that after some indeterminate amount of time, which might be 30 minutes after a restart or even 48 hours, the JVM running the chat server suddenly starts consuming all the CPU. When I inspect it with NR (New Relic), I can see that all that time is being used by the garbage collector; at this stage I have no idea what it's doing.
I've taken a number of screenshots where you can see the effect.
You can see a number of spikes; those spikes correspond to large increases in CPU usage caused by the garbage collector. To free up CPU I usually have to restart the JVM. I have been relying on receiving a CPU alert from NR in my Slack account to make sure I jump on these quickly... but I really need to get to the root of the problem.
My initial thought was that I was possibly holding onto the socket reference when the client closed it at their end, but this is not the case. I've been looking at socket count periodically and it is fairly stable.
Any ideas of where to start?
Kind regards, Jason.
It's hard to imagine what could have caused such an issue, but the first thing I would do is take a heap dump at the time of the crash. This can be enabled with the -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<path_to_your_heap_dump> JVM args. As a general practice, don't increase the heap size beyond the physical memory available on your server machine. In some rare cases the JVM is unable to dump the heap because the process is doomed; in such cases you can use gcore (if you're on Linux; I'm not sure about Windows).
Once you grab the heap dump, analyse it with MAT (the Eclipse Memory Analyzer). I have debugged such applications, and this worked perfectly to pin down memory-related issues. MAT allows you to dissect the heap dump in depth, so you're sure to find the cause of your memory issue, unless it is simply that you have allocated too small a heap.
If your program is spending a lot of CPU time in garbage collection, that means that your heap is getting full. Usually this means one of two things:
You need to allocate more heap to your program (via -Xmx).
Your program is leaking memory.
Try the former first. Allocate an insane amount of memory to your program (16 GB or more, in your case, based on the graphs I'm looking at) and see if you still have the same symptoms.
If the symptoms go away, then your program just needed more memory. Otherwise, you have a memory leak. In this case, you need to do some memory profiling. In the JVM, the way this is usually done is to use jmap to generate a heap dump, then use a heap dump analyser (such as jhat or VisualVM) to look at it.
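For reference, a typical invocation looks like this (jmap and jhat ship with older JDKs; jhat was removed in JDK 9, so use VisualVM or Eclipse MAT on newer versions; replace <pid> with your process id):

    jmap -dump:live,format=b,file=heap.hprof <pid>
    jhat heap.hprof     # then browse http://localhost:7000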
(Fair disclosure: I'm the creator of a jhat fork called fasthat.)
Most likely your tenured space is filling up, triggering a full collection. At that point the GC uses all the CPUs, sometimes for seconds at a time.
To diagnose why this is happening, you need to look at your rate of promotion (how much data is moving from the young generation to the tenured space).
I would look at increasing the young generation size to decrease the rate of promotion. You could also look at using CMS, as it has shorter pause times (though it uses more CPU).
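As a hedged starting point, something like the following command line would put those suggestions into practice. The sizes are illustrative rather than tuned for your workload, chat-server.jar is a stand-in for your own artifact, and the -XX:+PrintGC* flags are the pre-JDK-9 spelling (JDK 9+ uses -Xlog:gc, and CMS itself was removed in JDK 14):

    java -Xmx4g -Xmn1g -XX:+UseConcMarkSweepGC \
         -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
         -jar chat-server.jar

The GC log will show how often minor collections run and how much data survives each one, which is exactly the promotion rate you need to see to diagnose this.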
Things to try in order:
Reduce the heap size
Count the number of objects of each class, and see if the numbers make sense
Do you have big byte[] arrays that live past generation 1?
Change or tune the GC algorithm
Use high-availability, i.e. more than one JVM
Switch to Erlang
You have triggered a global GC. GC time grows faster than linearly with the amount of memory, so reducing the heap space will actually trigger the global GC more often and make each one faster.
You can also experiment with changing the GC algorithm. We had a system where the global GC went down from 200 s (happening 1-2 times per 24 hours) to 12 s. Yes, the system was at a complete standstill for 3 minutes, and no, the users were not happy :-) You could try -XX:+UseConcMarkSweepGC.
http://www.fasterj.com/articles/oraclecollectors1.shtml
You will always have stops like this with the JVM and similar runtimes; it is more a question of how often you get them and how fast the global GC is. You should make a heap dump and get the counts of the different objects of each class. Most likely you will see that you have millions of one of them: somehow you are keeping pointers to them unnecessarily, in an ever-growing cache, sessions, or similar.
http://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks001.html#CIHCAEIH
You can also start using a high-availability solution with at least 2 nodes, so that when one node is busy doing GC, the other node will have to handle the total load for a time. Hopefully, you will not get the global GC on both systems at the same time.
Big binary objects like byte[] and similar are a real problem. Do you have those?
At some point these need to be compacted by the global GC, and that is a slow operation. Many data-processing JVM-based solutions actually avoid storing all their data as plain POJOs on the heap, and implement their own heap management to overcome this problem.
Another solution is to switch from the JVM to Erlang. Erlang is near real-time; it gets there by not having the concept of a global GC over one big heap. Instead, Erlang has many small heaps. You can read a little about it at
https://hamidreza-s.github.io/erlang%20garbage%20collection%20memory%20layout%20soft%20realtime/2015/08/24/erlang-garbage-collection-details-and-why-it-matters.html
Erlang is slower than the JVM, since it copies data, but its performance is much more predictable. It is difficult to have both. I have a websocket-based Erlang solution, and it really works well.
So you have run into a problem that is expected and normal for the JVM, the Microsoft CLR, and similar runtimes. It will get worse and more common over the next couple of years as heap sizes grow.
I am running an application using NetBeans and in the project properties I have set the Max JVM heap space to 1 GiB.
But still, the application crashes with an OutOfMemoryError.
Does the JVM keep memory stored in the system? If so, how can I clear that memory?
You'll want to analyse your code with a profiler - NetBeans has a good one. This will show you where the memory is tied up in your application, and should give you an idea as to where the problem lies.
The JVM will garbage collect objects as much as it can before it runs out of memory, so chances are you're holding onto references long after you no longer need them. Either that, or your application is genuinely one that requires a lot of memory - but I'd say it's far more likely to be a bug in your code, especially if the issue only crops up after running the application for a long period of time.
I do not fully understand all details of your question, but I guess the important part is understandable.
The OutOfMemoryError (an Error, not an Exception) is thrown if the memory allocated to your JVM does not suffice for the objects created by your program. In your case it might help to increase the available heap space to more than 1 GByte. If you think 1 GByte should be enough, you may have a memory leak (which, in Java, most likely means that you are keeping references to objects that you no longer need - maybe in some sort of cache?).
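To illustrate the cache point: an unbounded static map keeps every entry strongly reachable forever, whereas a WeakHashMap lets entries be collected once their keys are no longer referenced elsewhere. A contrived sketch:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.WeakHashMap;

    public class CacheLeak {
        // Leaky: entries are strongly referenced and never evicted,
        // so the heap grows until OutOfMemoryError.
        static final Map<String, byte[]> LEAKY_CACHE = new HashMap<>();

        // Alternative: entries become collectable once the key is no
        // longer strongly referenced anywhere else.
        static final Map<Object, byte[]> WEAK_CACHE = new WeakHashMap<>();

        public static void main(String[] args) {
            for (long i = 0; ; i++) {
                LEAKY_CACHE.put("key-" + i, new byte[1024 * 1024]); // 1 MB per entry
            }
        }
    }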
Java reserves virtual memory for its maximum heap size on startup. As the program uses this memory, more main memory is allocated to it by the OS. Under UNIX this appears as resident memory. While Java programs can swap to disk, garbage collection performs extremely badly if part of the heap is swapped out, and it can result in the whole machine locking up or having to be rebooted. If your program is not doing this, you can be sure the heap is entirely in main memory.
Depending on what your application does it might need 1 GB, 10 GB or 100 GB or more. If you cannot increase the maximum memory size, you can use a memory profiler to help you find ways to reduce consumption. Start with VisualVM as it is built in and free and does a decent job. If this is not enough, try a commercial profiler such as YourKit for which you can get a free evaluation license (usually works long enough to fix your problem ;)
The garbage collector automatically cleans out the memory as required and may be doing this every few seconds, or even more than once per second. If it is this could be slowing down your application, so you should consider increasing the maximum size or reducing consumption.
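A quick way to watch this without a profiler is to log the heap figures the JVM itself reports via the standard java.lang.Runtime API; a minimal sketch:

    public class HeapWatch {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            while (true) {
                long usedMb = (rt.totalMemory() - rt.freeMemory()) >> 20;
                System.err.printf("heap used=%d MB, committed=%d MB, max=%d MB%n",
                        usedMb, rt.totalMemory() >> 20, rt.maxMemory() >> 20);
                Thread.sleep(5_000);
            }
        }
    }

If the used figure keeps ratcheting up after every collection, you are most likely holding onto references somewhere.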
As mentioned by @camobap, the reason for the OutOfMemoryError was that the Perm Gen size was set very low. The issue is now resolved.
Thank you all for the answers and comments.
The JVM doesn't allocate 1 GiB up front in the way I think you are imagining. Java allocates memory dynamically and garbage-collects it; every time it allocates memory it checks whether it has enough heap space to do so, and if not it throws an OutOfMemoryError. I am guessing that somewhere in your code - because it would be near impossible to write code that declares that many individual variables - you have an array or ArrayList that takes up all the memory. In the case of an array, you probably have a variable holding its size and did some calculation on it that made it far too large. In the case of an ArrayList, you might have a loop that runs too many iterations, adding elements to it.
Check your code for the above errors and you should be good.
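For example, a unit mistake in a size calculation can silently request a gigantic array; this contrived sketch throws an OutOfMemoryError on most default heap sizes:

    public class BigAllocation {
        public static void main(String[] args) {
            int records = 40_000;
            int bytesPerRecord = 40_000; // perhaps meant to be 40
            // 40,000 * 40,000 = 1.6 billion bytes: a single ~1.5 GiB array,
            // more than a 1 GiB heap can ever hold.
            byte[] buffer = new byte[records * bytesPerRecord];
            System.out.println(buffer.length);
        }
    }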
I have a server-side Java component with a huge memory demand at startup which mellows out gradually. As an example: at startup the memory requirement might shoot up to 4 GB, and after the initial surge is over it goes down to about 2 GB. I have configured the component to start with 5 GB, and it starts well; the used memory surges up to 4 GB and then comes down close to 2 GB. However, the memory held by the heap at this point still hovers around 4 GB, and I want to bring that down (essentially keeping the free heap to a few hundred MB rather than 2 GB). I tried lowering -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio from their default values, but this resulted in garbage collection not being triggered after the initial spike, and the used memory stayed at a higher level than usual. Any pointers would greatly help.
First, I'd ask why you are worried about freeing up 2 GB of RAM on a server? 2 GB of RAM costs about $100 or less. If this is on a desktop, I guess I can understand the worry.
If you really do have a good reason to think about it, this may be tied to the garbage collection algorithm you are using. Some algorithms will release unused memory back to the OS; some will not. There are some graphs and such related to this at http://www.stefankrause.net/wp/?p=14 . You might want to try the G1 collector, as it seems to release memory back to the OS more readily.
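If you want to experiment, something like this is a reasonable starting point. The values are illustrative, server.jar is a stand-in for your own artifact, and how aggressively memory is returned to the OS varies by JDK version and collector:

    java -XX:+UseG1GC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 \
         -Xms1g -Xmx5g -jar server.jar

Lower free-ratio values tell the JVM to keep less slack heap around, at the cost of more frequent resizing.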
Edit from comments
What if they all choose to max out their load at once? Are you OK with some of them paging memory to disk and slowing the server to a crawl? I would run them on a server with enough memory to run ALL the applications at max heap, plus another 4-6 GB for the OS and caching. Servers with 32 or 64 GB are pretty common, and you can get more.
You have to remember that the JVM reserves virtual memory on startup and never gives it back to the OS (until the program exits). The most you can hope for is that unused memory is swapped out (this is not a good plan). If you don't want the application to use more than a certain amount of memory, you have to restrict it to that amount.
If you have a number of components which each use a spike of memory, you should consider rewriting them to use less memory at startup, or running multiple components in the same JVM so the extra memory is less significant.
Hi
I am debugging a Java application that fails when certain operations are invoked after the VM's memory has been swapped to disk. Since I have to wait about an hour for Windows to swap, I was wondering if there is a way of forcing Windows to swap.
You can create another application that allocates and accesses a large amount of memory. Assuming you don't have enough RAM for both to run, Windows will be forced to swap out the inactive app to make room for the active one.
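A minimal memory hog in Java (the chunk size and sleep are arbitrary; touching every page matters, because pages that are never written may not be backed by real memory):

    import java.util.ArrayList;
    import java.util.List;

    public class MemoryHog {
        public static void main(String[] args) throws InterruptedException {
            List<byte[]> hoard = new ArrayList<>();
            while (true) {
                byte[] chunk = new byte[64 * 1024 * 1024]; // 64 MB at a time
                for (int i = 0; i < chunk.length; i += 4096) {
                    chunk[i] = 1; // touch each page so the OS must commit it
                }
                hoard.add(chunk);
                Thread.sleep(100);
            }
        }
    }

Run it with a large heap, e.g. java -Xmx12g MemoryHog, so the hog's own JVM doesn't hit OutOfMemoryError before the OS starts paging.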
But before you do that, you might find help if you describe the exact problem that you're having with your app (including stack traces and sample code). The likelihood of swapping causing any problems other than delays is infinitesimally low.