I am running a multithreaded import which runs for around 1-2 hours. In the import, before putting data into the table, I am checking:
if (logger.isDebugEnabled()) {
    logger.debug("Object=" + MyObject);
}
where MyObject uses ToStringBuilder in its toString() method. The import eventually fails with:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOfRange(Arrays.java:2694)
at java.lang.String.<init>(String.java:203)
at java.lang.StringBuffer.toString(StringBuffer.java:561)
at org.apache.commons.lang3.builder.ToStringBuilder.toString(ToStringBuilder.java:1063)
I am thinking that ToStringBuilder is causing this issue. Am I correct? If yes, what are ways to fix this?
Not necessarily. All that error means is that you're almost out of heap space, and the garbage collector is giving up on trying to reclaim space because it has run too much without reclaiming enough space. The fact that it happened at that point in the code doesn't necessarily mean anything. It could be that something entirely different ate up the space, but that call kicked off the GC one more time, when it finally gave up. You'd need to take a heap dump and look at it in a profiler like YourKit or VisualVM to see what's really going on.
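For example (a sketch: the PID 12345 and the file name are placeholders), you can capture a dump of a running JVM with the jmap tool that ships with the JDK, and then open the file in VisualVM or YourKit:

jmap -dump:live,format=b,file=import-heap.hprof 12345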
Your object constructs a too-large string in memory, probably within the toString() method.
To avoid this, try logging the import row by row, like:
if (logger.isDebugEnabled()) {
    // since you are importing rows, MyObject must hold a collection/array
    logger.debug("Object=");
    for (Row r : MyObject.getRows()) {
        logger.debug(r);
    }
}
Or, in the rare case that you really do have one big object to log, have the object log itself by streaming content to the log rather than building one overwhelming string.
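A minimal sketch of that idea, assuming an SLF4J-style logger (MyObject, Row and getRows() are stand-ins for the asker's types): the object logs one small event per row, so no single huge string is ever built.

import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class Row { /* stand-in for the asker's row type */ }

class MyObject {
    private static final Logger logger = LoggerFactory.getLogger(MyObject.class);
    private final List<Row> rows;

    MyObject(List<Row> rows) {
        this.rows = rows;
    }

    // Stream state to the log piece by piece instead of one giant toString().
    void logSelf() {
        if (!logger.isDebugEnabled()) {
            return;
        }
        for (Row r : rows) {
            logger.debug("Row={}", r); // one small log event per row
        }
    }
}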
After a lot of effort I can't seem to overcome the problem of getting a GC overhead limit exceeded error in my Java program.
It occurs inside a large method that does heavy string manipulation, builds many lists of objects, and makes many DB accesses.
I have tried the following:
after the use of each ArrayList, I have added: list=new ArrayList<>(); list=null;
for the strings, instead of having e.g. 50 appends (str += "...."), I tried to have one append with the total text
after each DB access I close the statements and the resultSets.
This method is called from main like this:
for (int i = 0; i < L; i++) {
    cns = new Console(i);
    cns.processData(); // this is the method
    cns = null;
}
When this loop gets executed 1 or 2 times, everything is ok. For L>=3 it's almost certain that I will get the garbage collector error.
Shouldn't the fact that I have a cns=null after each execution of the method, force the GC and free everything from the previous execution?
Should I also delete all private attributes of the object before setting it to null? Maybe putting a Thread.sleep() could force the GC after each loop?
There's actually no reason to set cns to null at the end of each loop. You're setting it to a new Console() at the beginning of the loop anyway - if anything could be freed by setting it to null, it's also going to be freed by setting it to a new object.
You can try a System.gc(); call to suggest the system do a garbage collection, but I don't know if that would help you or make it worse. The system IS already attempting garbage collection - if it wasn't, you wouldn't get this error.
You don't show us exactly how you're building your Strings, but keep in mind that += isn't the only culprit: appending to a String across many separate statements copies the accumulated text each time, even if you never write +=. If it's an issue, StringBuilder may be your friend.
You can read the answers at Error java.lang.OutOfMemoryError: GC overhead limit exceeded for some other suggestions on how to avoid this error. It seems that for some people it's triggered when you're almost, but not quite, out of memory. So increasing the amount of memory available to Java may (or may not) help.
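To illustrate the string-building point above (a standalone sketch, not the asker's code): each += copies everything accumulated so far into a new String, while a StringBuilder appends into a growing buffer.

public class ConcatDemo {
    public static void main(String[] args) {
        // Quadratic: every += copies the accumulated string into a new one.
        String slow = "";
        for (int i = 0; i < 50; i++) {
            slow += "chunk-" + i;
        }

        // Linear: StringBuilder appends in place, with one final copy in toString().
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 50; i++) {
            sb.append("chunk-").append(i);
        }
        String fast = sb.toString();

        System.out.println(slow.equals(fast)); // true: same text, far less garbage
    }
}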
Basically, a "GC overhead limit exceeded" error is a symptom of having too much reachable data. The heap is filling up with things that cannot be garbage collected ... 'cos they are not garbage! The JVM is running the GC again and again in an attempt to make space. Eventually, it decides that too much time is being spent garbage collecting, and it gives up. This is usually the correct thing to do.
The following ideas from your question (and the other answer) are NOT solutions.
Forcing the GC to run by calling System.gc() won't help. The GC is already running too often.
Assigning null to cns won't help. It immediately gets something else assigned to it. Besides, there is no evidence that the Console object is occupying much memory.
(Note that the constructor for the java.io.Console class is not public, so your example is not meaningful as written. Perhaps you are actually calling System.console()? Or perhaps this is a different Console class?)
Clearing private attributes of an object before setting it to null is unlikely to make any difference. If an object is not reachable, then the values of its attributes are irrelevant. The GC won't even look at them.
Calling Thread.sleep() will make no difference. The GC runs when it thinks it needs to.
The real problem is ... something that we can't determine from the evidence that you have provided. Why is there so much reachable data?
In general terms, the two most likely explanations are as follows:
Your application (or some library you are using) is accumulating more and more objects in some data structure that survives beyond a single iteration of the for loop. In short, you have a memory leak.
To solve this, you need to find the storage leak; see How to find a Java Memory Leak. (The leak could be related to database connections, statements or resultsets not being closed, but I doubt it. The GC should find and close these resources if they have become unreachable.)
Your application simply needs more memory. For example, if a single call to processData needs more memory than is available, you will get an OOME no matter what you try to get the GC to do. It cannot delete reachable objects, and it obviously cannot find enough garbage to free fast enough.
To solve this, first see if there are ways to modify the program so that it needs less (reachable) memory. Here are a couple of ideas:
If you are building a huge string to represent the output before writing it to an OutputStream, Writer or similar, you would save memory by writing directly to the output sink (see the sketch after this list).
In some cases, consider using StringBuilder rather than String concatenation when assembling large strings. Particularly when the concatenations are looped.
However, note that 1) in Java 8 and earlier javac already emits StringBuilder sequences for you for concatenation expressions, and 2) in Java 9+, javac emits invokedynamic code that is better than using StringBuilder; see
JDK 9/JEP 280: String Concatenations Will Never Be the Same
If that doesn't help, increase the JVM's heap size or split the problem into smaller problems. Some problems just need lots of memory to solve.
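Here is a minimal sketch of the first idea (the file name and records are placeholders): each piece is written straight to the Writer as it is produced, so no huge intermediate String ever exists.

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class StreamToSink {
    // Write each record as it is produced instead of accumulating a giant String.
    static void writeReport(List<String> records, Writer out) throws IOException {
        for (String record : records) {
            out.write(record);
            out.write('\n');
        }
    }

    public static void main(String[] args) throws IOException {
        List<String> records = List.of("row 1", "row 2", "row 3"); // placeholder data
        try (BufferedWriter out = Files.newBufferedWriter(
                Paths.get("report.txt"), StandardCharsets.UTF_8)) {
            writeReport(records, out);
        }
    }
}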
First of all, I can't really show code, I am sorry; this software belongs to the company I work for, not to me. I will try to explain my problem the best I can.
I am developing a little application based on JavaFX that shows values in LineCharts. These are refreshed every 800ms-1000ms (0.8-1 seconds), and the app calls System.gc() every time I refresh (around once every 0.8-1 seconds).
I am having RAM usage peaks every 10-20 seconds:
In this specific example, this doesn't look like a problem, but in some cases it goes up to 700-750 MB (Making the Heap Size go up to 1.2-1.3 GB, and taking a long time to release it back to the OS).
I know about (and currently use, without noticing any huge improvement) heap tuning parameters, but I don't think they can fix the problem here; they help at specific points and slightly reduce the memory consumption, but do not solve the problem.
Any ideas on how I can design my code not to have these RAM peaks? I don't have a process that uses memory and releases it every 10-20 seconds, so I assume something else is allocating and releasing that amount of RAM (maybe JavaFX?). JVisualVM only shows int[], byte[] and char[], and I am not even using Integer values in my code (I work with Double values in this software).
Thank you all.
Sorry, but the only reasonable answer here is: you have to do profiling in order to understand where those peaks are coming from. You have to identify the root cause of this problem, and that is not something we can help with.
This program runs in your setup, with your data, and shows behavior that needs to be analyzed over time.
My guess would be that your program is creating large amounts of objects that are thrown away quickly afterwards (I guess you have those calls to System.gc() in there for a reason). And guess what: creating garbage at a high rate is a bad idea, because it keeps your GC constantly spinning, and it (obviously?!) contributes to high memory load.
So, as said: you have to identify the root cause and fix that. In that sense: you have to study the tooling you are using. An alternative to profiling might be to have the GC log its activities; and analyze that output. See here for some information on that.
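For example, the stock HotSpot JVM can write a GC log with flags like these (MyApp is a placeholder main class; the exact flags depend on the Java version):

Java 8 and earlier:
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log MyApp

Java 9 and later (unified logging):
java -Xlog:gc*:file=gc.log MyApp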
I found the solution:
Both MrSmith42 and GhostCat pointed out that calling System.gc() doesn't really help me here. They were right, in fact, that was the problem.
Removing System.gc() solved the problem for me
Thank you, MrSmith42 and GhostCat.
System.gc() does not trigger a garbage collection directly; it is more like a hint to the VM that you think performing a garbage collection would be a good idea. What your VM does is its own decision, based on the implementation.
Only when the VM runs out of memory is it sure to perform a garbage collection, and it does that without you calling System.gc().
A quite long discussion about this topic can be found here:
When does System.gc() do anything
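As a side note: if such a call is buried somewhere you cannot remove it (e.g. in a library), the HotSpot JVM has flags to neutralize or soften explicit GC requests (MyApp is a placeholder main class):

java -XX:+DisableExplicitGC MyApp
(System.gc() becomes a no-op)

java -XX:+ExplicitGCInvokesConcurrent MyApp
(System.gc() starts a concurrent cycle instead of a full stop-the-world collection)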
I'm using collection objects (ArrayList and HashMap, mainly).
My program runs 24*7. Sometimes in between, it throws java.lang.OutOfMemoryError: Java heap space.
I have already given 1 GB of heap space to the JVM.
My question is whether I should use global collection objects or local objects in each method.
(I have to process almost 1,000,000 records per day, continuously, 24*7.)
You could also set the heap space to 2 GB and see if the problem still occurs. That's the poor man's memory-leak detection process. Otherwise use a profiler like VisualVM and check for memory leaks.
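For instance (MyImportJob is a placeholder main class), the maximum heap is set with the standard -Xmx flag:

java -Xmx2g MyImportJob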
You can use a source code quality tool like Sonar.
You can also use the Eclipse Memory Analysis tool. It enables you to analyse a heap dump and figure out which objects are using the most memory: analyze production heap dumps with hundreds of millions of objects, quickly calculate the retained sizes of objects, see who is preventing the garbage collector from collecting objects, and run a report to automatically extract leak suspects.
I always use it to fix OutOfMemory exceptions.
All the answers were really helpful:
1. When the program must run 24*7, use local variables within each method.
2. The program must be thread-safe (if threads are used).
3. Use connection pooling, because creating and destroying a connection object on every pass of an endless loop is a big performance issue; instead, create 10 or 15 connections at the beginning and check them out and back in (see the sketch after this list).
4. Use a memory-analysis tool to analyse the heap dump and figure out which objects are using the most memory.
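A minimal sketch of point 3, using the HikariCP pooling library (one pooling option among several; the JDBC URL, pool size and query are placeholder choices, not something from the question):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PooledWorker {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost/mydb"); // placeholder URL
        config.setMaximumPoolSize(10); // ~10 connections created once, then reused

        try (HikariDataSource pool = new HikariDataSource(config)) {
            for (int i = 0; i < 1_000_000; i++) { // the long-running 24*7 loop
                // Check a connection out of the pool and back in on each pass,
                // instead of creating and destroying one every time.
                try (Connection con = pool.getConnection();
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT 1")) { // placeholder query
                    rs.next();
                }
            }
        }
    }
}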
You should use local variables unless a value really has to be shared across methods, and try to set global variables to null when their values will no longer be used (but be quite sure before nulling out an object).
Objects with no remaining references are easily garbage collected, which helps you avoid memory errors. Also review your code for infinite loops when iterating over collections, arrays, etc.
This is related to my question Java Excel POI stops after multiple execution by quartz.
My program stops unexpectedly after a few iterations. I tried profiling and found that I was consuming a lot of heap memory per iteration (and there is a memory leak somewhere... haven't found the bugger yet). So, as a temporary solution, I tried inserting System.gc(); at the end of each complete execution of the program (kindly read the linked question for a brief description of the program). I was not expecting much, maybe a bit more heap space available after each iteration. But it appears that the program uses less heap memory when I inserted System.gc();.
The top graph shows the program running with System.gc(); while the bottom graph is the one without. As you can see, the top graph shows that I'm only using less than 100 MB after 4 iterations of the program, while the bottom graph shows over 100 MB in usage for the same number of iterations. Can anyone clarify how and why System.gc(); causes this effect in my heap? Are there any disadvantages if I were to use this in my program? Or am I completely hopeless at programming and should take up photography instead?
Note that I inserted the GC call at the end of each program iteration, so I assume that heap usage must be the same as without the GC inserted until execution reaches the System.gc(); command.
Thanks!
Can anyone clarify how and why System.gc(); causes this effect in my heap?
System.gc() is a kind of request for the garbage collector to run; note that I said request and not trigger. Based on the heap state, the GC may or may not carry out a collection.
Are there any disadvantages if I were to use this in my program?
From experience, GC works best when left alone. In your example you shouldn't worry about or use System.gc(), because the GC will run when it is best to run, and manually requesting it might reduce performance. Even though the difference is small, you can observe that "time spent on GC" is better in the second graph than in the first.
As for memory, both graphs are OK. It seems your max heap is a bit high, which is why the GC did not run in the second graph. If a collection had really been required, the GC would have run it.
As per the Java specs, calling gc() does not guarantee that a collection will run; you only hint to the JVM that you would like one, so the result is unreliable (you should avoid calling gc(), no matter what). But in your case, since the heap is incrementally reaching critical limits, that is perhaps why your hints are being acted upon.
The GC usually runs based on specific algorithms to prevent the heap from being exhausted; when it fails to reclaim the much-needed space and there is no more heap left for your app to survive on, you'll face an OutOfMemoryError.
While the GC is running, your application will experience some pauses as a result of its activities, so you won't really want it to run more often!
Your best bet is to solve the leak and practice better memory management for a healthy runtime experience.
Using System.gc() shouldn't impact the heap size allocated to the JVM; heap size depends only on the startup arguments we provide to the JVM. I recommend you run the same program 3-4 times and take average values, with System.gc() and without.
Coming back to the problem of finding the memory leak: I recommend JProfiler or similar tools, which will show you the exact memory footprint and the different objects in the heap.
Last but not least: you are a reasonable programmer. No need to go for a photo shoot :)
I have a web app that serializes a java bean into xml or json according to the user request.
I am facing a mind-bending problem: when I put a little bit of load on it, it quickly uses all allocated memory and reaches max capacity. I then observe full GCs working really hard every 20-40 seconds.
It doesn't look like a memory leak issue... but I am not quite sure how to troubleshoot this.
The bean that is serialized to XML/JSON has references to other beans, and those to others. I use json-lib and JAXB to serialize the beans.
The YourKit memory profiler is telling me that char[] is the most memory-consuming live object...
Any insight is appreciated.
There are two possibilities: you've got a memory leak, or your webapp is just generating lots of garbage.
The brute-force way to tell if you've got a memory leak is to run it for a long time and see if it falls over with an OOME. Or turn on GC logging, and see if the average space left after garbage collection continually trends upwards over time.
Whether or not you have a memory leak, you can probably improve performance (reduce the percentage GC time) by increasing the max heap size. The fact that your webapp is seeing lots of full GCs suggests to me that it needs more heap. (This is just a bandaid solution if you have a memory leak.)
If it turns out that you are not suffering from a memory leak, then you should take a look at why your application is generating so much garbage. It could be down to the way that you are doing the XML and JSON serialization.
Why do you think you have a problem? GC is a natural and normal thing to happen. We have customers that GC every second (for less than 100ms duration), and that's fine as long as memory keeps getting reclaimed.
GCing every 20-40 seconds isn't a problem IMO - as long as it doesn't take a large % of that 20-40s. Most major commercial JVMs aim to keep GC in the 5-10% of time range (so 1-4 seconds of that 20-40s). Posting more data in the form of the GC logs might help, and I'd also suggest tools like GCMV would help you visualize and get recommendations on what your GC profile looks like.
It's impossible to diagnose this without a lot more information - code and GC logs - but my guess would be that you're reading data in as large strings, then chopping out little bits with substring(). When you do that, the substring is made using the same underlying character array as the parent string, so as long as it is alive, it will keep that whole array in memory. (Note: this sharing applies to Java 6 and early Java 7; since 7u6, substring() copies the characters.) That means code like this:
String big = new String(new char[1_000_000]); // a string of one million characters
String small = big.substring(0, 1);
big = null;
Will still keep the huge string's character data in memory. If this is the case, then you can address it by forcing the small strings to use fresh, smaller, character arrays by constructing new instances:
small = new String(small);
But like I said, this is just a guess.
I'm not sure how much of it is in your code and how much might be in the tools you are using, but there are some key things to watch for.
One of the worst is if you constantly add to strings in loops. A simple "hello" + "world" is no problem at all; the compiler is actually very smart about that. But if you concatenate in a loop, it will constantly reallocate the string. Use StringBuilder where you can.
There are profilers for Java that should quickly point you to where the allocations are taking place. Just fool around with a profiler for a while as your Java app is running and you will probably be able to reduce your GCs to virtually nothing, unless the problem is inside your libraries; and even then you may figure out some way to fix it.
Things you allocate and then free quickly don't require time in the GC phase--it's pretty much free. Be sure you aren't keeping Strings around longer than you need them. Bring them in, process them and return to your previous state before returning from your request handler.
You should attach YourKit and record allocations (e.g., every 10th allocation, including all large ones). They have a step-by-step guide on diagnosing excessive GC:
http://www.yourkit.com/docs/90/help/excessive_gc.jsp
To me that sounds like you are trying to serialize a recursive (or at least very deep, almost recursive) object graph with an encoder that is not prepared for it.
Java's native XML API is really "noisy" and generally wasteful in terms of resources, which means that if your requests and XML/JSON generation cycles are short-lived, the GC will have a lot to clean up.
I have debugged a very similar case and found this out the hard way; the only way I could at least somewhat improve the situation without major refactoring was explicitly calling GC with the appropriate VM flags, which actually turn System.gc(); from a no-op call into a maybe-op call.
I would start by inspecting my running application to see what was being created on the heap.
HPROF can collect this information for you, which you can then analyse using HAT.
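For example (a sketch; MyApp is a placeholder main class), the legacy HPROF agent can write a binary heap dump when the application exits, which HAT, or its successor jhat, can then serve as a browsable web page:

java -agentlib:hprof=heap=dump,format=b MyApp
(writes java.hprof on exit)

jhat java.hprof
(browse the dump at http://localhost:7000)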
To debug memory-allocation issues, InMemProfiler can be used at the command line. It tracks object allocations and splits collected objects into buckets based on their lifetimes.
In trace mode, this tool can be used to identify the sources of memory allocations.