Can writing too many logs cause an OutOfMemory error? - java

I am using Java, with slf4j/log4j for logging. Just out of curiosity, I want to know: can writing too many logs cause a Java OutOfMemory error?
What is too much here?
Hmm, say 100 MB of logs in 10 minutes while doing heavy db operations (I know this information is not sufficient for a pinpoint answer, but I am not looking for a pinpoint answer; I just want to know in general whether this situation can ever arise).
Also, my logging process is not asynchronous.

You are getting an error that the heap is exhausted, right? Not a PermGen error?
Clearly the logging lib is not leaking, but there's no question that if you wrote code that created tons of strings in a really short amount of time you could exhaust memory. You would think that would cause a PermGen error, though, as it's hard to imagine your logging objects alone would be enough to sack the heap.
Excessive db operations are a typical OOME culprit, returning large result sets in particular. I wouldn't call that a memory leak; leaks usually cause failures after a long time running. If you have a table with a million records and 20 columns and try to return all of it in one query, it will run until the heap is gone and crash the first time, with no leak or reference issues involved.
How to avoid OOM (Out of memory) error when retrieving all records from huge table?
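To illustrate the large-result-set point above, here is a minimal sketch of streaming rows from MySQL instead of materializing the whole result set; the connection URL, table, and column names are invented for the example, and the Integer.MIN_VALUE fetch size is a MySQL Connector/J-specific hint (other drivers expect a positive fetch size).

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingQuery {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost/mydb", "user", "password");  // placeholder credentials
             Statement stmt = conn.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {

            // With Connector/J, this fetch size makes the driver stream rows
            // one at a time instead of buffering the entire result set.
            stmt.setFetchSize(Integer.MIN_VALUE);

            try (ResultSet rs = stmt.executeQuery("SELECT id, payload FROM big_table")) {
                while (rs.next()) {
                    process(rs.getLong("id"), rs.getString("payload"));
                }
            }
        }
    }

    private static void process(long id, String payload) {
        // Handle one row at a time; nothing accumulates on the heap.
    }
}

Processed this way, even a million-row table never needs more than one row's worth of heap at a time.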

I want to know whether writing too many logs can cause a Java OutOfMemory error?
I very much doubt it.
If you have a problem with OOMEs, the most likely cause is a memory leak. You should investigate this methodically, rather than (basically) guessing at possible causes.
(If the memory leak was in one of the logging libraries, that would be a bug in the library. This is not impossible, but the chances are that such a bug would have been found and fixed a long time ago.)
Reference: General strategy to resolve Java memory leak?

Related

How to analyze heapdump with common leak suspect

The application is hitting a slowness issue and generated some heap dump files. The heap dump file is 1.2GB, and I need to run ha456.jar with 8.4GB of RAM just to be able to open it.
Before this, when I analyzed a heap dump, I would look at the bigger LeakSize entries and check the Leak Suspect value, and I could see which class or method of my application was holding the big memory. Then I would try to fix the code so that it ran with better performance.
This time, I can't really tell which module/method of my application is causing the out-of-memory issue. The following are some screenshots from my HeapAnalyzer:
To me, they just show common classes, for example java/lang/Object, java/lang/Long, or java/util/HashMap. I can't really tell which method of my application is the cause.
I would appreciate your advice on how to analyze this.
Finding a memory leak is always very difficult, even for someone sitting in front of the code, let alone from this far away. So I can only give you some suggestions:
You have a heap dump: filter it down to your own classes and analyze which ones are created in the greatest numbers.
Run your application and monitor it with VisualVM, exercise the application a little and then force a GC run... 9 times out of 10, the objects whose count does not drop significantly, or never goes back down, are your memory leak (one way to capture a heap dump programmatically at that point is sketched below).
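If it helps, here is a minimal sketch of capturing a heap dump programmatically on a HotSpot/OpenJDK JVM, using the standard HotSpotDiagnosticMXBean; the output path is just an example, and in practice jmap or VisualVM can produce the same file.

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    // Writes a .hprof heap dump that can be opened in HeapAnalyzer, MAT, VisualVM, etc.
    public static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, true); // true = dump only live (reachable) objects
    }

    public static void main(String[] args) throws Exception {
        dump("/tmp/app-heap.hprof"); // example location
    }
}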
This may be happening because a lot of records of type Long are read from somewhere like a database or a queue. There could be a cartesian join or something of that sort. Once I had a ton of strings causing an OOM, and the culprit was a logger accumulating log messages.
A couple of thoughts:
When you get the OOM error, trace it back to the suspect method.
Get a thread dump and see which threads are active and what they are executing (a minimal programmatic sketch follows).
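As a small sketch of the thread-dump suggestion, assuming you want to do it from inside the JVM rather than with an external tool such as jstack:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // dumpAllThreads(lockedMonitors, lockedSynchronizers) returns one
        // ThreadInfo per live thread, including its state and stack frames.
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}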

Used heap grows from 150MB to 2GB in half a second

Hi guys,
as you can see, I have a strange spike of used heap that never returns to the normal level. In the image, in 0.27 seconds the used heap grows from 100MB to 706MB. We tried to give more memory to Tomcat, but the problem remains the same, except that now the used heap grows from 150MB to 1.7GB.
We are monitoring the situation in every way we know, but neither the memory monitor nor the various logs have given us a solution.
Do you have a hint?
Thank you,
Marco
Are you getting out-of-memory errors? If not, this is a non-issue. Many things use caching and will take up as much memory as you give them in order to improve performance. So unless you are getting out-of-memory errors or GC thrashing, this is not something to be concerned about.

Can extensive exception logging cause java.lang.OutOfMemoryError: GC overhead limit exceeded

I'm getting java.lang.OutOfMemoryError: GC overhead limit exceeded in my production server.
The only thing I noticed in the log file is that there are a great many exceptions with full stack traces thrown by a method that is not able to find some id (which is part of the business logic).
I'm using org.slf4j for logging.
So my question is: can extensive logging cause this issue, or should I be focusing on some other parts to check for the memory leak?
For anything like this you need real information, not guesses. Use a profiler (NetBeans and most other IDEs have one built in) and it can tell you exactly where the memory has gone. For what it's worth, unless you have something very odd in your setup, logging is unlikely to cause the issue. Whatever happened in the exception, though, may (or may not) be connected.
Logging, extensive as it may be, only involves short-lived objects, which will reach the Old Generation only in extreme cases. On the other hand, a GC overhead limit exceeded error means that practically all of the heap is strongly reachable and there are only a few objects left to reclaim. The GC must work hard to identify those few objects, and must do so very often.
Therefore your extensive logging may contribute to your problem and exacerbate it, but it will almost never be its true source. You must find what is permanently occupying your heap.
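One thing that keeps the garbage produced by heavy logging cheap is slf4j's parameterized messages, which the question's setup already supports. A minimal sketch, with an invented class and method, contrasting it with eager string concatenation:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService { // hypothetical class, for illustration only
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    void handleMissingId(String id, Exception e) {
        // Parameterized message: the final String is only built if WARN is
        // enabled, and the exception's stack trace is logged once, by slf4j.
        log.warn("No record found for id {}", id, e);

        // Avoid eager concatenation like the commented line below: it builds
        // the message String (creating garbage) even when the level is disabled.
        // log.debug("No record found for id " + id + " in " + this);
    }
}

Even so, as the answer above says, these are short-lived objects; parameterization reduces GC pressure but will not fix a heap that is already full of reachable data.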

Java Memory Leak Due to Massive Data Processing

I am currently developing an application that processes several files, each containing around 75,000 records (stored in binary format). When this app is run (manually, about once a month), roughly 1 million records are contained in the files in total. The files are put in a folder, you click process, and it stores everything into a MySQL database (table_1).
The records contain information that needs to be compared to another table (table_2) containing over 700k records.
I have gone about this a few ways:
METHOD 1: Import Now, Process Later
In this method, I would import the data into the database without any processing against the other table. However, when I wanted to run a report on the collected data, it would crash with what I assume is a memory leak (1 GB used in total before the crash).
METHOD 2: Import Now, Use MySQL to Process
This is what I would like to do, but in practice it didn't turn out so well. Here I would write the logic for finding the correlations between table_1 and table_2. However, the MySQL result is massive and I couldn't get a consistent output, with MySQL sometimes giving up.
METHOD 3: Import Now, Process Now
I am currently trying this method, and although the memory leak is subtle, it still only gets to about 200,000 records before crashing. I have tried numerous forced garbage collections along the way, destroying classes properly, etc. It seems something is fighting me.
I am at my wits' end trying to solve the memory leak / app crashing issue. I am no expert in Java and have yet to really deal with very large amounts of data in MySQL. Any guidance would be extremely helpful. I have put thought into these approaches:
Break the processing of each line into its own class, hopefully releasing any memory used per line
Some sort of stored routine where once a line is stored into the database, MySQL does the table_1 <=> table_2 computation and stores the result
But I would like to pose the question to the many skilled Stack Overflow members to learn properly how this should be handled.
I concur with the answers that say "use a profiler".
But I'd just like to point out a couple of misconceptions in your question:
The storage leak is not due to massive data processing. It is due to a bug. The "massiveness" simply makes the symptoms more apparent.
Running the garbage collector won't cure a storage leak. The JVM always runs a full garbage collection immediately before it decides to give up and throw an OOME.
It is difficult to give advice on what might actually be causing the storage leak without more information on what you are trying to do and how you are doing it.
The learning curve for a profiler like VisualVM is pretty small. With luck, you'll have an answer - at least a very big clue - within an hour or so.
You properly handle this situation by either:
generating a heap dump when the app crashes and analyzing it in a good memory profiler, or
hooking up the running app to a good memory profiler and looking at the heap.
I personally prefer yjp, but there are some decent free tools as well (e.g. jvisualvm and NetBeans).
Without knowing too much about what you're doing: if you're running out of memory, there's likely some point where you're storing everything in the JVM, but you should be able to do a data processing task like this without the severe memory problems you're experiencing. In the past, I've seen data processing pipelines that run out of memory because one class reads everything out of the db, wraps it all up in a nice collection, and then passes it off to another, which of course requires all of the data to be in memory simultaneously. Frameworks are good at hiding this sort of thing.
Heap dumps / digging with VisualVM haven't been terribly helpful for me, as the details I'm looking for are often hidden - e.g. if you've got a ton of memory filled with maps of strings, it doesn't really help to be told that Strings are the largest component of your memory usage; you sort of need to know who owns them.
Can you post more detail about the actual problem you're trying to solve?
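In the spirit of METHOD 3, here is a minimal sketch of importing in fixed-size batches so the full data set never sits on the heap at once; the table and column names are taken loosely from the question, and RecordReader/Record are hypothetical stand-ins for the real binary-file reader.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchImporter {
    private static final int BATCH_SIZE = 1000;

    // Hypothetical helper types standing in for the real file reader and record.
    interface RecordReader { List<Record> readNext(int max); }
    interface Record { long id(); String payload(); }

    // Reads the files in chunks and inserts each chunk, so at most BATCH_SIZE
    // records are held on the heap at any one time.
    void importAll(RecordReader reader, Connection conn) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement insert = conn.prepareStatement(
                "INSERT INTO table_1 (id, payload) VALUES (?, ?)")) {
            List<Record> chunk = reader.readNext(BATCH_SIZE);
            while (!chunk.isEmpty()) {
                for (Record r : chunk) {
                    insert.setLong(1, r.id());
                    insert.setString(2, r.payload());
                    insert.addBatch();
                }
                insert.executeBatch();
                conn.commit(); // flush the batch; nothing accumulates in the JVM
                chunk = reader.readNext(BATCH_SIZE);
            }
        }
    }
}

The table_1 <=> table_2 correlation can then be done per batch (or pushed into SQL), instead of after everything has been loaded into memory.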

Tune the Java GC so that it immediately throws an OOME rather than slowing down indefinitely

I've noticed that sometimes, when memory is nearly exhausted, the GC tries to complete at any cost to performance (causing the program to nearly freeze, sometimes for multiple minutes), rather than just throwing an OOME (OutOfMemoryError) immediately.
Is there a way to tune the GC concerning this aspect?
Slowing the program down to nearly zero speed makes it unresponsive. In certain cases it would be better to have a response of "I'm dead" rather than no response at all.
Something like what you're after is built into recent JVMs.
If you:
are using Hotspot VM from (at least) Java 6
are using the Parallel or Concurrent garbage collectors
have the option UseGCOverheadLimit enabled (it's on by default with those collectors, so more specifically if you haven't disabled it)
then you will get an OOM before actually running out of memory: if more than 98% of recent time has been spent in GC for recovery of <2% of the heap size, you'll get a preemptive OOM.
Tuning these parameters (the 98% in particular) sounds like it would be useful to you; however, as far as I'm aware there is no way to tune those thresholds.
However, check that you qualify under the three points above; if you're not using those collectors with that flag, this may help your situation.
It's worth reading the HotSpot JVM tuning guide, which can be a big help with this stuff.
I am not aware of any way to configure the Java garbage collector in the manner you describe.
One way might be for your application to proactively monitor the amount of free memory, e.g. using Runtime.freeMemory(), and declare the "I'm dead" condition if that drops below a certain threshold and can't be rectified with a forced garbage collection cycle.
The idea is to pick the value for the threshold that's large enough for the process to never get into the situation you describe.
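A minimal sketch of that watchdog idea, assuming an arbitrary 5% free-heap threshold that you would tune for your own process:

import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MemoryWatchdog implements Runnable {
    private static final double MIN_FREE_RATIO = 0.05; // assumed threshold, tune per application

    @Override
    public void run() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        // Compare against maxMemory(), the most the heap is allowed to grow to.
        double freeRatio = 1.0 - (double) used / rt.maxMemory();
        if (freeRatio < MIN_FREE_RATIO) {
            System.err.println("I'm dead: less than 5% of the heap is free");
            // e.g. flip a health-check flag or start a controlled shutdown here
        }
    }

    public static void main(String[] args) {
        Executors.newSingleThreadScheduledExecutor()
                 .scheduleAtFixedRate(new MemoryWatchdog(), 1, 1, TimeUnit.SECONDS);
    }
}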
I strongly advise against this; Java trying to GC rather than immediately throwing an OutOfMemoryError makes far more sense - don't make your application fall over unless every alternative has been exhausted.
If your application is running out of memory, you should be increasing your max heap size or looking at its performance in terms of memory allocation and seeing if it can be optimised.
Some things to look at would be:
Use weak references in places where your objects would not be required if not referenced anywhere else.
Don't allocate larger objects than you need (i.e. storing a huge array of 100 objects when you are only going to need access to three of them throughout the array's lifecycle), and don't use a long datatype when you only need to store eight values.
Don't hold onto references to objects longer than you would need!
Edit: I think you misunderstand my point. If you accidentally leave a live reference to an object that no longer needs to be used, it will obviously still not be garbage collected. This has nothing to do with nulling references just in case - a typical example would be using a large object for a specific purpose, but when it goes out of scope it is not GC'd because a live reference has accidentally been left somewhere else, somewhere you don't know about, causing a leak. A typical example of this is a hashtable lookup, which can be solved with weak references, as the entry becomes eligible for GC when the key is only weakly reachable.
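For the hashtable-lookup case, a minimal sketch using java.util.WeakHashMap; the cache and the Document/Metadata types are made up for illustration:

import java.util.Map;
import java.util.WeakHashMap;

public class MetadataCache {
    // Keys are held by weak references: once a Document is no longer strongly
    // referenced anywhere else, its entry becomes eligible for GC instead of
    // pinning the metadata in memory forever. Note the value must not hold a
    // strong reference back to the key, or the entry will never be collected.
    private final Map<Document, Metadata> cache = new WeakHashMap<>();

    Metadata metadataFor(Document doc) {
        return cache.computeIfAbsent(doc, d -> loadMetadata(d));
    }

    private Metadata loadMetadata(Document doc) {
        return new Metadata(); // stand-in for the real, expensive lookup
    }

    static class Document {}
    static class Metadata {}
}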
Regardless, these are just general ideas off the top of my head on how to improve performance with memory allocation. The point I am trying to make is that asking how to throw an OutOfMemory error more quickly, rather than letting the Java GC try its best to free up space on the heap, is not a great idea IMO. Optimize your application instead.
Well, it turns out there has been a solution since Java 8 b92:
-XX:+ExitOnOutOfMemoryError
When you enable this option, the JVM exits on the first occurrence of an out-of-memory error. It can be used if you prefer restarting an instance of the JVM rather than handling out of memory errors.
-XX:+CrashOnOutOfMemoryError
If this option is enabled, when an out-of-memory error occurs, the JVM crashes and produces text and binary crash files (if core files are enabled).
A good idea is to combine one of the above options with the good old -XX:+HeapDumpOnOutOfMemoryError
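For example (the jar name here is just a placeholder), a launch command combining them might look like:
java -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -jar myapp.jar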
I tested these options, they actually work as expected!
Links
See the feature description
See List of changes in that Java release
