Memory Leakage due to HashMap - java

I am facing a memory leak because of a HashMap in my code. When I first log in to the application this HashMap is populated, and I use it to cache some data.
I use this cached data in several places in my application.
Its size grows continuously after login, even when nothing is running in the application.
The size only decreases when the garbage collector runs automatically or when I invoke it myself.
But after that it starts increasing again. It is a memory leak for sure, but how can I avoid it?
My profiler also shows ResultSet.getString() and Statement.execute() as the memory-allocation hotspots; these methods are used to populate the cache.
Is the memory leak there because of these methods? I close the DB connection in a finally block.
Why is it still showing me these methods?

As the comments above explain, this does not sound like a memory leak.
In a Java application, the JVM will create objects and use up memory. As time goes on, some of those objects will go out of scope (become eligible for garbage collection), but until the next collection happens they will still be in the heap, 'using up memory'. This is not a problem; it is how Java works. When the JVM decides it needs to free up memory, it will run a collection and the used memory should drop.
Should you care about what you are seeing? I can think of two reasons why you should and one why you shouldn't. If your garbage collections free up enough memory for the application to keep running, the collections do not affect performance, and you are a busy person with other things to do, then I see no reason why you should care.
If, however, you are worried that you do not understand how the application works in detail, or you have a reason why "so much memory" is an issue (you will want to run the application with even more data in future, or with less heap assigned), then you may want to investigate.
If the memory is being used up when the application is doing nothing, then I would focus on that. What is it really doing when it is doing 'nothing'? I bet it's doing 'something'.
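This behavior is easy to see in a minimal sketch (class and method names below are my own, purely illustrative): objects that become unreachable still count as used heap until a collection actually runs.

```java
import java.util.ArrayList;
import java.util.List;

public class HeapDemo {
    // Currently used heap in bytes.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        List<byte[]> junk = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            junk.add(new byte[10_000]); // ~10 MB of short-lived allocations
        }
        junk = null; // everything above is now unreachable ...
        long before = usedHeap();
        System.gc(); // ... but is only reclaimed when a collection runs (gc() is just a hint)
        long after = usedHeap();
        System.out.printf("used before GC: %d bytes, after GC: %d bytes%n", before, after);
    }
}
```

On a typical run the "after" figure is noticeably lower, which is exactly the sawtooth pattern the question describes.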

Related

How to prioritize Java G1 garbage collection for memory over speed?

I have a Java/Spring data transfer service that reads records in from a .csv file, parses them, collates them in memory, and then loads them into a database.
Each run parses a file that contains ~800k records.
The service is deployed to a Kubernetes container. Neither the production environment nor the application design are ideal for this application, but I have to operate within some restrictions.
After two or three runs, the service crashes due to a memory error. We are seeing long garbage collection times, and assume that garbage collection is not keeping pace.
We are using the G1 garbage collector, and I want to tune the collection to prioritize memory over speed. I don't care how efficient or fast the service is, it only has to perform this data transfer a few times.
What settings will accomplish this?
We are seeing long garbage collection times, and assume that garbage collection is not keeping pace.
Long GC times are a symptom of the problem rather than the root cause of the problem. If the GC is simply not keeping up, that should not cause OOMEs.
(It is possible that heavy use of finalizers, Reference objects or the like makes it harder for the GC to keep up, but that is still a symptom. It seems unlikely that this is relevant in your use case.)
My theory is that the real cause of the long collection times is that your heap is too small. When your heap is nearly full, the GC has to run more and more often and is able to reclaim less and less space. That leads to long collection times. Then finally, you get an OOME because either you run out of heap space entirely, or because you hit the GC overhead threshold.
Another possibility is that your heap is too big for the available RAM ... and you are getting virtual memory thrashing.
In either case, simply tweaking the GC settings is not going to help. You need to identify the root cause of the problem before you can fix it.
My take is that either you have a memory leak, or not enough RAM, or there is a problem with your application's design.
On the design side, rather than reading / parsing the entire file as an in-memory data structure, use a streaming / event-based parser. Read records one at a time, process them and then discard them ... keeping as little information about them in memory as you can get away with. In other words, make the application less memory hungry.
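As a sketch of that streaming approach (the record layout, `parseLine` and `insertRecord` below are placeholders of mine, not the poster's actual code), each line is parsed and handed off immediately instead of being collated in memory first:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;

public class CsvStreamLoader {
    // Splits one CSV line; a real parser would handle quoting and escaping.
    static String[] parseLine(String line) {
        return line.split(",");
    }

    // Placeholder for the per-record database insert; in practice you
    // would add to a JDBC batch and flush it every few thousand rows.
    static void insertRecord(String[] fields) {
    }

    // Reads, processes and discards records one at a time, so memory use
    // stays flat regardless of file size.
    static long load(Reader source) {
        long count = 0;
        try (BufferedReader in = new BufferedReader(source)) {
            String line;
            while ((line = in.readLine()) != null) {
                insertRecord(parseLine(line));
                count++;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return count;
    }

    public static void main(String[] args) {
        long n = load(new StringReader("a,b\n1,2\n3,4\n"));
        System.out.println(n + " records processed");
    }
}
```

With this shape, heap demand is driven by one record at a time rather than by the 800k-record file.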

How to find which Finalizer is time consuming

I am working on an application whose purpose is to compute reports as fast as possible.
My application uses a large amount of memory; more than 100 GB.
Since our last release, I have noticed a big performance slowdown. My investigation shows that, during the computation, I get many garbage collections lasting between 40 and 60 seconds!
(JMC tells me that they are SerialOld, but I don't know exactly what that means) and, of course, while the JVM is garbage collecting, the application is completely frozen.
I am now investigating the origin of these garbage collections... and this is very hard work.
I suspect that, if these garbage collections are so long, it is because they are spending a lot of time in finalize() methods (I know that, among all the libraries we integrate from other teams, some of them use finalizers).
However, I don't know how to confirm (or refute) this hypothesis: how can I find which finalizer is time-consuming?
I am looking for a good tool or even a good methodology.
Here is data collected via JVisualVM
As you can see, I always have many "Pending Finalizers" when I have a long Old Garbage collection.
What is surprising is that when I am using JVisualVM, the above graph scrolls regularly from right to left. When the Old Garbage collection is triggered, the scrolling stops (up to this point it looks normal; this is stop-the-world). However, when the scrolling suddenly restarts, it does so not at the end of the Old Garbage collection but at the end of the Pending Finalizers.
This leads me to think that the finalizers were blocking the JVM.
Does anyone have an explanation for this?
Thank you very much
Philippe
My application uses a large amount of memory; more than 100 GB.
JMC tells me that they are SerialOld but I don't know what it exactly means
If you are using the serial collector for a 100 GB heap then long pauses are to be expected, because the serial collector is single-threaded and one core can only chomp through so much memory per unit of time.
Simply choosing any one of the multi-threaded collectors should yield lower pause times.
However, I don't know how to confirm (or refute) this hypothesis: how can I find which finalizer is time-consuming?
Generally: gather more data. For GC-related things you need to enable GC logging; for time spent in Java code (be it your application or third-party libraries) you need a profiler.
Here is what I would do to investigate your finalizer theory.
Start the JVM using your favorite Java profiler.
Leave it running for long enough to get a full heap.
Start the profiler.
Trigger garbage collection.
Stop profiler.
Now you can use the profiler information to figure out which (if any) finalize methods are using a large amount of time.
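One cheap data point before reaching for a full profiler: the JVM exposes the number of objects waiting for finalization through the standard MemoryMXBean, which you can poll after each old collection (sketch below; the class and method names are mine). A count that keeps growing suggests the finalizer thread cannot keep up.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FinalizerQueueProbe {
    // Number of objects whose finalize() has been scheduled but has not
    // run yet. A steadily growing value points at a finalizer backlog.
    static int pendingFinalizers() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        return mem.getObjectPendingFinalizationCount();
    }

    public static void main(String[] args) {
        System.out.println("pending finalizers: " + pendingFinalizers());
    }
}
```

The same figure is what JVisualVM plots as "Pending Finalizers", so this lets you log it over time without a GUI attached.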
However, I suspect that the real problem will be a memory leak, and that your JVM is getting to the point where the heap is filling up with unreclaimable objects. That could explain the frequent "SerialOld" garbage collections.
Alternatively, this could just be a big-heap problem. 100 GB is ... big.

Why does Android app RAM usage increase steadily forever? Even a Hello World app

I was working on my app when I noticed that its memory usage just keeps climbing... steadily... forever... and every once in a while the garbage collector kicks in and cuts down the memory usage.
Eventually, I found out that this is the case even for the template Hello World application as well.
Why is this happening? Is it happening for you? How can I make the memory usage stop increasing for no reason? I don't remember this being an issue before...I'm going nuts yo!
Here is a picture of the Hello World app, and its memory usage.
An Android app in its entirety doesn't consist of just client (your) code. An app also has a GUI aspect, which implies that it runs in a loop and needs to be re-drawn each time. It is easy to speculate that at least one object allocation happens inside that loop (there may well be more than one). Since the reference to such an object goes out of scope, the GC will only mark it and doesn't immediately free the memory. Hence, new objects constantly consume more of the available memory. When there is no free memory left to allocate, the GC takes care of the marked objects and there is free memory to grab again. This is roughly what you can see in the picture above.
This is normal behavior and not something you should be worried about. Generally speaking, most optimization and profiling should be done after the main application logic is complete.

Why do heap memory usage and number of loaded classes keep increasing?

I am using JVM Explorer - link to JVM Explorer - to profile my Spring application. I have the following questions about it.
Why does 'Used Heap Memory' keep increasing even after the application has started up and has not received any requests yet? (Image 1)
Why does 'Used Heap Memory' keep increasing even after garbage collection and before receiving any requests? (Image 2)
Why, after garbage collection, does the number of loaded classes increase when some requests are sent to the application? Isn't the application supposed to use the previously loaded classes? Why is almost everything (heap, number of loaded classes) just increasing? (Image 3)
After application starts up
After clicking on the 'Run Garbage Collector' button
After sending some requests to the application following completion of garbage collection
Why does 'Used Heap Memory' keep increasing even after the application has started up and has not received any requests yet? (Image 1)
Something in your JVM is creating objects. You would need a memory profiler to see what is doing this. It could be part of Swing, or your application, or another library.
BTW most profiling tools use JMX, which creates a lot of garbage. E.g. when I run Flight Recorder or VisualVM on some of my applications, it shows that the JMX monitoring is creating most of the garbage.
Why does 'Used Heap Memory' keep increasing even after garbage collection and before receiving any requests? (Image 2)
Whatever was creating objects is still creating objects.
Why, after garbage collection, does the number of loaded classes increase when some requests are sent to the application?
Classes are lazily loaded. Some classes are not needed until you do something.
Isn't the application supposed to use the previously loaded classes?
Yes, but this doesn't mean it won't need more classes.
Why is almost everything (heap, number of loaded classes) just increasing? (Image 3)
Your application is doing more work.
If you want to know what work the application is doing, I suggest using a memory profiler like VisualVM or Flight Recorder. I use YourKit for this sort of question.
Note: it takes hard work to tune a Java program so that it doesn't produce garbage, and I would say most libraries only try to reduce garbage if it causes a known performance problem.
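The lazy class loading mentioned above is easy to observe yourself via the standard ClassLoadingMXBean (sketch below; the class I load is just an arbitrary example of one your code may not have touched yet): the loaded-class count moves the first time a code path needs a class, not at startup.

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassLoadProbe {
    // Number of classes currently loaded in the JVM.
    static int loadedClassCount() {
        ClassLoadingMXBean bean = ManagementFactory.getClassLoadingMXBean();
        return bean.getLoadedClassCount();
    }

    public static void main(String[] args) throws Exception {
        int before = loadedClassCount();
        // Touching a class for the first time forces it to be loaded.
        Class.forName("java.util.zip.Adler32");
        int after = loadedClassCount();
        System.out.println("loaded classes: " + before + " -> " + after);
    }
}
```

This is why sending the first requests to a Spring application bumps the class count: handler, serialization and framework classes are pulled in on demand.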
I like @PeterLawrey's good answer, but it is missing this point:
The memory is primarily meant to be used, not spared. It may easily be the case that your application is just well written: it can work with a small amount of memory and re-create whatever it needs, but it can also exploit the fact that your system has a lot of memory, using all the memory available to it to work much more efficiently.
I can easily imagine that the thing which keeps taking up the memory is for instance a cache. If the cache contains a lot of data, the application works faster.
If you do not have issues like OutOfMemoryError, you do not necessarily have to worry. You should still be vigilant and inspect it further, but your described situation does not automatically mean that something is wrong.
It is analogous to the constant lamentation of Windows users that "I have bought more memory but my Windows uses it all up" - it is good when the memory is being used! That's what we buy it for!

Tune Java GC, so that it would immediately throw OOME, rather than slow down indeterminately

I've noticed that sometimes, when memory is nearly exhausted, the GC tries to complete at any cost in performance (causing the program to nearly freeze, sometimes for multiple minutes), rather than just throwing an OOME (OutOfMemoryError) immediately.
Is there a way to tune the GC concerning this aspect?
Slowing the program down to nearly zero speed makes it unresponsive. In certain cases it would be better to have a response of "I'm dead" rather than no response at all.
Something like what you're after is built into recent JVMs.
If you:
are using Hotspot VM from (at least) Java 6
are using the Parallel or Concurrent garbage collectors
have the option UseGCOverheadLimit enabled (it's on by default with those collectors, so more specifically if you haven't disabled it)
then you will get an OOME before actually running out of memory: if more than 98% of recent time has been spent in GC to recover less than 2% of the heap, you'll get a pre-emptive OOME.
Tuning these parameters (the 98% in particular) sounds like it would be useful to you; however, as far as I'm aware, there is no way to tune those thresholds.
However, check that you qualify under the three points above; if you're not using those collectors with that flag, this may help your situation.
It's worth reading the HotSpot JVM tuning guide, which can be a big help with this stuff.
I am not aware of any way to configure the Java garbage collector in the manner you describe.
One way might be for your application to proactively monitor the amount of free memory, e.g. using Runtime.freeMemory(), and declare the "I'm dead" condition if that drops below a certain threshold and can't be rectified with a forced garbage collection cycle.
The idea is to pick the value for the threshold that's large enough for the process to never get into the situation you describe.
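A minimal sketch of such a watchdog follows (the 5% threshold, class name and "I'm dead" action are all placeholder choices of mine, not a recommendation): it compares the heap still available against a floor, forcing a collection once before giving up.

```java
public class MemoryWatchdog {
    // Fraction of the max heap that must remain available; 5% is an
    // arbitrary example value - pick one large enough for your workload.
    static final double MIN_FREE_FRACTION = 0.05;

    // Heap that is free right now plus heap the JVM can still grow into.
    static long availableHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
    }

    static boolean healthy() {
        long floor = (long) (Runtime.getRuntime().maxMemory() * MIN_FREE_FRACTION);
        if (availableHeap() >= floor) {
            return true;
        }
        System.gc(); // one forced collection as a last resort; only a hint
        return availableHeap() >= floor;
    }

    public static void main(String[] args) {
        System.out.println(healthy() ? "OK" : "I'm dead");
    }
}
```

In a real service you would run this check on a timer thread and trigger whatever "declare myself dead" action (health-check failure, process exit) fits your deployment.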
I strongly advise against this; Java trying to GC rather than immediately throwing an OutOfMemoryError makes far more sense - don't make your application fall over unless every alternative has been exhausted.
If your application is running out of memory, you should be increasing your max heap size or looking at its performance in terms of memory allocation and seeing whether it can be optimised.
Some things to look at would be:
Use weak references in places where your objects would not be required if not referenced anywhere else.
Don't allocate larger objects than you need (i.e. storing a huge array of 100 objects when you are only going to need access to three of them through the array's lifecycle, or using a long datatype when you only need to store eight values).
Don't hold onto references to objects longer than you would need!
Edit: I think you misunderstand my point. If you accidentally leave a live reference to an object that no longer needs to be used, it will obviously still not be garbage collected. This has nothing to do with nulling 'just in case' - a typical example would be using a large object for a specific purpose, but when it goes out of scope it is not collected because a live reference has accidentally been left elsewhere, somewhere you don't know about, causing a leak. A typical example of this is a hashtable lookup, which can be solved with weak references, as the entry will be eligible for GC once it is only weakly reachable.
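A sketch of that weak-reference fix for the lookup-table case (illustrative names; whether a particular System.gc() call actually clears the entry is up to the JVM): with a WeakHashMap, the map's key does not by itself keep the entry alive.

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheDemo {
    // Builds a weak-keyed cache with one live key and reports its size.
    static int sizeWhileKeyReferenced() {
        Map<Object, String> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, "expensive-to-recompute value");
        return cache.size(); // 1: the key is still strongly referenced
    }

    public static void main(String[] args) {
        Map<Object, String> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, "expensive-to-recompute value");
        key = null;  // drop the last strong reference to the key
        System.gc(); // the entry is now only weakly reachable and may be cleared
        System.out.println("entries after GC: " + cache.size());
    }
}
```

Note that this only helps when nothing else holds the key strongly; a String literal key, for instance, is interned and would never be collected.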
Regardless, these are just general ideas off the top of my head on how to improve performance with memory allocation. The point I am trying to make is that asking how to throw an OutOfMemoryError sooner, rather than letting the Java GC try its best to free up space on the heap, is not a great idea IMO. Optimise your application instead.
Well, turns out, there is a solution since Java8 b92:
-XX:+ExitOnOutOfMemoryError
When you enable this option, the JVM exits on the first occurrence of an out-of-memory error. It can be used if you prefer restarting an instance of the JVM rather than handling out of memory errors.
-XX:+CrashOnOutOfMemoryError
If this option is enabled, when an out-of-memory error occurs, the JVM crashes and produces text and binary crash files (if core files are enabled).
A good idea is to combine one of the above options with the good old -XX:+HeapDumpOnOutOfMemoryError
I tested these options, they actually work as expected!
Links
See the feature description
See List of changes in that Java release
