Tomcat memory usage grows in IDLE - java

I have a problem with growing memory consumption on Tomcat.
Just after startup nothing happens, but once a user logs in, memory usage starts growing in Eden. PermGen does not grow, but it still looks abnormal to me.
My analysis shows that the RMI TCP Connection thread produces lots of Object[], char[] and String[] objects. I can't understand what's wrong or where to dig. What starts this thread? Is it the Postgres connections, and what is it for?

This is normal, and is NOT a memory leak. Objects are created and destroyed constantly by the threads used to manage the application. You see the memory increasing because the JVM garbage collector is not eagerly reclaiming unused memory. This happens periodically (based on previous statistics) or when memory is running low. If it were a real memory leak, you would not see the Eden memory usage drop down to almost zero after a collection. A memory leak is shown as the lowest point (right after a GC) increasing over time.

What you are observing is the observation itself:
The JVM gathers statistical information about itself and sends it to you. This consumes memory and uses the RMI transfer facilities.
What is RMI TCP Accept, Attach Listener, and Signal Dispatcher in Visual VM?
I also don't see a problem with what that image shows. Eden basically always grows slowly, since there is always a bit of work going on that consumes memory.
Once Eden gets collected (~200MB worth towards the end) you can see that most of the memory is completely freed, and very little (~8MB) ends up in the survivor spaces, since there are probably still references to those objects. But they don't seem to stay in survivor, since OldGen is not growing; the histogram at the bottom also shows that typical survivor objects reach age 2 and are gone after that.
This all looks pretty normal to me.
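If you want to check this pattern from code rather than from VisualVM's graph, the standard java.lang.management API exposes the same per-pool numbers. A small sketch (the class name is made up; the exact pool names, such as "G1 Eden Space" or "PS Eden Space", depend on which collector the JVM is running):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class EdenWatcher {
    public static void main(String[] args) {
        // Print every memory pool the JVM reports; on HotSpot one of them
        // is the Eden space, whose "used" value shows the sawtooth pattern.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            System.out.printf("%-25s used=%,d committed=%,d%n",
                    pool.getName(), usage.getUsed(), usage.getCommitted());
        }
    }
}
```

Sampling these values periodically and looking at the post-GC low points is exactly the "lowest point after a GC" test described above.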

Related

Zig-zag heap memory patterns in Akka http scala service

I have an Akka HTTP-based service written in Scala. The service works as a proxy for an API call. It creates a host connection pool for calling the API using
https://doc.akka.io/docs/akka-http/current/client-side/host-level.html
The service is integrated with NewRelic and has the attached snapshots
I would like to understand the reasons for this kind of zig-zag pattern even when there is no traffic on the service and the connections in the host pool get terminated because of the idle timeout.
Moreover, I would also like to know: does a full GC only occur after a threshold is reached, say 7GB, or can it also occur at some other time when there is no traffic?
The service has an Xmx of 8GB. There are also multiple dispatchers (fork-join-executor) performing various tasks.
First, your graphs show a very healthy application. This "chainsaw" pattern is overall seen as a very good thing, without much to worry about.
When exactly a Full GC is going to happen is a bit hard to predict (I would use the word impossible, too). When your "live" objects have nowhere to move (because there is simply no space for that), a Full GC may be triggered. There are certain thresholds of when a concurrent phase (marking) is going to be initiated, but if that results in a Full GC or not is decided later.
Considering that G1 also resizes regions (makes them fewer or more numerous) based on heuristics, and the fact that it can also shrink or grow your heap (up to -Xmx), the exact conditions under which a Full GC might happen are not easy to predict (I guess some GC experts who know the exact internal details might be able to do that). Also, G1GC can do mixed collections: it collects young regions plus some of the old regions (not all of them), which is still far better than a Full GC time-wise.
Unfortunately, your point about no traffic is correct. When there is very limited traffic, you might not get a Full GC, but immediately as traffic comes in, such a thing might happen. Old regions might slowly build up during your "limited traffic" and as soon as you have a spike - surprise. There are ways to trigger a Full GC on demand, and though I have heard of such applications that do this - I have not worked with one in practice.
In general with a GC that's not reference-counting, you'll see that zig-zag pattern because memory is only reclaimed when a GC runs.
G1 normally only collects areas of the heap where it expects to find a lot of garbage relative to live objects ("garbage collection" is a bit of a misnomer: it actually involves collecting the live objects and (in the case of a relocating garbage collector like G1) moving the live objects to a different area of the heap, which allows the area it collected in to then be declared ready for new allocations; therefore the fewer live objects it needs to handle, the less work it needs to do relative to the memory freed up).
At a high level, G1 works by defining an Eden (a young generation) where newly created objects are allocated, dividing Eden into multiple regions with each thread mapped to a region. When a region fills up, only that region is collected, with the survivors being moved into a survivor space (this is a simplification). This continues until the survivor space is full, at which point the survivor and eden regions are collected, with the surviving survivors promoted to the old generation; and when the old generation fills up, you get a full GC.
So there isn't necessarily a fixed threshold where a full GC will get triggered, but in general the more heap gets used up, the more likely it becomes that a full GC will run. Beyond that, garbage collectors on the JVM tend to be more or less autonomous: most will ignore System.gc and/or other attempts to trigger a GC.
Conceivably with G1, if you allocated a multi-GiB array at startup, threw away the reference, and then after every period of idleness reallocated an array of the same size as the one you allocated at startup and then threw away the reference, you'd have a decent chance of triggering a full GC. This is because that array is big enough to bypass eden and go straight to the old generation where it will consume heap until the next full GC. Eventually there won't be enough contiguous free space in the old generation to allocate these arrays, and that will trigger a full GC. The only complications to this approach are that:
You'll eventually have to outsmart the JIT optimizer, which will see that you're allocating this array and throwing it away and decide that it doesn't actually have to allocate the array
If you have a long enough busy time that a full GC ran since the last allocate-and-throw-away, there's no guarantee that the allocation of the large array will succeed after a full GC, which will cause an OOM.
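The allocate-and-discard idea above could be sketched roughly like this (the class and method names are made up for illustration; sizing the array from Runtime.maxMemory() is an assumption, and whether the array actually counts as "humongous" depends on G1's region size):

```java
public class ForceFullGc {
    // A volatile sink keeps the JIT from proving the allocation is dead
    // and eliding it entirely (the "outsmart the optimizer" point above).
    static volatile byte[] sink;

    // Allocate a large array, then immediately drop the reference.
    // Under G1, arrays larger than half a region are "humongous" and
    // are placed straight into old-gen regions.
    static void allocateAndDiscard(long maxHeapBytes) {
        int size = (int) Math.min(Integer.MAX_VALUE - 8, maxHeapBytes / 4);
        sink = new byte[size];
        sink = null; // the array is now garbage, but it occupied old-gen space
    }

    public static void main(String[] args) {
        allocateAndDiscard(Runtime.getRuntime().maxMemory());
        System.out.println("allocated and discarded a large array");
    }
}
```

Note the second caveat from the answer still applies: if a full GC already ran, the next large allocation may fail with an OOM rather than trigger another collection.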

Need advice on tuning Java Garbage Collection

So, the gist of it is: a version of an application at my company has been having some memory issues lately, and I'm not fully sure of the best way to fix it that isn't just "allocate more memory", so I wanted to get some guidance.
For the application, it looks like the eden heap is getting full pretty quickly when there are concurrent users, so objects that won't be alive very long end up in the old heap. After running for a while, the old heap simply gets full and never seems to be cleaned up automatically, but manually running the garbage collection in VisualVM will clear it out (so I assume this means the old heap is full of dead objects).
Is there a suggested setting I could add so that garbage collection runs on the old heap once it reaches a certain threshold? And are there any pitfalls in changing the old/eden ratio from the stock 2:1 to 1:1? For the application, the majority of objects created are what I would consider short-lived (from milliseconds to a few minutes).
It looks like the eden heap is getting full pretty quickly when there are concurrent users, so objects that won't be alive very long end up in the old heap.
This is called "premature promotion"
After running for a while, the old heap simply gets full,
When it fills, the GC triggers a major or even a full collection.
never seems to automatically clean up
In which case, the memory is either still in use or the old gen is not completely full. It might appear to be almost full, but the GC will run when it is actually full.
but manually running the garbage collection in VisualVM will clear it out
So the old gen was almost, but not actually, full.
I could add so garbage collection gets run on the old heap once it gets to a certain threshold?
You can run System.gc(), but this means more work for your application and slows it down. You don't want to be doing this.
If you use the CMS collector you can change the threshold at which it kicks in but unless you need low latency you might be better off leaving your settings as they are.
And is there any pitfalls from changing the old/edin ratio from the stock 2:1 to 1:1?
If you reduce the old gen, you may halve the number of GCs you perform and double the amount of time an object can live without ending up in the old gen.
I work in the low latency space and usually set the young space to 24 GB and the old gen to 2 GB. I also use a lot of off heap data so I don't need much old gen. This is not an average use case, but it can work depending on your requirements.
If you are using < 32 GB, just adding a few more GB may be the simplest answer. Also, you can use something like -Xmn4g -Xmx6g to set the young space and the maximum heap and not worry about ratios.
For the application, the majority of objects created are what I would consider short lived (From milliseconds to a few minutes)
In that case, ideally you want your eden space large enough so you have a minor collection every few minutes. This way most of your objects will die in the eden space, and not be copied around.
Note: in extreme cases it is possible to have an application produce less than one GB per hour of garbage and run all day with a 24 GB Eden space without even a minor collection.
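To find out how often minor collections actually run for your allocation rate, the GarbageCollectorMXBean counters give a rough picture. A small illustrative sketch (the class name is made up):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCounter {
    // Sum collection counts across all registered collectors
    // (young and old generation collectors are reported separately).
    static long totalCollections() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();
            if (count > 0) total += count; // -1 means "undefined"
        }
        return total;
    }

    // Allocate short-lived garbage; returning the byte total keeps the
    // JIT from treating the allocations as dead code.
    static long churn(int iterations) {
        long bytes = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] garbage = new byte[1024];
            bytes += garbage.length;
        }
        return bytes;
    }

    public static void main(String[] args) {
        long before = totalCollections();
        long allocated = churn(1_000_000); // ~1 GiB of short-lived garbage
        long after = totalCollections();
        System.out.println("allocated " + allocated + " bytes, GCs during churn: " + (after - before));
    }
}
```

If the GC count climbs quickly during normal load, eden is too small relative to the allocation rate, which is exactly the premature-promotion scenario described in this answer.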

Is it a memory leak if the garbage collector runs abnormally?

I have developed a J2ME web browser application, and it is working fine. I am testing its memory consumption, and it seems to me that it has a memory leak: the green curve representing consumed memory in the memory monitor (of the Wireless Toolkit) reaches the maximum allocated memory (which is 687,768 bytes) every 7 requests made by the browser (i.e. every time the end user navigates 7 pages in the browser). After that the garbage collector runs and frees the allocated memory.
My question is:
Is it a memory leak when the garbage collector runs automatically every 7 page navigations?
Do I need to run the garbage collector (System.gc()) manually once per request to prevent the maximum allocated memory from being reached?
Please guide me, thanks.
To determine if it is a memory leak, you would need to observe it more.
From your description, i.e. that once the maximum memory is reached the GC kicks in and is able to free memory for your application to run, it does not sound like there is a leak.
Also, you should not call the GC yourself, since:
it is only a hint to the JVM, and
it could interfere with the underlying algorithm and hurt its performance.
You should instead focus on why your application needs so much memory in such a short period.
My question is: is it a memory leak when the garbage collector runs automatically every 7 page navigation?
Not necessarily. It could also be that:
your heap is too small for the size of problem you are trying to solve, or
your application is generating (collectable) garbage at a high rate.
In fact, given the numbers you have presented, I'm inclined to think that this is primarily a heap size issue. If the interval between GC runs decreased over time, then THAT would be evidence that pointed to a memory leak, but if the rate stays steady on average, then it would suggest that the rate of memory usage and reclamation are in balance; i.e. no leak.
Do I need to run the garbage collector (System.gc()) manually one time per request to prevent the maximum allocated memory to be reached?
No. No. No.
Calling System.gc() won't cure a memory leak. If it is a real memory leak, then calling System.gc() will not reclaim the leaked memory. In fact, all you will do is make your application RUN A LOT SLOWER ... assuming that the JVM doesn't ignore the call entirely.
Direct and indirect evidence that the default behaviour of HotSpot JVMs is to honour System.gc() calls:
"For example, the default setting for the DisableExplicitGC option causes JVM to honor Explicit garbage collection requests." - http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.express.doc/info/exp/ae/rprf_hotspot_parms.html
"When JMX is enabled in this way, some JVMs (such as Sun's) that do distributed garbage collection will periodically invoke System.gc, causing a Full GC." - http://static.springsource.com/projects/tc-server/2.0/getting-started/html/ch11s07.html
"It is best to disable explicit GC by using the flag -XX:+DisableExplicitGC." - http://docs.oracle.com/cd/E19396-01/819-0084/pt_tuningjava.html
And from the Java 7 source code:
./openjdk/hotspot/src/share/vm/runtime/globals.hpp
product(bool, DisableExplicitGC, false, \
"Tells whether calling System.gc() does a full GC") \
where the false is the default value for the option. (And note that this is in the OS / M/C independent part of the code tree.)
I wrote a library that makes a good effort to force the GC. As mentioned before, System.gc() is asynchronous and won't do anything by itself. You may want to use this library to profile your application and find the spots where too much garbage is being produced. You can read more about it in this article where I describe the GC problem in detail.
That is (semi) normal behavior. Available (unreferenced) storage is not collected until the size of the heap reaches some threshold, triggering a collection cycle.
You can reduce the frequency of GC cycles by being a bit more "heap aware". Eg, a common error in many programs is to parse a string by using substring to not only parse off the left-most word, but also shorten the remaining string by substringing to the right. Creating a new String for the word is not easily avoided, but one can easily avoid repeatedly substringing the "tail" of the original string.
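The substring anti-pattern described above can be made concrete. Here is a sketch contrasting the tail-substring approach with an index-based version that walks the original string (the class and method names are hypothetical):

```java
public class WordParser {
    // Heap-unfriendly: re-creates the shrinking "tail" string on every pass.
    static int countWordsBySubstring(String s) {
        int count = 0;
        String rest = s;
        while (!rest.isEmpty()) {
            int space = rest.indexOf(' ');
            if (space < 0) { count++; break; }
            count++;                          // rest.substring(0, space) would be the word
            rest = rest.substring(space + 1); // allocates a new tail string every time
        }
        return count;
    }

    // Heap-friendly: track a position in the original string instead.
    static int countWordsByIndex(String s) {
        int count = 0, from = 0;
        while (from < s.length()) {
            int space = s.indexOf(' ', from);
            if (space < 0) { count++; break; }
            count++;
            from = space + 1;
        }
        return count;
    }
}
```

Both produce the same result, but the second allocates nothing per word except the word itself (if you extract it), so it generates far less garbage per parse.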
Running System.gc() will accomplish nothing; on most platforms it's a no-op, since it's so commonly abused.
Note that (outside of brain-dead Android) you can't have a true "memory leak" in Java (unless there's a serious JVM bug). What's commonly referred to as a "leak" in Java is the failure to remove all references to objects that will never be used again. E.g., you might keep putting data into a chain and never clear the pointers to the stuff at the far end of the chain that is no longer going to be used. The resulting symptom is that the MINIMUM heap used (i.e., the size immediately after GC runs) keeps rising each cycle.
Adding to the other excellent answers:
Looks like you are confusing memory leak with garbage collection.
Memory leak is when unused memory cannot be garbage collected because it still has references somewhere (although they're not used for anything).
Garbage collection is when a piece of software (the garbage collector) frees unreferenced memory automatically.
You should not call the garbage collector manually because that would affect its performance.
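A minimal illustration of the kind of "leak" defined above: a long-lived collection that only ever grows, so its contents stay strongly reachable and can never be garbage collected (the names here are made up for the example):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // The classic Java "leak": a static collection that is only ever
    // added to. Everything in it stays referenced, so the GC cannot
    // reclaim it, and the post-GC heap floor rises over time.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[10_000]); // added but never removed
    }

    static int size() {
        return CACHE.size();
    }
}
```

Contrast this with normal garbage: an object that nothing references anymore is collectable, and the GC will reclaim it on its own schedule.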

Possible Memory Leak with JOGL using VBOs

We are currently developing an application which visualizes huge vector fields (> 250'000) on a sphere/plane in 4D. To speed up the process we are using VBOs for the vertices, normals and colors. To prepare the data before sending down to the GPU we are using Buffers (FloatBuffer, ByteBuffer, etc..).
Some data to the cylinders:
Each cylinder uses 16 * 9 + 16 * 3 = 192 floats -> 192 * 4 Bytes = 768 bytes.
After sending down the vertices we are doing the following cleanup:
// clear all buffers
vertexBufferShell.clear();
indexBufferShell.clear();
vertexBufferShell = null;
indexBufferShell = null;
We have monitored it with JConsole and found out that the garbage collector is not run "correctly". Even if we reduce the cylinder count, the memory does not get freed up. In the JConsole monitoring tool there is a button to run the GC, and if we do that manually it frees up the memory (if we have loaded a huge number of cylinders and then decrease it a lot, sometimes over 600mb gets cleaned by the GC).
Here an image of the JConsole:
Now the question is: how can we clean up these buffers ourselves in code? Calling the clear method and setting the reference to null is not enough. We have also tried to call System.gc(), but with no effect. Do you have any idea?
There are any number of reasons the memory usage could increase. I would say it's not a memory leak unless the memory increases every time you perform this operation. If it only occurs the first time, it may be that this library needs some memory to load.
I suggest you take a heap dump or at least jmap -histo:live before and after to see where the memory increase is.
If you use a memory profiler like VisualVM or YourKit it will show you where and why memory is being retained.
It's not really a memory leak if the GC is able to clean it up. It might be a waste of memory, but your app seems to be configured to allow it to use over 800MB of heap. This is a trade-off between garbage collection performance and memory usage. You could also try simply running your application with a smaller heap size.
There might not be a memory leak; objects may just be going to the tenured generation (the area where objects that survive a minor GC go).
The big steps you see might be the young Eden filling up and a minor GC then moving live objects to tenured.
You can also try to tune the garbage collector and the memory settings.
You might have plenty of medium-lived objects that are constantly promoted to tenured and only released in a full GC. If you size the generations well, those objects die in a minor GC.
There are plenty of JVM arguments to do this.
A good place to look is here.
This one is suitable for you:
-XX:NewSize=2.125m
Default size of new generation (in bytes)
[5.0 and newer: 64 bit VMs are scaled 30% larger; x86: 1m; x86, 5.0 and older: 640k]
Regards.
The JVM will not free any objects until it has to (e.g. -Xmx reached). That's one of the main concepts behind all the GCs you can find in the current JVM; they are all optimised for throughput, even the concurrent one. I see nothing unusual in the GC graph.
It would be a leak if the used heap after a full GC grew constantly over time; if it doesn't, all is good.
In short: foo = null; will not release the object, only the reference. The GC can free the memory whenever it likes.
also:
buffer.clear() does not clear the buffer; it sets pos=0 and limit=capacity, that's all. Please refer to the javadoc for more info.
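This is easy to verify with a small snippet: after clear(), the data is still there, and only the position and limit have been reset.

```java
import java.nio.FloatBuffer;

public class BufferClearDemo {
    public static void main(String[] args) {
        FloatBuffer buf = FloatBuffer.allocate(8);
        buf.put(1.0f).put(2.0f).put(3.0f);
        System.out.println("before clear: pos=" + buf.position() + " limit=" + buf.limit());

        buf.clear(); // resets pos=0 and limit=capacity; the contents are NOT erased
        System.out.println("after clear:  pos=" + buf.position() + " limit=" + buf.limit());
        System.out.println("data survives: " + buf.get(0));
    }
}
```

So clearing the buffer does nothing to release the memory backing it; only dropping all references and a later GC (or, for direct buffers, the JVM's own cleanup) does that.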
VisualVM +1
have fun :)
(offtopic: if the buffers are large and long-lived you should allocate them as direct buffers, outside the Java heap; Buffers.newDirectFloatBuffer() would be one of those utility methods in the latest gluegen-rt)

How to ensure JVM starts with value of Xms

When I run a Java program with a starting heap size of 3G (set by the -Xms3072m VM argument), the JVM doesn't start with that size. It starts with 400m or so and then keeps acquiring more memory as required.
This is a serious problem for me. I know the JVM is going to need the said amount after some time. And when the JVM grows its memory as needed, it slows down; while the JVM acquires more memory, a considerable amount of time is spent in garbage collection. And I suppose memory acquisition is an expensive task.
How do I ensure that the JVM actually respects the start heap size parameter?
Update: This application creates lots of objects, most of which die quickly. Some resulting objects are required to stay in memory (they get transferred out of the young heap). During this operation, all these objects need to be in memory. After the operation, I can see that all the objects in the young heap are reclaimed successfully. So there are no memory leaks.
The same operation runs smoothly once the heap size reaches 3G. That clearly indicates the extra time required is spent acquiring memory.
This is Sun JDK 5.
If I am not mistaken, Java tries to get the reservation for the memory from the OS. So if you ask for 3 GB as Xms, Java will ask the OS, if this is available but not start with all the memory right away... it might even reserve it (not allocate it). But these are details.
Normally, the JVM runs up to the Xms size before it starts serious old generation garbage collection. Young generation GC runs all the time. Normally GC is only noticeable when old gen GC is running and the VM is in between Xms and Xmx or, in case you set it to the same value, hit roughly Xmx.
If you need a lot of memory for short-lived objects, increase that memory area by setting the young area to, let's say, 1 GB with -XX:NewSize=1g, because it is costly to move the "trash" from the young "buckets" into the old gen. In case it has not turned into real trash yet, the JVM checks for garbage, does not find any, copies it between the survivor spaces, and finally moves it into the old gen. So try to suppress the check for garbage in the young gen when you know that you do not have any, and postpone it somehow...
Give it a try!
I believe your problem is not coming from where you think.
It looks like what's costing you the most are the GC cycles, not the allocation of heap size, given that you are indeed creating and deleting lots of objects.
You should be focusing your effort on profiling, to find out exactly what is costing you so much, and work on refactoring that.
My hunch - object creation and deletion, and GC cycles.
In any case, -Xms should be setting minimum heap size (check this with your JVM if it is not Sun). Double-check to see exactly why you think it's not the case.
I have used Sun's VM started with the minimum set to 14 gigs, and it does start off with that.
Maybe you should try setting both the Xms and Xmx values to the same amount, i.e. try this:
-Xms3072m -Xmx3072m
Why do you think the heap allocation is not right? An operating system tool showing only 400m does not mean it isn't allocated.
I don't really get what you are after. Is the 400m and above already a problem, or is it expected that your program needs that much? If you really need to deal with that much memory, and it seems you need a lot of objects, then you can do several things:
If the memory consumption doesn't match your gut feeling for the right amount, then you are probably leaking memory. That would explain why it "slows down" over time. Maybe you forgot to remove objects from some structure, so they don't get garbage collected and slow down lookups and such.
Your memory settings may themselves be the trouble. Garbage collection is not run per se; it is only triggered when some threshold is reached. If you give it a big heap setting and your operating system has plenty of memory, garbage collection does not run often.
The characteristics you mention would fit a scenario where a lot of objects are created and deleted again shortly after; otherwise garbage collection wouldn't be a problem (with a generational GC, that means you have only "young" objects). Consider using an object pool if you need objects only for a short period of time. That would eliminate the garbage collection altogether.
If you know there are good moments in your code for running the GC, you can consider running it manually to see whether it changes anything. This is what you would need:
Runtime r = Runtime.getRuntime();
r.gc();
This is just for debugging purposes. The gc is doing a great job most of the time so there shouldn't be the need to invoke the gc on your own.
