Recently I came across this article on GC tuning for Jenkins, which talks about this parameter: -XX:SoftRefLRUPolicyMSPerMB
https://jenkins.io/blog/2016/11/21/gc-tuning/
I understand it helps prevent OOM errors, because it clears soft reference objects when the threshold is reached.
1) What does this threshold (default = 1000 ms in -XX:SoftRefLRUPolicyMSPerMB) mean? What does this value denote?
2) My Jenkins heap seems to be about 80% soft references (observed using HPROF).
3) As suggested in the above article, if I reduce this -XX:SoftRefLRUPolicyMSPerMB flag to 10 ms, what will be the consequence?
NOTE: We use G1GC.
Thanks,
Harry
1) From Oracle:
Starting with 1.3.1, softly reachable objects will remain alive for
some amount of time after the last time they were referenced. The
default value is one second of lifetime per free megabyte in the heap.
This value can be adjusted using the -XX:SoftRefLRUPolicyMSPerMB flag,
which accepts integer values representing milliseconds. For example,
to change the value from one second to 2.5 seconds, use this flag:
-XX:SoftRefLRUPolicyMSPerMB=2500
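To make that concrete, here is a rough worked example using the rule quoted above (the 512 MB figure is just an illustration):

    free_heap = 512 MB, ms_per_mb = 1000 (the default)
    lifetime  = 512 MB * 1000 ms/MB = 512,000 ms, i.e. about 8.5 minutes

So with 512 MB of free heap, a softly reachable object accessed within the last ~8.5 minutes should survive; lowering the flag to 10 would shrink that window to about 5 seconds.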
3) The article you linked to addresses this directly: you'll likely free up heap space sooner, at the cost of potentially sacrificing some performance. What more can we tell you?
If Jenkins consumes excessive old generation memory, it may help to
make soft references easier to flush by reducing
-XX:SoftRefLRUPolicyMSPerMB from its default (1000) to something smaller (say 10-200). The catch is that SoftReferences are often used
for objects that are relatively expensive to load, such as lazy-loaded
build records and pipeline FlowNode data
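For reference, the flag is passed like any other JVM option when starting Jenkins (a hedged example; the war path and the value 10 are placeholders to adapt to your setup):

    java -XX:+UseG1GC -XX:SoftRefLRUPolicyMSPerMB=10 -jar jenkins.war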
Related
Is there any condition under which an application will never perform garbage collection? Theoretically, is it possible to have such an application design?
Yes, there is. Please read about memory leaks in Java. An example is described in Effective Java, Item 6: "Eliminate obsolete object references".
Garbage collection happens to objects that are no longer referenced by your application.
With Java 11, there is a way to purposely never perform garbage collection: run your JVM with the newly introduced Epsilon GC, a garbage collector which handles memory allocation but never releases the allocated memory.
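For example (assuming a JDK 11+ build that includes Epsilon; MyApp is a placeholder), you would enable it like this, and the JVM throws OutOfMemoryError once the heap is exhausted instead of ever collecting:

    java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx4g MyApp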
There is at least one product on the market that implements high-frequency trading using Java and JVM technology.
Obviously, an application that needs to react in microseconds can't afford a garbage collector to kick in and halt the system for arbitrary periods of time.
In this case, the solution was to write the whole application to never create objects that turn into garbage. For example, all input data is kept in fixed byte arrays (that are allocated once at start time) which are then used as buffers for all kinds of processing.
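A minimal sketch of that style (my own illustration under those assumptions, not the actual product's code): all buffers are allocated once up front and reused for every message, so steady-state processing creates no garbage.

    import java.io.IOException;
    import java.io.InputStream;

    public class ZeroAllocProcessor {
        // Allocated once at start-up; reused for every incoming message.
        private final byte[] inputBuffer = new byte[64 * 1024];

        // Reads a message into the preallocated buffer and processes it in place.
        public void onData(InputStream in) throws IOException {
            int len = in.read(inputBuffer); // fill the fixed buffer, no allocation
            for (int i = 0; i < len; i++) {
                // ... inspect inputBuffer[i] in place; no objects are created ...
            }
        }
    }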
Unless I am mistaken, you can listen to more details on the Software Engineering Radio podcast. I think it should be this episode: http://www.se-radio.net/2016/04/se-radio-episode-255-monica-beckwith-on-java-garbage-collection/
Is there any condition under which an application will never perform garbage collection?
You can prevent the GC from running by having a thread which never reaches a safepoint.
Unless you use a concurrent collector, a GC is only performed when a memory region fills, e.g. when the Eden or tenured space fills.
If you make these large enough, and your garbage rate low enough, the GC won't need to run for long stretches; you can then perform a GC overnight or in a maintenance window, or simply restart the process.
Theoretically, is it possible to have such an application design?
I have worked on applications which GC less than once per day (and some of them are restarted every day).
For example, say you produce 300 KB of garbage per second, or about 1 GB per hour; with a 24 GB Eden size you can run for a whole day without a collection.
In reality, if you move most of your data off-heap, e.g. into Chronicle Map or Queue, you might find that a 4 GB heap can run for a day or even a week with only a minor collection.
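For instance (a hedged sketch, not a recommendation; exact sizing depends entirely on your measured allocation rate), a configuration along these lines gives roughly a day of headroom at the 1 GB/hour garbage rate described above:

    java -Xms26g -Xmx26g -Xmn24g MyApp   # ~24 GB young generation; MyApp is a placeholder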
My simple goal: monitor the memory usage of a Java application so I can be warned when the application is getting dangerously close to throwing an OutOfMemoryError.
Yes, simple to state, but coming up with a correct solution seems very complicated. Some of the complicating factors are:
There are different heap regions, each of which can throw an OutOfMemoryError:
The permgen space, which has its own size limit (set via -XX:MaxPermSize=)
The overall heap space (set via -Xmx)
The VM may allocate almost all of the heap before bothering to garbage collect. If the application uses a lot of soft references, then in fact this will surely happen. So just a high heap allocation percentage does not imply the application is near to throwing an OutOfMemoryError.
It would be nice if System.gc() guaranteed that the VM would reclaim all possibly reclaimable objects (unreferenced and/or weakly referenced objects), but it doesn't. So invoking System.gc() and then Runtime.freeMemory() is not reliable.
Objects that are queued for finalization take up memory, but are (usually) freed after they are finalized. So whether the finalizer thread has gotten to them or not affects the (apparent) memory usage (does the VM run the finalizer as a last desperate act before throwing OOM? Doesn't look like it.)
Native code takes up memory as well and too much usage of it can lead to OOM (this is not a likely case in my specific application, but does add another complication to the overall picture).
So what is a good and reliable way to answer the question: is my Java application getting close to throwing an OutOfMemoryError?
Put another way, suppose application version X runs fine and has no memory leak, but version X + 1 has a slow unrecognized memory leak. I'd like to be alerted by this monitoring before version X + 1 throws an OutOfMemoryError, but I'd like the exact same monitoring to not give false positives for version X. There may be some tuning required in setting up this monitoring - that's OK.
One possible answer might be something like: what is the maximum, over the past N "full" GC runs, of the heap utilization immediately after the GC run? If this value exceeds X% of the total allocated memory, then sound the alarms.
The idea is to capture "application memory usage" in a simple number, like a percentage, or even something like LOW, MEDIUM, or HIGH, and then monitor this value.
The jstat command gives lots of relevant information, the problem is boiling it down to a simple answer and avoiding false positives (or negatives) caused by the complicating factors listed above.
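For example, jstat can sample utilization percentages for each region on a fixed interval (the PID 12345 below is a placeholder); the O column (old-generation occupancy), read just after full GCs, is essentially the post-GC baseline described above:

    jstat -gcutil 12345 5000    # sample GC utilization of PID 12345 every 5000 ms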
If you watch a memory graph of a long-running application (collected with a tool like jconsole, for example) you'll see a characteristic sawtooth pattern: memory usage climbs, then is GC'd back to a baseline, and then it climbs again. For a healthy app, the peaks and valleys are in two straight horizontal lines. For a leaking app, though, the baseline climbs. That's really what you need to watch for: if each successive GC is less effective than the last, then something is rotten in Denmark.
Search the Oracle docs page for the term Detecting Low Memory and Threshold Notifications -- you may be able to devise some alert system based upon built-in MXBeans. Garbage collection appears to be a focus of at least some of the metrics collection.
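One concrete way to build such an alert (a sketch using the standard java.lang.management API; the 80% threshold and the class name are my own choices to tune): it arms a collection-usage threshold, which the JVM evaluates against heap occupancy measured immediately after a GC, exactly the post-GC metric proposed above.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryNotificationInfo;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;
    import javax.management.Notification;
    import javax.management.NotificationEmitter;

    public class LowMemoryAlarm {
        public static void install() {
            // Arm a threshold on every heap pool that supports "collection usage",
            // i.e. usage sampled right after a GC (the old gen typically does).
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP
                        && pool.isCollectionUsageThresholdSupported()) {
                    long max = pool.getUsage().getMax();
                    if (max > 0) {
                        pool.setCollectionUsageThreshold((long) (max * 0.8));
                    }
                }
            }
            // The MemoryMXBean emits a notification when a threshold is crossed.
            NotificationEmitter emitter =
                    (NotificationEmitter) ManagementFactory.getMemoryMXBean();
            emitter.addNotificationListener((Notification n, Object handback) -> {
                if (MemoryNotificationInfo.MEMORY_COLLECTION_THRESHOLD_EXCEEDED
                        .equals(n.getType())) {
                    System.err.println("Heap still above 80% after GC: " + n.getMessage());
                }
            }, null, null);
        }
    }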
I have developed a J2ME web browser application, and it is working fine. I am testing its memory consumption, and it seems to me that it has a memory leak: the green curve that represents consumed memory in the memory monitor (of the Wireless Toolkit) reaches the maximum allocated memory (687,768 bytes) every 7 requests made by the browser (i.e. when the end user navigates from one page to another for 7 pages); after that the garbage collector runs and frees the allocated memory.
My question is:
Is it a memory leak when the garbage collector runs automatically every 7 page navigations?
Do I need to run the garbage collector (System.gc()) manually once per request to prevent the maximum allocated memory from being reached?
Please guide me, thanks
To determine if it is a memory leak, you would need to observe it more.
From your description, i.e. that once the maximum memory is reached, the GC kicks in and is able to free memory for your application to run, it does not sound like there is a leak.
Also, you should not call the GC yourself, since:
a call to System.gc() is only a hint to the JVM, and
it could interfere with the underlying GC algorithm and hurt its performance.
You should instead focus on why your application needs so much memory in such a short period.
My question is: is it a memory leak when the garbage collector runs automatically every 7 page navigation?
Not necessarily. It could also be that:
your heap is too small for the size of problem you are trying to solve, or
your application is generating (collectable) garbage at a high rate.
In fact, given the numbers you have presented, I'm inclined to think that this is primarily a heap size issue. If the interval between GC runs decreased over time, then THAT would be evidence that pointed to a memory leak, but if the rate stays steady on average, then it would suggest that the rate of memory usage and reclamation are in balance; i.e. no leak.
Do I need to run the garbage collector (System.gc()) manually one time per request to prevent the maximum allocated memory to be reached?
No. No. No.
Calling System.gc() won't cure a memory leak. If it is a real memory leak, then calling System.gc() will not reclaim the leaked memory. In fact, all you will do is make your application RUN A LOT SLOWER ... assuming that the JVM doesn't ignore the call entirely.
Direct and indirect evidence that the default behaviour of HotSpot JVMs is to honour System.gc() calls:
"For example, the default setting for the DisableExplicitGC option causes JVM to honor Explicit garbage collection requests." - http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/topic/com.ibm.websphere.express.doc/info/exp/ae/rprf_hotspot_parms.html
"When JMX is enabled in this way, some JVMs (such as Sun's) that do distributed garbage collection will periodically invoke System.gc, causing a Full GC." - http://static.springsource.com/projects/tc-server/2.0/getting-started/html/ch11s07.html
"It is best to disable explicit GC by using the flag -XX:+DisableExplicitGC." - http://docs.oracle.com/cd/E19396-01/819-0084/pt_tuningjava.html
And from the Java 7 source code:
./openjdk/hotspot/src/share/vm/runtime/globals.hpp
product(bool, DisableExplicitGC, false, \
"Tells whether calling System.gc() does a full GC") \
where false is the default value for the option. (And note that this is in the OS- and machine-independent part of the code tree.)
I wrote a library that makes a good effort to force the GC. As mentioned before, System.gc() is asynchronous and won't do anything by itself. You may want to use this library to profile your application and find the spots where too much garbage is being produced. You can read more about it in this article where I describe the GC problem in detail.
That is (semi) normal behavior. Available (unreferenced) storage is not collected until the size of the heap reaches some threshold, triggering a collection cycle.
You can reduce the frequency of GC cycles by being a bit more "heap aware". E.g., a common error in many programs is to parse a string by using substring to not only parse off the left-most word, but also to shorten the remaining string by substringing the tail. Creating a new String for the word is not easily avoided, but one can easily avoid repeatedly substringing the "tail" of the original string, as sketched below.
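A hedged illustration of both versions (line and handleWord are hypothetical stand-ins for your input and per-word processing):

    // Wasteful: every iteration allocates a brand-new String for the shrinking tail.
    String rest = line;
    while (!rest.isEmpty()) {
        int sp = rest.indexOf(' ');
        handleWord(sp < 0 ? rest : rest.substring(0, sp));
        rest = sp < 0 ? "" : rest.substring(sp + 1);   // garbage on every pass
    }

    // Heap-friendlier: walk an index over the original string instead.
    int pos = 0;
    while (pos < line.length()) {
        int sp = line.indexOf(' ', pos);
        int end = sp < 0 ? line.length() : sp;
        handleWord(line.substring(pos, end));          // only the word is allocated
        pos = end + 1;
    }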
Running System.gc() will accomplish nothing -- on most platforms it's a no-op, since it's so commonly abused.
Note that (outside of brain-dead Android) you can't have a true "memory leak" in Java (unless there's a serious JVM bug). What's commonly referred to as a "leak" in Java is the failure to remove all references to objects that will never be used again. E.g., you might keep putting data into a chain and never clear pointers to the stuff on the far end of the chain that is no longer going to be used. The resulting symptom is that the MINIMUM heap used (i.e., the size immediately after GC runs) keeps rising each cycle.
Adding to the other excellent answers:
Looks like you are confusing a memory leak with garbage collection.
A memory leak is when unused memory cannot be garbage collected because something still holds references to it (even though those references are never used for anything).
Garbage collection is when a piece of software (the garbage collector) frees unreferenced memory automatically.
You should not call the garbage collector manually because that would affect its performance.
I want to write a cache using SoftReferences using as much memory as possible, as long as it doesn't get too inefficient.
Trying to estimate the used size by calculating object sizes, or by getting some used-memory approximation from the JVM, is a dead end.
The javadoc even states that SoftReferences are good for memory-aware caches, but there is no hard rule on how a JVM implementation must handle SoftReferences. I'm only talking about the Oracle implementation of the JVM (version 6u22 and above, and version 7).
Now my questions (please feel free to answer partial, grouped or in any way you please):
Does the JVM take the last access of the object into account and only remove the old ones? Javadoc states: Virtual machine implementations are, however, encouraged to bias against clearing recently-created or recently-used soft references.
What happens when memory gets tight? The JVM panics and just eats all objects?
Is there a parameter for telling the JVM to only eat as much as needed to survive (no OOMEs) and live healthy (not having the CPU only run the GC)?
I don't think there is a defined order (I'm not sure, though, about the exact order of events).
But what happens with soft references is that they are always guaranteed to be released before an OutOfMemoryError is thrown, unless you have a hard reference pointing to the same objects.
But you should be aware that you might try to access them and find they are gone. My guess is that the garbage collector will just clear the first soft reference that frees the amount needed for the operation.
Although SoftReferences are a cool feature, I personally don't dare use them in large projects where I don't know the memory requirements of every other component. Will a memory-hogging SoftReference cache make other parts perform badly?
Instead of using SoftReferences, I'd consider using EHCache. It lets you limit the size of particular caches in terms of number of entries, or, even better, in terms of bytes used in memory (a new feature in the upcoming version 2.5). Different eviction strategies can be configured as well, such as LRU. There's lots you can configure with EHCache.
If you're using Spring, then version 3.1 will also provide you with some nice @Cacheable method-level annotations; EHCache can be used as the caching implementation there.
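A minimal sketch of that annotation (assuming Spring 3.1+ with a cache named "builds" configured against an EHCache backend; BuildRecord and expensiveLoad are hypothetical):

    import org.springframework.cache.annotation.Cacheable;

    public class BuildService {
        // The result is cached under the id key; the method body
        // only runs on a cache miss.
        @Cacheable("builds")
        public BuildRecord loadBuild(String id) {
            return expensiveLoad(id); // hypothetical expensive loader
        }
    }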
What happens when memory gets tight? The JVM panics and just eats all objects?
I know for a fact that with the Oracle 1.6 JVM this is not the case. I am aware of a situation where a server that processes concurrent requests uses a response that contains the actual data inside a soft reference. I have observed that when a low-memory situation is reported by one thread, the other threads' soft references continue to hold on to their contents (the referenced objects).
Is there a parameter for telling the JVM to only eat as much as needed to survive (no OOMEs) and live healthy (not having the CPU only run the GC)?
What is enough to survive? You mean that if X amount of memory is required, then reclaim soft references only until X is available? I didn't find any such tuning parameter, but as I said, the JVM does not seem to reclaim all soft references when it needs to reclaim one.
I'm in the middle of implementing a caching mechanism for my Android application.
I use SoftReference, like many examples I've found. The problem is that when I scroll up or down in my ListView, most of the images have already been cleared. I can see in LogCat that my application is garbage collected every time it loads new images. That means most of the non-visible images in the ListView are gone.
So, every time I scroll back to an earlier position (where I had already downloaded images), I have to download the images once again -- they're not cached.
I've also researched this topic. According to Mark Murphy in this article, it seems that there is (or was?) a bug with SoftReference. Some other results indicate the same thing (or the same result): SoftReferences are getting cleared too early.
Is there any working solution?
SoftReferences are the poor man's cache. The JVM can keep those references alive longer, but doesn't have to. As soon as there is no hard reference anymore, the JVM can garbage collect the soft-referenced object. The behavior of the JVM you're experiencing is correct, since the JVM doesn't have to keep such objects around longer. Of course, most JVMs try to keep softly referenced objects alive to some degree.
Therefore, SoftReferences are a kind of dangerous cache. If you really want to ensure caching behavior, you need a real cache, like an LRU cache. Especially if your caching is performance-critical, you should use a proper cache.
From Android Training site:
http://developer.android.com/training/displaying-bitmaps/cache-bitmap.html
In the past, a popular memory cache implementation was a SoftReference
or WeakReference bitmap cache, however this is not recommended.
Starting from Android 2.3 (API Level 9) the garbage collector is more
aggressive with collecting soft/weak references which makes them
fairly ineffective. In addition, prior to Android 3.0 (API Level 11),
the backing data of a bitmap was stored in native memory which is not
released in a predictable manner, potentially causing an application
to briefly exceed its memory limits and crash.
More information at the link above.
We should use LruCache instead.
Cache each image on persistent storage instead of just in memory.
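A minimal LruCache setup, adapted from the pattern shown on that Android Training page (the 1/8th sizing is an illustrative choice, not a hard rule):

    import android.graphics.Bitmap;
    import android.util.LruCache;

    // Use 1/8th of the app's available memory for this cache (sizes in KB).
    final int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
    final int cacheSizeKb = maxMemoryKb / 8;

    LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSizeKb) {
        @Override
        protected int sizeOf(String key, Bitmap bitmap) {
            // Measure entries by byte size rather than entry count.
            return bitmap.getByteCount() / 1024;
        }
    };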
Gamlor's answer is correct in your situation. However, for additional information, see the GC FAQ, question 32.
The Java HotSpot Server VM uses the maximum possible heap size (as set by the -Xmx option) to calculate free space remaining.
The Java HotSpot Client VM uses the current heap size to calculate the free space.
This means that the general tendency is for the Server VM to grow the heap rather than flush soft references, and -Xmx therefore has a significant effect on when soft references are garbage collected.
The JVM follows this simple inequality to determine whether a soft reference should be kept:
interval <= free_heap * ms_per_mb
interval is the duration between the last GC cycle's timestamp and the soft reference's last access timestamp.
free_heap is the heap space (in MB) available at that moment.
ms_per_mb is the number of milliseconds granted to every free MB in the heap (a constant, 1000 ms by default).
If the above inequality is false, the reference gets cleared.
So, even if you have a lot of free memory, if your soft references have not been accessed for long enough, they will get cleared.
The -XX:SoftRefLRUPolicyMSPerMB=<N> JVM argument can be used to tweak the ms_per_mb constant.
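A small demo of the policy (a sketch under stated assumptions: run it with a small heap such as -Xmx64m, and compare -XX:SoftRefLRUPolicyMSPerMB=10, which should usually print false, against the default 1000, which should usually print true):

    import java.lang.ref.SoftReference;
    import java.util.ArrayList;
    import java.util.List;

    public class SoftRefDemo {
        public static void main(String[] args) throws InterruptedException {
            SoftReference<byte[]> ref = new SoftReference<>(new byte[8 * 1024 * 1024]);
            Thread.sleep(2000); // let the reference sit "unused" for two seconds

            // Create allocation pressure. Note we deliberately don't call ref.get()
            // in the loop, because each get() would reset the last-access timestamp.
            List<byte[]> pressure = new ArrayList<>();
            for (int i = 0; i < 40; i++) {
                pressure.add(new byte[1024 * 1024]);
                System.gc(); // only a hint, but it nudges the policy check
            }
            System.out.println("Still alive: " + (ref.get() != null));
        }
    }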