Use Memory Effectively and Reduce Memory Usage - Java

How can we reduce the amount of memory used by the application?
I profiled the application's memory consumption and found that the app consumes about 35 megabytes of phone memory, which seems inefficient.

You could use a Java heap analyzer to identify the parts of your application that use the most memory. You can then either optimize your data structures, or release parts of the data by setting all references to it to null.
Unintended references to data that is no longer needed are also referred to as "memory leaks". Setting those references to null allows the garbage collector to remove the data from the Java heap.
Using WeakReferences can also help here.
A weak reference, simply put, is a reference that isn't strong enough to force an object to remain in memory. Weak references allow you to leverage the garbage collector's ability to determine reachability for you, so you don't have to do it yourself.
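A minimal sketch of the idea (the megabyte-sized payload is just for illustration):

    import java.lang.ref.WeakReference;

    public class WeakReferenceDemo {
        public static void main(String[] args) {
            Object data = new byte[1024 * 1024];      // strongly referenced payload
            WeakReference<Object> ref = new WeakReference<>(data);

            System.out.println(ref.get() != null);    // true: still strongly reachable

            data = null;                              // drop the strong reference
            System.gc();                              // request a collection (only a hint)

            // Once collected, the weakly reachable payload is gone.
            System.out.println(ref.get());            // typically prints null
        }
    }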

Does garbage collection change the object addresses in Java?

I read that garbage collection can lead to a memory fragmentation problem at run-time. To solve this problem, compacting is done by the JVM, where it takes all the active objects and assigns them contiguous memory.
This means that the object addresses must change from time to time? Also, if this happens,
Are the references to these objects also re-assigned?
Won't this cause significant performance issues? How does Java cope with it?
I read that garbage collection can lead to a memory fragmentation problem at run-time.
This is not a problem exclusive to garbage-collected heaps. When you have a manually managed heap and free memory in a different order than the preceding allocations, you may get a fragmented heap as well. And being able to have object lifetimes that differ from the last-in-first-out order of automatic storage (aka stack memory) is one of the main motivations for using heap memory in the first place.
To solve this problem, compacting is done by the JVM where it takes all the active objects and assigns them contiguous memory.
Not necessarily all objects. Typical implementation strategies divide the memory into logical regions and only move objects from a specific region to another, rather than all existing objects at a time. These strategies may incorporate the age of the objects, like generational collectors moving objects of the young generation from the Eden space to a Survivor space, or the distribution of the remaining objects, like the “Garbage First” collector, which will, as the name suggests, evacuate the region with the highest garbage ratio first, as that implies the least work to gain a free contiguous memory block.
This means that the object addresses must change from time to time?
Of course, yes.
Also, if this happens,
Are the references to these objects also re-assigned?
The specification does not mandate how object references are implemented. An indirect pointer may eliminate the need to adapt all references; see also this Q&A. However, for JVMs using direct pointers, this does indeed imply that those pointers need to be adapted.
Won't this cause significant performance issues? How does Java cope with it?
First, we have to consider what we gain from that. “Eliminating fragmentation” is not an end in itself. If we don’t do it, we have to scan the reachable objects for gaps between them and build a data structure maintaining this information, which we would then call “free memory”. We would also need to implement memory allocation as a search for a matching chunk in this data structure, or split a chunk when no exact match is found. This is a rather expensive operation compared to allocation from a contiguous free memory block, where we only have to bump the pointer to the next free byte by the required size.
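To make the contrast concrete, here is a deliberately simplified, hypothetical sketch of bump-pointer allocation, the scheme a compacted heap enables: an allocation is one bounds check plus one addition.

    // Hypothetical sketch, not a real VM: allocation from a compacted,
    // contiguous free block is just a bounds check and a pointer bump.
    public class BumpAllocator {
        private final byte[] heap;
        private int top;                              // offset of the next free byte

        public BumpAllocator(int size) {
            this.heap = new byte[size];
        }

        /** Returns the offset of the new block, or -1 when exhausted. */
        public int allocate(int size) {
            if (top + size > heap.length) {
                return -1;                            // a real VM would trigger a GC here
            }
            int block = top;
            top += size;                              // "bump the pointer"
            return block;
        }
    }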
Given that allocations happen much more often than garbage collection, which only runs when the memory is full (or a threshold has been crossed), this already justifies the more expensive copy operations. It also implies that just using a larger heap can solve performance issues, as it reduces the number of required garbage collector runs, whereas the number of surviving objects does not scale with the memory (unreachable objects stay unreachable, regardless of how long you defer the collection). In fact, deferring the collection raises the chance that more objects have become unreachable in the meantime. Compare also with this answer.
The costs of adapting references are not much higher than the costs of traversing references in the marking phase. In fact, non-concurrent collectors can even combine these two steps, transferring an object on first encounter and adapting subsequently encountered references, instead of marking the object. The actual copying is the more expensive aspect, but as explained above, it is reduced by not copying all objects, instead using strategies based on typical application behavior, like generational approaches or the “garbage first” strategy, to minimize the required work.
If you move an object around the memory, its address will change. Therefore, references pointing to it will need to be updated. Memory fragmentation occurs when an object in a contiguous (in memory) sequence of objects gets deleted. This creates a hole in the memory space, which is generally bad because contiguous chunks of memory have faster access times and a higher probability of fitting in cache lines, among other things. It should be noted that the use of indirection tables can prevent reference updates, up to the maximum level of indirection used.
Garbage collection has a moderate performance overhead, not just in Java but in other languages as well, such as C#. As for how Java copes with it: the strategies for performing garbage collection, and for minimizing its impact on performance, depend on the particular JVM being used, since each JVM can implement garbage collection however it pleases; the only requirement is that it meets the JVM specification.
However, as a programmer, there are some best practices you should follow to make the best of garbage collection and to minimize its performance hit on your application. See this, also this, this, this blog post, and this other blog post. You might want to check the JVM specs, but they're a bit dense.

Java: Empty the cache before OutOfMemoryError

Is there a reliable approach to empty the cache before the memory is full?
Or even better, limit the cache according to the currently available "actual" free memory (hard-referenced objects)?
A soft-referenced cache is not a good idea due to the high GC penalty; once the limit is hit, all cache entries need to be reloaded.
Also, the value of runtime.freeMemory() is not that reliable for my purpose, because even if it is too low, there might be plenty of free space after the next GC cycle, so it's not a good indication of the actual used memory.
I tried to figure out how much memory each primitive type would consume so I would know the actual memory usage of the cache and could put a limit on it, but I couldn't find a reliable way to figure out how much memory would be used to store a String of size n.
Have two or three collections. If you want service that degrades gracefully as memory runs low, you can have (see the sketch below):
a map of the most recent entries, e.g. a LinkedHashMap;
a map of soft references;
a map of weak references.
You can control how large each map should be, knowing that weak references can be cleared after a minor collection, soft references will be cleared if needed, and the strong-reference map holds the core data, which will always be retained.
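A minimal sketch of that layered idea (all names are illustrative, and the demotion policy is an assumption, not part of the answer):

    import java.lang.ref.SoftReference;
    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative layered cache: a strong LRU level for the most recent
    // entries and a soft level the GC may reclaim under memory pressure.
    // (A third map of WeakReferences could be added the same way.)
    public class LayeredCache<K, V> {
        private static final int STRONG_LIMIT = 100;  // assumption: tune per app

        private final Map<K, SoftReference<V>> soft = new HashMap<>();
        private final Map<K, V> strong = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > STRONG_LIMIT) {
                    // Demote instead of discarding: the GC decides its fate now.
                    soft.put(eldest.getKey(), new SoftReference<>(eldest.getValue()));
                    return true;
                }
                return false;
            }
        };

        public synchronized V get(K key) {
            V value = strong.get(key);
            if (value == null) {
                SoftReference<V> ref = soft.get(key);
                value = (ref != null) ? ref.get() : null;
                if (value != null) {
                    strong.put(key, value);           // promote back on access
                }
            }
            return value;
        }

        public synchronized void put(K key, V value) {
            strong.put(key, value);
        }
    }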
BTW: If you are hitting your memory limit often, you should consider buying more memory up to about 32 GB per JVM. You can buy 32 GB for less than $200.
Try one of the more recent Oracle 1.7 incarnations. They should offer a GarbageCollectorMXBean and GarbageCollectionNotificationInfo. Use them to monitor the amount of used/unused memory after each GC cycle. There is some sample code here.
You can then use a multi-level cache as suggested by Peter to clean out the outer level when memory is tight, but retain the smaller first-level cache.
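As a sketch of such a monitor (this assumes HotSpot's com.sun.management notification API; it is not the linked sample code):

    import com.sun.management.GarbageCollectionNotificationInfo;
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;
    import javax.management.NotificationEmitter;
    import javax.management.openmbean.CompositeData;

    // Registers a listener that fires after every GC cycle and reports
    // how much heap is still used, so a cache can react to memory pressure.
    public class GcMonitor {
        public static void install() {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                ((NotificationEmitter) gc).addNotificationListener((notification, handback) -> {
                    if (!GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                            .equals(notification.getType())) {
                        return;
                    }
                    GarbageCollectionNotificationInfo info = GarbageCollectionNotificationInfo
                            .from((CompositeData) notification.getUserData());
                    long usedAfter = 0;
                    for (MemoryUsage usage : info.getGcInfo().getMemoryUsageAfterGc().values()) {
                        usedAfter += usage.getUsed();
                    }
                    // React here: e.g. clear the outer cache level if usedAfter is high.
                    System.out.println(info.getGcAction() + ": used after GC = " + usedAfter);
                }, null, null);
            }
        }
    }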
I would suggest that the simplest solution would be to change your references to weak references.
This way the references can still be finalized and garbage collected once all strong references have gone out of scope.
See: http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/ref/WeakReference.html

Java SoftReference, panicking GC and GC behavior

I want to write a cache using SoftReferences using as much memory as possible, as long as it doesn't get too inefficient.
Trying to estimate the used size by calculating object sizes, or by getting some used-memory approximation from the JVM, is a dead end.
The javadoc even states that SoftReferences are good for memory-aware caches, but there is no hard rule on how a JVM implementation shall handle SoftReferences. I'm only talking about the Oracle implementation of the JVM (Version 6.22 and above and Version 7).
Now my questions (please feel free to answer partially, grouped, or in any way you please):
Does the JVM take the last access of the object into account and only remove the old ones? Javadoc states: Virtual machine implementations are, however, encouraged to bias against clearing recently-created or recently-used soft references.
What happens when memory gets tight? The JVM panics and just eats all objects?
Is there a parameter for telling the JVM to only eat as much to survive (no OOMEs) and live healthy (not having the CPU only run the GC)?
I don't think there is an order. (I'm not sure, though, about the order of events.)
What happens with soft references is that they are guaranteed to be released before an OutOfMemoryError is thrown, unless you still hold a hard reference to the referent.
But you should be aware that when you try to access them, they may already be gone. My guess is that the garbage collector will just clear the first soft reference that frees enough memory for the operation.
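That guarantee is easy to demonstrate (illustrative sketch; run with a small heap such as -Xmx64m so it finishes quickly):

    import java.lang.ref.SoftReference;
    import java.util.ArrayList;
    import java.util.List;

    // Demonstrates that softly reachable objects are cleared before
    // an OutOfMemoryError is thrown. Run with a small heap, e.g. -Xmx64m.
    public class SoftBeforeOome {
        public static void main(String[] args) {
            SoftReference<byte[]> ref = new SoftReference<>(new byte[8 * 1024 * 1024]);
            List<byte[]> hog = new ArrayList<>();
            try {
                while (true) {
                    hog.add(new byte[1024 * 1024]);   // fill the heap
                }
            } catch (OutOfMemoryError expected) {
                hog = null;                           // release the hog so we can print
            }
            // The soft reference must have been cleared before the OOME was thrown.
            System.out.println("referent after OOME: " + ref.get()); // prints null
        }
    }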
Although SoftReferences are a cool feature, I personally don't dare to use them in large projects where I don't know the memory requirements of every other component. Will a memory-hogging SoftReference cache make other parts perform badly?
Instead of using SoftReferences, I'd consider using EHCache. It lets you limit the size of particular caches in terms of the number of entries, or even better, the bytes used in memory (this is a new feature in the upcoming version 2.5). Different eviction strategies can be configured, of course, such as LRU. There's lots you can configure with EHCache.
If you're using Spring, then version 3.1 will also provide you with some nice @Cacheable method-level annotations; EHCache can be used as the caching implementation there.
What happens when memory gets tight? The JVM panics and just eats all objects?
I know for a fact that with the Oracle 1.6 JVM this is not the case. I am aware of a situation where a server that processes concurrent requests uses a response that contains the actual data inside a soft reference. I have observed that when a low-memory situation is reported by one thread, the other threads' soft references continue to hold on to their contents (the referenced objects).
Is there a parameter for telling the JVM to only eat as much to survive (no OOMEs) and live healthy (not having the CPU only run the GC)?
What is enough to survive? Do you mean that if X amount of memory is required, then only reclaim soft references until X is available? I didn't find any such tuning parameter, but as I said, the JVM does not seem to reclaim all soft references when it needs to reclaim one.

SoftReference gets garbage collected too early

I'm working on a caching mechanism for my Android application.
I use SoftReference, like many examples I've found. The problem is, when I scroll up or down in my ListView, most of the images are already cleared. I can see in LogCat that my application is garbage collected every time it loads new images. That means that most of the non-visible images in the ListView are gone.
So, every time I scroll back to an earlier position (where I had already downloaded images), I have to download the images once again - they're not cached.
I've also researched this topic. According to Mark Murphy in this article, it seems that there is (or was?) a bug with SoftReference. Some other results indicate the same thing (or the same result): SoftReferences are getting cleared too early.
Is there any working solution?
SoftReferences are the poor man's cache. The JVM can keep those references alive longer, but doesn't have to. As soon as there's no hard reference anymore, the JVM can garbage collect the soft-referenced object. The behavior you're experiencing is correct, since the JVM doesn't have to keep such objects around any longer. Of course, most JVMs try to keep softly referenced objects alive to some degree.
Therefore, SoftReferences are a somewhat dangerous kind of cache. If you really want to ensure caching behavior, you need a real cache, like an LRU cache. Especially if your caching is performance-critical, you should use a proper cache.
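A minimal sketch of such an LRU cache, built on LinkedHashMap's access order (the capacity is an assumption to tune; Android's own LruCache, mentioned below, is the platform equivalent):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal LRU cache: a LinkedHashMap in access order evicts the
    // least-recently-used entry deterministically, unlike a SoftReference.
    public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        public SimpleLruCache(int capacity) {
            super(16, 0.75f, true);                   // accessOrder = true
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;
        }
    }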
From Android Training site:
http://developer.android.com/training/displaying-bitmaps/cache-bitmap.html
In the past, a popular memory cache implementation was a SoftReference or WeakReference bitmap cache, however this is not recommended. Starting from Android 2.3 (API Level 9) the garbage collector is more aggressive with collecting soft/weak references which makes them fairly ineffective. In addition, prior to Android 3.0 (API Level 11), the backing data of a bitmap was stored in native memory which is not released in a predictable manner, potentially causing an application to briefly exceed its memory limits and crash.
More information at the link above.
We should use LruCache instead.
Cache each image on persistent storage instead of just in memory.
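A sketch of the LruCache approach from that training page (the one-eighth-of-heap sizing follows the guide's suggestion; adapt it to your app):

    import android.graphics.Bitmap;
    import android.util.LruCache;

    public class BitmapMemoryCache {
        private final LruCache<String, Bitmap> cache;

        public BitmapMemoryCache() {
            // Use a fraction of the available VM memory, measured in kilobytes.
            int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
            int cacheSizeKb = maxMemoryKb / 8;

            cache = new LruCache<String, Bitmap>(cacheSizeKb) {
                @Override
                protected int sizeOf(String key, Bitmap bitmap) {
                    // Size entries by their byte count, not by entry count.
                    return bitmap.getByteCount() / 1024;
                }
            };
        }

        public void put(String key, Bitmap bitmap) {
            if (key != null && bitmap != null && cache.get(key) == null) {
                cache.put(key, bitmap);
            }
        }

        public Bitmap get(String key) {
            return cache.get(key);
        }
    }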
Gamlor's answer is correct in your situation. However, for additional information, see the GC FAQ, question 32.
The Java HotSpot Server VM uses the maximum possible heap size (as set by the -Xmx option) to calculate free space remaining.
The Java HotSpot Client VM uses the current heap size to calculate the free space.
This means that the general tendency is for the Server VM to grow the heap rather than flush soft references, and -Xmx therefore has a significant effect on when soft references are garbage collected.
The JVM follows this simple rule to determine whether a soft reference may be kept:
interval <= free_heap * ms_per_mb
interval is the duration between the last GC cycle's timestamp and the last access timestamp of the soft reference.
free_heap is the heap space (in MB) available at that moment.
ms_per_mb is the number of milliseconds granted per MB of free heap (constant, default 1000 ms).
If the above inequality is false, the reference gets cleared.
So, even if you have a lot of free memory, soft references that have not been accessed for an ample amount of time will get cleared.
The -XX:SoftRefLRUPolicyMSPerMB= JVM argument can be used to tweak the ms_per_mb constant.
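To make the numbers concrete: with the default of 1000 ms per MB and, say, 200 MB of free heap, a soft reference survives as long as it was accessed within the last 200,000 ms (a bit over three minutes); one idle for longer is eligible for clearing at the next collection.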

What is a use case for a soft reference in Java?

What is a use case for a soft reference in Java? Would it be useful to garbage collect non-critical items when a JVM has run out of memory in order to free up enough resources to perhaps dump critical information before shutting down the JVM?
Are they called soft references in that they are "soft", breaking when put under stress, i.e. when the JVM has run out of memory? I understand weak references and phantom references, but not really when these would be needed.
One use is for caching. Imagine you want to maintain an in-memory cache of large objects but you don't want that cache to consume memory that could be used for other purposes (for the cache can always be rebuilt). By maintaining a cache of soft-references to the objects, the referenced objects can be freed by the JVM and the memory they occupied reused for other purposes. The cache would merely need to clear out broken soft-references when it encounters them.
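A minimal sketch of such a cache (the loader and all names are illustrative): entries whose referent the GC has cleared are pruned and rebuilt on access.

    import java.lang.ref.SoftReference;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // Memory-sensitive cache: values are held only via SoftReferences,
    // so the GC may reclaim them instead of throwing an OutOfMemoryError.
    public class SoftCache<K, V> {
        private final Map<K, SoftReference<V>> map = new HashMap<>();
        private final Function<K, V> loader;   // rebuilds a value when it was reclaimed

        public SoftCache(Function<K, V> loader) {
            this.loader = loader;
        }

        public synchronized V get(K key) {
            SoftReference<V> ref = map.get(key);
            V value = (ref != null) ? ref.get() : null;
            if (value == null) {               // missing, or cleared by the GC
                map.remove(key);               // clear out the broken reference
                value = loader.apply(key);
                map.put(key, new SoftReference<>(value));
            }
            return value;
        }
    }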
Another use may be for maintaining application images on a memory-constrained device, such as a mobile phone. As the user opens applications, the previous application images could be maintained as soft references so that they can be cleared out if the memory is needed for something else, but will still be there if there is no demand for memory. This allows the user to return to the application more quickly when there is no pressure on memory, and allows the previous application's memory to be reclaimed when it is needed for something else.
This article gave me a good understanding of each of them (weak, soft and phantom references). Here's a summarized cite:
A weak reference, simply put, is a reference that isn't strong enough to force an object to remain in memory. Weak references allow you to leverage the garbage collector's ability to determine reachability for you, so you don't have to do it yourself.
A soft reference is exactly like a weak reference, except that it is less eager to throw away the object to which it refers. An object which is only weakly reachable (the strongest references to it are WeakReferences) will be discarded at the next garbage collection cycle, but an object which is softly reachable will generally stick around for a while.
A phantom reference is quite different than either SoftReference or WeakReference. Its grip on its object is so tenuous that you can't even retrieve the object -- its get() method always returns null. The only use for such a reference is keeping track of when it gets enqueued into a ReferenceQueue, as at that point you know the object to which it pointed is dead.
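For the phantom case, a small illustration of the enqueue-tracking pattern the quote describes (System.gc() is only a hint, so the timing is not guaranteed):

    import java.lang.ref.PhantomReference;
    import java.lang.ref.Reference;
    import java.lang.ref.ReferenceQueue;

    public class PhantomDemo {
        public static void main(String[] args) throws InterruptedException {
            ReferenceQueue<Object> queue = new ReferenceQueue<>();
            Object payload = new byte[1024];
            PhantomReference<Object> ref = new PhantomReference<>(payload, queue);

            System.out.println(ref.get());            // always null for a phantom reference

            payload = null;                           // drop the only strong reference
            System.gc();                              // request a collection (only a hint)

            // Wait up to five seconds for the reference to be enqueued;
            // once it is, we know the object is dead.
            Reference<?> enqueued = queue.remove(5000);
            System.out.println("enqueued: " + (enqueued == ref));
        }
    }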
The best example I can think of is a cache. You might not mind dumping the oldest entries in the cache if memory becomes a problem. Caching large object graphs might make this likely as well.
An example of how a SoftReference can be used as a cache can be found in this post.
