@Service
public class Test {
    public Map<String, String> map = new HashMap<>();
}
In a web application using Spring, I annotated a class with @Service, defined a map field on it, and I am inserting values into that map.
I assumed
the map holds the inserted values until someone restarts the server or removes them using map.remove();
but my senior told me
it will hold them only for some time; after some idle time the garbage collector will remove them, like after 2 or 3 days. Is that true?
(1) Does the Map hold inserted values until someone restarts the server or
removes them using map.remove()?
Yes, you are right: the data will be available until the server is restarted or map.remove() or map.clear() is called.
(2) It will hold them only for some time; after some idle time the garbage
collector will remove them, like after 2 or 3 days. Is that true?
No, this is wrong. The garbage collector will not clear the entries unless you call remove() or clear() on the Map. Data can be held in the map as long as the heap allows it (i.e., up to the max heap size); beyond that you will get an OutOfMemoryError in the server.
P.S.: One point you need to know: it is NOT a good idea to store data like this in a Map inside a Service. Rather, you should consider caching frameworks (Ehcache, Hazelcast, etc.) for caching/storing the data; they provide advanced features like cache expiry, distributing data across servers, and so on.
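For illustration, a minimal sketch of such a cache using Guava's CacheBuilder (just one option; Ehcache and Hazelcast have their own APIs, and the size and expiry values below are made up):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class ExpiringCacheSketch {
    // Bounded and expiring, unlike a bare HashMap field in a @Service.
    private final Cache<String, String> cache = CacheBuilder.newBuilder()
            .maximumSize(1000)                       // cap the entry count
            .expireAfterWrite(10, TimeUnit.MINUTES)  // entries age out
            .build();

    public void store(String key, String value) {
        cache.put(key, value);
    }

    public String fetch(String key) {
        return cache.getIfPresent(key);  // null once expired or evicted
    }
}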
A Map holds references in memory. Those references point to the values which you store using
map.put(key, value);
Every time you'd like to retrieve a specific value you have stored, you pass in the key under which you stored it, and as a result you receive your stored value.
Coming back to your question: when you remove a value from the map, what really happens is that you drop the map's reference to the location where the value is stored, and (if no other reference to it exists) there is no way to access that value anymore. The value may still be sitting in memory, but you cannot use it, since you have discarded the reference to its location.
The garbage collector will eventually flush this value, but you cannot force the garbage collector to run. The best you can do is call System.gc(), which is simply a hint to the garbage collector that you want it to do a collection. It will run at some unpredictable time, or when the JVM requires more memory.
As for your senior's claim that it will be garbage collected in 2-3 days, I suppose he was just speaking hypothetically, because you never know when the garbage collector will run and clear the memory.
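A minimal sketch of that lifecycle (illustrative only; remember System.gc() is just a request):

import java.util.HashMap;
import java.util.Map;

public class GcHintDemo {
    public static void main(String[] args) {
        Map<String, byte[]> map = new HashMap<>();
        map.put("blob", new byte[1024 * 1024]); // map now references 1 MB
        map.remove("blob");                     // last reference dropped: unreachable
        System.gc();                            // only a hint; the JVM may ignore it
    }
}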
Related
I have a situation where I load some data at application level inside a HashMap in my Android application. I use one of the entries (with a particular key keyA in the HashMap) to initialise some data inside my Activity, and this Activity hangs around for a while. While the user is on this Activity, the HashMap from which I referenced the object for keyA might change. The code to update the HashMap is written by me. When I want to update the HashMap, I want to clear the entire HashMap (so that its size() returns 0) and then populate everything again. If I call HashMap.clear(), would it make the old objects get garbage collected?
If yes, what is the best way to clear the entire HashMap so that I don't lose the old objects if they are being referred to anywhere else in the code? And what would be the best way to reassign values to the HashMap in this case?
PS: I am using a ReentrantReadWriteLock for maintaining access, if that might help.
If I call HashMap.clear(), would it make the old objects get garbage collected?
No. No Java object will ever be garbage collected if it is reachable. If there is any reference (or chain of references) that allows your code to access an object, then the object is safe from garbage collection.
If I call HashMap.clear(), would it make the old objects get garbage collected?
A HashMap is just storing a single reference to an object. When you call clear(), then the object can be gc'd if (and only if) the only reference to the object was the HashMap. If other parts of the code have references to the object then it won't be. That's how gc works. HashMap isn't magic.
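To illustrate the reachability point, a minimal sketch:

import java.util.HashMap;
import java.util.Map;

public class ClearDemo {
    public static void main(String[] args) {
        Map<String, StringBuilder> map = new HashMap<>();
        StringBuilder sb = new StringBuilder("data");
        map.put("a", sb);

        map.clear();  // map.size() == 0 now
        // sb still strongly references the object, so it is NOT
        // eligible for collection; only when sb is also gone can
        // the GC reclaim it.
        System.out.println(sb);
    }
}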
What would be the best way to reassign values to the HashMap in this case?
If you are updating the values in a shared map, then your code is going to have to re-get the values from the map whenever it uses them – or at least every so often. I'd start by re-getting the value each time you use it, and prove to yourself that that's too expensive before doing anything else.
PS: I am using a ReentrantReadWriteLock for maintaining access, if that might help.
I'd switch to using a ConcurrentHashMap and not bother with the expense and code maintenance of doing the locking yourself.
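A minimal sketch of that swap (the class, field, and method names here are made up):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedState {
    // Thread-safe without any explicit ReentrantReadWriteLock around it.
    private final Map<String, Object> shared = new ConcurrentHashMap<>();

    public void update(String key, Object value) {
        shared.put(key, value);  // safe under concurrent readers and writers
    }

    public Object read(String key) {
        return shared.get(key);  // no lock needed
    }
}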
If you need to update keyA alone, then you can call map.put("keyA", "value");
This will replace the value for keyA in the HashMap object with the value you specify, and it will not affect the other values saved in the HashMap object.
I have a map that I use to store dynamic data that is discarded as soon as it is created (i.e. used; it is consumed quickly). It responds to user interaction in the sense that when the user clicks a button the map is filled, then the data is used to do some work, and then the map is no longer needed.
So my question is: what's the better approach for emptying the map? Should I set it to null each time, or should I call clear()? I know clear() is linear in time, but I don't know how to compare that cost with the cost of creating the map each time. The size of the map is not constant, though it may run from n to 3n elements between creations.
If the map is not referenced from other objects, where it may be hard to set a new one, simply nulling out the old map and starting from scratch is probably lighter-weight than calling clear(), because no linear-time cleanup needs to happen. With garbage collection costs being tiny on modern systems, there is a good chance you would save some CPU cycles this way. You can avoid resizing the new map multiple times by specifying its initial capacity.
One situation where clear() is preferred would be when the map object is shared among multiple objects in your system. For example, if you create a map, give it to several objects, and then keep some shared information in it, setting the map to a new one in all these objects may require keeping references to objects that have the map. In situations like that it's easier to keep calling clear() on the same shared map object.
Well, it depends on how much memory you can throw at it. If you have a lot, then it doesn't matter. However, setting the map itself to null means you have freed things up for the garbage collector: if only the map holds references to the instances inside it, the garbage collector can collect not only the map but also any instances inside it. clear() does empty the map, but it has to iterate over everything in the map to set each reference to null, and this takes place during your execution time; the garbage collector essentially has to do this work anyway, so let it do its thing.
Just note that setting the map to null doesn't let you reuse it. A typical pattern to reuse a map variable is:
Map<String, String> whatever = new HashMap<String, String>();
// ... do something with the map ...
whatever = new HashMap<String, String>();
This allows you to reuse the variable without setting it to null at all; you silently discard the reference to the old map. This would be atrocious practice in non-memory-managed languages, since they must keep the old pointer in order to free it (discarding it would leak that memory), but in Java, since nothing references the old map anymore, the GC marks it as eligible for collection.
I feel nulling the existing map is cheaper than clear(), as object creation is very cheap in modern JVMs.
Short answer: use Collection.clear() unless it is too complicated to keep the collection around.
Detailed answer: in Java, the allocation of memory is almost instantaneous. It is little more than a pointer that gets moved inside the VM. However, the initialization of those objects might add up to something significant. Also, all objects that use an internal buffer are sensitive to resizing and copying of their content. Using clear() makes sure that the buffers eventually stabilize at some dimension, so that reallocation of memory and copying of the old buffer to a new one will never be necessary.
Another important issue is that allocating and then releasing a lot of objects will require more frequent runs of the garbage collector, which might cause sudden lag.
If you always hold on to the map, it will be promoted to the old generation. If each user has one corresponding map, the number of maps in the old generation is proportional to the number of users. This may trigger full GCs more frequently as the number of users increases.
You can use both with similar results.
One prior answer notes that clear() is expected to take constant time in a mature map implementation. Without checking the source code of the likes of HashMap, TreeMap, and ConcurrentHashMap, I would expect their clear() methods to take constant time, plus amortized garbage collection costs.
Another poster notes that a shared map cannot be nulled out. Well, it can if you want it to be, but you do it by using a proxy object which encapsulates a proper map and nulls it out when needed. Of course, you'd have to implement the proxy map class yourself.
Map<Foo, Bar> myMap = new ProxyMap<Foo, Bar>();
// Internally, the above object holds a reference to a proper map,
// for example, a hash map. Furthermore, this delegates all calls
// to the underlying map. A true proxy.
myMap.clear();
// The clear method simply reinitializes the underlying map.
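For concreteness, a minimal sketch of such a hypothetical ProxyMap (the class name and design are illustrative, not an existing library API):

import java.util.AbstractMap;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ProxyMap<K, V> extends AbstractMap<K, V> {
    // The proper map being proxied; sharers hold only the ProxyMap.
    private Map<K, V> delegate = new HashMap<K, V>();

    @Override public V put(K key, V value) { return delegate.put(key, value); }
    @Override public V get(Object key)     { return delegate.get(key); }
    @Override public V remove(Object key)  { return delegate.remove(key); }
    @Override public Set<Entry<K, V>> entrySet() { return delegate.entrySet(); }

    // clear() swaps in a fresh map instead of emptying the old one.
    @Override public void clear() { delegate = new HashMap<K, V>(); }
}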
Unless you did something like the above, clear and nulling out are equivalent in the ways that matter, but I think it's more mature to assume your map, even if not currently shared, may become shared at a later time due to forces you can't foresee.
There is another reason to clear instead of nulling out, even if the map is not shared. Your map may be instantiated by an external client, like a factory, so if you clear your map by nulling it out, you might end up coupling yourself to the factory unnecessarily. Why should the object that clears the map have to know that you instantiate your maps using Guava's Maps.newHashMap() with God knows what parameters? Even if this is not a realistic concern in your project, it still pays off to align yourself to mature practices.
For the above reasons, and all else being equal, I would vote for clear.
HTH.
So, because Javolution does not work (see here), I am in deep need of a Java Map implementation that is efficient and produces no garbage under simple usage. java.util.Map will produce garbage as you add and remove keys. I checked Trove and Guava, but it does not look like they have Set<E> implementations. Where can I find a simple and efficient alternative to java.util.Map?
Edit for EJP:
An entry object is allocated when you add an entry, and released to GC when you remove it. :(
void addEntry(int hash, K key, V value, int bucketIndex) {
    // from java.util.HashMap: a new Entry object is allocated on every add
    Entry<K,V> e = table[bucketIndex];
    table[bucketIndex] = new Entry<K,V>(hash, key, value, e);
    if (size++ >= threshold)
        resize(2 * table.length);
}
Taken literally, I am not aware of any existing implementation of Map or Set that never produces any garbage on adding and removing a key.
In fact, the only way that it would even be technically possible (in Java, using the Map and Set APIs as defined) is if you were to place a strict upper bound on the number of entries. Practical Map and Set implementations need extra state proportional to the number of elements they hold. This state has to be stored somewhere, and when the current allocation is exceeded that storage needs to be expanded. In Java, that means that new nodes need to be allocated.
(OK, you could design a data structure class that held onto old useless nodes forever, and therefore never generated any collectable garbage ... but it would still be generating garbage.)
So what can you do about this in practice to reduce the amount of garbage generated? Let's take HashMap as an example:
Garbage is created when you remove an entry. This is unavoidable, unless you replace the hash chains with an implementation that never releases the nodes that represent the chain entries. (And that's a bad idea ... unless you can guarantee that the free node pool size will always be small. See below for why it is a bad idea.)
Garbage is created when the main hash array is resized. This can be avoided in a couple of ways:
You can give a 'capacity' argument in the HashMap constructor to set the size of the initial hash array large enough that you never need to resize it; a sketch follows below. (But that potentially wastes space ... especially if you can't accurately predict how big the HashMap is going to grow.)
You can supply a ridiculous value for the 'load factor' argument to cause the HashMap to never resize itself. (But that results in a HashMap whose hash chains are unbounded, and you end up with O(N) behaviour for lookup, insertion, deletion, etc.)
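A minimal sketch of the presizing option (the expected size below is a made-up number):

import java.util.HashMap;
import java.util.Map;

public class PresizedMap {
    public static void main(String[] args) {
        int expectedEntries = 10_000; // assumption: a known upper bound
        // Choose a capacity so that expectedEntries stays below
        // capacity * loadFactor, hence no resize ever happens.
        Map<String, String> map =
                new HashMap<String, String>((int) (expectedEntries / 0.75f) + 1, 0.75f);
        for (int i = 0; i < expectedEntries; i++) {
            map.put("key" + i, "value" + i); // never triggers a rehash
        }
    }
}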
In fact, creating garbage is not necessarily bad for performance. Indeed, hanging onto nodes so that the garbage collector doesn't collect them can actually be worse for performance.
The cost of a GC run (assuming a modern copying collector) is mostly in three areas:
Finding nodes that are not garbage.
Copying those non-garbage nodes to the "to-space".
Updating references in other non-garbage nodes to point to objects in "to-space".
(If you are using a low-pause collector there are other costs too ... generally proportional to the amount of non-garbage.)
The only part of the GC's work that actually depends on the amount of garbage, is zeroing the memory that the garbage objects once occupied to make it ready for reuse. And this can be done with a single bzero call for the entire "from-space" ... or using virtual memory tricks.
Suppose your application / data structure hangs onto nodes to avoid creating garbage. Now, when the GC runs, it has to do extra work to traverse all of those extra nodes, and copy them to "to-space", even though they contain no useful information. Furthermore, those nodes are using memory, which means that if the rest of the application generates garbage there will be less space to hold it, and the GC will need to run more often.
And if you've used weak/soft references to allow the GC to claw back nodes from your data structure, then that's even more work for the GC ... and space to represent those references.
Note: I'm not claiming that object pooling always makes performance worse, just that it often does, especially if the pool gets unexpectedly big.
And of course, that's why HashMap and similar general purpose data structure classes don't do any object pooling. If they did, they would perform significantly worse in situations where the programmer doesn't expect it ... and they would be genuinely broken, IMO.
Finally, there is an easy way to tune a HashMap so that an add immediately followed by a remove of the same key produces no garbage (guaranteed). Wrap it in a Map class that caches the last entry "added", and only does the put on the real HashMap when the next entry is added. Of course, this is NOT a general purpose solution, but it does address the use case of your earlier question.
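A minimal sketch of that wrapper idea (the class name and design are made up, and this is deliberately not a general-purpose Map):

import java.util.HashMap;
import java.util.Map;

public class DeferredPutMap<K, V> {
    private final Map<K, V> real = new HashMap<K, V>();
    private K pendingKey;
    private V pendingValue;
    private boolean hasPending;

    public void put(K key, V value) {
        if (hasPending && pendingKey.equals(key)) {
            pendingValue = value;               // overwrite the pending entry
            return;
        }
        if (hasPending) {
            real.put(pendingKey, pendingValue); // flush the previous entry
        }
        pendingKey = key;
        pendingValue = value;
        hasPending = true;
    }

    public V remove(K key) {
        if (hasPending && pendingKey.equals(key)) {
            V v = pendingValue;                 // cancelled before it ever
            hasPending = false;                 // reached the real HashMap,
            pendingKey = null;                  // so no Entry was allocated
            pendingValue = null;
            real.remove(key);                   // drop any older flushed value
            return v;
        }
        return real.remove(key);
    }

    public V get(K key) {
        if (hasPending && pendingKey.equals(key)) {
            return pendingValue;
        }
        return real.get(key);
    }
}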
I guess you need a version of HashMap that uses open addressing, and you'll want something better than linear probing. I don't know of a specific recommendation though.
http://sourceforge.net/projects/high-scale-lib/ has implementations of Set and Map which do not create garbage on add or remove of keys. The implementation uses a single array with alternating keys and values, so put(k,v) does not create an Entry object.
Now, there are some caveats:
Rehashing creates garbage, because it replaces the underlying array.
I think this map will rehash given enough interleaved put and delete operations, even if the overall size is stable (to harvest tombstone values).
This map will create Entry objects if you ask for the entry set (one at a time as you iterate).
The class is called NonBlockingHashMap.
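A minimal usage sketch (hedged: this assumes high-scale-lib is on the classpath, and the package name below is from its classic distribution):

import java.util.concurrent.ConcurrentMap;

import org.cliffc.high_scale_lib.NonBlockingHashMap;

public class NbhmSketch {
    public static void main(String[] args) {
        ConcurrentMap<String, String> map = new NonBlockingHashMap<String, String>();
        map.put("k", "v");  // key and value land in a flat array slot,
        map.remove("k");    // so no per-entry object is allocated
    }
}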
One option is to try to fix the HashMap implementation to use a pool of entries. I have done that. :) There are also other optimizations for speed you can do there. I agree with you: that issue with Javolution FastMap is mind-boggling. :(
When an entry in a map has a weak key reference, the entry will be removed at the next garbage collection, right?
I can understand that the MapMaker class provides the weakKeys() method, but I am confused by weakValues(). When should I use weakValues() or softValues() in MapMaker?
You'd use weakValues() when you want entries whose values are weakly reachable to be garbage collected. For an example of when this might be useful... say you have a class that allows users to add objects to it and stores them as values in a Map for whatever reason. This class is typically used as a singleton, so it'll stick around the whole time your application is running. However, the objects the user adds to it aren't necessarily so long-lived. The application will be done with them long before it finishes. You don't want the user to have to manually remove these objects from your class when it is finished with them, but you don't want a memory leak by keeping references to them in your class forever (in other words garbage collection should just work like normal, ignoring your class). The solution is to give the map weakValues() and everything will work as you want.
softValues() is good for caching... if you have a Map<Integer, Foo> and you want entries to be removable in response to memory demand, you'd want to use it. You wouldn't want to use weakKeys() or softKeys(), because they both use == identity, which would cause problems for you (you wouldn't be able to get a value with key 300 out, because the Integer key you pass in probably wouldn't be == the key in the map).
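A minimal sketch of both variants using Guava (note that in recent Guava versions soft-value caches live on CacheBuilder rather than MapMaker):

import java.util.concurrent.ConcurrentMap;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.collect.MapMaker;

public class ReferenceMapsSketch {
    // Entries vanish once nothing else strongly references the value.
    private final ConcurrentMap<String, Object> registry =
            new MapMaker().weakValues().makeMap();

    // Soft values: entries are evicted in response to memory demand.
    private final Cache<Integer, Object> cache =
            CacheBuilder.newBuilder().softValues().build();
}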
I am working on querying the address book via J2ME and returning a custom Hashtable which I will call pimList. The keys in pimList {firstname, lastname} map to an object (we'll call this object ContactInfo) holding (key, value) pairs, e.g. work1 -> 44232454545, home1 -> 44876887787.
Next I take firstName and add it into a tree.
The nodes of the tree contain the characters from the firstName.
e.g. "Tom" would create a tree with nodes:
"T"->"o"->"m"-> ContactInfo{ "work1" -> "44232454545", "home1" -> "44876887787" }
So the child of the last character m points to the same object instance in pimList.
As I understand it, the purpose of WeakReferences is that the reference is weak and the object it points to can be easily GC'ed. On a memory-constrained device like a mobile phone, I would like to ensure I don't leak or waste memory. Thus, is it appropriate for me to make:
pimList's values be WeakReferences
The child of node "m" point to a WeakReference
?
It should work. You will need to handle the case where you are using the returned Hashtable and the items have been collected, however... which might mean you want to rethink the whole thing.
If the Hashtable is short lived then there likely isn't a need for the weak references.
You can remove the items from the Hashtable when you are done with them, if you want them to be possibly cleaned up while the rest of the Hashtable is still being used.
Not sure I exactly understood what you are trying to do, but an object's reachability is determined by the strongest reference to it (a hard reference is stronger than a soft reference, which is stronger than a weak reference, which is stronger than a phantom reference).
Hard-referenced objects won't be garbage collected. Soft-referenced objects will be garbage collected only if the JVM runs out of memory; weak-referenced objects will be garbage collected as soon as possible (this is the theory; it depends on the JVM and GC implementation).
So usually you use a SoftReference to build a cache (you want to reference the information as long as possible). You use a WeakReference to associate information with an object that is hard-referenced somewhere else, so that if the hard-referenced object is no longer referenced, the associated information can be garbage collected – use a WeakHashMap for that.
hope this helps...
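A minimal sketch of the WeakHashMap approach (the names here are made up for illustration):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakMetadataSketch {
    public static void main(String[] args) {
        // Associates extra info with objects owned elsewhere; once a
        // key object is no longer hard-referenced, its entry can go.
        Map<Object, String> metadata = new WeakHashMap<Object, String>();

        Object contact = new Object(); // hard-referenced by the app
        metadata.put(contact, "extra info tied to this contact");

        contact = null; // last hard reference gone: after a GC, the
                        // entry becomes eligible for removal
    }
}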
I am not sure the weak map is the right thing here. If you do not hold strong references anywhere in your application, the data in the map will disappear almost immediately, because nobody is referencing it.
A weak map is a nice thing if you want to find things again that are still in use elsewhere and you only want to have one instance of them.
But I might not have understood your data setup right... to be honest.