ModCount in map and list - java

While debugging the collections in Eclipse I noticed that there is a field called modCount; for example, if we debug a list and inspect it, we can see it. What does this modCount represent? Please advise.

See the javadoc
The number of times this list has been structurally modified. Structural modifications are those that change the size of the list, or otherwise perturb it in such a fashion that iterations in progress may yield incorrect results.
This field is used by the iterator and list iterator implementation returned by the iterator and listIterator methods. If the value of this field changes unexpectedly, the iterator (or list iterator) will throw a ConcurrentModificationException in response to the next, remove, previous, set or add operations. This provides fail-fast behavior, rather than non-deterministic behavior in the face of concurrent modification during iteration.
Use of this field by subclasses is optional. If a subclass wishes to provide fail-fast iterators (and list iterators), then it merely has to increment this field in its add(int, E) and remove(int) methods (and any other methods that it overrides that result in structural modifications to the list). A single call to add(int, E) or remove(int) must add no more than one to this field, or the iterators (and list iterators) will throw bogus ConcurrentModificationExceptions. If an implementation does not wish to provide fail-fast iterators, this field may be ignored.

It's a counter used to detect modifications to the collection while iterating over it: iterators are fail-fast and throw an exception if the collection has been modified during the iteration; modCount is what tracks those modifications.
FYI, the sources of the standard classes are part of the JDK, and you may read them to understand how the standard classes work.
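For illustration, here is a minimal sketch (not the real JDK source; the class and most names are invented) of how a list-like class can use modCount to make its iterator fail-fast:

import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Simplified, fail-fast list sketch: the iterator captures modCount when it is
// created and compares it on every next() call.
class SimpleList<E> implements Iterable<E> {
    private Object[] elements = new Object[10];
    private int size = 0;
    private int modCount = 0;            // structural modification counter

    public void add(E e) {
        if (size == elements.length) {
            Object[] bigger = new Object[elements.length * 2];
            System.arraycopy(elements, 0, bigger, 0, size);
            elements = bigger;
        }
        elements[size++] = e;
        modCount++;                      // record the structural change
    }

    @Override
    public Iterator<E> iterator() {
        return new Iterator<E>() {
            private int cursor = 0;
            private final int expectedModCount = modCount; // snapshot at creation

            @Override
            public boolean hasNext() {
                return cursor < size;
            }

            @Override
            @SuppressWarnings("unchecked")
            public E next() {
                if (modCount != expectedModCount) {
                    // the list changed behind our back
                    throw new ConcurrentModificationException();
                }
                if (cursor >= size) {
                    throw new NoSuchElementException();
                }
                return (E) elements[cursor++];
            }
        };
    }
}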

Related

ArrayList$Itr.checkForComodification throws java.util.ConcurrentModificationException on iterated NOT modified array list [duplicate]


Why is HashMap fail-fast just because it provides a means to iterate over its keys?

Here's what I know:
Fail-fast iterators will throw a ConcurrentModificationException
if I try to modify the collection while iterating through it,
without using the iterator's own methods (like iterator.remove()).
It's not guaranteed that a fail-fast iterator will ALWAYS throw a CME.
Fail-safe iterators won't throw a CME.
I'm reading a book where I came across the following sentence:
A HashMap provides its set of keys and a Java application can iterate
over them. Thus, a HashMap is fail-fast.
The part that I don't understand is where it says "Thus...". If someone would tell me that a HashMap provides its set of keys, I still wouldn't know whether it's a fail-fast or fail-safe (based on that alone).
So why does, providing its own set of keys, make the HashMap fail-fast?
What's the connection between those two things?
It is actually the first sentence which explains why HashMap is fail-fast:
A HashMap provides its set of keys and a Java application can iterate over them.
Fail-safe iterators iterate over a private copy of the original collection, not the collection itself. Therefore any change to the original collection goes unnoticed by the iterator, and hence it never throws a CME.
Since HashMap provides its actual set of keys, as in the quote above (rather than a copy of it), it is therefore fail-fast.
The author just didn't complete the idea.
From javadocs of HashMap (https://docs.oracle.com/javase/7/docs/api/java/util/HashMap.html#keySet()):
"Returns a Set view of the keys contained in this map. The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. If the map is modified while an iteration over the set is in progress (except through the iterator's own remove operation), the results of the iteration are undefined. The set supports element removal, which removes the corresponding mapping from the map, via the Iterator.remove, Set.remove, removeAll, retainAll, and clear operations. It does not support the add or addAll operations."
The idea that wasn't expressed is that the hash map is iterated over using the Set returned by keySet() (well, let's take that as iterating over the map...). As that set's iterators are fail-fast (as per the doc above), iterating the map this way is also fail-fast.
Remember that other methods also let you iterate over the map (but luckily, as far as I could see, they are also fail-fast). Check https://docs.oracle.com/javase/7/docs/api/java/util/HashMap.html#entrySet()
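To see that in action, here is a minimal sketch (keys invented): modifying the map while a for-each loop iterates its key-set view triggers the exception on the next step of the hidden iterator.

import java.util.HashMap;
import java.util.Map;

public class KeySetFailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);

        // keySet() is a live view backed by the map, so the Iterator hidden
        // inside this for-each loop is fail-fast.
        boolean removed = false;
        for (String key : map.keySet()) {
            if (!removed) {
                map.remove(key);   // structural change behind the iterator's back
                removed = true;
            }
            // the next call to the hidden iterator's next() detects the change
            // and throws ConcurrentModificationException
        }
    }
}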
Looking at internet pages that contain that exact sentence, I think the context was a comparison of HashMap with Hashtable. It can be made clearer if you look at what the Javadoc says for Hashtable:
The iterators returned by the iterator method of the collections returned by all of this class's "collection view methods" are fail-fast: if the Hashtable is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove method, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future. The Enumerations returned by Hashtable's keys and elements methods are not fail-fast.
So, armed with this background, we can figure out that the author of the HashMap vs Hashtable comparison was trying to say that HashMap is fail-fast because it has a keySet() method, which returns fail-fast iterators as described by the above Javadoc. However, this gives incomplete information because it can be taken to imply that, unlike HashMap, Hashtable isn't fail-fast. In fact, Hashtable implements Map and therefore also has the keySet() method, so it also has fail-fast iterators just like HashMap. Another problem with that sentence is that it is misleading: it is not the HashMap that is fail-fast, but the iterators it returns.
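That contrast can be demonstrated with a short sketch (keys and values invented): the legacy Enumeration tolerates the modification, while the collection-view iterator detects it.

import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Iterator;

public class HashtableFailFastDemo {
    public static void main(String[] args) {
        Hashtable<String, Integer> table = new Hashtable<>();
        table.put("a", 1);
        table.put("b", 2);
        table.put("c", 3);

        // The legacy Enumeration is NOT fail-fast: modifying the table while
        // enumerating does not throw, although what it sees is unreliable.
        Enumeration<String> keys = table.keys();
        while (keys.hasMoreElements()) {
            keys.nextElement();
            table.put("d", 4);   // no exception here
        }

        // The collection-view iterator IS fail-fast, just like HashMap's.
        Iterator<String> it = table.keySet().iterator();
        while (it.hasNext()) {
            it.next();
            table.put("e", 5);   // detected on the next it.next() -> CME
        }
    }
}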

Java ConcurrentModificationException: Is it possible to add elements to a hashtable while iterating through it?

I am iterating through a Hashtable and at one point I add something into the Hashtable, which is clearly giving me a ConcurrentModificationException. I understand why I am getting the error, but is there a way around this such that I could still iterate through the Hashtable and add values simultaneously?
From the docs:
The iterators returned by the iterator method of the collections returned by all of this class's "collection view methods" are fail-fast: if the Hashtable is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove method, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future. The Enumerations returned by Hashtable's keys and elements methods are not fail-fast.
Note that the fail-fast behavior of an iterator cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depended on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.
If you need this kind of behavior you can safely copy the set of keys and iterate through the copy. Another option if the hashtable is large and copying the keyset is likely to be expensive is to add to a separate collection during the iteration and add the elements of the separate collection post iteration.
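A sketch of the key-copy option (class, method, key names and the condition are invented):

import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;

class SnapshotIterationExample {
    // Iterate over a copy of the keys so the Hashtable itself can be modified freely.
    static void addDerivedEntries(Hashtable<String, Integer> table) {
        List<String> keys = new ArrayList<>(table.keySet()); // snapshot taken here
        for (String key : keys) {
            Integer value = table.get(key);
            if (value != null && value > 10) {
                table.put(key + "-derived", value); // safe: we iterate the copy, not a live view
            }
        }
    }
}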
You may also want to know about CopyOnWriteArraySet, which is specifically designed for safe iteration while the set is modified. Note that an iterator sees only the original set; any additions will not be visible until the next iteration.
http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/CopyOnWriteArraySet.html
This is most useful in many readers / few writers scenario. It is unlikely to be the most efficient solution if reading and writing happens in the same code path.
Make a new Hashtable that you add new entries to; then when you are done iterating, add in the entries from the first table.
Optionally, if you need to, you can skip keys that exist in the original table.
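And a sketch of the staging-table option (names and the condition are invented); here the staged entries are merged back into the original table once the loop is done:

import java.util.Hashtable;
import java.util.Map;

class StagingTableExample {
    // Collect new entries in a separate table and merge them after the iteration.
    static void addAfterIteration(Hashtable<String, Integer> table) {
        Hashtable<String, Integer> staging = new Hashtable<>();
        for (Map.Entry<String, Integer> entry : table.entrySet()) {
            if (entry.getValue() > 10) {
                staging.put(entry.getKey() + "-derived", entry.getValue());
            }
        }
        table.putAll(staging); // merge once the iteration is finished
    }
}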
Another alternative would be to use a ConcurrentHashMap instead of a HashMap. However:
The iterators for a ConcurrentHashMap are defined to return objects reflecting a state some time at or after the creation of the iterator. A more precise statement of the behaviour is in the javadocs for the relevant methods.
A ConcurrentHashMap is probably slower than a regular HashMap.
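For example, a small sketch (keys invented) showing that adding during iteration of a ConcurrentHashMap does not throw, though the added entry may or may not be seen by the ongoing loop:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentAddDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        // ConcurrentHashMap's iterators are weakly consistent: adding while
        // iterating never throws ConcurrentModificationException.
        for (String key : map.keySet()) {
            System.out.println("visiting " + key);
            map.putIfAbsent("added-during-iteration", 99);
        }
        System.out.println(map);
    }
}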

FailSafe Feature

Before asking this question I tried to understand (here on SOF and on some other websites) the fail-safe feature. I understand Java Collection iterators are fail-fast, which basically means they fail gracefully as soon as the underlying Collection is structurally modified (even by the same thread). My question is: does the fail-safe property have anything to do with an Iterator's remove() or add() features? In my understanding, because through Iterators you can add or remove (safely) while iterating over a Collection and you won't get a concurrent modification exception (which you do get without using their remove and add features), that makes Iterators fail-safe. Or have I got it completely wrong?
Thanks!
Not exactly. In my understanding fail-safe iterators work on snapshots of data and guarantee a consistent view of the represented collection as of the moment the iterator was created. (Please see this blog post for a more detailed treatment of this question.) This property is guaranteed, for instance, by the iterators of CopyOnWriteArrayList. Its iterators do not support collection modification operations, and its javadoc clarifies their behavior further:
This array never changes during the lifetime of the iterator, so interference is impossible and the iterator is guaranteed not to throw ConcurrentModificationException. The iterator will not reflect additions, removals, or changes to the list since the iterator was created. Element-changing operations on iterators themselves (remove, set, and add) are not supported. These methods throw UnsupportedOperationException.
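A short sketch of that snapshot behaviour (list contents invented):

import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteSnapshotDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");

        Iterator<String> it = list.iterator(); // snapshot of ["a", "b"]
        list.add("c");                         // writes go to a fresh copy of the array

        while (it.hasNext()) {
            System.out.println(it.next());     // prints only "a" and "b", no CME
        }
        // it.remove() would throw UnsupportedOperationException
    }
}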
UPDATE:
When talking about fail-safeness and fail-fastness it is important to separate "the failure". In the case of an iterator there are different cases and hazards. As for the linked article, I would say that its author implements fail-safe and fail-fast iteration in the first place by implementing iterators.
The failure in that case can be defined as concurrent modification of the iterated collection. When the collection is modified, the fail-fast approach is to stop the iteration and make the caller aware of the changed conditions (via a CME or by some other means).
When dealing with the same use case and hazard we can go further and think about fail-safe iteration. The fail-safe property means that the iteration should comply with its contract as long as possible (and the authors of the copy-on-write collections, such as CopyOnWriteArrayList, succeed by copying the underlying data).
The Iterator or ListIterator works on the underlying collection in a way which does not invalidate that Iterator. Any modification via any other iterator, or via the collection itself, invalidates the iterator for some collections. Some collections are designed for concurrent access and don't have this restriction.

Why is a ConcurrentModificationException thrown and how to debug it

I am using a Collection (a HashMap used indirectly by the JPA, it so happens), but apparently randomly the code throws a ConcurrentModificationException. What is causing it and how do I fix this problem? By using some synchronization, perhaps?
Here is the full stack-trace:
Exception in thread "pool-1-thread-1" java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(Unknown Source)
at java.util.HashMap$ValueIterator.next(Unknown Source)
at org.hibernate.collection.AbstractPersistentCollection$IteratorProxy.next(AbstractPersistentCollection.java:555)
at org.hibernate.engine.Cascade.cascadeCollectionElements(Cascade.java:296)
at org.hibernate.engine.Cascade.cascadeCollection(Cascade.java:242)
at org.hibernate.engine.Cascade.cascadeAssociation(Cascade.java:219)
at org.hibernate.engine.Cascade.cascadeProperty(Cascade.java:169)
at org.hibernate.engine.Cascade.cascade(Cascade.java:130)
This is not a synchronization problem. This will occur if the underlying collection that is being iterated over is modified by anything other than the Iterator itself.
Iterator it = map.entrySet().iterator();
while (it.hasNext()) {
    Entry item = it.next();
    map.remove(item.getKey());
}
This will throw a ConcurrentModificationException when it.next() is called the second time, because the removal went through the map rather than through the iterator.
The correct approach would be
Iterator it = map.entrySet().iterator();
while (it.hasNext()) {
    Entry item = it.next();
    it.remove();
}
Assuming this iterator supports the remove() operation.
Try using a ConcurrentHashMap instead of a plain HashMap
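For instance, a minimal sketch (keys and values invented) of removing entries while iterating a ConcurrentHashMap; its iterators are weakly consistent rather than fail-fast, so no exception is thrown:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentRemoveDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.put("keep", 1);
        map.put("drop", -1);

        // Removing through the map itself is tolerated by the weakly
        // consistent entry-set iterator.
        for (Map.Entry<String, Integer> entry : map.entrySet()) {
            if (entry.getValue() < 0) {
                map.remove(entry.getKey());
            }
        }
        System.out.println(map); // {keep=1}
    }
}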
Modification of a Collection while iterating through that Collection using an Iterator is not permitted by most of the Collection classes. The Java library calls an attempt to modify a Collection while iterating through it a "concurrent modification". That unfortunately suggests the only possible cause is simultaneous modification by multiple threads, but that is not so. Using only one thread it is possible to create an iterator for the Collection (using Collection.iterator(), or an enhanced for loop), start iterating (using Iterator.next(), or equivalently entering the body of the enhanced for loop), modify the Collection, then continue iterating.
To help programmers, some implementations of those Collection classes attempt to detect erroneous concurrent modification, and throw a ConcurrentModificationException if they detect it. However, it is in general neither possible nor practical to guarantee detection of all concurrent modifications, so erroneous use of the Collection does not always result in a thrown ConcurrentModificationException.
The documentation of ConcurrentModificationException says:
This exception may be thrown by methods that have detected concurrent modification of an object when such modification is not permissible...
Note that this exception does not always indicate that an object has been concurrently modified by a different thread. If a single thread issues a sequence of method invocations that violates the contract of an object, the object may throw this exception...
Note that fail-fast behavior cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast operations throw ConcurrentModificationException on a best-effort basis.
Note that
the exception may be thrown, not must be thrown
different threads are not required
throwing the exception cannot be guaranteed
throwing the exception is on a best-effort basis
throwing the exception happens when the concurrent modification is detected, not when it is caused
The documentation of the HashSet, HashMap, TreeSet and ArrayList classes says this:
The iterators returned [directly or indirectly from this class] are fail-fast: if the [collection] is modified at any time after the iterator is created, in any way except through the iterator's own remove method, the Iterator throws a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.
Note that the fail-fast behavior of an iterator cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depended on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.
Note again that the behaviour "cannot be guaranteed" and is only "on a best-effort basis".
The documentation of several methods of the Map interface says this:
Non-concurrent implementations should override this method and, on a best-effort basis, throw a ConcurrentModificationException if it is detected that the mapping function modifies this map during computation. Concurrent implementations should override this method and, on a best-effort basis, throw an IllegalStateException if it is detected that the mapping function modifies this map during computation and as a result computation would never complete.
Note again that only a "best-effort basis" is required for detection, and a ConcurrentModificationException is explicitly suggested only for the non-concurrent (non-thread-safe) classes.
Debugging ConcurrentModificationException
So, when you see a stack-trace due to a ConcurrentModificationException, you cannot immediately assume that the cause is unsafe multi-threaded access to a Collection. You must examine the stack-trace to determine which class of Collection threw the exception (a method of the class will have directly or indirectly thrown it), and for which Collection object. Then you must examine from where that object can be modified.
The most common cause is modification of the Collection within an enhanced for loop over the Collection. Just because you do not see an Iterator object in your source code does not mean there is no Iterator there! Fortunately, one of the statements of the faulty for loop will usually be in the stack-trace, so tracking down the error is usually easy.
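A minimal sketch of that hidden-iterator case (list contents invented):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ForEachRemovalDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("alice", "bob", "carol"));

        // There is a hidden Iterator behind this loop; calling names.remove(...)
        // bypasses it and bumps the list's modCount.
        for (String name : names) {
            if (name.equals("alice")) {
                names.remove(name);
            }
            // the next call to the hidden iterator's next() throws
            // ConcurrentModificationException
        }
    }
}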
A trickier case is when your code passes around references to the Collection object. Note that unmodifiable views of collections (such as produced by Collections.unmodifiableList()) retain a reference to the modifiable collection, so iteration over an "unmodifiable" collection can throw the exception (the modification has been done elsewhere). Other views of your Collection, such as sub lists, Map entry sets and Map key sets also retain references to the original (modifiable) Collection. This can be a problem even for a thread-safe Collection, such as CopyOnWriteArrayList; do not assume that thread-safe (concurrent) collections can never throw the exception.
Which operations can modify a Collection can be unexpected in some cases. For example, LinkedHashMap.get() modifies its collection when the map is in access order.
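A small sketch of that LinkedHashMap surprise, assuming an access-ordered map with invented keys:

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: get() moves the accessed entry to the end of the
        // iteration order, which counts as a structural modification.
        Map<String, Integer> cache = new LinkedHashMap<>(16, 0.75f, true);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);

        for (String key : cache.keySet()) {
            cache.get(key); // throws ConcurrentModificationException on the next step
        }
    }
}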
The hardest cases are when the exception is due to concurrent modification by multiple threads.
Programming to prevent concurrent modification errors
When possible, confine all references to a Collection object, so it is easier to prevent concurrent modifications. Make the Collection a private object or a local variable, and do not return references to the Collection or its iterators from methods. It is then much easier to examine all the places where the Collection can be modified. If the Collection is to be used by multiple threads, it is then practical to ensure that the threads access the Collection only with appropriate synchronization and locking.
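If the Collection must be shared, one common arrangement (sketched below with invented class and method names) is to wrap it with Collections.synchronizedList and hold the wrapper's own lock for the whole of any iteration:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SharedNames {
    // Confined collection: private field, synchronized wrapper, no references leaked.
    private final List<String> names = Collections.synchronizedList(new ArrayList<>());

    void add(String name) {
        names.add(name); // individual operations are synchronized by the wrapper
    }

    List<String> snapshot() {
        // Iteration (the copy constructor iterates) must hold the same lock for
        // its whole duration, otherwise another thread's add() can still cause a CME.
        synchronized (names) {
            return new ArrayList<>(names);
        }
    }
}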
In Java 8, you can use a lambda expression:
map.keySet().removeIf(key -> key condition);
removeIf is a convenient default method in Collection which uses Iterator internally to iterate over the elements of the calling collection.
The extraction of the removal condition is expressed by allowing the caller to provide a Predicate<? super E>.
"I'll perform the iteration for you and test your Predicate on each one of the elements in the collection. If an element causes the test method of the Predicate to return true, I'll remove it."
It sounds less like a Java synchronization issue and more like a database locking problem.
I don't know if adding a version to all your persistent classes will sort it out, but that's one way that Hibernate can provide exclusive access to rows in a table.
Could be that isolation level needs to be higher. If you allow "dirty reads", maybe you need to bump up to serializable.
Note that the selected answer cannot be applied to your context directly without some modification if, like me, you are trying to remove some entries from the map while iterating over it.
Here is my working example, to save newcomers some time:
HashMap<Character, Integer> map = new HashMap<>();
//adding some entries to the map
...
int threshold;
//initialize the threshold
...
Iterator<Map.Entry<Character, Integer>> it = map.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<Character, Integer> item = it.next();
    //it.remove() will delete the item from the map
    if (item.getValue() < threshold) {
        it.remove();
    }
}
Try either CopyOnWriteArrayList or CopyOnWriteArraySet depending on what you are trying to do.
I ran into this exception when trying to remove the last x items from a list.
myList.subList(lastIndex, myList.size()).clear(); was the only solution that worked for me.
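For reference, a compilable sketch of that trick (contents invented); subList returns a view backed by the original list, so clearing it removes the trailing range without tripping the fail-fast check:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SubListClearDemo {
    public static void main(String[] args) {
        List<Integer> myList = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
        int x = 2; // number of trailing items to remove
        myList.subList(myList.size() - x, myList.size()).clear();
        System.out.println(myList); // [1, 2, 3]
    }
}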
