Questions around synchronization in Java; when/how/to what extent

I am working on my first multithreaded program and got stuck on a couple of aspects of synchronization. I have gone over the multithreading tutorial on the Oracle/Sun homepage, as well as a number of questions here on SO, so I believe I have an idea of what synchronization is. However, as I mentioned, there are a couple of aspects I am not quite sure how to figure out. I have formulated them below as clear-cut questions:
Question 1: I have a singleton class that holds methods for checking valid identifiers. It turns out this class needs to hold two collections to keep track of associations between two different identifier types. (If the word identifier sounds complicated: these are just strings.) I chose to use two MultiValueMap instances to implement this many-to-many relationship. I am not sure whether these collections have to be thread-safe, as they will be updated only when the singleton instance is created, but nevertheless I noticed that the documentation says:
Note that MultiValueMap is not synchronized and is not thread-safe. If you wish to use this map from multiple threads concurrently, you must use appropriate synchronization. This class may throw exceptions when accessed by concurrent threads without synchronization.
Could anyone elaborate on this "appropriate synchronization"? What exactly does it mean? Can I just use MultiValueMap.decorate() on a synchronized HashMap, or have I misunderstood something?
Question 2: I have another class that extends HashMap to hold my experimental values, which are parsed in when the software starts. This class is meant to provide appropriate methods for my analysis, such as permutation(), randomization(), filtering(criteria), etc. Since I want to protect my data as much as possible, the class is created and updated once, and all the above-mentioned methods return new collections. Again, I am not sure if this class needs to be thread-safe, as it's not supposed to be updated from multiple threads, but the methods will most certainly be called from a number of threads, and to be "safe" I have added the synchronized modifier to all my methods. Can you foresee any problems with that? What kind of potential problems should I be aware of?
Thanks,

Answer 1: Your singleton class should not expose the collections it uses internally to other objects. Instead it should provide appropriate methods to expose the behaviours you want. For example, if your object has a Map in it, don't have a public or protected method to return that Map. Instead have a method that takes a key and returns the corresponding value in the Map (and optionally one that sets the value for the key). These methods can then be made thread safe if required.
NB even for collections that you do not intend to write to, I don't think you should assume that reads are necessarily thread safe unless they are documented to be so. The collection object might maintain some internal state that you don't see, but might get modified on reads.
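For example, a minimal sketch of that kind of encapsulation (class and method names are invented here, and a plain Map stands in for the MultiValueMap):

    import java.util.HashMap;
    import java.util.Map;

    public final class IdentifierRegistry {

        // The internal map is never handed out to callers.
        private final Map<String, String> typeAToTypeB = new HashMap<>();

        // Expose behaviour, not the collection itself.
        public synchronized String lookup(String identifierA) {
            return typeAToTypeB.get(identifierA);
        }

        public synchronized void associate(String identifierA, String identifierB) {
            typeAToTypeB.put(identifierA, identifierB);
        }
    }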
Answer 2: Firstly, I don't think that inheritance is necessarily the correct thing to use here. I would have a class that provides your methods and has a HashMap as a private member. As long as your methods don't change the internal state of the object or the HashMap, they won't have to be synchronised.

It's hard to give general rules about synchronization, but your general understanding is right. A data structure that is used in a read-only way does not have to be synchronized. But (1) you have to ensure that nobody (i.e. no other thread) can use this structure before it is properly initialized, and (2) that the structure is indeed read-only. Remember, even iterators have a remove() method.
To your second question: in order to ensure immutability, i.e. that it is read-only, I would not inherit from HashMap but use it inside your class.
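As a rough sketch of that composition approach (all names invented), the experimental values live in a private map and every query hands back a fresh collection:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public final class ExperimentData {

        private final Map<String, Double> values = new HashMap<>();

        ExperimentData(Map<String, Double> parsed) {
            values.putAll(parsed); // filled once at start-up, never modified afterwards
        }

        // Returns a new collection; callers cannot touch the internal map.
        public List<Double> filtered(double threshold) {
            List<Double> result = new ArrayList<>();
            for (Double v : values.values()) {
                if (v >= threshold) {
                    result.add(v);
                }
            }
            return result;
        }
    }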

Synchronization commonly is needed when you either could have concurrent modifications of the underlying data or one thread modifies the data while another reads and needs to see that modification.
In your case, if I understand it correctly, the MultiValueMap is filled once upon creation and then just read. So unless reading the map modifies some of its internals, it should be safe to read it from multiple threads without synchronization. The creation process should be synchronized, or you should at least prevent read access during initialization (a simple flag might be sufficient).
The class you describe in question 2 might not need to be synchronized if you always return new collections and no internals of the base collection are modified during creation of those "copies".
One additional note: be aware that the values in the collections might need to be synchronized as well, since if you safely get an object from the collection in multiple threads but then concurrently modify that object, you'll still get problems.
So as a general rule of thumb: read-only access does not necessarily need synchronization (if the objects are not modified during those reads or if that doesn't matter), write access should generally be synchronized.
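One common way to get both guarantees (safe initialization and genuinely read-only reads) is to build the map first and then publish only an unmodifiable wrapper through a final field. A sketch, assuming the map can be fully populated inside the constructor:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public final class IdentifierAssociations {

        // final + fully populated before the constructor returns
        // => safely visible to any thread that obtains this object afterwards
        private final Map<String, String> associations;

        public IdentifierAssociations(Map<String, String> initial) {
            Map<String, String> tmp = new HashMap<>(initial);
            this.associations = Collections.unmodifiableMap(tmp);
        }

        public String lookup(String key) {
            return associations.get(key); // read-only access, no locking needed
        }
    }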

If your maps are populated once, at the time the class is loaded (i.e. in a static initializer block), and are never modified afterwards (i.e. no elements or associations are added / removed), you are fine. Static initialization is guaranteed to be performed in a thread safe manner by the JVM, and its results are visible to all threads. So in this case you most probably don't need any further synchronization.
If the maps are instance members (this is not clear to me from your description), but are not modified after creation, I would say again you are most probably safe if you declare the members final (unless you publish the this reference prematurely, i.e. somehow pass it to the outside world before the constructor has finished).
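To illustrate both cases with made-up names and dummy entries:

    import java.util.HashMap;
    import java.util.Map;

    public final class IdentifierTables {

        // Case 1: populated in a static initializer; the JVM guarantees this runs
        // safely before any thread can use the class.
        private static final Map<String, String> STATIC_TABLE = new HashMap<>();
        static {
            STATIC_TABLE.put("idA1", "idB1");
            STATIC_TABLE.put("idA2", "idB2");
        }

        // Case 2: an instance member; declaring it final and not leaking 'this'
        // from the constructor gives the same "visible once constructed" guarantee.
        private final Map<String, String> instanceTable;

        public IdentifierTables(Map<String, String> initial) {
            this.instanceTable = new HashMap<>(initial);
        }
    }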

Related

Should I lock a whole object in a function modifying a small part of it?

I'm beginning with multithreading in Java, and was confronted to the following problem.
Let's say we have an object containing a collection of integers, and other variables.
I want to create a function modifying one of the collection's integers.
Should it lock the whole object, or solely the collection?
I understand locking the whole object would work, but I'm afraid I'll see a drop in performance if other threads try to access the other variables of my object.
Should it lock the whole object, or solely the collection?
This will mainly depend on what you actually have in this class. As already mentioned, you need to synchronize access to all shared mutable state, so here, assuming that your whole object is shared mutable state, you will indeed need to protect any read and write access to any of its fields with an explicit or intrinsic lock.
Now the question is: how are those fields linked to each other?
Let's say that your fields are a List of carrots and a List of cars; they have nothing in common except being shared mutable lists. In this case you could use one dedicated lock per field.
But if you have, for example, the Euro-to-Dollar rate and the Dollar-to-Euro rate, which are directly connected to each other, then since you need to ensure the consistency of both fields, you will need to use the same lock for both of them.
So to summarize: if all your fields are connected to each other, using one lock for all of them is the way to go; otherwise, use one dedicated lock per group of fields that are related to each other.
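A sketch of the two situations (field and lock names are invented):

    import java.util.ArrayList;
    import java.util.List;

    public class SharedState {

        // Unrelated fields: one dedicated lock each.
        private final Object carrotsLock = new Object();
        private final List<String> carrots = new ArrayList<>();

        private final Object carsLock = new Object();
        private final List<String> cars = new ArrayList<>();

        // Related fields: both rates must stay consistent, so they share one lock.
        private final Object ratesLock = new Object();
        private double euroToDollar = 1.1;
        private double dollarToEuro = 1 / 1.1;

        public void addCarrot(String c) {
            synchronized (carrotsLock) {
                carrots.add(c);
            }
        }

        public void updateEuroToDollar(double rate) {
            synchronized (ratesLock) {
                euroToDollar = rate;
                dollarToEuro = 1 / rate; // updated together under the same lock
            }
        }
    }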

Thread Safe get method in ConcurrentHashMap

I understand that ConcurrentHashMap allows only a single thread at a time to perform update/write operations on each segment. However, multiple threads are allowed to read values from the map at the same time.
For my project, I want to extend this functionality so that while a value is being read from a particular segment, no update/write operations can take place in that segment until the read is completed.
Any ideas to achieve this?
Just to elaborate on the problem I'm facing right now: after reading a value from the map, I perform certain update operations that are strongly dependent on that read value. Thus, if a separate thread updates a key's value and another thread's get() fails to see the most recently updated value, this will lead to a big mess. So in this case, would extending be a good idea?
My gut says no. Extending ConcurrentHashMap does not sound like a good idea.
One of the most valuable design principles to which you can adhere is called "Separation of Concerns." The main "concern" of a HashMap is to store key/value pairs. Sounds like maintaining consistent relationships between certain data in your program is another concern.
Don't try to address both concerns with a single class. I would create a higher-level class to take care of maintaining the consistent relationships (maybe by using Lock objects), and I would use a plain HashMap or ConcurrentHashMap to store the key/value pairs.
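A minimal sketch of that higher-level class, assuming a simple read-then-update workflow (all names here are illustrative):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.UnaryOperator;

    public class ConsistentStore {

        private final Map<String, Integer> map = new HashMap<>();
        private final ReentrantLock lock = new ReentrantLock();

        // The read and the dependent write happen under one lock,
        // so no other thread can slip an update in between.
        public void readThenUpdate(String key, UnaryOperator<Integer> update) {
            lock.lock();
            try {
                Integer current = map.get(key);
                map.put(key, update.apply(current == null ? 0 : current));
            } finally {
                lock.unlock();
            }
        }
    }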
Extend the ConcurrentHashMap class, and implement the getValue() method by including a synchronized block, so that no access is allowed to other threads until the read operation is completed.
Informally, you can think of a Map as a set of "variables", each "variable" addressed by a key (instead of by the static name of an ordinary variable).
(An array is formally a list of variables, each addressed by an integer index.)
In HashMap, these "variables" are like "plain" variables; if you access a "variable" concurrently, things may go wrong (just like ordinary non-volatile variables)
In ConcurrentHashMap, these "variables" have volatile semantics. Therefore it is "more" safe to use concurrently. For example, a write will be visible to the "subsequent" read.
Of course, volatile is not enough sometimes; for example, we know we cannot use a volatile int for atomic increments (without locking). We need new devices, like AtomicInteger, for atomic operations.
Fortunately, in Java 8, new atomic methods are added to ConcurrentHashMap, so that now we can operate on these "variables" atomically. See if the compute() method may fit your use case.
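For instance, an atomic read-modify-write on a single key with compute() might look like this (assuming Java 8+ and integer values):

    import java.util.concurrent.ConcurrentHashMap;

    public class ComputeExample {
        public static void main(String[] args) {
            ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
            map.put("counter", 0);

            // The whole read + update runs atomically for this key;
            // concurrent writers to the same key cannot interleave.
            map.compute("counter", (key, value) -> value == null ? 1 : value + 1);

            System.out.println(map.get("counter")); // 1
        }
    }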

Java Popsicle Immutable

I am working on a problem where I need to load a large number of inputs to a problem, and process those inputs to create a 'problem space' (i.e. build data structures allowing efficient access to the inputs, etc). Once this initialization is complete, a multi-threaded process kicks off which uses the organized/processed inputs extensively in a concurrent fashion.
For performance reasons, I don't want to lock and synchronize all the read operations in the concurrent phase. What I really want is an immutable object, safe to access by multiple readers simultaneously.
For practical reasons (readability & maintainability) I don't want to make the InputManager a true immutable object (i.e. all fields 'final' and initialized in construction). The InputManager will have numerous data structures (lists and maps), where the objects in each have many circular references to each other. These objects are constructed as 'true' immutable objects. I don't want to have a 14 argument constructor for the InputManager, but I do need the InputManager class to provide a consistent, read-only view of the problem space once constructed.
What I'm going for is 'popsicle immutability' as discussed by Eric Lippert here.
The approach I'm taking relies on using 'package visibility' of all mutating methods, and performing all mutable actions (i.e. construction of the InputManager) within a single package. Getters all have public visibility.
Something like:
public final class InputManager { // final to prevent mutable subclasses

    InputManager() { ... } // package visibility limits who can create one

    HashMap<String, InputA> lookupTable1;
    ...

    void mutatingMethodA(InputA[] inputA) { // default (package) visibility
        // setting up data structures...
    }

    void mutatingMethodB(InputB[] inputB) { // default (package) visibility
        // setting up data structures...
    }

    public InputA getSpecificInput(String param1) {
        ... // access data structures
        return objA; // return the immutable object
    }
}
The overall idea, if I haven't been clear enough, is that I'll construct the InputManager in a single thread, then pass it to multiple threads who will do concurrent work using the object. I want to enforce this 'two-phase' mutable/immutable object lifecycle as well as possible, without doing something too 'cute'. Looking for comments or feedback as to better ways to accomplish this goal, as I'm sure it's not an uncommon use case but I can't find a design pattern that supports it either.
Thanks.
Personally I'd stay with your simple and sufficient approach, but in case you're interested, there is such a thing as a mutable companion idiom. You write an inner class that has mutators, while reusing all the fields and getters from the enclosing instance.
As soon as you lose the mutable companion, the enclosing instance it leaves behind is truly immutable.
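A rough sketch of that idiom (all names invented; it is only meant to show the shape, not a full InputManager):

    import java.util.HashMap;
    import java.util.Map;

    public final class Registry {

        private final Map<String, String> entries = new HashMap<>();

        public String get(String key) {
            return entries.get(key);
        }

        // The mutable companion: the only place with mutators,
        // reusing the enclosing instance's fields.
        public final class Mutator {
            public void put(String key, String value) {
                entries.put(key, value);
            }
        }

        // Package visibility: only the building code can obtain a companion.
        Mutator newMutator() {
            return new Mutator();
        }
    }

Once the building code discards its Mutator, the Registry it leaves behind can only be read through its getters.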
I think you can simply have separate interfaces for your two phases. One for the building part, the other for the reading part. This way, you separate your access patterns cleanly. You can see this as an instance of the interface segregation principle (pdf):
Clients should not be forced to depend upon interfaces that they do not use.
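A sketch of that segregation, with hypothetical interface names: the loading code is given only the writer interface and the worker threads only the reader interface.

    import java.util.HashMap;
    import java.util.Map;

    interface InputWriter {
        void addInput(String key, String value);
    }

    interface InputReader {
        String getInput(String key);
    }

    // One class implements both, but each phase is handed only the
    // interface it is supposed to use.
    class Inputs implements InputWriter, InputReader {
        private final Map<String, String> data = new HashMap<>();

        public void addInput(String key, String value) { data.put(key, value); }
        public String getInput(String key) { return data.get(key); }
    }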
This works as long as the object is safely published and the readers cannot mutate it.
"Publication" here means how the creator makes the object available to readers. For example, the creator puts it in a blocking queue, and readers poll the queue.
It depends on your publication method. I'll bet it's a safe one.
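As one concrete example of a safe publication mechanism, a hand-off through a blocking queue might look like this (types and messages invented for illustration):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class Handoff {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new LinkedBlockingQueue<>();

            // Creator thread: builds the object, then publishes it through the queue.
            Thread creator = new Thread(() -> queue.offer("fully built object"));
            creator.start();

            // Reader: take() establishes a happens-before edge with offer(),
            // so the reader sees the object in its fully constructed state.
            String published = queue.take();
            System.out.println(published);
        }
    }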

safe publication and the advantage of being immutable vs. effectively immutable

I'm re-reading Java Concurrency In Practice, and I'm not sure I fully understand the chapter about immutability and safe publication.
What the book says is:
Immutable objects can be used safely by any thread without additional
synchronization, even when synchronization is not used to publish
them.
What I don't understand is, why would anyone (interested in making his code correct) publish some reference unsafely?
If the object is immutable, and it's published unsafely, I understand that any other thread obtaining a reference to the object would see its correct state, because of the guarantees offered by proper immutability (with final fields, etc.).
But if the publication is unsafe, another thread might still see null or the previous reference after the publication, instead of the reference to the immutable object, which seems to me like something no-one would like.
And if safe publication is used to make sure the new reference is seen by all the threads, then even if the object is just effectively immutable (no final fields, but no way to mutate them), everything is safe again. As the book says:
Safely published effectively immutable objects can be used safely by
any thread without additional synchronization.
So, why is immutability (vs. effective immutability) so important? In what case would an unsafe publication be wanted?
It is desirable to design objects that don't need synchronization for two reasons:
The users of your objects can forget to synchronize.
Even though the overhead is very little, synchronization is not free, especially if your objects are not used often and by many different threads.
Because the above reasons are very important, it is better to learn the sometimes difficult rules and as a writer, make safe objects that don't require synchronization rather than hoping all the users of your code will remember to use it correctly.
Also remember that the author is not saying the object is unsafely published, it is safely published without synchronization.
As for your second question, I just checked, and the book does not promise that another thread will always see the reference to the updated object, just that if it does, it will see a complete object. But I can imagine that if it is published through the constructor of another (Runnable?) object, it will be sweet. That doesn't help with explaining all cases, though.
EDIT:
effectively immutable and immutable
The difference between effectively immutable and immutable is that in the first case you still need to publish the objects in a safe way. For the truly immutable objects this isn't needed. So truly immutable objects are preferred because they are easier to publish for the reasons I stated above.
So, why is immutability (vs. effective immutability) so important?
I think the main point is that truly immutable objects are harder to break later on. If you've declared a field final, then it's final, period. You would have to remove the final in order to change that field, and that should ring an alarm. But if you've initially left the final out, someone could carelessly just add some code that changes the field, and boom - you're screwed - with only some added code (possibly in a subclass), no modification to existing code.
I would also assume that explicit immutability enables the (JIT) compiler to do some optimizations that would otherwise be hard or impossible to justify. For example, when using volatile fields, the runtime must guarantee a happens-before relation with writing and reading threads. In practice this may require memory barriers, disabling out-of-order execution optimizations, etc. - that is, a performance hit. But if the object is (deeply) immutable (contains only final references to other immutable objects), the requirement can be relaxed without breaking anything: the happens-before relation needs to be guaranteed only with writing and reading the one single reference, not the whole object graph.
So, explicit immutability makes the program simpler so that it's both easier for humans to reason and maintain and easier for the computer to execute optimally. These benefits grow exponentially as the object graph grows, i.e. objects contain objects that contain objects - it's all simple if everything is immutable. When mutability is needed, localizing it to strictly defined places and keeping everything else immutable still gives lots of these benefits.
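As a tiny illustration of the fully immutable shape being described (a made-up value class holding only final references to immutable values):

    public final class ExchangeRate {

        private final String currencyPair; // String is itself immutable
        private final double rate;

        public ExchangeRate(String currencyPair, double rate) {
            this.currencyPair = currencyPair;
            this.rate = rate;
        }

        public String getCurrencyPair() { return currencyPair; }
        public double getRate() { return rate; }

        // No setters: once constructed, instances can be shared freely
        // between threads without synchronization.
    }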
I had the exact same question as the original poster when finishing chapters 1-3. I think the authors could have done a better job of elaborating on this.
I think the difference lies therein that the internal state of effectively immutable objects can be observed to be in an inconsistent state when they are not safely published whereas the internal state of immutable objects can never be observed to be in an inconsistent state.
However I do think the reference to an immutable object can be observed to be out of date / stale if the reference is not safely published.
"Unsafe publication" is often appropriate in cases where having other threads see the latest value written to a field would be desirable, but having threads see an earlier value would be relatively harmless. A prime example is the cached hash value for String. The first time hashCode() is called on a String, it will compute a value and cache it. If another thread which calls hashCode() on the same string can see the value computed by the first thread, it won't have to recompute the hash value (thus saving time), but nothing bad will happen if the second thread doesn't see the hash value. It will simply end up performing a redundant-but-harmless computation which could have been avoided. Having hashCode() publish the hash value safely would have been possible, but the occasional redundant hash computations are much cheaper than the synchronization required for safe publication. Indeed, except on rather long strings, synchronization costs would probably negate any benefit from caching.
Unfortunately, I don't think the creators of Java imagined situations where code would write to a field and prefer that the write be visible to other threads, but not mind too much if it isn't, and where the reference stored in the field would in turn identify another object with a similar field. This leads to situations where writing semantically correct code is much more cumbersome, and likely slower, than code which would be likely to work but whose semantics would not be guaranteed. I don't know of any really good remedy for that in some cases, other than using some gratuitous final fields to ensure that things get properly "published".
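To make the shape of that idiom concrete, here is a sketch of a racy single-check cache (illustrative only, not the actual java.lang.String source; the final char[] field is what guarantees the underlying data is safely published even though the cached hash is not):

    public final class CachedName {

        private final char[] chars;
        private int hash; // not volatile, not final: published unsafely on purpose

        public CachedName(String s) {
            this.chars = s.toCharArray();
        }

        @Override
        public int hashCode() {
            int h = hash;
            if (h == 0) {                 // another thread's cached value may not be visible...
                for (char c : chars) {
                    h = 31 * h + c;
                }
                hash = h;                 // ...in which case we just recompute and re-store it
            }
            return h;
        }
    }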

Considering object encapsulation, should getters return an immutable property?

When a getter returns a property, such as a List of related objects, should that list and its objects be immutable, to prevent code outside the class from changing the state of those objects without the main parent object knowing?
For example if a Contact object, has a getDetails getter, which returns a List of ContactDetails objects, then any code calling that getter:
can remove ContactDetail objects from that list without the Contact object knowing of it.
can change each ContactDetail object without the Contact object knowing of it.
So what should we do here? Should we just trust the calling code and return easily mutable objects, or go the hard way and make an immutable class for each mutable class?
It's a matter of whether you should be "defensive" in your code. If you're the (sole) user of your class and you trust yourself then by all means no need for immutability. However, if this code needs to work no matter what, or you don't trust your user, then make everything that is externalized immutable.
That said, most properties I create are mutable. An occasional user botches this up, but then again it's his/her fault, since it is clearly documented that mutation should not occur via mutable objects received via getters.
It depends on the context. If the list is intended to be mutable, there is no point in cluttering up the API of the main class with methods to mutate it when List has a perfectly good API of its own.
However, if the main class can't cope with mutations, then you'll need to return an immutable list - and the entries in the list may also need to be immutable themselves.
Don't forget, though, that you can return a custom List implementation that knows how to respond safely to mutation requests, whether by firing events or by performing any required actions directly. In fact, this is a classic example of a good time to use an inner class.
If you have control of the calling code then what matters most is that the choice you make is documented well in all the right places.
Joshua Bloch in his excellent "Effective Java" book says that you should ALWAYS make defensive copies when returning something like this. That may be a little extreme, especially if the ContactDetails objects are not Cloneable, but it's always the safe way. If in doubt, always favour code safety over performance - unless profiling has shown that the cloning is a real performance bottleneck.
There are actually several levels of protection you can add. You can simply return the member, which is essentially giving any other class access to the internals of your class. Very unsafe, but in fairness widely done. It will also cause you trouble later if you want to change the internals so that the ContactDetails are stored in a Set. You can return a newly-created list with references to the same objects in the internal list. This is safer - another class can't remove or add to the list, but it can modify the existing objects. Thirdly return a newly created list with copies of the ContactDetails objects. That's the safe way, but can be expensive.
I would do this a better way. Don't return a list at all - instead return an iterator over a list. That way you don't have to create a new list (List has a method to get an iterator) but the external class can't modify the list. It can still modify the items, unless you write your own iterator that clones the elements as needed. If you later switch to using another collection internally it can still return an iterator, so no external changes are needed.
In the particular case of a Collection, List, Set, or Map in Java, it is easy to return an immutable view to the class using return Collections.unmodifiableList(list);
Of course, if it is possible that the backing-data will still be modified then you need to make a full copy of the list.
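For example (ContactDetails here is a small stand-in class with a hypothetical copy constructor):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    class ContactDetails {
        final String phone;
        ContactDetails(String phone) { this.phone = phone; }
        ContactDetails(ContactDetails other) { this.phone = other.phone; } // copy constructor
    }

    public class Contact {

        private final List<ContactDetails> details = new ArrayList<>();

        // Read-only view: callers cannot add or remove, but they can still
        // reach (and potentially mutate) the original ContactDetails objects.
        public List<ContactDetails> getDetails() {
            return Collections.unmodifiableList(details);
        }

        // Full defensive copy: callers also get their own copies of the elements.
        public List<ContactDetails> getDetailsCopy() {
            List<ContactDetails> copy = new ArrayList<>();
            for (ContactDetails d : details) {
                copy.add(new ContactDetails(d));
            }
            return copy;
        }
    }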
Depends on the context, really. But generally, yes, one should write as defensive code as possible (returning array copies, returning readonly wrappers around collections etc.). In any case, it should be clearly documented.
I used to return a read-only version of the list, or at least, a copy. But each object contained in the list must be editable, unless they are immutable by design.
I think you'll find that it's very rare for every gettable to be immutable.
What you could do is to fire events when a property is changed within such objects. Not a perfect solution either.
Documentation is probably the most pragmatic solution ;)
Your first imperative should be to follow the Law of Demeter or ‘Tell don't ask’; tell the object instance what to do e.g.
contact.print(printer); // or
contact.show(new Dialog()); // or
contactList.findByName(searchName).print(printer);
Object-oriented code tells objects to do things. Procedural code gets information then acts on that information. Asking an object to reveal the details of its internals breaks encapsulation, it is procedural code, not sound OO programming and as Will has already said it is a flawed design.
If you follow the Law of Demeter approach any change in the state of an object occurs through its defined interface, therefore side-effects are known and controlled. Your problem goes away.
When I was starting out I was still heavily under the influence of HIDE YOUR DATA OO PRINCIPLES LOL. I would sit and ponder what would happen if somebody changed the state of one of the objects exposed by a property. Should I make them read-only for external callers? Should I not expose them at all?
Collections brought out these anxieties to the extreme. I mean, somebody could remove all the objects in the collection while I'm not looking!
I eventually realized that if your objects hold such tight dependencies on their externally visible properties and their types that you go boom when somebody touches them in a bad place, your architecture is flawed.
There are valid reasons to make your external properties readonly and their types immutable. But that is the corner case, not the typical one, imho.
First of all, setters and getters are an indication of bad OO. Generally the idea of OO is you ask the object to do something for you. Setting and getting is the opposite. Sun should have figured out some other way to implement Java beans so that people wouldn't pick up this pattern and think it's "Correct".
Secondly, each object you have should be a world in itself--generally, if you are going to use setters and getters they should return fairly safe independent objects. Those objects may or may not be immutable because they are just first-class objects. The other possibility is that they return native types which are always immutable. So saying "Should setters and getters return something immutable" doesn't make too much sense.
As for making immutable objects themselves, you should virtually always make the members inside your object final unless you have a strong reason not to (Final should have been the default, "mutable" should be a keyword that overrides that default). This implies that wherever possible, objects will be immutable.
As for predefined quasi-object things you might pass around, I recommend you wrap stuff like collections and groups of values that go together into their own classes with their own methods. I virtually never pass around an unprotected collection simply because you aren't giving any guidance/help on how it's used where the use of a well-designed object should be obvious. Safety is also a factor since allowing someone access to a collection inside your class makes it virtually impossible to ensure that the class will always be valid.
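A small example of the kind of wrapping being suggested (class and method names invented):

    import java.util.ArrayList;
    import java.util.List;

    // Instead of handing out a raw List<String>, wrap it in a class whose
    // methods show exactly how the collection may be used.
    public class PhoneNumbers {

        private final List<String> numbers = new ArrayList<>();

        public synchronized void add(String number) {
            if (number == null || number.isEmpty()) {
                throw new IllegalArgumentException("empty phone number");
            }
            numbers.add(number);
        }

        public synchronized int count() {
            return numbers.size();
        }
    }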
