Java: read, write, separate synchronization

I am learning multithreading, and I have a little question.
When I share a variable between threads (an ArrayList, or a primitive like double or float), should it be locked by the same object for both reads and writes? I mean, when one thread is setting the variable's value, can another read it at the same time without any problems? Or should it be locked by the same object, forcing the reading thread to wait until the value has been changed by the other thread?

All access to shared state must be guarded by the same lock, both reads and writes. A read operation must wait for the write operation to release the lock.
As a special case, if all you would do inside your synchronized blocks amounts to exactly one read or one write operation, then you may dispense with the synchronized block and mark the variable as volatile.
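A minimal sketch of both cases (class and field names are illustrative):

// All reads and writes of 'balance' go through the same lock (the instance
// monitor), so a reader always waits for an in-progress write to finish.
class Account {
    private double balance;

    synchronized void setBalance(double b) { balance = b; }
    synchronized double getBalance() { return balance; }
}

// Special case: every access is exactly one read or one write,
// so a volatile field gives visibility without a synchronized block.
class Flag {
    volatile boolean running = true;
}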

Short: It depends.
Longer:
There are many "correct answers" for different scenarios. (That is what makes concurrent programming fun.)
Does the value being read have to be the latest?
Does a written value have to become visible to all readers?
Do I have to take care of race conditions if two threads write?
Will there be any issue if an old/previous value is read?
What is the correct behaviour?
Does it really need to be correct? (Yes, sometimes you really don't care.)
tl;dr
For example, not all threaded programming needs to be "always correct":
sometimes you trade correctness for performance (e.g. a log or a progress counter)
sometimes reading an old value is just fine
sometimes you only need eventual correctness (e.g. in map-reduce, no value is authoritative until everything is done)
in some cases, correctness is mandatory at every moment (e.g. your bank account balance)
with write-once, read-only data it doesn't matter
sometimes threads work in groups, with more complex cases
sometimes many small, independent locks run faster, but sometimes one flat global lock is faster
and many, many other possible cases
Here is my suggestion: if you are learning, ask yourself "why would I need a lock?", "why can a lock help in DIFFERENT cases?" (not just the given sample from a textbook), and "will it fail, or what could happen, if a lock is missing?" The sketch below illustrates that last question.
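A hypothetical demonstration of a missing lock: counter++ is a read-modify-write sequence, so two unsynchronized threads can overwrite each other's updates (class and field names are illustrative).

class LostUpdateDemo {
    static int counter = 0;  // shared state with no lock guarding it

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) counter++;  // not atomic
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter);  // usually prints less than 200000
    }
}

Making the increment synchronized (or using an AtomicInteger) restores the expected result.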

If all threads are reading, you do not need to synchronize.
If one or more threads are reading and one or more are writing, you will need to synchronize somehow. If the collection is small you can use synchronized. You can either add a synchronized block around the accesses to the collection, synchronize the methods that access the collection, or use a thread-safe collection (for example, Vector).
If you have a large collection and you want to allow shared reading but exclusive writing you need to use a ReadWriteLock. See here for the JavaDoc and an exact description of what you want with examples:
ReentrantReadWriteLock
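A minimal sketch of that pattern (the class and method names are illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SharedList {
    private final List<String> items = new ArrayList<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    String get(int index) {
        rwLock.readLock().lock();    // many readers may hold the read lock at once
        try {
            return items.get(index);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    void add(String item) {
        rwLock.writeLock().lock();   // a writer excludes readers and other writers
        try {
            items.add(item);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}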
Note that this question is pretty common and there are plenty of similar examples on this site.

Related

java - what does synchronized really do according to the java memory model?

After reading a little bit about the java memory model and synchronization, a few questions came up:
First: even if Thread 1 synchronizes its writes, then although the effect of the writes will be flushed to main memory, Thread 2 will still not see them because its read comes from the level 1 cache. So synchronizing writes only prevents collisions on writes. (Java thread-safe write-only hashmap)
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads. (https://docs.oracle.com/javase/tutorial/essential/concurrency/syncmeth.html)
A third website (I can't find it again, sorry) said that every change to any object - it doesn't care where the reference comes from - will be flushed to memory when the method leaves the synchronized block and establishes a happens-before situation.
My questions are:
What is really flushed back to memory by exiting the synchronized block? (Some websites also say that only the object whose lock has been acquired will be flushed back.)
What does the happens-before relationship mean in this case? And what will be re-read from memory on entering the block, and what not?
How does a lock achieve this functionality (from https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Lock.html):
All Lock implementations must enforce the same memory synchronization semantics as provided by the built-in monitor lock, as described in section 17.4 of The Java™ Language Specification:
A successful lock operation has the same memory synchronization effects as a successful Lock action.
A successful unlock operation has the same memory synchronization effects as a successful Unlock action.
Unsuccessful locking and unlocking operations, and reentrant locking/unlocking operations, do not require any memory synchronization effects.
If my assumption that everything is re-read and flushed is correct, is this achieved by using synchronized blocks in the lock and unlock functions (which are mostly necessary anyway)? And if it's wrong, how can this functionality be achieved?
Thank you in advance!
The happens-before relationship is the fundamental thing you have to understand, as the formal specification operates in terms of it. Terms like "flushing" are technical details that may help you understand it, or misguide you in the worst case.
If a thread performs action A within a synchronized(object1) { … }, followed by a thread performing action B within a synchronized(object1) { … }, assuming that object1 refers to the same object, there is a happens-before-relationship between A and B and these actions are safe regarding accessing shared mutable data (assuming, no one else modifies this data).
But this is a directed relationship: B can safely access the data modified by A. When seeing two synchronized(object1) { … } blocks, and being sure that object1 is the same object, you still need to know whether A was executed before B or B before A, to know the direction of the happens-before relationship. For ordinary object-oriented code, this usually works naturally, as each action will operate on whatever previous state of the object it finds.
Speaking of flushing, leaving a synchronized block causes flushing of all written data, and entering a synchronized block causes rereading of all mutable data, but without the mutual exclusion guarantee of synchronizing on the same instance, there is no control over which happens before the other. Even worse, you cannot use the shared data to detect the situation: without blocking the other thread, it can still inconsistently modify the data you're operating on.
Since synchronizing on different objects can’t establish a valid happens-before relationship, the JVM’s optimizer is not required to maintain the global flush effect. Most notably, today’s JVMs will remove synchronization, if Escape Analysis has proven that the object is never seen by other threads.
So you can use synchronization on an object to guard access to data stored somewhere else, i.e. not in that object, but it still requires consistent synchronization on the same object instance for all access to the same shared data, which complicates the program logic compared to simply synchronizing on the object containing the guarded data.
Volatile variables, as used by Locks internally, also have a global flush effect, if the threads read and write the same volatile variable and use its value to form correct program logic. This is trickier than with synchronized blocks, as there is no mutual exclusion of code execution; or, well, you could see it as mutual exclusion limited to a single read, write, or CAS operation.
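A minimal sketch of such a volatile-based happens-before edge (the names are illustrative): the write to data happens-before the volatile write of ready, which happens-before a volatile read that observes true, which happens-before the subsequent read of data.

class Publisher {
    private int data;                 // plain field, ordered by the volatile below
    private volatile boolean ready;

    void publish() {                  // thread A
        data = 42;
        ready = true;                 // volatile write
    }

    void consume() {                  // thread B
        if (ready) {                  // volatile read; true means data is visible
            System.out.println(data); // guaranteed to print 42
        }
    }
}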
There is no flush per se; it's just easier to think that way (and easier to draw, too). That's why there are lots of resources online that refer to a flush to main memory (meaning RAM), but in reality it does not happen that often. What really happens is that the load and/or store buffers are drained to the L1 cache (L2 in the case of IBM), and it's up to the cache coherence protocol to sync data from there; to put it differently, the caches are smart enough to talk to each other (via a bus) and not fetch data from main memory all the time.
This is a complicated subject (disclaimer: even though I try to do a lot of reading on this, and a lot of tests when I have time, I absolutely do not understand it in its full glory). It's about potential compiler/CPU/etc. re-orderings (program order is never guaranteed to be respected), about flushes of the buffers, about memory barriers, about release/acquire semantics... I don't think your question is answerable without a PhD-level report; that's why the JLS defines a higher-level concept: "happens-before".
Understanding at least a small portion of the above, you will see that your questions (at least the first two) make very little sense as posed.
What is really flushed back to memory by exiting the synchronized block
Probably nothing at all - caches "talk" to each other to sync data. I can only think of two other cases: when you read some data for the first time, and when a thread dies - then all written data will be flushed to main memory (but I'm not sure).
What does the happens-before relationship mean in this case? And what will be re-read from memory on entering the block, and what not?
Really, the same sentence as above.
How does a lock achieve this functionality
Usually by introducing memory barriers; just like volatiles do.

Biased locking design decision

I am trying to understand the rationale behind biased locking and the decision to make it the default. Since reading this blog post, namely:
"Since most objects are locked by at most one thread during their lifetime, we allow that thread to bias an object toward itself"
I am perplexed... Why would anyone design a synchronized set of methods to be accessed by one thread only? In most cases, people devise building blocks specifically for the multi-threaded use case, not a single-threaded one. In such cases, EVERY lock acquisition by a thread which is not biased comes at the cost of a safepoint, which is a huge overhead! Could someone please help me understand what I am missing in this picture?
The reason is probably that there are a decent number of libraries and classes that are designed to be thread-safe but that are still useful outside of such circumstances. This is especially true of a number of classes that predate the Collections framework. Vector and its subclasses are a good example. If you also consider that most Java programs are not multi-threaded, it is in most cases an overall improvement to use a biased locking scheme; this is especially true of legacy code, where the use of such classes is all too common.
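For instance, perfectly ordinary single-threaded use of a synchronized class acquires a monitor on every call; this is the pattern biased locking makes cheap (a sketch, with Vector standing in for any pre-Collections thread-safe class):

import java.util.Vector;

class SingleThreadedVectorDemo {
    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        for (int i = 0; i < 1_000; i++) {
            v.add("item" + i);  // synchronized method, but only one thread ever calls it
        }
        System.out.println(v.size());
    }
}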
You are correct in a way, but there are cases when this is needed, as Holger very correctly points out in his comment. There is a so-called grace period during which no biasing is attempted at all, so it's not like this will happen all the time. As I recall from last looking at the code, it was 5 seconds. To prove this you would need a library that can inspect a Java object's header (jol comes to mind), since the biased-locking state is held inside the mark word. So only after 5 seconds will an object whose monitor is acquired become biased towards the acquiring thread.
EDIT
I wanted to write a test for this, but seems like there is one already! Here is the link for it

Java avoid race condition WITHOUT synchronized/lock

In order to avoid race conditions, we can synchronize the write and read methods on the shared variables, locking these variables against other threads.
My question is: are there other (better) ways to avoid race conditions? Locks make the program slow.
What I found are:
using Atomic classes, if there is only one shared variable.
using an immutable container for multiple shared variables and declaring this container object volatile. (I found this method in the book "Java Concurrency in Practice".)
I'm not sure if these perform faster than the synchronized way; are there any other, better methods?
thanks
Avoid state.
Make your application as stateless as possible.
Each thread (sequence of actions) should take a context at the beginning and pass this context from method to method as a parameter.
When this technique does not solve all your problems, use an event-driven mechanism (plus a messaging queue).
When your code has to share something with other components, it posts an event (message) to some kind of bus (topic, queue, whatever).
Components can register listeners to listen for events and react appropriately.
In this case there are no race conditions (except for inserting events into the queue). If you are using a ready-made queue, and not coding it yourself, it should be efficient enough.
Also, take a look at the Actors model.
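A minimal sketch of the queue-based hand-off described above (the event type and names are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class EventBusDemo {
    public static void main(String[] args) {
        BlockingQueue<String> events = new LinkedBlockingQueue<>();

        // Threads never share mutable state directly; they exchange
        // messages through a thread-safe queue.
        Thread producer = new Thread(() -> {
            try {
                events.put("user-logged-in");  // the only shared touch point
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                System.out.println("handling " + events.take());  // blocks until an event arrives
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}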
Atomics are indeed more efficient than classic locks due to their non-blocking behavior, i.e. a thread waiting to access the memory location will not be context-switched, which saves a lot of time.
Probably the best guideline when synchronization is needed is to see how you can reduce the critical section size as much as possible. General ideas include:
Use read-write locks instead of full locks when only some of the threads need to write.
Find ways to restructure code in order to reduce the size of critical sections.
Use atomics when updating a single variable.
Note that some algorithms and data structures that traditionally need locks have lock-free versions (they are more complicated however).
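A minimal sketch of the atomic approach (names are illustrative):

import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounterDemo {
    public static void main(String[] args) {
        AtomicInteger hits = new AtomicInteger();
        hits.incrementAndGet();                         // atomic read-modify-write, no lock
        boolean swapped = hits.compareAndSet(1, 10);    // succeeds only if the value is 1
        System.out.println(hits.get() + " " + swapped); // prints "10 true"
    }
}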
Well, first off, the Atomic classes don't remove synchronization so much as push it down: under the hood they rely on volatile semantics and compare-and-swap (CAS) instructions rather than synchronized blocks.
Second, immutability works great for multi-threading; you no longer need monitor locks and such, but that's because you can only read your immutables, you can't modify them.
You can't get rid of synchronized/volatile if you want to avoid race conditions in a multithreaded Java program (i.e. if multiple threads can read AND WRITE the same data). Your best bet, if you want better performance, is to avoid at least some of the built-in thread-safe classes, which do a more generic sort of locking, and make your own implementation that is more tied to your context and thus might allow you to use more granular synchronization and lock acquisition.
Check out this implementation of BlockingCache done by the Ehcache guys:
http://www.massapi.com/source/ehcache-2.4.3/src/net/sf/ehcache/constructs/blocking/BlockingCache.java.html
One of the alternatives is to make shared objects immutable. Check out this post for more details.
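A minimal sketch of that pattern, roughly following the immutable-holder idea from "Java Concurrency in Practice" (the class names are illustrative):

final class Point {
    final int x, y;  // immutable after construction
    Point(int x, int y) { this.x = x; this.y = y; }
}

class Tracker {
    private volatile Point current = new Point(0, 0);

    Point get() { return current; }  // readers always see a consistent (x, y) pair

    void move(int x, int y) {
        current = new Point(x, y);   // replace the holder, never mutate it
    }
}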
You can perform up to 50 million lock/unlock operations per second. If you want this to be more efficient, I suggest using more coarse-grained locking, i.e. don't lock every little thing, but have locks for larger objects. Once you have many more locks than threads, you are less likely to have contention, and having more locks may just add overhead.

if multiple threads are updating the same variable, what should be done so each thread updates the variable correctly?

If multiple threads are updating the same variable, what should I do so each thread updates the variable correctly?
Any help would be greatly appreciated
There are several options:
1) Using no synchronization at all
This can only work if the data is of a primitive type (but not long or double, whose reads and writes are not guaranteed to be atomic), and you don't care about reading stale values (which is unlikely)
2) Declaring the field as volatile
This will guarantee that stale values are never read. It also works fine for objects (assuming the objects aren't changed after creation), because of the happens-before guarantees of volatile variables (See "Java Memory Model").
3) Using java.util.concurrent.AtomicLong, AtomicInteger etc
They are all thread safe, and support special operations like atomic incrementation and atomic compare-and-set operations.
4) Protecting reads and writes with the same lock
This approach provides mutual exclusion, which allows defining a large atomic operation, where multiple data members are manipulated as a single operation.
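Hedged sketches of options 2 through 4 in one illustrative class (option 1 is just a plain field):

import java.util.concurrent.atomic.AtomicLong;

class Counters {
    // Option 2: volatile gives visibility, but no atomic read-modify-write.
    volatile long lastSeen;

    // Option 3: atomic increments without explicit locking.
    final AtomicLong total = new AtomicLong();
    void hit() { total.incrementAndGet(); }

    // Option 4: one lock guards a multi-field invariant.
    private long count;
    private long sum;
    synchronized void record(long value) {
        count++;             // both fields change as one atomic step
        sum += value;
    }
    synchronized double average() {
        return count == 0 ? 0 : (double) sum / count;
    }
}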
This is a major problem with multi-threaded applications, and spans more than I could really cover in an answer, so I'll point you to some resources.
http://download.oracle.com/javase/tutorial/essential/concurrency/sync.html
http://www.vogella.de/articles/JavaConcurrency/article.html#concurrencyjava_synchronized
Essentially, you use the synchronized keyword to place a lock around a block of code. This ensures that the piece of code is run by only one thread at a time. You can also synchronize on the same object in multiple places.
Additionally, you need to look out for several pitfalls, such as Deadlock.
http://tutorials.jenkov.com/java-concurrency/deadlock.html
Errors caused by misuse of locks are often very difficult to debug and track down, because they don't occur consistently. So you always need to be careful that you put all of your locks in the correct locations.
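The classic lock-ordering deadlock mentioned above looks like this sketch; the usual fix is to always acquire the two locks in a globally consistent order.

class DeadlockDemo {
    // Thread 1 calls transfer(a, b) while thread 2 calls transfer(b, a):
    // each grabs its first lock, then waits forever for the other's.
    void transfer(Object from, Object to) {
        synchronized (from) {
            synchronized (to) {
                // ... update both sides ...
            }
        }
    }
}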
You should implement locking on the variable in question.
E.g.
http://download.oracle.com/javase/tutorial/essential/concurrency/newlocks.html

Approach to a thread safe program

All,
What should be the approach to writing a thread safe program. Given a problem statement, my perspective is:
1 > Start off by writing the code for a single-threaded environment.
2 > Underline the fields which would need atomicity and replace them with suitable concurrent classes.
3 > Underline the critical sections and enclose them in synchronized blocks.
4 > Perform tests for deadlocks.
Does anyone have suggestions on other approaches or improvements to mine? So far, I can see myself enclosing most of the code in synchronized blocks, and I am sure this is not correct.
Writing correct multi-threaded code is hard, and there is not a magic formula or set of steps that will get you there. But, there are some guidelines you can follow.
Personally I wouldn't start with writing code for a single threaded environment and then converting it to multi-threaded. Good multi-threaded code is designed with multi-threading in mind from the start. Atomicity of fields is just one element of concurrent code.
You should decide which areas of the code need to be multi-threaded (in a multi-threaded app, typically not everything needs to be thread-safe). Then you need to design how those sections will be thread-safe. The methods for making one area of the code thread-safe may differ from those used in other areas. For example, understanding whether there will be a high volume of reads vs. writes is important and might affect the types of locks you use to protect the data.
Immutability is also a key element of threadsafe code. When elements are immutable (i.e. cannot be changed), you don't need to worry about multiple threads modifying them since they cannot be changed. This can greatly simplify thread safety issues and allow you to focus on where you will have multiple data readers and writers.
Understanding details of concurrency in Java (and details of the Java memory model) is very important. If you're not already familiar with these concepts, I recommend reading Java Concurrency In Practice http://www.javaconcurrencyinpractice.com/.
You should use final and immutable fields wherever possible; wrap any other data that you want to change in a synchronized block:
synchronized (this) {
    // update the shared state here
}
And remember: sometimes stuff breaks, and when that happens you don't want to prolong the program's execution by trying every possible way to work around it - instead, "fail fast".
As you have asked about "thread-safety" and not concurrent performance, your approach is essentially sound. However, a thread-safe program that uses synchronisation probably does not scale well in a multi-CPU environment with any level of contention on your structure/program.
Personally, I like to try to identify the highest-level state changes, think about how to make them atomic, and have the state move from one immutable state to another – copy-on-write if you like. Then the actual write can be either a compare-and-set operation on an atomic variable or a synchronised update, or whatever strategy works/performs best (as long as it safely publishes the new state).
This can be a bit difficult to structure if your new state is quite different (it requires updates to several fields, for instance), but I have seen it very successfully solve concurrent performance issues with synchronised access.
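A minimal sketch of that copy-on-write strategy, using an AtomicReference to publish each new immutable state (the class and field names are illustrative):

import java.util.concurrent.atomic.AtomicReference;

class Stats {
    static final class State {
        final long count;
        final long max;
        State(long count, long max) { this.count = count; this.max = max; }
    }

    private final AtomicReference<State> state =
            new AtomicReference<>(new State(0, Long.MIN_VALUE));

    void record(long value) {
        while (true) {  // retry loop: rebuild the state, then attempt to publish it
            State cur = state.get();
            State next = new State(cur.count + 1, Math.max(cur.max, value));
            if (state.compareAndSet(cur, next)) return;  // safely publishes 'next'
        }
    }
}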
Buy and read Brian Goetz's "Java Concurrency in Practice".
Any variables (memory) accessible by multiple threads potentially at the same time, need to be protected by a synchronisation mechanism.
