This question arises after reading this one: what is the difference between synchronized and unsynchronized objects? Why do unsynchronized objects perform better than synchronized ones?
Hashtable is considered synchronized because its methods are marked as synchronized. Whenever a thread enters a synchronized method or a synchronized block, it first has to get exclusive control over the monitor associated with the object instance being synchronized on. If another thread is already in a synchronized block on the same object, this causes the thread to block, which is a performance penalty, as others have mentioned.
However, the synchronized block also does memory synchronization before and after, which has memory-cache implications and also restricts code reordering/optimization, both of which have significant performance implications. So even if only a single thread enters the synchronized block (i.e. no blocking), it will still run slower than code with no synchronization at all.
One of the real performance improvements with threaded programs comes from separate high-speed CPU memory caches. When a threaded program does memory synchronization, the blocks of cached memory that have been updated need to be written to main memory, and any updates made to main memory invalidate local cached memory. By synchronizing more, even in a single-threaded program, you will see a performance hit.
As an aside, Hashtable is an older class. If you want a thread-safe Map, ConcurrentHashMap should be used instead.
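For example, here is a minimal sketch (class name and map contents are made up for illustration) of swapping a Hashtable for a ConcurrentHashMap:

```java
import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapChoice {
    public static void main(String[] args) {
        // Legacy approach: every get/put locks the whole table.
        Map<String, Integer> legacy = new Hashtable<>();
        legacy.put("hits", 1);

        // Preferred approach: internal lock striping / CAS, so readers and
        // writers on different keys rarely block each other.
        Map<String, Integer> modern = new ConcurrentHashMap<>();
        modern.put("hits", 1);
        modern.merge("hits", 1, Integer::sum); // atomic read-modify-write
        System.out.println(modern.get("hits")); // prints 2
    }
}
```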
Put simply, a synchronized object follows a single-thread model: if two threads want to modify the synchronized object and the first one gets the object's lock, the second one has to wait. But if the object is unsynchronized, both threads can operate on it at the same time, which is why unsynchronized access is unsafe.
For synchronization to work, the JVM has to prevent more than one thread from entering a synchronized block at a time. This requires extra processing compared to code without a synchronized block, placing additional load on the JVM and therefore reducing performance.
The exact locking mechanisms in play when synchronization occurs are explained in How the Java virtual machine performs thread synchronization.
Synchronization:
ArrayList is non-synchronized, which means multiple threads can work on an ArrayList at the same time. For example, if one thread is performing an add operation on an ArrayList, another thread can be performing a remove operation on it at the same time in a multi-threaded environment,
while Vector is synchronized. This means that if one thread is working on a Vector, no other thread can get hold of it. Unlike ArrayList, only one thread can perform an operation on a Vector at a time.
Performance:
Synchronized operations consume more time than non-synchronized ones, so if there is no need for thread-safe operation, ArrayList is a better choice; performance improves because threads do not have to wait for one another.
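A small illustrative sketch (class and contents invented) of the difference, plus the Collections.synchronizedList wrapper, which gives an ArrayList roughly Vector-like behaviour:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Vector;

public class ListChoice {
    public static void main(String[] args) {
        // Vector: every method acquires the Vector's own monitor.
        List<String> vector = new Vector<>();
        vector.add("a");

        // ArrayList: no locking at all -- fastest, but only safe when
        // confined to a single thread.
        List<String> plain = new ArrayList<>();
        plain.add("a");

        // If an ArrayList must be shared, wrap it so every call is
        // synchronized on the wrapper.
        List<String> shared = Collections.synchronizedList(new ArrayList<>());
        shared.add("a");

        // Iteration over the wrapper still needs manual locking:
        synchronized (shared) {
            for (String s : shared) {
                System.out.println(s);
            }
        }
    }
}
```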
Synchronization is useful because it allows you to prevent the same code from being run by two threads at the same time (i.e. it restricts concurrency). This is important in a threaded environment for a multitude of reasons. In order to provide this guarantee the JVM has to do extra work, which means that performance decreases. Because synchronization requires that only one thread be allowed to execute the protected code at a time, it can cause multi-threaded programs to run as slowly as (or slower than!) single-threaded programs.
It is important to note that the amount of performance decrease is not always obvious. Depending on the circumstances, the decrease may be tiny or huge. This depends on all sorts of things.
Finally, I'd like to add a short warning: concurrent programming using synchronization is hard. I've found that other concurrency controls usually suit my needs better. One of my favorites is AtomicReference. This utility is great because it very narrowly limits the amount of synchronized code, which makes it easier to read, write and maintain.
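For example, a minimal sketch (the holder class and values are invented for illustration) of how AtomicReference narrows the concurrent code down to a single compare-and-set:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ConfigHolder {
    // Holds the current, immutable configuration value.
    private final AtomicReference<String> current = new AtomicReference<>("default");

    // Any thread may read the latest value without locking.
    public String get() {
        return current.get();
    }

    // Replace the value only if nobody else changed it in the meantime.
    public boolean replace(String expected, String update) {
        return current.compareAndSet(expected, update);
    }

    public static void main(String[] args) {
        ConfigHolder holder = new ConfigHolder();
        System.out.println(holder.replace("default", "tuned")); // true
        System.out.println(holder.get());                        // tuned
    }
}
```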
Related
I just want to understand why we really need thread safety with collections. I know that if there are two threads, and the first thread is iterating over the collection while the second thread is modifying it, then the first thread will get a ConcurrentModificationException.
But what if I know that none of my threads will iterate over the collection using an iterator? Does that mean thread safety is only needed because we want to allow other threads to iterate over the collection using an iterator? Are there any other reasons and use cases?
Any sort of reading and any sort of writing in two different threads can cause issues if you're not using a thread safe collection.
Iterating is a kind of reading, but so are List.get, Set.contains, Map.get, and many other operations.
(Iterating is a case in which Java can easily detect thread safety issues -- though multiple threads are rarely the actual reason for ConcurrentModificationException.)
I know that if there are two threads, and the first thread is iterating over the collection while the second thread is modifying it, then the first thread will get a ConcurrentModificationException.
That's not true. ConcurrentModificationException is for the situation where you iterate through a collection and change it at the same time.
Thread safety is a complex concept that includes several parts, so it won't be easy to explain why we need it.
The main thing is that, for reasons outside the scope of this discussion, changes made in one thread may not be visible in another.
I just want to understand why do we really need thread safety with collections?
We need thread safety with any object that is being modified. Period. If two threads are sharing the same object and one thread makes a modification, there is no guarantee that the update to the object will be seen by the other thread and there are possibilities that the object may be partially updated causing exceptions, hangs, or other unexpected results.
One of the speedups that is gained with threads is local CPU cached memory. Each thread running in a CPU has local cached memory that is much faster than system memory. The cache is used as much as possible for high speed calculations and then invalidated or flushed to system memory when necessary. Without locking and memory synchronization, each thread could be working with invalid memory and could experience race conditions.
This is why threaded programs need to use concurrent collections (think ConcurrentHashMap) or protect collections (or any mutable object) using synchronized locks or other mechanisms. These ensure that the objects can't be modified at the same time and ensure that the modifications are published between threads appropriately.
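A rough sketch of those two options (the class and field names are made up for the example): either use a concurrent collection, or guard a plain one with a single lock so that updates are both exclusive and published to other threads:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCounts {
    // Option 1: a concurrent collection handles locking and publication for you.
    private final Map<String, Integer> concurrent = new ConcurrentHashMap<>();

    public void countConcurrent(String word) {
        concurrent.merge(word, 1, Integer::sum);
    }

    // Option 2: a plain collection guarded by one lock for *every* access.
    private final Map<String, Integer> plain = new HashMap<>();

    public synchronized void countLocked(String word) {
        plain.merge(word, 1, Integer::sum);
    }

    public synchronized int getLocked(String word) {
        return plain.getOrDefault(word, 0);
    }
}
```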
I am reading a comparison between reentrant locks and synchronized blocks in Java, going through various resources on the internet. One disadvantage of reentrant locks over synchronized blocks that I discovered is that with the former you have to explicitly use a try/finally block and call unlock() on the acquired lock in the finally block, because your critical section might throw an exception, and it can cause big trouble if the thread doesn't release the lock. With the latter, the JVM itself takes care of releasing the lock in case of an exception.
I am not very convinced by this disadvantage, because it's not a big deal to use a try/finally block; we have been using it for a long time, for example when closing streams. Can somebody tell me some other disadvantages of reentrant locks over synchronized blocks?
A ReentrantLock is a different tool for a different use case. While you can use both for most synchronization issues (that's what they are made for), they come with different advantages and disadvantages.
synchronized is as simple as it gets: you write synchronized and that's it. With modern JVMs it is reasonably fast, but it has the drawback that it puts all threads that try to enter the synchronized block on hold, whether they actually need to or not. If you use synchronized too often, this can dramatically reduce the speed of multi-threading, in the worst case down to a point where single-threaded execution would have been faster.
Since threading issues only occur if someone is writing while someone else is reading or writing the same data, programs often run into the situation that they could theoretically run without synchronization, because most threads just read, but there is the occasional write that forces the synchronized block. This is what the locks were made for: they give you finer control over when you actually synchronize.
The basic ReentrantLock lets you decide - besides a fairness parameter in the constructor - when you release the lock, and you can do so at multiple points, whenever it suits you best. Other variations such as ReentrantReadWriteLock allow many concurrent reads, blocking only when there is a write. The downside is that this is implemented in Java code, which makes it noticeably slower than the "native" synchronized block. That said, you should only use it if you know that the optimization gained by using the lock is bigger than the loss.
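A minimal sketch of the read/write variant mentioned above (the cache class is invented for the example): many readers proceed in parallel, a writer briefly excludes everyone, and every unlock() sits in a finally block:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadMostlyCache {
    private final Map<String, String> data = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public String get(String key) {
        lock.readLock().lock();          // many threads may hold the read lock at once
        try {
            return data.get(key);
        } finally {
            lock.readLock().unlock();    // always release in finally
        }
    }

    public void put(String key, String value) {
        lock.writeLock().lock();         // exclusive: blocks readers and writers
        try {
            data.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```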
Under normal circumstances you can only tell the difference in speed if you actually measure it, by running a profiler before and after the change.
synchronized is almost always faster under low or minimal contention, because it allows the JVM to perform optimizations such as biased locking, lock elision and others. Here are some more details about how it works:
Let's assume some monitor is held by thread A, and thread B requests this monitor. In that case the monitor changes its state to inflated. In short, this means that all threads trying to acquire this monitor will be put into a wait set at the OS level, which is quite expensive.
Now, if thread A released the monitor before thread B requested it, a so-called rebias operation will be performed by a cheap (on modern CPUs) compare-and-swap operation.
Let's take a look at ReentrantLock now. Every call to lock() or lockInterruptibly() causes a locking attempt done via a CAS operation.
Conclusion: in low-contention cases, prefer synchronized. In high-contention cases, prefer ReentrantLock. For all cases in between, it is hard to say for sure; consider running benchmarks to find out which solution is faster.
I'm new to Java.
I'm a little bit confused about the difference between thread-safe and synchronized.
Thread-safe means that a method or class instance can be used by multiple threads at the same time without any problems occurring.
Whereas synchronized means only one thread can operate at a time.
So how are they related to each other?
The definition of thread safety given in Java Concurrency in Practice is:
A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
For example, a java.text.SimpleDateFormat object has internal mutable state that is modified when a method that parses or formats is called. If multiple threads call the methods of the same dateformat object, there is a chance a thread can modify the state needed by the other threads, with the result that the results obtained by some of the threads may be in error. The possibility of having internal state get corrupted causing bad output makes this class not threadsafe.
There are multiple ways of handling this problem. You can have every place in your application that needs a SimpleDateFormat object instantiate a new one every time it needs one, you can make a ThreadLocal holding a SimpleDateFormat object so that each thread of your program can access its own copy (so each thread only has to create one), you can use an alternative to SimpleDateFormat that doesn't keep state, or you can do locking using synchronized so that only one thread at a time can access the dateFormat object.
Locking is not necessarily the best approach, avoiding shared mutable state is best whenever possible. That's why in Java 8 they introduced a date formatter that doesn't keep mutable state.
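As a hedged sketch of the options above (the class name and pattern are just for illustration): a per-thread SimpleDateFormat via ThreadLocal, versus a single shared, immutable java.time DateTimeFormatter:

```java
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class DateFormatting {
    // Each thread gets its own SimpleDateFormat, so the mutable state
    // inside it is never shared between threads.
    private static final ThreadLocal<SimpleDateFormat> PER_THREAD =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    // DateTimeFormatter (Java 8+) is immutable and therefore thread-safe;
    // a single shared instance is fine.
    private static final DateTimeFormatter SHARED =
            DateTimeFormatter.ofPattern("yyyy-MM-dd");

    public static void main(String[] args) {
        System.out.println(PER_THREAD.get().format(new Date()));
        System.out.println(SHARED.format(LocalDate.now()));
    }
}
```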
The synchronized keyword is one way of restricting access to a method or block of code so that otherwise thread-unsafe data doesn't get corrupted. It protects the method or block by requiring that a thread acquire exclusive access to a certain lock before it can enter (the object instance if synchronized is on an instance method, the Class object if it is on a static method, or the specified lock if using a synchronized block), and it also provides memory visibility so that threads don't see stale data.
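To make the three lock choices concrete (the class and counters are hypothetical), here is a minimal sketch:

```java
public class SynchronizedForms {
    private static int classCounter;          // guarded by SynchronizedForms.class
    private int instanceCounter;              // guarded by this
    private final Object partialLock = new Object();
    private int blockCounter;                 // guarded by partialLock

    // Instance method: the lock is the object instance (this).
    public synchronized void incrementInstance() {
        instanceCounter++;
    }

    // Static method: the lock is the Class object (SynchronizedForms.class).
    public static synchronized void incrementClass() {
        classCounter++;
    }

    // Block form: the lock is whatever object you specify, and only the
    // block itself is protected.
    public void incrementBlock() {
        synchronized (partialLock) {
            blockCounter++;
        }
    }
}
```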
Thread safety is a desired property of a program, and the synchronized block is one way to help you achieve it. There are other ways of obtaining thread safety, e.g. immutable classes/objects. Hope this helps.
Thread safety: a thread-safe program protects its data from memory consistency errors. In a highly multi-threaded program, a thread-safe program does not cause any side effects from multiple read/write operations on shared data (objects) by multiple threads. Different threads can share and modify object data without consistency errors.
synchronized is one basic way of achieving thread-safe code.
Refer to the SE questions below for more details:
What does 'synchronized' mean?
You can achieve thread safety by using the high-level concurrency API. This documentation page provides good programming constructs for achieving thread safety.
Lock Objects support locking idioms that simplify many concurrent applications.
Concurrent Collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic Variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
Refer to java.util.concurrent and java.util.concurrent.atomic packages too for other programming constructs.
Related SE question:
Synchronization vs Lock
Synchronized: only one thread can operate at a time.
Threadsafe: a method or class instance can be used by multiple threads at the same time without any problems occurring.
If you rephrase the question as "Why are synchronized methods thread-safe?" you can get a better idea.
Going by the definitions alone this appears confusing, but it is not if you look at it analytically.
Synchronized means: sequentially, one by one, in order - not concurrently [not at the same time].
A synchronized method does not allow another thread to act on it while one thread is already working on it. This avoids concurrent access.
Example of synchronization: if you want to buy a movie ticket, you stand in a queue. You will get your ticket only after the person in front of you gets theirs.
Thread-safe means: a method is safe to be accessed by multiple threads at the same time without any problem. The synchronized keyword is one way to achieve thread safety. But remember: when multiple threads try to access a synchronized method, they follow an order, so access becomes safe. They may act at the same time, but they cannot access the same resource (method/block) at the same time, because of the synchronized behaviour of the resource.
If a method is synchronized, it becomes safe to allow multiple threads to act on it without any problem. Remember: multiple threads do not act on it at the same time, hence we call synchronized methods thread-safe.
Hope this helps you understand.
After patiently reading through a lot of answers, and without being too technical, I could say something definite but close to what Nayak had already replied to fastcodejava above, which comes later on in my answer:
synchronization is not even close to brute-forcing thread-safety; it's just making a piece of code (or method) safe and incorruptible for a single authorized thread by preventing it from being used by any other threads.
Thread safety is about all threads that access a certain element behaving and getting their desired results as if they had run sequentially (or even when they have not), without any form of undesired corruption (sorry for the pleonasm), as in an ideal world.
One of the ways of achieving proximity to thread-safety would be using classes in java.util.concurrent.atomic.
Sadly, they don't have final methods though!
Nayak, when we declare a method as synchronized, all other calls to it from other threads are blocked and can wait indefinitely. Java also provides other means of locking with Lock objects now.
You can also declare a field final or volatile to guarantee its visibility to other concurrent threads.
ref: http://www.javamex.com/tutorials/threads/thread_safety.shtml
In practice, performance-wise, thread-safe/synchronized and non-thread-safe/non-synchronized classes are ordered as:
Hashtable (slowest) < Collections.synchronizedMap < HashMap (fastest)
While reading about concurrency in Java, I have the following doubts:
Does Java provide a lower-level construct than synchronized for synchronization?
In what circumstances would we use a Semaphore over synchronized (which provides monitor behaviour in Java)?
Synchronized allows only one thread of execution to access the resource at the same time. Semaphore allows up to n (you get to choose n) threads of execution to access the resource at the same time.
There is also the volatile keyword; according to http://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html, volatile variable access is more efficient than accessing those variables through synchronized code.
java.util.concurrent.Semaphore is used to restrict the number of threads that can access a resource. That is, while synchronized allows only one thread to acquire the lock and execute the synchronized block/method, a Semaphore gives up to n threads permission to proceed and blocks the others.
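A small sketch of the "up to n threads" behaviour (the permit count and the stand-in work are made up):

```java
import java.util.concurrent.Semaphore;

public class LimitedResource {
    // At most 3 threads may use the resource at the same time.
    private final Semaphore permits = new Semaphore(3);

    public void use() throws InterruptedException {
        permits.acquire();            // blocks if all 3 permits are already taken
        try {
            // ... access the limited resource here ...
            Thread.sleep(100);        // stand-in for real work
        } finally {
            permits.release();        // always give the permit back
        }
    }
}
```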
There are also atomics. These give access to the basic hardware compare-and-swap instruction that's the basis of all synchronization. They allow you, for example, to increment a number safely. If you ++ a volatile field, another thread executing the same instruction could read the field before your thread writes to it, then write back to it after your thread, so one increment gets lost. Atomics do the read and write "atomically" and so avoid the problem.
Actually, volatiles, synchronized statements, and atomics tend to force all thread data to be refreshed from main memory and/or written to main memory as appropriate, so none of them are really that low level. (I'm simplifying here. Unlike C#, Java does not really have a concept of "main memory".)
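To make the lost-increment example concrete (the field names are illustrative), here is a hedged sketch contrasting a volatile ++ with an AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Counters {
    // Visible to all threads, but ++ is still a separate read then write:
    // two threads can read the same value and one increment gets lost.
    private volatile int unsafe;

    // The increment is done as a single atomic compare-and-swap.
    private final AtomicInteger safe = new AtomicInteger();

    public void bump() {
        unsafe++;                 // racy
        safe.incrementAndGet();   // atomic
    }
}
```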
What are the disadvantages of making a large Java non-static method synchronized? Large in the sense that it takes 1 to 2 minutes to complete execution.
If you synchronize the method and try to call it twice at the same time, one thread will have to wait two minutes.
This is not really a question of "disadvantages". Synchronization is either necessary or not, depending on what the method does.
If it is critical that the code runs only once at the same time, then you need synchronization.
If you want to run the code only once at the same time to preserve system resources, you may want to consider a counting Semaphore, which gives more flexibility (such as being able to configure the number of concurrent executions).
Another interesting aspect is that synchronization can only really be used to control access to resources within the same JVM. If you have more than one JVM and need to synchronize access to a shared file system or database, the synchronized keyword is not at all sufficient. You will need to get an external (global) lock for that.
If the method takes on the order of minutes to execute, then it may not need to be synchronized at such a coarse level, and it may be possible to use a more fine-grained system, perhaps by locking only the portion of a data structure that the method is operating on at the moment. Certainly, you should try to make sure that your critical section isn't really 2 minutes long - any method that takes that long to execute (regardless of the presence of other threads or locks) should be carefully studied as a candidate for parallelization. For a computation this time-consuming, you could be acquiring and releasing hundreds of locks and still have it be negligible. (Or, to put it another way, even if you need to introduce a lot of locks to parallelize this code, the overhead probably won't be significant.)
Since your method takes a huge amount of time to run, the relatively tiny amount of time it takes to acquire the synchronized lock should not be important.
A bigger problem could appear if your program is multithreaded (which I'm assuming it is, since you're making the method synchronized) and more than one thread needs to access that method: it could become a bottleneck. To prevent this, you might be able to rewrite the method so that it does not require synchronization, or use a synchronized block to reduce the size of the protected code (in general, the smaller the amount of code protected by the synchronized keyword, the better), for example as sketched below.
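A rough sketch of that advice (the class and the computation are invented): do the slow work outside the lock and only guard the shared update:

```java
import java.util.ArrayList;
import java.util.List;

public class ReportBuilder {
    private final List<String> results = new ArrayList<>();

    // Coarse: the whole long-running computation holds the lock.
    public void computeAndStoreCoarse(String input) {
        synchronized (results) {
            results.add(expensiveComputation(input));
        }
    }

    // Fine: compute without the lock, then briefly lock to publish the result.
    public void computeAndStoreFine(String input) {
        String report = expensiveComputation(input);   // no lock held here
        synchronized (results) {
            results.add(report);                       // short critical section
        }
    }

    private String expensiveComputation(String input) {
        return input.toUpperCase();                    // stand-in for real work
    }
}
```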
You can also look at the java.util.concurrent classes, as you may find a better solution there as well.
If the object is shared by multiple threads, if one thread tries to call the synchronized method on the object while another's call is in progress, it will be blocked for 1 to 2 minutes. In the worst case, you could end up with a bottleneck where the throughput of your system is dominated by executing these computations one at a time.
Whether this is a problem or not depends on the details of your application, but you probably should look at more fine-grained synchronization ... if that is practical.
In simple terms, the two main disadvantages of synchronized methods in Java are:
Increased waiting time for threads
Performance problems
The first drawback is that threads blocked waiting to execute synchronized code can't be interrupted. Once they're blocked, they're stuck there until they get the lock for the object the code is synchronizing on.
The second drawback is that a synchronized block must be contained within a single method; in other words, we can't start a synchronized block in one method and end it in another, for obvious reasons.
The third drawback is that we can't test whether an object's intrinsic lock is available, or find out any other information about the lock; nor can we time out if the lock isn't available after waiting for a while. When we reach the beginning of a synchronized block, we can either get the lock and continue executing, or block at that line of code until we get the lock.
The fourth drawback is that if multiple threads are waiting to get the lock, it's not first come, first served. There is no set order in which the JVM will choose the next thread that gets the lock, so the first thread that blocked could be the last thread to get the lock, and vice versa.
So instead of using synchronization, we can prevent thread interference using classes that implement the java.util.concurrent.locks.Lock interface, as sketched below.
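Here is a minimal sketch (the guard class is hypothetical) of how a java.util.concurrent.locks.Lock addresses those points: the wait can be time-limited, interrupted, and made fair:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleGuard {
    // true = fair ordering: the longest-waiting thread gets the lock next.
    private final Lock lock = new ReentrantLock(true);

    public boolean updateWithTimeout() throws InterruptedException {
        // Give up instead of blocking forever if the lock isn't free in time.
        if (!lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            return false;
        }
        try {
            // ... protected work ...
            return true;
        } finally {
            lock.unlock();
        }
    }

    public void updateInterruptibly() throws InterruptedException {
        // Unlike entering a synchronized block, this wait responds to interruption.
        lock.lockInterruptibly();
        try {
            // ... protected work ...
        } finally {
            lock.unlock();
        }
    }
}
```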