Difference between synchronized wrappers and concurrent collections - java

We can synchronize a collection by using 'Collections.synchronizedCollection(Collection c)'
or 'Collections.synchronizedMap(Map m)', and we can also use the java.util.concurrent API, e.g. ConcurrentHashMap, ArrayBlockingQueue, or another BlockingQueue implementation.
Is there any difference in the level of synchronization between these two ways of getting synchronized collections, or are they almost the same?
Could anyone explain?

Yes: speed during massive parallel processing.
This can be illustrated in a very simple way: Imagine that 100 Threads are waiting to take something out of a collection.
The synchronized way: 99 threads are put to sleep and 1 thread gets its value.
The concurrent way: all 100 threads get their values immediately; none is put on hold.
Now, each concurrent operation takes a little more time than a simple get, but as soon as even 2 threads use the collection on a regular basis, that overhead is well worth the time you save through concurrent execution.

So, as per my understanding: the synchronized way is a wrapper that locks the whole collection object, while the concurrent way locks at a finer granularity inside the collection, so two or more threads can access different elements of the collection at the same time.
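
A minimal sketch contrasting the two approaches (class and variable names here are illustrative):

import java.util.*;
import java.util.concurrent.*;

public class WrapperVsConcurrent {
    public static void main(String[] args) {
        // Wrapper: every call locks the single monitor of the wrapper object,
        // so two threads are never inside the map at the same time.
        Map<String, Integer> syncMap =
                Collections.synchronizedMap(new HashMap<>());

        // Concurrent: reads do not block, and writes lock only a small part
        // of the internal structure, so threads working on different keys
        // rarely wait for each other.
        Map<String, Integer> concMap = new ConcurrentHashMap<>();

        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                String key = Thread.currentThread().getName() + i;
                syncMap.put(key, i);   // all threads queue on one lock
                concMap.put(key, i);   // threads mostly proceed in parallel
            }
        };

        for (int t = 0; t < 4; t++) {
            new Thread(work, "t" + t).start();
        }
    }
}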

Related

Why do we need thread safety with collections?

I just want to understand why we really need thread safety with collections. I know that if there are two threads, and the first thread is iterating over the collection while the second thread is modifying it, then the first thread will get a ConcurrentModificationException.
But what if I know that none of my threads will iterate over the collection using an iterator? Does that mean thread safety is only needed because we want to allow other threads to iterate over the collection using an iterator? Are there any other reasons and use cases?
Any sort of reading and any sort of writing in two different threads can cause issues if you're not using a thread safe collection.
Iterating is a kind of reading, but so are List.get, Set.contains, Map.get, and many other operations.
(Iterating is a case in which Java can easily detect thread safety issues -- though multiple threads are rarely the actual reason for ConcurrentModificationException.)
I know that if there are two threads, and the first thread is iterating over the collection while the second thread is modifying it, then the first thread will get a ConcurrentModificationException.
That's not quite true. ConcurrentModificationException is for the situation where you iterate through a collection and change it at the same time; a single thread can trigger it all by itself.
Thread safety is a complex concept made up of several parts, so it isn't easy to explain in full why we need it.
The main point is that, for reasons outside the scope of this discussion (the Java memory model), changes made in one thread may not be visible in another.
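
For example, a single thread can trigger it by removing an element directly from the list while iterating it (a small sketch, not tied to any particular question code):

import java.util.*;

public class CmeSingleThread {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("a", "b", "c"));

        // One thread, no concurrency at all -- yet this throws, because the
        // list is structurally modified while its iterator is still in use.
        try {
            for (String name : names) {
                if (name.equals("a")) {
                    names.remove(name);
                }
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("CME from a single thread: " + e);
        }

        // Safe single-threaded alternatives: Iterator.remove() or removeIf().
        names.removeIf(name -> name.equals("b"));
        System.out.println(names); // [c]
    }
}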
I just want to understand why do we really need thread safety with collections?
We need thread safety with any object that is being modified. Period. If two threads are sharing the same object and one thread makes a modification, there is no guarantee that the update to the object will be seen by the other thread and there are possibilities that the object may be partially updated causing exceptions, hangs, or other unexpected results.
One of the speedups gained with threads comes from per-CPU cached memory. Each thread running on a CPU has local cached memory that is much faster than main memory. The cache is used as much as possible for high-speed calculations and then invalidated or flushed to main memory when necessary. Without locking and memory synchronization, each thread could be working with stale memory and could experience race conditions.
This is why threaded programs need to use concurrent collections (think ConcurrentHashMap) or protect collections (or any mutable object) using synchronized locks or other mechanisms. These ensure that the objects can't be modified at the same time and ensure that the modifications are published between threads appropriately.
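
A sketch of the kind of lost update this is about (the thread count and iteration count here are arbitrary):

import java.util.*;
import java.util.concurrent.*;

public class LostUpdates {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> plain = new HashMap<>();
        Map<String, Integer> safe  = new ConcurrentHashMap<>();
        plain.put("count", 0);
        safe.put("count", 0);

        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) {
                // Unsynchronized read-modify-write: two threads can read the
                // same value and overwrite each other's increment.
                plain.put("count", plain.get("count") + 1);

                // Atomic read-modify-write, published safely between threads.
                safe.merge("count", 1, Integer::sum);
            }
        };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();

        System.out.println("plain HashMap: " + plain.get("count"));      // usually < 20000
        System.out.println("ConcurrentHashMap: " + safe.get("count"));   // always 20000
    }
}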

Java prioritizing threads

My main thread has a private LinkedList which contains task objects for the players in my game. I then have a separate thread that runs every hour, accesses and clears that LinkedList, and runs my algorithm, which randomly adds new uncompleted tasks to every player's LinkedList. Right now I made the getter method synchronized so that I don't run into any concurrency issues. This works fine, but the synchronized keyword has a lot of overhead, especially since the method is accessed a ton from the main thread while only being accessed hourly from my second thread.
I am wondering if there is a way to prioritize the main thread. For example, on that 2nd thread I could loop through the players, build a new LinkedList, run my algorithm and add all the tasks to that new list, then quickly assign the old LinkedList reference to the new one. This would slightly increase memory usage while improving main thread speed.
Basically, I am trying to avoid making my main thread pay for synchronization when it will only be needed once an hour at most, and I am willing to greatly degrade the performance of the 2nd thread to keep the main thread's speed. Is there a way the 2nd thread can notify the 1st that it is about to lock a method, instead of having the 1st thread go through all of the synchronization overhead on every call? I feel like this should be possible: if the 2nd thread shares a cache with the main thread, it could set a boolean meaning that the main thread has to wait until that variable is changed back. The main thread would check that boolean every time it calls the method, and if the 2nd thread is telling it to wait, the main thread would freeze until the boolean is changed.
Of course, the 2nd thread would have to specify which object and method it is locking, along with a 0 or 1 denoting whether it is currently locked. The main thread would then just need to check its shared cache for the object and that boolean once it reaches the method, which seems way faster than normal synchronization. This would result in the main thread running at normal speed while the 2nd thread handles a bunch of work behind the scenes without degrading main thread performance. Does this exist, and if so, how can I do it? If it does not exist, how hard would it actually be to implement?
Premature optimization
It sounds like you are overly worried about the cost of synchronization. Doing a dozen, or a hundred, or even a thousand synchronizations once an hour is not going to impact the performance of your app by any significant amount.
If your concern has not yet been validated by careful study with a profiling tool, you’ve fallen into the common trap of premature optimization.
AtomicReference
Nevertheless, I can suggest an alternative approach.
You want to replace a list once an hour. If you do not mind letting threads that have already obtained the current list keep using it while you swap in a new list, then use AtomicReference. An object of this class holds a reference to another object of a specified type.
I generally like the Atomic… classes for thread-safety work because they scream out to the reader that a concurrency problem is at hand.
AtomicReference<List<Task>> listRef = new AtomicReference<>(originalList);
A different thread is able to replace that reference to the old list with a reference to the new list.
listRef.set(newList);
Access by the other thread:
List<Task> list = listRef.get();
Note that this approach does not make thread-safe the payload, the list itself. But you claim that only a single thread will ever be manipulating the content of the list. You claim a different thread will only replace the entire list. So this AtomicReference serves the purpose of replacing the list in a thread-safe manner while making the issue of concurrency quite obvious.
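
Putting those pieces together, a minimal sketch of the hourly swap (the Task type, the task text, and the scheduling details are placeholders, not the asker's actual code):

import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicReference;

public class TaskListSwap {
    record Task(String name) {}   // stand-in for the game's task type (records need Java 16+)

    private final AtomicReference<List<Task>> tasksRef =
            new AtomicReference<>(List.of());

    // Main/game thread: a cheap volatile-style read, no monitor to acquire.
    List<Task> currentTasks() {
        return tasksRef.get();
    }

    // Hourly thread: build the new list off to the side, then publish it
    // with a single atomic reference swap.
    void regenerateTasks() {
        List<Task> fresh = new ArrayList<>();
        fresh.add(new Task("collect 10 herbs"));   // stand-in for the real algorithm
        tasksRef.set(List.copyOf(fresh));          // publish an immutable snapshot
    }

    public static void main(String[] args) {
        TaskListSwap game = new TaskListSwap();
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(game::regenerateTasks, 0, 1, TimeUnit.HOURS);

        System.out.println(game.currentTasks());
        scheduler.shutdown();
    }
}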
volatile
Using AtomicReference accomplishes the same goal as volatile. I’m wary of volatile because (a) its use may go unnoticed by the reader, and (b) I suspect many Java programmers do not understand volatile, especially since its meaning was redefined.
For more info about why plain reference assignment is not thread-safe, see this Question.

Why do unsynchronized objects perform better than synchronized ones?

Question arises after reading this one. What is the difference between synchronized and unsynchronized objects? Why do unsynchronized objects perform better than synchronized ones?
Hashtable is considered synchronized because its methods are marked as synchronized. Whenever a thread enters a synchronized method or a synchronized block, it first has to get exclusive control over the monitor associated with the object instance being synchronized on. If another thread is already in a synchronized block on the same object, this causes the thread to block, which is a performance penalty, as others have mentioned.
However, the synchronized block also does memory synchronization before and after, which has memory-cache implications and also restricts code reordering/optimization, both of which have significant performance implications. So even if only a single thread ever enters the synchronized block (i.e. there is no blocking), it will still run slower than unsynchronized code.
One of the real performance improvements with threaded programs is realized because of separate CPU high-speed memory caches. When a threaded program does memory synchronization, the blocks of cached memory that have been updated need to be written to main memory and any updates made to main memory will invalidate local cached memory. By synchronizing more, again even in a single threaded program, you will see a performance hit.
As an aside, Hashtable is an older class. If you want a concurrent, thread-safe Map, ConcurrentHashMap should be used.
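
A crude illustration of that uncontended, single-threaded cost (this is not a rigorous benchmark; JIT warm-up and other effects can easily dominate numbers from a loop like this):

import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

public class SingleThreadOverhead {
    public static void main(String[] args) {
        // All three maps are used from one thread only, so there is no
        // blocking -- any difference comes from monitor acquisition and the
        // associated memory-synchronization work.
        Map<Integer, Integer> plain = new HashMap<>();
        Map<Integer, Integer> table = new Hashtable<>();      // every method synchronized
        Map<Integer, Integer> conc  = new ConcurrentHashMap<>();

        for (Map<Integer, Integer> map : List.of(plain, table, conc)) {
            long start = System.nanoTime();
            for (int i = 0; i < 1_000_000; i++) {
                map.put(i % 1000, i);
            }
            System.out.printf("%s: %d ms%n",
                    map.getClass().getSimpleName(),
                    (System.nanoTime() - start) / 1_000_000);
        }
    }
}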
Roughly speaking, a synchronized object follows a single-thread model: if two threads want to modify the synchronized object and the first one gets the object's lock, the second one has to wait. If the object is unsynchronized, they can operate on the object at the same time, which is the reason why unsynchronized access is unsafe.
For synchronization to work, the JVM has to prevent more than one thread from entering a synchronized block at a time. This requires extra processing compared to code without the synchronized block, placing additional load on the JVM and therefore reducing performance.
The exact locking mechanisms in play when synchronization occurs are explained in How the Java virtual machine performs thread synchronization.
Synchronization:
ArrayList is non-synchronized, which means multiple threads can work on an ArrayList at the same time. For example, one thread can be performing an add operation on an ArrayList while another thread is performing a remove operation on it at the same time in a multithreaded environment.
Vector, on the other hand, is synchronized. If one thread is working on a Vector, no other thread can get hold of it; unlike ArrayList, only one thread can perform an operation on a Vector at a time.
Performance:
Synchronized operations take more time than non-synchronized ones, so if there is no need for thread-safe operation, ArrayList is the better choice, because threads do not have to wait for each other.
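
For reference, a sketch of the wrapper counterpart of Vector, wrapping an ArrayList with Collections.synchronizedList, and of why iteration still needs an explicit lock (the thread setup here is illustrative):

import java.util.*;

public class SynchronizedListSketch {
    public static void main(String[] args) throws InterruptedException {
        // Each individual call (add, get, remove, ...) locks the wrapper object.
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());

        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000; i++) {
                list.add(i);
            }
        });
        writer.start();

        // Iteration spans many calls, so the documented contract is that the
        // caller must hold the wrapper's lock for the whole traversal.
        long sum = 0;
        synchronized (list) {
            for (Integer value : list) {
                sum += value;   // safe: the writer is blocked until this block exits
            }
        }

        writer.join();
        System.out.println("sum so far: " + sum + ", final size: " + list.size());
    }
}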
Synchronization is useful because it allows you to prevent the same code from being run by two threads at the same time (i.e. concurrently). This is important in a threaded environment for a multitude of reasons. In order to provide this guarantee, the JVM has to do extra work, which means that performance decreases. Because synchronization requires that only one thread be allowed to execute the protected code at a time, it can cause multi-threaded programs to run as slowly as (or slower than!) single-threaded programs.
It is important to note that the amount of performance decrease is not always obvious. Depending on the circumstances, the decrease may be tiny or huge. This depends on all sorts of things.
Finally, I'd like to add a short warning: concurrent programming using synchronization is hard. I've found that other concurrency controls usually suit my needs better. One of my favorites is AtomicReference. This utility is great because it very narrowly limits the amount of synchronized code, which makes it easier to read, maintain and write.

Does my implementation of LinkedBlockingQueue need to be synchronized?

To begin with, I have used search and found n topics related to this question. Unfortunately, they didn't help me, so this will be topic n++ :)
Situation: I will have a few worker threads (the same class, just many duplicates) (let's call them WT) and one result-writing thread (RT).
The WTs will add objects to the queue, and the RT will take them. Since there will be many WTs, won't there be any memory problems (independent of the max queue size)? Will those operations wait for each other to be completed?
Moreover, as I understand it, BlockingQueue is quite slow, so maybe I should drop it and use a normal Queue inside synchronized blocks? Or should I consider using a SynchronizedQueue?
LinkedBlockingQueue is designed to handle multiple threads writing to the same queue. From the documentation:
BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control. However, the bulk Collection operations addAll, containsAll, retainAll and removeAll are not necessarily performed atomically unless specified otherwise in an implementation.
Therefore, you are quite safe (unless you expect the bulk operations to be atomic).
Of course, if thread A and thread B are writing to the same queue, the order of A's items relative to B's items will be indeterminate unless you synchronize A and B.
As to the choice of queue implementation, go with the simplest that does the job, and then profile. This will give you accurate data on where the bottlenecks are so you won't have to guess.
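
A minimal sketch of the WT/RT setup described above (the worker count, item count, and poison-pill shutdown are illustrative choices, not anything from the question):

import java.util.concurrent.*;

public class WorkerResultSketch {
    private static final String POISON = "DONE";   // shutdown marker, one per worker

    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks when full, which caps memory use.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000);
        int workers = 4;

        // WT: several identical producers writing to the same queue.
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            int id = w;
            pool.submit(() -> {
                for (int i = 0; i < 1_000; i++) {
                    queue.put("result-" + id + "-" + i);
                }
                queue.put(POISON);
                return null;
            });
        }

        // RT: the single consumer that writes out the results.
        int finished = 0;
        while (finished < workers) {
            String result = queue.take();          // blocks while the queue is empty
            if (result.equals(POISON)) {
                finished++;
            } else {
                System.out.println(result);        // stand-in for the real result writing
            }
        }
        pool.shutdown();
    }
}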

Disadvantage of synchronized methods in Java

What are the disadvantages of making a large Java non-static method synchronized? Large in the sense that it will take 1 to 2 minutes to complete its execution.
If you synchronize the method and try to call it twice at the same time, one thread will have to wait two minutes.
This is not really a question of "disadvantages". Synchronization is either necessary or not, depending on what the method does.
If it is critical that the code runs only once at the same time, then you need synchronization.
If you want to run the code only once at the same time to preserve system resources, you may want to consider a counting Semaphore, which gives more flexibility (such as being able to configure the number of concurrent executions).
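A sketch of that Semaphore variant (the permit count of 3 is an arbitrary example):

import java.util.concurrent.Semaphore;

public class LimitedExecution {
    // Allow up to 3 concurrent executions instead of exactly one.
    private final Semaphore permits = new Semaphore(3);

    void expensiveOperation() throws InterruptedException {
        permits.acquire();               // blocks when 3 callers are already inside
        try {
            // long-running work goes here
        } finally {
            permits.release();           // always return the permit
        }
    }
}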
Another interesting aspect is that synchronization can only really be used to control access to resources within the same JVM. If you have more than one JVM and need to synchronize access to a shared file system or database, the synchronized keyword is not at all sufficient. You will need to get an external (global) lock for that.
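One common way to get such a cross-JVM lock is an OS-level file lock (a sketch; the lock file path is a placeholder, and note that java.nio file locks are advisory and do not provide mutual exclusion between threads of the same JVM):

import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.*;

public class CrossJvmLock {
    public static void main(String[] args) throws Exception {
        Path lockFile = Paths.get("/tmp/myapp.lock");   // placeholder path

        try (FileChannel channel = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock lock = channel.lock()) {          // blocks until no other process holds it
            // critical section shared across processes goes here
        }
    }
}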
If the method takes on the order of minutes to execute, then it may not need to be synchronized at such a coarse level, and it may be possible to use a more fine-grained system, perhaps by locking only the portion of a data structure that the method is operating on at the moment. Certainly, you should try to make sure that your critical section isn't really 2 minutes long - any method that takes that long to execute (regardless of the presence of other threads or locks) should be carefully studied as a candidate for parallelization. For a computation this time-consuming, you could be acquiring and releasing hundreds of locks and still have it be negligible. (Or, to put it another way, even if you need to introduce a lot of locks to parallelize this code, the overhead probably won't be significant.)
Since your method takes a huge amount of time to run, the relatively tiny amount of time it takes to acquire the synchronized lock should not be important.
A bigger problem could appear if your program is multithreaded (which I'm assuming it is, since you're making the method synchronized), and more than one thread needs to access that method, it could become a bottleneck. To prevent this, you might be able to rewrite the method so that it does not require synchronization, or use a synchronized block to reduce the size of the protected code (in general, the smaller the amount of code that is protected by the synchronize keyword, the better).
You can also look at the java.util.concurrent classes, as you may find a better solution there as well.
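A sketch of shrinking the critical section with a synchronized block (the method and field names are made up for illustration):

import java.util.ArrayList;
import java.util.List;

public class ReducedScope {
    private final List<String> results = new ArrayList<>();

    // Instead of marking the whole 1-2 minute method synchronized...
    public void process(String input) {
        String computed = longRunningComputation(input);  // no lock held here

        // ...only the brief shared-state update is protected.
        synchronized (results) {
            results.add(computed);
        }
    }

    private String longRunningComputation(String input) {
        // stand-in for the real long computation
        return input.toUpperCase();
    }
}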
If the object is shared by multiple threads, if one thread tries to call the synchronized method on the object while another's call is in progress, it will be blocked for 1 to 2 minutes. In the worst case, you could end up with a bottleneck where the throughput of your system is dominated by executing these computations one at a time.
Whether this is a problem or not depends on the details of your application, but you probably should look at more fine-grained synchronization ... if that is practical.
In simple terms, two disadvantages of synchronized methods in Java:
Increased waiting time for threads
Possible performance problems
The first drawback is that threads blocked waiting to execute synchronized code can't be interrupted. Once they're blocked, they're stuck there until they get the lock for the object the code is synchronizing on.
The second drawback is that a synchronized block must be contained within a single method; in other words, we can't start a synchronized block in one method and end it in another, for obvious reasons.
The third drawback is that we can't test whether an object's intrinsic lock is available, or find out any other information about the lock, and if the lock isn't available we can't time out after waiting for it for a while. When we reach the beginning of a synchronized block, we can either get the lock and continue executing, or block at that line of code until we get the lock.
The fourth drawback is that if multiple threads are waiting to get the lock, it's not first come, first served. There isn't a set order in which the JVM will choose the next thread that gets the lock, so the first thread that blocked could be the last one to get the lock, and vice versa.
So instead of using synchronization, we can prevent thread interference using classes that implement the java.util.concurrent.locks.Lock interface, as sketched below.
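
A sketch of how the Lock interface addresses the first and third points (the timeout value and names are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockInsteadOfSynchronized {
    // 'true' requests a fair lock, which also addresses the ordering point.
    private final ReentrantLock lock = new ReentrantLock(true);

    public boolean updateIfAvailable() throws InterruptedException {
        // Wait at most 2 seconds, and stay responsive to interruption.
        if (!lock.tryLock(2, TimeUnit.SECONDS)) {
            return false;                 // could not get the lock in time
        }
        try {
            // protected work goes here
            return true;
        } finally {
            lock.unlock();
        }
    }
}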
