Is a semaphore thread-safe when its permits value is bigger than 1? - java

Recently I have been studying the sleeping barber problem, and I understand that a semaphore behaves like a binary semaphore when the permits value equals 1. What about when it exceeds 1? When multiple threads have acquired permits, can one thread be swapped out for a new one without releasing?
I think it is unsafe, but I am not sure.
It would be nice if you could tell me the difference between synchronized access and simultaneous access.

A semaphore works as a gate. It can't itself be qualified as thread-safe or thread-unsafe; resources (in general, objects) are what can be thread-safe or unsafe.
If it is a binary semaphore, only one thread can access your resource at any given moment, so there is no need to think about thread safety.
But if the semaphore count is 2, two threads can simultaneously access the same resource. If your resource (some object) is thread-safe, you are fine. Otherwise you need to implement some kind of synchronization mechanism so that the unsafe part can only be accessed by one thread at a time.
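As a minimal sketch of that situation (the class and method names are invented for illustration), a Semaphore with two permits lets two threads into the guarded section at once, so anything they share inside it still needs its own protection:

    import java.util.concurrent.Semaphore;

    public class PrinterPool {
        // Two permits: at most two threads may be printing at the same time.
        private final Semaphore permits = new Semaphore(2);

        public void print(String document) throws InterruptedException {
            permits.acquire();               // blocks until one of the two permits is free
            try {
                // If this section touched shared mutable state, it would still need
                // its own synchronization, because two threads can be here at once.
                System.out.println(Thread.currentThread().getName() + " printing " + document);
            } finally {
                permits.release();           // always give the permit back
            }
        }
    }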

Related

Confused about synchronization and thread-safe? java

Actually, I am a bit confused by several explanations from websites and blogs about synchronization and thread safety. I've done some research on different classes of the core Java API and Java frameworks (Collections), and I've often noticed that some classes are synchronized and thread-safe, which means that, at any given time, only one thread can access the code.
But I need some precision:
A class is synchronized, so is it thread-safe?
Or do synchronized and thread-safe have two different meanings?
Best regards
A class is synchronized, so is it thread-safe?
A class is not synchronized. Rather, a method or a block of code is synchronized.
Synchronization (using synchronized) is one way to make code thread-safe. There are other ways.
Or do synchronized and thread-safe have two different meanings?
Yes. They have different meanings.
And I've often noticed that some classes are synchronized and thread-safe, which means that, at any given time, only one thread can access the code.
Actually, if you "noticed" that, you were not paying attention!
With a synchronized method, only one thread can access the code while holding a given lock; i.e. you get mutual exclusion. If two threads use different locks, then you won't get mutual exclusion.
The other thing to note is that merely using synchronized does not guarantee thread-safety. You need to use it in the right way:
threads need to synchronize on the appropriate objects / locks
threads need to synchronize in all appropriate code
if the code entails acquiring multiple locks, the locks need to be acquired in an order that avoids deadlocks.
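To illustrate the first point, here is a hypothetical Counter class: synchronizing on the wrong object gives no mutual exclusion at all, even though the keyword is present.

    public class Counter {
        private int count = 0;

        // Correct: every caller synchronizes on the same lock (this instance).
        public synchronized void incrementSafely() {
            count++;
        }

        // Broken: each call synchronizes on a brand-new object, so no two
        // threads ever contend for the same lock and updates can be lost.
        public void incrementBroken() {
            synchronized (new Object()) {
                count++;
            }
        }
    }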

Threadsafe vs Synchronized

I'm new to Java.
I'm a little bit confused between thread-safe and synchronized.
Thread safe means that a method or class instance can be used by multiple threads at the same time without any problems occurring.
Whereas synchronized means only one thread can operate at a time.
So how they are related to each other?
The definition of thread safety given in Java Concurrency in Practice is:
A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
For example, a java.text.SimpleDateFormat object has internal mutable state that is modified when a method that parses or formats is called. If multiple threads call the methods of the same date format object, there is a chance that one thread will modify the state needed by another, so some of the threads may get incorrect results. The possibility of having the internal state get corrupted, causing bad output, makes this class not thread-safe.
There are multiple ways of handling this problem. You can have every place in your application that needs a SimpleDateFormat object instantiate a new one every time it needs one, you can make a ThreadLocal holding a SimpleDateFormat object so that each thread of your program can access its own copy (so each thread only has to create one), you can use an alternative to SimpleDateFormat that doesn't keep state, or you can do locking using synchronized so that only one thread at a time can access the dateFormat object.
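For instance, the ThreadLocal approach might look like this (a sketch; the class name is invented):

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class DateFormatter {
        // Each thread gets its own SimpleDateFormat, so the formatter's internal
        // mutable state is never shared between threads.
        private static final ThreadLocal<SimpleDateFormat> FORMAT =
                ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

        public static String format(Date date) {
            return FORMAT.get().format(date);
        }
    }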
Locking is not necessarily the best approach, avoiding shared mutable state is best whenever possible. That's why in Java 8 they introduced a date formatter that doesn't keep mutable state.
The synchronized keyword is one way of restricting access to a method or block of code so that otherwise thread-unsafe data doesn't get corrupted. This keyword protects the method or block by requiring that a thread has to acquire exclusive access to a certain lock (the object instance, if synchronized is on an instance method, or the class instance, if synchronized is on a static method, or the specified lock if using a synchronized block) before it can enter the method or block, while providing memory visibility so that threads don't see stale data.
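The three forms look roughly like this (a sketch with invented names, just to show which lock each form uses):

    public class SharedState {
        private static final Object LOCK = new Object();
        private int value;

        public synchronized void instanceMethod() {      // locks on 'this'
            value++;
        }

        public static synchronized void staticMethod() { // locks on SharedState.class
            // ...
        }

        public void blockForm() {
            synchronized (LOCK) {                        // locks on the named object
                value--;
            }
        }
    }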
Thread safety is a desired property of a program, and the synchronized block helps you achieve that property. There are other ways of obtaining thread safety, e.g. immutable classes/objects. Hope this helps.
Thread safety: a thread-safe program protects its data from memory consistency errors. Even in a highly multi-threaded program, it does not cause any unintended side effects when multiple threads perform read/write operations on shared data (objects). Different threads can share and modify object data without consistency errors.
synchronized is one basic way of achieving thread-safe code.
Refer to the SE question below for more details:
What does 'synchronized' mean?
You can achieve thread safety by using advanced concurrency API. This documentation page provides good programming constructs to achieve thread safety.
Lock Objects support locking idioms that simplify many concurrent applications.
Concurrent Collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic Variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
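As a small sketch of two of those constructs (class and field names invented), an atomic variable and a concurrent collection each give thread safety without an explicit synchronized block:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    public class ApiExamples {
        // Atomic variable: a lock-free, thread-safe counter.
        private final AtomicInteger hits = new AtomicInteger();

        // Concurrent collection: safe for concurrent reads and writes.
        private final Map<String, Integer> counts = new ConcurrentHashMap<>();

        public void record(String key) {
            hits.incrementAndGet();
            counts.merge(key, 1, Integer::sum); // atomic update of one entry
        }
    }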
Refer to the java.util.concurrent and java.util.concurrent.atomic packages too for other programming constructs.
Related SE question:
Synchronization vs Lock
Synchronized: only one thread can operate at a time.
Thread-safe: a method or class instance can be used by multiple threads at the same time without any problems occurring.
If you rephrase the question as "Why are synchronized methods thread-safe?", you can get a better idea.
As per the definitions, these appear to be confusing, but they are not if you look at them analytically.
Synchronized means: sequentially, one by one in order, not concurrently [not at the same time].
A synchronized method does not allow another thread to act on it while a thread is already working on it. This avoids concurrent access.
Example of synchronization: if you want to buy a movie ticket, you stand in a queue. You will get your ticket only after the person in front of you gets theirs.
Thread-safe means: a method is safe to be accessed by multiple threads at the same time without any problem. The synchronized keyword is one of the ways to achieve thread safety. But remember: when multiple threads try to access a synchronized method, they follow an order, and so the access becomes safe. Even though they run at the same time, they cannot access the same resource (method/block) at the same time, because of the synchronized behavior of the resource.
If a method is synchronized, it becomes safe to allow multiple threads to act on it without any problem. Remember: the threads do not act on it at the same time, and that is why we call synchronized methods thread-safe.
Hope this helps you understand.
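The ticket-queue analogy in code might look like this (a hypothetical class for illustration):

    public class TicketCounter {
        private int ticketsSold = 0;

        // Unsynchronized: two threads can read the same value and both write
        // back value + 1, losing one sale (a lost update).
        public void sellUnsafely() {
            ticketsSold++;
        }

        // Synchronized: threads queue up and increment one after another,
        // like customers waiting in line for a ticket.
        public synchronized void sellSafely() {
            ticketsSold++;
        }
    }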
After patiently reading through a lot of answers, and without being too technical at the same time, I can say something definite but close to what Nayak had already replied to fastcodejava above, which comes later on in my answer, but look:
synchronization is not even close to brute-forcing thread safety; it's just making a piece of code (or method) safe and incorruptible for a single authorized thread by preventing it from being used by any other thread.
Thread safety is about how all threads accessing a certain element behave and get their desired results in the same way as if they had run sequentially (or even not so), without any form of undesired corruption (sorry for the pleonasm), as in an ideal world.
One of the ways of achieving proximity to thread safety would be using the classes in java.util.concurrent.atomic.
Sadly, they don't have final methods though!
Nayak, when we declare a method as synchronized, all other calls to it from other threads are blocked and can wait indefinitely. Java also provides other means of locking with Lock objects now.
You can also declare a field to be final or volatile to guarantee its visibility to other concurrent threads.
ref: http://www.javamex.com/tutorials/threads/thread_safety.shtml
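As an example of the volatile point (a common stop-flag sketch, not taken from the linked tutorial):

    public class Worker implements Runnable {
        // volatile guarantees that a write by one thread is visible to the others.
        private volatile boolean stopped = false;

        public void stop() {
            stopped = true;              // written by a controlling thread
        }

        @Override
        public void run() {
            while (!stopped) {           // read by the worker thread
                // do some work
            }
        }
    }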
In practice, performance-wise, thread-safe, synchronized, non-thread-safe and non-synchronized classes are ordered as:
Hashtable (slowest) < Collections.synchronizedMap < HashMap (fastest)
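For reference, the three variants are created like this (a sketch; how much faster the wrapper is than Hashtable depends on the JVM and the workload):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Hashtable;
    import java.util.Map;

    public class MapChoices {
        Map<String, String> legacy  = new Hashtable<>();                            // every method synchronized
        Map<String, String> wrapped = Collections.synchronizedMap(new HashMap<>()); // synchronized wrapper
        Map<String, String> plain   = new HashMap<>();                              // no synchronization at all
    }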

How does using the Lock interface give better performance than using the synchronized keyword in concurrent application design?

I was going through "Java Concurrency Cookbook". In it, the author mentioned that using the Lock interface gives better performance than using the synchronized keyword. Can anyone tell me how, using terms like stack frame or number of method calls?
If you don't mind, please help me clear up my Java concurrency concepts.
The raison d'être for Lock and friends isn't that it is inherently faster than synchronized; it is that it can be used in different ways that don't necessarily correspond to the lexical block structure, and also that it can offer more facilities such as read-write locks, counting semaphores, etc.
Whether a specific Lock implementation is actually faster than synchronized is a moot point and implementation-dependent. There is certainly no such claim in the Javadoc. Doug Lea's book [1], where it all started, doesn't make any claim that I can quickly see stronger than 'often with better performance'.
[1]: Lea, Concurrent Programming in Java, 2nd edition, Addison Wesley 2000.
1. Synchronization can lead to deadlock, whereas the Lock interface gives you tools, such as tryLock() with a timeout, that help you avoid it.
2. With synchronization, we don't know how long a thread will wait for its chance after a previous thread has released the lock, which can lead to starvation. With Lock, the implementing class ReentrantLock has a constructor that lets you pass a fairness flag, so that the longest-waiting thread gets the chance to acquire the lock.
3. With synchronization, if a thread is waiting for another thread, the waiting thread won't do any other activity that doesn't require the lock. With the Lock interface there is a tryLock() method with which you can try to acquire the lock, and if you don't get it you can perform other, alternate tasks. This helps to improve the performance of the application.
4. There is no API to check how many threads are waiting for a particular intrinsic lock, whereas this is possible with the Lock implementation class ReentrantLock (for example, getQueueLength()).
5. You can get better control of locks using the Lock interface, e.g. with ReentrantLock's getHoldCount() method, which has no equivalent with synchronization.
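Points 2 to 5 map onto the ReentrantLock API roughly like this (a sketch with invented class and method names):

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    public class Account {
        // 'true' requests a fair lock: the longest-waiting thread acquires it next.
        private final ReentrantLock lock = new ReentrantLock(true);
        private long balance;

        public boolean tryDeposit(long amount) throws InterruptedException {
            // tryLock lets the thread give up and do something else instead of
            // blocking forever.
            if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
                try {
                    balance += amount;
                    return true;
                } finally {
                    lock.unlock();
                }
            }
            // getQueueLength() reports how many threads are waiting for this lock.
            System.out.println("Busy, " + lock.getQueueLength() + " threads waiting");
            return false;
        }
    }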

Thread safety within Java

So, while working on something that was having locking issues, a question came to me: do objects that can only be accessed from a single thread require locks or synchronization at all?
For example, given Thread1, Thread2, and Thread3, along with Buffer1, Buffer2, Buffer3, where each buffer is instanced as a thread is created, meaning that Thread1 will only ever access Buffer1, and the same for Thread2 and Buffer2, along with Thread3 and Buffer3. Thread1 will never touch Buffer2 or Buffer3. While adding/removing/modifying bytes in the stream, are locks needed?
No, you won't need any locks in this case. Locking and synchronization are only required when a resource is shared between multiple threads.
If you go ahead and add synchronization on the private instance of that buffer, it still won't make any difference, as there will be no other thread waiting to acquire the lock; the only thread locking and releasing the buffer will be its owner.
1. When more than one thread tries to access an object, locking becomes necessary.
2. Moreover, classes need to be developed to be thread-safe if concurrent access by threads is possible.
3. A class is said to be thread-safe if it behaves correctly in the presence of interleaving and scheduling by the underlying OS, without any synchronization mechanism from the client.
4. Locking resources causes overhead, prevents concurrent access, and can create bottleneck situations.
Only when two or more threads need to access a shared object do you need to worry about locking.
No. This strategy for ensuring thread-safety is generally referred to as confinement.
Confinement relies on encapsulation techniques to ensure that multiple threads cannot access an object. "Concurrent Programming in Java" by Doug Lea has a good chapter on the details of confinement and its strengths and weaknesses compared to other exclusion techniques.
Paraphrasing Lea, in general there are four conditions needed for confinement of a reference r to an object x within a method m:
m cannot pass r as an argument to another method.
m cannot pass r as a return value.
m cannot record r in a field (instance or static) that is accessible from another thread.
m cannot let any other references escape (via 1-3) that may be traversed to reach r.
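A minimal sketch of confinement, in the spirit of the question's per-thread buffers (StringBuilder is itself unsynchronized, which is fine here because the reference never escapes the owning thread):

    public class ConfinedWorker implements Runnable {
        @Override
        public void run() {
            // The buffer is created inside run() and never published anywhere,
            // so only this thread can ever reach it: no locking required.
            StringBuilder buffer = new StringBuilder();
            for (int i = 0; i < 10; i++) {
                buffer.append(i).append(' ');
            }
            System.out.println(Thread.currentThread().getName() + ": " + buffer);
        }

        public static void main(String[] args) {
            new Thread(new ConfinedWorker()).start();
            new Thread(new ConfinedWorker()).start();
            new Thread(new ConfinedWorker()).start();
        }
    }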
From what I remember from my studies, if you are using a private buffer for every thread, you should not worry about locking it to avoid concurrent access, since there isn't any.
If no one is reading the buffer apart from its creator, the creator can do whatever it wants with it without worrying that someone else is reading or writing it, so you should be fine.
But you have to remember that a thread can be interrupted at any time, so your internal buffer can be in an inconsistent state. (This shouldn't be a problem, since you are accessing it only sequentially from the same thread.)
Locks are not needed unless threads are concurrently using the same data structure.
Hence if different data structures are used by each thread, your code is guaranteed to be thread safe.
Incidentally, this is one of the main reasons why the key Java collection classes like java.util.ArrayList are not thread-safe: making them thread-safe would add a performance overhead which you shouldn't have to pay for if you don't need it, and in a lot of cases you don't need it because you can ensure in some other way that only one thread accesses the ArrayList at once.

Concurrency design principles in practice

I have a Results object which is written to by several threads concurrently. However, each thread has a specific purpose and owns certain fields, so that no data is actually modified by more than one thread. The consumer of this data will not try to read it until all of the writer threads are done writing it. Because I know this to be true, there is no synchronization on the data writes and reads.
There is a RunningState object associated with this Results object which serves to coordinate this work. All of its methods are synchronized. When a thread is done with its work on this Results object, it calls done() on the RunningState object, which does the following: decrements a counter, checks if the counter has gone to 0 (indicating that all writers are done), and if so, puts this object on a concurrent queue. That queue is consumed by a ResultsStore which reads all of the fields and stores data in the database. Before reading any data, the ResultsStore calls RunningState.finalizeResult(), which is an empty method whose sole purpose is to synchronize on the RunningState object, to ensure that writes from all of the threads are visible to the reader.
Here are my concerns:
1) I believe that this will work correctly, but I feel like I'm violating good design principles to not synchronize on the data modifications to an object that is shared by multiple threads. However, if I were to add synchronization and/or split things up so each thread only saw the data it was responsible for, it would complicate the code. Anyone who modifies this area had better understand what's going on in any case or they're likely to break something, so from a maintenance standpoint I think the simpler code with good comments explaining how it works is a better way to go.
2) The fact that I need to call this do-nothing method seems like an indication of wrong design. Is it?
Opinions appreciated.
This seems mostly right, if a bit fragile (if you change the thread-local nature of one field, for instance, you may forget to synchronize it and end up with hard-to-trace data races).
The big area of concern is in memory visibility; I don't think you've established it. The empty finalizeResult() method may be synchronized, but if the writer threads didn't also synchronize on whatever it synchronizes on (presumably this?), there's no happens-before relationship. Remember, synchronization isn't absolute -- you synchronize relative to other threads that are also synchronized on the same object. Your do-nothing method will indeed do nothing, not even ensure any memory barrier.
You somehow need to establish a happens-before relationship between each thread doing its writes, and the thread that eventually reads. One way to do this without synchronization is via a volatile variable, or an AtomicInteger (or other atomic classes).
For instance, each writer thread can invoke counter.incrementAndGet() on the object, and the reading thread can then check that counter.get() == THE_CORRECT_VALUE. There's a happens-before relationship between a volatile/atomic field being written and it being read, which gives you the needed visibility.
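A sketch of that idea (the field and class names are invented to match the question's description):

    import java.util.concurrent.atomic.AtomicInteger;

    public class Results {
        static final int WRITER_COUNT = 3;              // assumed number of writer threads
        final AtomicInteger finishedWriters = new AtomicInteger();

        // fields written by the writer threads, read later without locks
        long fieldOwnedByWriter1;
        long fieldOwnedByWriter2;
        long fieldOwnedByWriter3;

        // Each writer calls this after its last write to its own fields.
        void writerDone() {
            finishedWriters.incrementAndGet();          // the atomic write publishes the earlier writes
        }

        // The reader may only touch the fields once this returns true.
        boolean allWritersDone() {
            return finishedWriters.get() == WRITER_COUNT;
        }
    }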
Your design is sound, but it can be improved if you use a true concurrent queue, since a concurrent queue from the java.util.concurrent package already guarantees a happens-before relationship between the thread putting an item into the queue and the thread taking an item out. This removes the need to call finalizeResult() in the taking thread (so no need for that "do nothing" method call).
From the java.util.concurrent package description:
The methods of all classes in java.util.concurrent and its subpackages extend these guarantees to higher-level synchronization. In particular: Actions in a thread prior to placing an object into any concurrent collection happen-before actions subsequent to the access or removal of that element from the collection in another thread.
The comments in another answer about using an AtomicInteger instead of synchronization are also wise (using an AtomicInteger for your thread counting will likely perform better than synchronization); just make sure to use the value returned by the atomic decrement (e.g. decrementAndGet()) when comparing to 0, in order to avoid adding to the queue twice.
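Putting the two suggestions together, the done() logic might look roughly like this (a sketch; the generic type and constructor arguments are assumptions based on the question's description):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.atomic.AtomicInteger;

    public class RunningState<R> {
        private final AtomicInteger remainingWriters;
        private final BlockingQueue<R> completed;
        private final R results;

        RunningState(int writers, R results, BlockingQueue<R> completed) {
            this.remainingWriters = new AtomicInteger(writers);
            this.results = results;
            this.completed = completed;
        }

        // Called by each writer thread when it has finished writing its fields.
        void done() throws InterruptedException {
            // decrementAndGet returns the new value atomically, so exactly one
            // thread sees 0 and enqueues the results exactly once.
            if (remainingWriters.decrementAndGet() == 0) {
                completed.put(results);  // the queue's happens-before makes the writes visible
            }
        }
    }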
What you've described is indeed safe, but it also sounds, frankly, brittle and (as you note) maintenance could become an issue. Without sample code, it's really hard to tell what's really easiest to understand, so an already subjective question becomes frankly unanswerable. Could you ask a coworker for a code review? (Particularly one that's likely to have to deal with this pattern.) I'm going to trust you that this is indeed the simplest approach, but doing something like wrapping synchronized blocks around writes would increase safety now and in the future. That said, you obviously know your code better than I do.
