Penalty of AtomicVariables over locks in Java

For my current project, I try to use AtomicIntegers and AtomicBooleans wherever possible when more than one thread accesses them. This keeps the logic lock-free (internally, I know it may still use locks) and the code much cleaner. The use case is mostly configuration tags that may change abruptly.
I want to know what the performance penalty of using atomic variables is: will this invalidate the cache far too often and actually make my solution inferior to just using locks?

The Atomic* classes do not use locks; they use CAS (compare-and-swap) to achieve thread safety. In general they are faster than the locking variant, but under very high contention they tend to be slower.
If you want an analogy, it is something like optimistic versus pessimistic locking in a database.
If you are interested in more details, you might want to check the book Java Performance: The Definitive Guide.
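For illustration, here is a minimal sketch of the kind of CAS retry loop the atomic classes enable; the incrementUpTo method and its ceiling logic are invented for this example:

import java.util.concurrent.atomic.AtomicInteger;

public class CasExample {
    private final AtomicInteger counter = new AtomicInteger();

    // Increment, but never beyond a ceiling. compareAndSet succeeds only
    // if no other thread changed the value since we read it; on failure
    // we re-read and retry instead of blocking on a lock.
    int incrementUpTo(int max) {
        while (true) {
            int current = counter.get();
            if (current >= max) {
                return current;                       // give up, nothing to do
            }
            if (counter.compareAndSet(current, current + 1)) {
                return current + 1;                   // our CAS won the race
            }
            // CAS lost the race: another thread got in first; loop and retry
        }
    }
}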
ADD: By "cache" I assume you are referring to the happens-before relationship. Here is a quote from the Oracle tutorial:
The java.util.concurrent.atomic package defines classes that support atomic operations on single variables. All classes have get and set methods that work like reads and writes on volatile variables. That is, a set has a happens-before relationship with any subsequent get on the same variable.

Related

Collection with record level locking?

I need to make a data structure keyed off of username, holding some data (additional collections) in a POJO. The data needs to be thread safe.
So I'm thinking for the main structure, ConcurrentHashMap<String, MyPOJO>. For the operations I need to perform on MyPOJO, I may either just read it, or I may perform write operations on it.
Would the best approach be to do a get on the map and then operate on MyPOJO in a synchronized block? I assume I just need to put a synchronized block in the update methods and the read methods would automatically be blocked? Is that the best approach in a highly concurrent app? Or do I need to use something like ReadWriteLock on BOTH the get/set operations?
If I use something like StampedLock, each MyPOJO would need one, correct? That way I can do record-level locking?
Thanks!
Would the best approach be to do a get on the map and then operate on MyPOJO in a synchronized block?
I assume that you mean a synchronized block on the MyPOJO instance itself (or a private lock owned by the instance).
My answer is yes, if you do it right.
I assume I just need to put a synchronized block in the update methods and the read methods would automatically be blocked?
No, that's not correct. All methods that access or update a mutable object would need to synchronize on the same lock.
If you don't synchronize for both reads and writes, you risk various thread-safety concerns, including problems with visibility of writes. Heisenbugs.
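As a hedged sketch of what "doing it right" could look like, assuming MyPOJO holds a single mutable field (the data field is invented for illustration):

public class MyPOJO {
    private String data;    // guarded by the instance's own monitor

    // Readers must synchronize too, otherwise they may see stale values.
    public synchronized String getData() {
        return data;
    }

    // Writers synchronize on the same monitor, giving mutual exclusion
    // and visibility of the update to subsequent readers.
    public synchronized void setData(String data) {
        this.data = data;
    }
}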
Is that the best approach in a highly concurrent app? Or do I need to use something like ReadWriteLock on BOTH the get/set operations?
It depends.
On the ReadWriteLock issue:
Unless it is likely that you will get significant lock contention on a specific MyPOJO instance, it is probably not worth the effort to optimize this.
If the access and update methods only hold the lock for a relatively short period of time, that reduces the impact of any contention.
More generally, I have a suspicion that you might be confusing "highly concurrent" with "highly scalable". Java multi-threading only performs up to the limit of the cores (and memory) on a single machine. Beyond that, clever tweaks to improve concurrency get you nowhere. To scale up further, you need to change the system architecture so that requests are handled by multiple JVM instances on different machines.
So ... to sum up ... ReadWriteLock might help if you have significant contention on individual MyPOJO instances AND there are likely to be a lot of parallel read operations on individual instances.
If I use something like StampedLock, each MyPOJO would need one correct, so I can do record level locking?
I doubt that there would be much benefit unless you have significant contention; see above. But yes, if you used a StampedLock per instance you would get record-level locking ... just like you would with any other per-instance locking.
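If you did go the StampedLock route, a per-instance sketch might look like this; the data field is again invented, and the optimistic-read fallback follows the pattern described in the StampedLock javadoc:

import java.util.concurrent.locks.StampedLock;

public class MyPOJO {
    private final StampedLock lock = new StampedLock();
    private String data;

    public String getData() {
        long stamp = lock.tryOptimisticRead();   // cheap, non-blocking fast path
        String snapshot = data;
        if (!lock.validate(stamp)) {             // a writer intervened; fall back
            stamp = lock.readLock();
            try {
                snapshot = data;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return snapshot;
    }

    public void setData(String newData) {
        long stamp = lock.writeLock();           // exclusive lock for writers
        try {
            data = newData;
        } finally {
            lock.unlockWrite(stamp);
        }
    }
}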
FWIW: This smells to me of "premature optimization". Furthermore, if you expect that your solution will need to scale beyond a single JVM in the short to medium term, then it is arguably a waste of time to optimize the single JVM solution too much.

Is the java.util.concurrent package implemented using locks?

Conceptually,
Mutex
Reader/writer lock (a refined form of mutex)
Semaphore
Condition variable
are used as the four major synchronization mechanisms, and all of them are purely lock based. Different programming languages have different terms/jargon for these four mechanisms; the POSIX pthread package is one example of such an implementation.
The first two are implemented using spin locks (busy-waiting).
The last two are implemented using sleep locks.
Lock-based synchronisation is expensive in terms of CPU cycles.
But I have learnt that the java.util.concurrent packages do not use lock-based (sleep/spin) mechanisms to implement synchronisation.
My question:
What mechanism does the java.util.concurrent package use to implement synchronization? I ask because spin locks are CPU intensive, and sleep locks are costlier still due to frequent context switches.
That very much depends on what parts of the java.util.concurrent package you use (and to a lesser degree on the implementation). E.g. the LinkedBlockingQueue as of Java 1.7 uses both ReentrantLocks and Conditions, while e.g. the java.util.concurrent.atomic classes or the CopyOnWrite* classes rely on volatiles + native methods (that insert the appropriate memory barriers).
The actual native implementation of Locks, Semaphores, etc. also varies between architectures and implementations.
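As a rough sketch (not the actual JDK source) of the ReentrantLock-plus-Condition pattern that classes like LinkedBlockingQueue are built on:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class TinyBlockingQueue<E> {
    private final Deque<E> items = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    public void put(E e) {
        lock.lock();
        try {
            items.addLast(e);
            notEmpty.signal();            // wake one sleeping consumer
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();         // sleep (not spin) until signalled
            }
            return items.removeFirst();
        } finally {
            lock.unlock();
        }
    }
}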
Edit: If you really care about performance, you should measure performance of your specific workload. There are folks far more clever than me like A. Shipilev (whose site is a trove of information on this topic) on the JVM team, who do this and care deeply about JVM performance.
This question is best answered by looking at the source code for java.util.concurrent. The precise implementation depends on the class you are referring to.
For example, many of the implementations make use of volatile data and sun.misc.Unsafe, which defers e.g. compare-and-swap to native operations. Semaphore (via AbstractQueuedSynchronizer) makes heavy use of this.
You can browse through the other objects there (use the navigation pane on the left of that site) to take a look at the other synchronization objects and how they are implemented.
The short answer is no.
Concurrent collections, in contrast to synchronized collections, are not implemented with locks.
I had exactly the same question myself and always wanted to understand the details. What ultimately helped me fully understand what's going on under the hood was reading the following chapters in Java Concurrency in Practice:
5.1 Synchronized collections
5.2 Concurrent collections
The idea is based on atomic operations, which essentially require no locks, since they are atomic.
The OP's question and the comment exchanges appear to contain quite a bit of confusion. I will avoid answering the literal questions and instead try to give an overview.
Why has java.util.concurrent become today's recommended practice?
Because it encourages good application coding patterns. The potential performance gain (which may or may not materialize) is a bonus, but even if there is no performance gain, java.util.concurrent is still recommended because it helps people write correct code. Code that is fast but is flawed has no value.
How does java.util.concurrent encourage good coding patterns?
In many ways. I will just list a few.
(Disclaimer: I come from a C# background and do not have comprehensive knowledge of Java's concurrent package; though a lot of similarities exist between the Java and C# counterparts.)
Concurrent data collections simplify code.
Often, we use locking when we need to access and modify a data structure from different threads.
A typical operation involves:
Lock (blocked until succeed),
Read and write values,
Unlock.
Concurrent data collections simplify this by rolling all these operations into a single function call. The result is:
Simpler code on the caller's side,
Possibly more optimized, because the library implementation can possibly use a different (and more efficient) locking or lock-free mechanism than the JVM object monitor.
Avoids a common race-condition pitfall: time of check to time of use (TOCTOU).
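For example, a check-then-act sequence that would race as two separate steps collapses into one atomic call; the hit counter below is an invented illustration:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class HitCounter {
    private final ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();

    void recordHit(String key) {
        // Racy two-step version: if (!hits.containsKey(key)) hits.put(key, 0);
        // followed by hits.put(key, hits.get(key) + 1).
        // Atomic version: the whole read-modify-write happens in one call.
        hits.merge(key, 1, Integer::sum);
    }
}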
Two broad categories of concurrent data collection classes
There are two flavors of concurrent data collection classes. They are designed for very different application needs. To benefit from the "good coding patterns", you must know which one to use given each situation.
Non-blocking concurrent data collections
These classes guarantee a response (returning from a method call) in a deterministic amount of time, whether the operation succeeds or fails. They never deadlock or wait forever.
Blocking concurrent data collections
These classes make use of JVM and OS synchronization features to link together data operations with thread control.
As you have mentioned, they use sleep locks. If a blocking operation on a blocking concurrent data collection cannot be satisfied immediately, the thread requesting it goes to sleep and will be woken up when the operation is satisfied.
There is also a hybrid: blocking concurrent data collections that allow a quick (non-blocking) check of whether the operation might succeed. This quick check can suffer from the time-of-check-to-time-of-use race condition, but used correctly it is useful in some algorithms.
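The standard BlockingQueue interface shows all three flavours side by side; this small sketch simply exercises them:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueFlavours {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(1);

        boolean accepted = q.offer("a");              // non-blocking: false if full
        q.offer("b", 100, TimeUnit.MILLISECONDS);     // hybrid: waits up to 100 ms, then gives up
        // q.put("c");                                // blocking: would sleep until space frees up
        System.out.println("offer accepted: " + accepted);
    }
}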
Before the java.util.concurrent package became available, programmers often had to code their own poor-man's alternatives. Very often, these alternatives had hidden bugs.
Besides data collections?
Callable, Future, and Executor are very useful for concurrent processing. One could say that these patterns offer something remarkably different from the imperative programming paradigm.
Instead of specifying the exact order of execution of a number of tasks, the application can now structure its work differently (a small sketch follows this list):
Callable allows packaging "units of work" with the data that will be worked on,
Future provides a way for different units of work to express their order dependencies - which work unit must be completed ahead of another work unit, etc.
In other words, if two different Callable instances don't indicate any order dependencies, then they can potentially be executed simultaneously, if the machine is capable of parallel execution.
Executor specifies the policies (constraints) and strategies on how these units of work will be executed.
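A minimal sketch tying the three together (the tasks here are trivial placeholders):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class UnitsOfWork {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // Callable packages the work together with the data it works on.
        Callable<Integer> taskA = () -> 6 * 7;
        Callable<Integer> taskB = () -> 10 + 11;

        // No ordering dependency between the two, so they may run in parallel.
        Future<Integer> a = executor.submit(taskA);
        Future<Integer> b = executor.submit(taskB);

        // Future.get() expresses the dependency: this line waits for both results.
        System.out.println(a.get() + b.get());
        executor.shutdown();
    }
}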
One big thing which was reportedly missing from the original java.util.concurrent is the ability to schedule a new Callable upon the successful completion of a Future when it is submitted to an Executor. There are proposals calling for a ListenableFuture.
(In C#, the similar unit-of-work composability is known as Task.WhenAll and Task.WhenAny. Together they make it possible to express many well-known multi-threading execution patterns without having to explicitly create and destroy threads with own code.)

Avoiding race conditions in Java WITHOUT synchronized/lock

In order to avoid race conditions, we can synchronize the write and read methods on the shared variables, locking those variables against other threads.
My question is whether there are other (better) ways to avoid race conditions, since locks make the program slow.
What I found are:
using Atomic classes, if there is only one shared variable;
using an immutable container for multiple shared variables and declaring the container reference volatile (I found this method in the book "Java Concurrency in Practice").
I'm not sure whether these perform faster than the synchronized approach. Are there any other, better methods?
Thanks!
Avoid state.
Make your application as stateless as possible.
Each thread (sequence of actions) should take a context in the beginning and use this context passing it from method to method as a parameter.
When this technique does not solve all your problems, use the Event-Driven mechanism (+Messaging Queue).
When your code has to share something with other components it throws event (message) to some kind of bus (topic, queue, whatever).
Components can register listeners to listen for events and react appropriately.
In this case there are no race conditions (except when inserting events into the queue). If you use a ready-made queue rather than coding one yourself, it should be efficient enough.
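As a hedged sketch of this event-driven idea using a ready-made queue (the class name and single-consumer design are assumptions made for illustration):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventBusSketch {
    private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();

    // Producers on any thread publish events; they share no mutable state.
    public void publish(Runnable event) {
        events.add(event);
    }

    // A single consumer thread drains the queue, so handlers never race.
    public void startConsumer() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    events.take().run();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // shut down cleanly
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }
}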
Also, take a look at the Actors model.
Atomics are indeed more efficient than classic locks due to their non-blocking behavior: a thread waiting to access the memory location is not context-switched, which saves a lot of time.
Probably the best guideline when synchronization is needed is to see how you can reduce the critical section size as much as possible. General ideas include:
Use read-write locks instead of full locks when only some of the threads need to write (see the sketch after this list).
Find ways to restructure code in order to reduce the size of critical sections.
Use atomics when updating a single variable.
Note that some algorithms and data structures that traditionally need locks have lock-free versions (they are more complicated however).
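Here is the read-write lock sketch promised above, assuming a single guarded value (the names are illustrative):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class GuardedConfig {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private String value = "default";

    public String read() {
        rw.readLock().lock();            // many readers may hold this at once
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public void write(String v) {
        rw.writeLock().lock();           // exclusive: blocks readers and writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}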
Well, first off, the Atomic classes are not free of synchronization: they rely on volatile semantics and low-level compare-and-swap operations, much like what you would otherwise have to build by hand.
Second, immutability works great for multi-threading; you no longer need monitor locks and such, but that's because you can only read your immutables, you cannot modify them.
You can't get rid of synchronized/volatile if you want to avoid race conditions in a multithreaded Java program (i.e. if multiple threads can read AND WRITE the same data). Your best bet for better performance is to avoid at least some of the built-in thread-safe classes, which do a rather generic sort of locking, and to make your own implementation that is more tied to your context and thus might allow more granular synchronization and lock acquisition.
Check out this implementation of BlockingCache done by the Ehcache guys:
http://www.massapi.com/source/ehcache-2.4.3/src/net/sf/ehcache/constructs/blocking/BlockingCache.java.html
One of the alternatives is to make shared objects immutable. Check out this post for more details.
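A sketch of the immutable-container-plus-volatile pattern from Java Concurrency in Practice that the question mentions; the Settings and Snapshot names are invented:

public class Settings {
    // Immutable container: all fields final, no setters.
    static final class Snapshot {
        final int timeoutMs;
        final boolean verbose;
        Snapshot(int timeoutMs, boolean verbose) {
            this.timeoutMs = timeoutMs;
            this.verbose = verbose;
        }
    }

    // volatile guarantees readers always see a fully constructed Snapshot.
    private volatile Snapshot current = new Snapshot(1000, false);

    Snapshot get()          { return current; }     // lock-free read
    void set(Snapshot next) { current = next; }     // both values swap atomically
}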
You can perform up to 50 million lock/unlock operations per second. If you want this to be more efficient, I suggest using coarser-grained locking, i.e. don't lock every little thing, but have locks for larger objects. Once you have many more locks than threads, you are less likely to have contention, and having more locks may just add overhead.

If multiple threads are updating the same variable, what should be done so each thread updates the variable correctly?

If multiple threads are updating the same variable, what should I do so each thread updates the variable correctly?
Any help would be greatly appreciated
There are several options (sketched in code after this list):
1) Using no synchronization at all
This can only work if the data is of a primitive type (not long/double), and you don't care about reading stale values (which is unlikely).
2) Declaring the field as volatile
This will guarantee that stale values are never read. It also works fine for objects (assuming the objects aren't changed after creation), because of the happens-before guarantees of volatile variables (See "Java Memory Model").
3) Using java.util.concurrent.AtomicLong, AtomicInteger etc
They are all thread safe, and support special operations like atomic incrementation and atomic compare-and-set operations.
4) Protecting reads and writes with the same lock
This approach provides mutual exclusion, which allows defining a large atomic operation, where multiple data members are manipulated as a single operation.
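To make options 2 through 4 concrete, here are three sketches of the same counter idea (illustrative only, not drop-in code):

import java.util.concurrent.atomic.AtomicLong;

public class Counters {
    // Option 2: volatile guarantees visibility, but count++ is still a
    // read-modify-write, so this only suits a single-writer scenario.
    private volatile long volatileCount;
    void singleWriterIncrement() { volatileCount = volatileCount + 1; }

    // Option 3: atomic increment with no explicit lock.
    private final AtomicLong atomicCount = new AtomicLong();
    void atomicIncrement() { atomicCount.incrementAndGet(); }

    // Option 4: a lock lets a larger compound operation stay atomic.
    private long lockedCount;
    private long lockedTotal;
    synchronized void lockedIncrement(long amount) {
        lockedCount++;
        lockedTotal += amount;    // both updates happen as one atomic unit
    }
}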
This is a major problem with multi-threaded applications, and spans more than I could really cover in an answer, so I'll point you to some resources.
http://download.oracle.com/javase/tutorial/essential/concurrency/sync.html
http://www.vogella.de/articles/JavaConcurrency/article.html#concurrencyjava_synchronized
Essentially, you use the synchronized keyword to place a lock around access to a variable. This makes sure that the relevant piece of code is only run by one thread at a time. You can also lock on the same object in multiple areas.
Additionally, you need to look out for several pitfalls, such as Deadlock.
http://tutorials.jenkov.com/java-concurrency/deadlock.html
Errors caused by misuse of locks are often very difficult to debug and track down, because they aren't very consistent. So, you always need to be careful that you put all of your locks in the correct location.
You should implement locking on the variable in question.
E.g.:
http://download.oracle.com/javase/tutorial/essential/concurrency/newlocks.html

Approach to a thread safe program

All,
What should be the approach to writing a thread safe program. Given a problem statement, my perspective is:
1 > Start off by writing the code for a single-threaded environment.
2 > Underline the fields that would need atomicity and replace them with suitable concurrent classes.
3 > Underline the critical sections and enclose them in synchronized blocks.
4 > Test for deadlocks.
Does anyone have suggestions on other approaches or improvements to my approach? So far, I can see myself enclosing most of the code in synchronized blocks, and I am sure this is not correct.
Writing correct multi-threaded code is hard, and there is not a magic formula or set of steps that will get you there. But, there are some guidelines you can follow.
Personally I wouldn't start with writing code for a single threaded environment and then converting it to multi-threaded. Good multi-threaded code is designed with multi-threading in mind from the start. Atomicity of fields is just one element of concurrent code.
You should decide which areas of the code need to be multi-threaded (in a multi-threaded app, typically not everything needs to be thread-safe). Then you need to design how those sections will be thread-safe. The method for making one area of the code thread-safe may differ from that for other areas. For example, understanding whether there will be a high volume of reads vs. writes is important and might affect the types of locks you use to protect the data.
Immutability is also a key element of threadsafe code. When elements are immutable (i.e. cannot be changed), you don't need to worry about multiple threads modifying them since they cannot be changed. This can greatly simplify thread safety issues and allow you to focus on where you will have multiple data readers and writers.
Understanding details of concurrency in Java (and details of the Java memory model) is very important. If you're not already familiar with these concepts, I recommend reading Java Concurrency In Practice http://www.javaconcurrencyinpractice.com/.
You should use final and immutable fields wherever possible; any other data that you want to change, update inside:
synchronized (this) {
    // update shared mutable state here; all readers and
    // writers must synchronize on the same monitor
}
And remember, sometimes stuff breaks, and when that happens you don't want to prolong the program's execution by trying every possible way to counter it - instead, "fail fast".
As you have asked about "thread safety" and not concurrent performance, your approach is essentially sound. However, a thread-safe program that uses synchronisation probably does not scale well in a multi-CPU environment with any level of contention on your structure/program.
Personally, I like to identify the highest-level state changes and think about how to make them atomic, having the state move from one immutable state to another – copy-on-write, if you like. Then the actual write can be either a compare-and-set operation on an atomic variable or a synchronised update, or whatever strategy works/performs best (as long as it safely publishes the new state).
This can be a bit difficult to structure if your new state is quite different (requires updates to several fields for instance), but I have seen it very successfully solve concurrent performance issues with synchronised access.
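A hedged sketch of that copy-on-write idea, using an AtomicReference compare-and-set as the publication step (the State class is invented for illustration):

import java.util.concurrent.atomic.AtomicReference;

public class CopyOnWriteState {
    // Each State is immutable; an update replaces the whole object at once.
    static final class State {
        final int version;
        final String payload;
        State(int version, String payload) {
            this.version = version;
            this.payload = payload;
        }
    }

    private final AtomicReference<State> state =
            new AtomicReference<>(new State(0, "initial"));

    void update(String newPayload) {
        while (true) {
            State old = state.get();
            State next = new State(old.version + 1, newPayload);
            // Publish only if no other thread won the race; otherwise retry.
            if (state.compareAndSet(old, next)) {
                return;
            }
        }
    }
}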
Buy and read Brian Goetz's "Java Concurrency in Practice".
Any variables (memory) accessible by multiple threads, potentially at the same time, need to be protected by a synchronisation mechanism.
