How to remove deadlock in Java code using NetBeans - java

I have old code in Java which deadlocks... I never used NetBeans as a development tool; however, I need to fix the code.
I ran the application in debug mode, clicked on "Check for Deadlock", and NetBeans brought up a screen. Two out of four threads were in red... see the screen dump below.
I'm new to multithreading, and on top of that, the code is not mine...
What's most likely causing the problem?

As far as I can tell the problem is very likely related to the way in which (or more specifically the order in which) the multiple threads acquire and release locks.
In the above example the two threads need access to two locks (or monitors):
nano.toolbox.strategies.ESMarketMaker
nano.toolbox.strategies.ExecutionManager
From the stack traces of the two threads currently in a deadlock, we can see that the 'ExecutionManager' thread has acquired the 'ExecutionManager' monitor and, while still holding it, is waiting to acquire the 'ESMarketMaker' monitor.
The 'StrategyManager' thread, on the other hand, has acquired the 'ESMarketMaker' monitor and, while still holding it, is waiting to acquire the 'ExecutionManager' monitor.
This is a classic example of deadlock and of the many ways in which the order of lock acquisition can cause one.
There are several ways to address these kinds of problems:
If possible, all threads that need some set of locks must acquire the shared locks in the same order (the inverted order is exactly the problem in the deadlock above). This is not always feasible, though: multiple threads may have only partially overlapping lock usage under different conditions, which can make it hard or impossible to design an acquisition protocol that guarantees a uniform ordering.
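As an illustrative sketch of the uniform-ordering remedy (class name, lock names, and the trivial work are invented for the example):

```java
public class OrderedLocks {
    // Two locks that several threads need; names are hypothetical.
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    // Every thread takes LOCK_A before LOCK_B, so no cycle of
    // "holds one, waits for the other" can ever form.
    static void doWorkWithBoth() {
        synchronized (LOCK_A) {
            synchronized (LOCK_B) {
                // critical work using both resources
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(OrderedLocks::doWorkWithBoth);
        Thread t2 = new Thread(OrderedLocks::doWorkWithBoth);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("no deadlock");
    }
}
```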
You can also use tryLock(), a non-blocking acquisition that returns a flag indicating success or failure and gives you the option to do something else before retrying. One thing I would recommend in that case: if an acquisition fails, release all currently held locks and start over from scratch. That gives any thread blocked on a lock you hold a chance to complete its work, possibly freeing the locks this thread needs by the time it retries.
Note, though, that deciding on a protocol sometimes requires more explicit control over your locks than plain Java synchronization offers. In those cases explicit ReentrantLock instances can be a benefit, as they let you do things like inspect whether a lock is currently held and perform non-blocking try-locks as described above.
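A sketch of the tryLock()-with-backoff idea using explicit ReentrantLock instances (the lock names, the backoff duration, and the trivial work are invented for the example):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static final ReentrantLock first = new ReentrantLock();
    static final ReentrantLock second = new ReentrantLock();

    // Acquire both locks, but never wait while holding one:
    // on failure, drop everything and retry from scratch.
    static void withBothLocks(Runnable work) throws InterruptedException {
        while (true) {
            if (first.tryLock()) {
                try {
                    if (second.tryLock()) {
                        try {
                            work.run();
                            return;
                        } finally {
                            second.unlock();
                        }
                    }
                } finally {
                    first.unlock();
                }
            }
            // Could not get both: back off briefly before retrying,
            // giving blocked threads a chance to finish their work.
            Thread.sleep(ThreadLocalRandom.current().nextInt(1, 5));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] counter = {0};
        Runnable job = () -> {
            try { withBothLocks(() -> counter[0]++); }
            catch (InterruptedException ignored) { }
        };
        Thread a = new Thread(job), b = new Thread(job);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter[0]); // both tasks completed: 2
    }
}
```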
I hope this helps, I'm sorry I can't be more specific, but I would need to see the source code for that. :-)
(Oh, and p.s.: a third option, if deadlock must be avoided at all costs, is to look into modeling tools. You can model a state machine over the states of the program and its locks, and then use analysis tools that check such a model for possible deadlocks and give you examples if any are found.)

Related

Is there a way to get the number of idle monitors

This link mentions the number of monitors currently in a running JVM (i.e. 250000)
https://bugs.openjdk.org/browse/JDK-8153224
How were they able to obtain that number?
Specifically, I want to obtain the number of idle monitors in a running system. We have observed a long "Stopping threads" phase in a GC run.
The safepoint log shows that the time is spent in the "sync" phase.
We are looking for culprits.
It is true that every Java object can be used as a monitor, but the "idle monitors" that https://bugs.openjdk.org/browse/JDK-8153224 refers to are not simply objects. That wouldn't make sense. It would mean that the mere existence of lots of objects in the heap would delay GC "sync".
So what does it mean?
First we need to understand a bit about how Java monitors (primitive locks) work.
The default state of a monitor is "thin". In this state, the monitor is represented by a couple of lock bits in the object header. When a thread attempts to acquire a monitor that is in the thin state, it uses (I believe) a CAS instruction (or similar) to atomically test that the lock is thin+unlocked and flip it to thin+locked. If that succeeds, the thread has acquired the monitor lock and it proceeds on its way.
However, if the lock bits say that the monitor is already locked, the CAS will fail: we have lock contention. The locking code now needs to add the current thread to a queue for the monitor, and "park" it. But the object header bits cannot represent the queue and so on. So the code creates a "fat" lock data structure to hold the extra state needed. This is called lock inflation. (The precise details don't matter for the purpose of this explanation.)
The problem is that "fat" locks use more memory than "thin" locks, and have more complicated and expensive acquire and release operations. So we want to turn them back into "thin" locks; i.e. "deflate" them. But we don't want to do this immediately after the lock is released, because there is a fair chance the contention will recur and the lock would need to be reinflated.
So there is a mechanism that scans for "fat" locks that are currently not locked and have no threads waiting to acquire them. These are the "idle monitors" that JDK-8153224 is talking about. If I understand what the bug report is saying, when the number of these "idle monitors" is large, the deflater mechanism can significantly delay the GC "sync" step.
This brings us to your problem. How can you tell if the deflater is what is causing your "syncs" to take a long time?
I don't think you can ... directly.
However, for this phenomenon to occur, you would need a scenario where there is a lot of thread contention on a large number of monitors. My advice would be to examine / analyze your application to see if that is plausible. (Note that the Twitter example involved thousands of threads and hundreds of thousands of monitors. It sounds a bit extreme to me ...)
There are other things (culprits) that can cause delays in the GC "sync" step. These include:
certain kinds of long running loops,
long running System.arraycopy() calls, and
having a large number of threads in RUNNING state.
So before spending lots of time on the "idle monitor" theory, I think you should do some safepoint profiling, as mentioned in How to reduce time taken on threads reaching Safepoint - Sync state. Look for application threads that may be the cause.

in multithreaded programming does synchronization strip the benefits of concurrent executions

I have a dilemma regarding the use of multithreading in the application I am working on. I have a workflow in which the state of an object changes, which presents no issues for single-threaded operation. However, in order to improve performance, I am planning to use multiple threads.
It is my understanding that since the state is going to be shared among the threads, every thread must acquire a lock on the state before execution, so doesn't this defeat the purpose of multithreading? It seems like multiple threads won't produce any actual concurrency, so it wouldn't be any better than single threaded.
Is my analysis correct? If I am misunderstanding then would someone please clarify the concept?
The short answer: concurrency is hard. Real concurrency, with multiple concurrent writers, is really hard.
What you need to determine is what your actual consistency guarantees need to be. Does every reader need to be able to see every write, guaranteed? Then you'll be forced into linearizing all the threads somehow (e.g. using locks) -- your next effort should be to ensure you do as much work as possible outside of the lock, to keep the lock held for the shortest possible time.
One way to keep the lock held for the shortest possible time is to use a lock-free algorithm. Most lock-free algorithms are based on an atomic compare-and-set primitive, such as those provided by the java.util.concurrent.atomic package. These can be very high-performance, but designing a successful lock-free algorithm can be subtle. One simple kind of lock-free algorithm is to just build a new (immutable) state object and then atomically make it the "live" state, retrying in a loop if a different state was made live by another writer in the interim. (This approach is good enough for many applications, but it's vulnerable to livelock if you have too many writers.)
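A minimal sketch of that immutable-state-plus-compare-and-set retry loop, using AtomicReference (the Counter record is invented for the example; records need Java 16+):

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasLoop {
    // Immutable state object; never mutated, only replaced wholesale.
    record Counter(int value) {}

    static final AtomicReference<Counter> state =
            new AtomicReference<>(new Counter(0));

    static void increment() {
        while (true) {
            Counter current = state.get();
            Counter next = new Counter(current.value() + 1);
            // Publish only if no other writer got in between.
            if (state.compareAndSet(current, next)) {
                return;
            }
            // Another thread won the race: loop and retry on fresh state.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.length; i++) {
            writers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) increment();
            });
            writers[i].start();
        }
        for (Thread t : writers) t.join();
        System.out.println(state.get().value()); // 4 threads x 1000 = 4000
    }
}
```

No update is ever lost, even though no lock is ever held; the cost is that a losing writer redoes its (cheap) state construction.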
If you can get by with a looser consistency guarantee, then many other optimizations are possible. For example, you can use thread-local caches so that each thread sees its own view of the data and can be writing in parallel. Then you need to deal with the consequences of data being stale or inconsistent. Most techniques in this vein strive for eventual consistency: writes may not be visible to all readers immediately, but they are guaranteed to be visible to all readers eventually.
This is an active area of research, and a complete answer could fill a book (really, several books!). If you're just getting started in this area, I'd recommend you read Java Concurrency in Practice by Goetz et al, as it provides a good introduction to the subject and lots of practical advice about how to successfully build concurrent systems.
Your interpretation of the limits of multithreading and concurrency is correct. Since a thread must acquire the lock on the state in order to perform work (and wait when it cannot), you are essentially splitting the work of a single thread among multiple threads.
The best way to fix this is to adjust your program design to limit the size of the critical section. As we learned in my operating systems course with process synchronization,
only one thread may be executing its critical section at any given time
The specific term critical section may not directly apply to Java concurrency, but it still illustrates the concept.
What does it mean to limit this critical section? For example, let's say you have a program managing a single bank account (unrealistic, but it illustrates my point). If a thread must acquire a lock on the account to update the balance, the basic option would be to have a single thread working on the whole update at all times (without concurrency): the critical section would be the entire program. However, suppose there is also other logic to execute, such as alerting other banks of the balance update. You could require the lock on the account state only while updating the balance, and not while alerting other banks, shrinking the critical section and allowing other threads to do useful work (alerting other banks) while one thread updates the balance.
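A sketch of that narrowing, with a hypothetical Listener interface standing in for "alerting other banks":

```java
public class BankAccount {
    private long balance;
    private final Object lock = new Object();

    // Hypothetical callback representing the slow "alert other banks" step.
    interface Listener { void balanceChanged(long newBalance); }

    void deposit(long amount, Listener otherBanks) {
        long snapshot;
        synchronized (lock) {        // critical section: just the update
            balance += amount;
            snapshot = balance;
        }
        // The slow notification runs outside the lock, so other threads
        // can update the balance while this one is still alerting.
        otherBanks.balanceChanged(snapshot);
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(100, b -> System.out.println("new balance: " + b));
    }
}
```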
Please comment if this was unclear. You seem to already understand the constraints of concurrency, but hopefully this reveals possible steps towards implementing it.
Your need is not totally clear, but you have guessed correctly about the limitations multithreading can have.
Running parallel threads makes sense when some relatively autonomous tasks can be performed concurrently by distinct threads or groups of threads.
If your scenario looks like this: you start 5 threads, but in practice only a single thread is ever active while the others wait on a lock, then multithreading makes no sense and may even introduce overhead from CPU context switches.
I think that in your use case, multithreading could be used for:
tasks that don't change the state;
tasks that do change the state, provided each can be divided into multiple units of processing in which only a minimal set of instructions touches the shared state, making multithreading profitable.
It is my understanding that since the state is going to be shared among the threads, every thread must acquire a lock on the state before execution, so doesn't this defeat the purpose of multithreading?
The short answer is "it depends". It is rare that you have a multithreaded application that has no shared data. So sharing data, even if it needs a full lock, doesn't necessarily defeat the performance improvements when making a single threaded application be multi-threaded.
The big question is the frequency with which the state needs to be updated by each thread. If the threads read the state, do their concurrent processing (which takes time), and then alter the state at the end, you may see performance gains. On the other hand, if every step in the processing needs to be coordinated between threads, they may all spend their time contending for the state object. Reducing this dependence on shared state will improve your multi-threaded performance.
There are also more efficient ways to update a state variable which can avoid locks. Something like the following pattern is used a lot:
private AtomicReference<State> sharedState;
...
// inside a thread's processing loop
while (true) {
    // take a snapshot of the current shared state
    State existingState = sharedState.get();
    // create a new state object from the existing one plus our processing
    State newState = updateState(existingState);
    // if no other thread changed the state in the meantime, install ours
    if (sharedState.compareAndSet(existingState, newState)) {
        break;
    }
    // otherwise another thread got there first: re-read and try again
}
One way to handle state changes is to have a coordinating thread. It is the only thread which reads from the state and generates jobs. As jobs finish they put updates to the state on a BlockingQueue which is then read by the coordinating thread which updates the state in turn. Then the processing threads don't have to all be contending for access to the shared state.
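A minimal sketch of that coordinating-thread pattern with a BlockingQueue (the worker count and the integer "update" payloads are invented for the example):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Coordinator {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> updates = new LinkedBlockingQueue<>();
        final int WORKERS = 3;

        // Worker threads never touch the state directly;
        // they just post their results onto the queue.
        for (int i = 0; i < WORKERS; i++) {
            final int result = (i + 1) * 10;
            new Thread(() -> updates.add(result)).start();
        }

        // The coordinating thread is the only reader and writer of the
        // state, so no lock on it is ever needed.
        int state = 0;
        for (int received = 0; received < WORKERS; received++) {
            state += updates.take(); // blocks until a worker posts an update
        }
        System.out.println(state); // 10 + 20 + 30 = 60
    }
}
```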
Imagine it this way :
Synchronization is blocking
Concurrency is parallelization
You don't have to use synchronization. You can use an Atomic reference object as a wrapper for your shared mutable state.
You can also use stamped locks (StampedLock), which improve concurrency by allowing optimistic reads, or accumulators such as LongAdder and LongAccumulator to write concurrent code. These features are part of Java 8.
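The optimistic-read idiom looks like this; the sketch is modelled on the example in the StampedLock Javadoc:

```java
import java.util.concurrent.locks.StampedLock;

public class Point {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = lock.writeLock();
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    // Optimistic read: no blocking at all unless a writer slipped in,
    // in which case we fall back to a full read lock.
    double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();
        double cx = x, cy = y;
        if (!lock.validate(stamp)) {   // a write happened; re-read safely
            stamp = lock.readLock();
            try {
                cx = x;
                cy = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }

    public static void main(String[] args) {
        Point p = new Point();
        p.move(3, 4);
        System.out.println(p.distanceFromOrigin()); // 5.0
    }
}
```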
Another way to avoid synchronization is to use immutable objects, which can be shared and published freely and need no synchronization. I should add that you should prefer immutable objects regardless of concurrency, because they make an object's state space easier to reason about.

Java Thread.suspend precise semantic

This question is NOT about alternatives to Thread.suspend.
This is about the possibility of implementing a biased lock with Thread.suspend, which (I believe) can't be implemented with Thread.interrupt or similar alternatives.
I know Thread.suspend is deprecated.
But I want to know the precise semantics of Thread.suspend.
If I call thread1.suspend(), am I guaranteed to be blocked until thread1 is fully stopped? If I call thread1.resume(), can this call be visible to other threads out of order?
Moreover, if I successfully suspend a thread, will that thread be suspended at a somewhat safe point? Will I see its intermediate state (because Java forbids out-of-thin-air values even in improperly synchronized programs, I don't believe this is allowed), or will I see something out of order (if suspend is an asynchronous request, then surely I will see that kind of thing)?
I want to know this because I want to implement a toy asymmetric lock in Java (like BiasedLock in HotSpot). Using Thread.suspend you can implement a Dekker-like lock without a store-load barrier (shifting the burden to the rare path). My experiments show it works, but since a Thread.sleep is enough to wait for a remote context switch, I am not sure this is guaranteed behavior.
By the way, is there any other way to force (or detect) a remote barrier? For example, I searched the web and found others using FlushProcessWriteBuffers or changing affinity to bind a thread to each core. Can these tricks be done within Java?
EDIT
I came up with an idea. Maybe I can use GC and a finalizer to implement the biased lock, at least if only two threads are involved. Unfortunately the slow path may require an explicit gc() call, which isn't really practical.
If the GC is not precise, I may end up with a deadlock. If the GC is too smart and collects my object before I nullify the reference (maybe the compiler is allowed to reuse stack variables, but is it allowed to do that kind of thing for heap variables, ignoring acquire and load fences?), I end up with corrupted data.
EDIT
It seems a so-called "reachability fence" is needed to prevent the optimizer from moving an object's last reference upward. Unfortunately, no such construct is available.
Its semantics consist entirely of what is specified in the Javadoc:
Suspends this thread.
First, the checkAccess method of this thread is called with no arguments. This may result in throwing a SecurityException (in the current thread).
If the thread is alive, it is suspended and makes no further progress unless and until it is resumed.
But as you're not going to use it, because it's deprecated, this is all irrelevant.

What will happen if the locks themselves get contended upon?

All objects in Java have intrinsic locks and these locks are used for synchronization. This concept prevents objects from being manipulated by different threads at the same time, or helps control execution of specific blocks of code.
What will happen if the locks themselves get contended upon - i.e. 2 threads asking for the lock at the exact microsecond.
Who gets it, and how does it get resolved?
What will happen if the locks themselves get contended upon - i.e. 2 threads asking for the lock at the exact microsecond.
One thread will get the lock, and the other will be blocked until the first thread releases it.
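A small demonstration that contended acquisitions are simply serialized: two threads hammer the same lock, and because only one can hold it at a time, no increment is ever lost (the counts are invented for the example):

```java
public class ContendedCounter {
    private static int count = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) { // only one thread at a time gets in
                    count++;
                }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Every contended request eventually got the lock, one at a time.
        System.out.println(count); // 2 x 100000 = 200000
    }
}
```

Without the synchronized block, the two unsynchronized read-modify-write sequences could interleave and the final count would usually be less than 200000.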
(Aside: some of the other answers assert that there is no such thing as "at the same time" in Java. They are wrong!! There is such a thing! If the JVM is using two or more cores of a multi-core system, then two threads on different cores could request the same Object lock in exactly the same hardware clock cycle. Clearly, only one will get it, but that is a different issue.)
Who gets it, and how does it get resolved?
It is not specified which thread will get the lock.
It is (typically) resolved by the OS's thread scheduler, using whatever mechanisms it uses. This aspect of the JVM's behaviour is (obviously) platform specific.
If you really, really want to figure out precisely what is going on, the source code for OpenJDK and Linux are freely available. But to be frank, you don't need to know.
When it comes to concurrency, there is no such thing as "at the same time"; Java ensures that someone is first.
If you are asking about simultaneous contended access to lock objects, that is the essence of concurrent programming - nothing to say other than "it happens by design"
If you are asking about simultaneously using an object as a lock and as a regular object, it's not a problem: It happens all the time when using non synchronized methods during a concurrent call to a synchronized method (which uses this as the lock object)
The thing handling lock requests can only handle one thing at a time; therefore, 2 threads can't ask for the lock at the same time.
Even if it is in the same microsecond, one will still be ahead of the other one (perhaps faster by a nanosecond). The one that asks first will get the lock. The one who asks second will then wait for the lock to be released.
An analogy would be stacking papers together... Suppose I have one hand, that hand can only hold one piece of paper, and different people (threads) are handing me single pieces of paper. If two people "offer me papers at the same time," I will still handle one before the other.
In reality, there is no such thing as "at the same time." The phrase exists because our brains cannot work at micro-, nano-, or picosecond speeds.
http://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html
Locks are implemented not only in JVM but also at OS and hardware level so the mechanisms may differ. We rely on Java API and JVM specs and they say that one of the threads will acquire the lock the other will block.

Synchronization in java - Can we set priority to a synchronized access in java?

Synchronization provides exclusive access to an object or method by putting the synchronized keyword before a method declaration. What if I want to give higher precedence to one particular access when two or more accesses to a method occur at the same time? Can we do that?
Or just may be I'm misunderstanding the concept of Synchronization in java. Please correct me.
I have other questions as well,
Under what requirements should we make method synchronized?
When to make method synchronized ? And when to make block synchronized ?
Also, if we make a method synchronized, will the class be synchronized too? A little confused here.
Please help. Thanks.
No. Sadly Java synchronization and wait/notify appear to have been copied from the very poor example of Unix, rather than almost anywhere else where there would have been priority queues instead of thundering herds. When Per Brinch Hansen, author of monitors and Objective Pascal, saw Java, he commented 'clearly I have laboured in vain'.
There is a solution for almost everything you need in multi-threading and synchronization in the concurrent package, it however requires some thinking about what you do first. The synchronized, wait and notify constructs are like the most basic tools if you have just a very basic problem to solve, but realistically most advanced programs will (/should) never use those and instead rely on the tools available in the Concurrent package.
The way you think about threads is slightly wrong. There is no such thing as a more important thread, there is only a more important task. This is why Java clearly distinguishes between Threads, Runnables and Callables.
Synchronization is a concept to prevent more than one thread from entering a specific part of code, which is - again - the most basic concept of avoiding threading issues. Those issues happen if more than one thread accesses some data, where at least one of those multiple threads is trying to modify that data. Think about an array that is read by Thread A, while it is written by Thread B at the same time. Eventually Thread B will write the cell that Thread A is just about to read. Now as the order of execution of threads is undefined, it is as well undefined whether Thread A will read the old value, the new value or something messed up in between.
A synchronized "lock" around this access is a very brute-force way of ensuring that this never happens; more sophisticated tools are available in the concurrent package, like CopyOnWriteArrayList, which seamlessly handles the above issue by creating a copy for the writing thread, so neither Thread A nor Thread B needs to wait. Other tools are available for other solutions and problems.
If you dig a bit into the available tools you soon learn that they are highly sophisticated, and the difficulties using them is usually located with the programmer and not with the tools, because countless hours of thinking, improving and testing has been gone into those.
Edit: to clarify a bit why the importance is on the task even though you set it on the thread:
Imagine a street with 3 lanes that narrows to 1 lane (the synchronized block) and 5 cars (threads) arriving. Let's further assume there is one person (the car scheduler) who has to decide which car goes first and which ones follow. As there is only 1 lane, he can at best assign one car to the first position, and the others must queue behind. If all cars look the same, he will most likely assign the order more or less randomly, though a car already in front is likely to stay in front, simply because it would be too troublesome to move the cars around.
Now lets say one car has a sign on top "President of the USA inside", so the scheduler will most likely give that car priority in his decision. But even though the sign is on the car, the reason for his decision is not the importance of the car (thread), but the importance on the people inside (task). So the sign is nothing but an information for the scheduler, that this car transports more important people. Whether or not this is true however, the scheduler can't say (at least not without inspection), so he just has to trust the sign on the car.
Now if in another scenario all 5 cars have the "President inside" sign, the scheduler doesn't have any way to decide which one goes first, and he is in the same situation again as he was with all the cars having no sign at all.
Well, in the case of synchronized, access is effectively random if multiple threads are waiting for the lock. But if you need first-come, first-served behavior, you can probably use ReentrantLock with fairness enabled (new ReentrantLock(true)). This is what the API says:
The constructor for this class accepts an optional fairness parameter. When set true, under contention, locks favor granting access to the longest-waiting thread.
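A minimal sketch of a fair lock in use (thread names and the trivial work are invented for the example):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) throws InterruptedException {
        // true = fair: under contention, the longest-waiting thread wins.
        ReentrantLock lock = new ReentrantLock(true);

        Runnable task = () -> {
            lock.lock();
            try {
                System.out.println(
                    Thread.currentThread().getName() + " got the lock");
            } finally {
                lock.unlock();
            }
        };
        Thread a = new Thread(task, "worker-1");
        Thread b = new Thread(task, "worker-2");
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("fair: " + lock.isFair());
    }
}
```

Note that fairness only orders contended acquisitions; it does not guarantee overall scheduling order, and fair locks are generally slower than the default unfair mode.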
Otherwise, if you wish to grant access based on some other factor, I guess it shouldn't be complicated to build one yourself: have a class whose lock() call blocks while some other thread is executing, and whose unlock() call unblocks a waiting thread chosen by whatever algorithm you wish.
There's no such thing as "priority" among synchronized methods/blocks or accesses to them. If some other thread is already holding the object's monitor (i.e. if another synchronized method or synchronized (this) {} block is in progress and hasn't relinquished the monitor by a call to this.wait()), all other threads will have to wait until it's done.
There are classes in the java.util.concurrent package that might be able to help you if used correctly, such as priority queues. Full guidance on how to use them correctly is probably beyond the scope of this question - you should probably read a decent tutorial to start with.
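For instance, a PriorityBlockingQueue can order pending work so that a consumer always serves the most urgent task first (the Task record, names, and priorities are invented for the example; records need Java 16+):

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityAccess {
    // A task with an explicit priority; lower number = more urgent.
    record Task(int priority, String name) implements Comparable<Task> {
        public int compareTo(Task other) {
            return Integer.compare(priority, other.priority);
        }
    }

    public static void main(String[] args) {
        PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>();
        queue.add(new Task(5, "routine-report"));
        queue.add(new Task(1, "urgent-trade"));
        queue.add(new Task(3, "audit-check"));

        // A single worker draining the queue serves urgent work first,
        // regardless of submission order.
        while (!queue.isEmpty()) {
            System.out.println(queue.poll().name());
        }
    }
}
```

This prioritizes the tasks, not the threads, which matches the point made in the car analogy above.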
