When I have a synchronized method in Java and multiple threads (let's say 10) try to access it, suppose one thread acquires the lock, finishes executing the method, and releases the lock. Which of the remaining 9 threads gets access to the method next? Is there a standard mechanism for picking the next thread from the pool, is it FIFO order, or is the thread selected at random?
Thread scheduling in Java is platform-specific. There is no guarantee in the order of thread execution in a synchronization scenario.
Having said that, the procedure is roughly as follows:
A preemptive scheduling algorithm is employed
Each thread gets a priority number by the JVM
The thread with the highest priority is selected
FIFO ordering is followed among threads with identical priorities
The JVM runs the thread with the highest priority. Priorities can be programmatically set, too, via the setPriority() method of the Thread class.
The next thread will be selected essentially at random, and the algorithm for selecting the next thread may be different on different machines. This is necessary for Java to gain the efficiencies of using native threads.
If you need first in, first out behavior, you may want to use something from the java.util.concurrent package, such as the Semaphore class with fairness set to true.
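For illustration, here is a minimal sketch (the class and method names are invented for this example) of guarding a critical section with a fair Semaphore so that waiting threads acquire the permit roughly in arrival order:

import java.util.concurrent.Semaphore;

public class FairAccessDemo {

    // One permit with fairness = true: blocked threads acquire it first-in, first-out.
    private final Semaphore permit = new Semaphore(1, true);

    public void criticalSection() throws InterruptedException {
        permit.acquire();          // waiting threads are queued in FIFO order
        try {
            System.out.println(Thread.currentThread().getName() + " holds the permit");
            Thread.sleep(50);      // simulate some work inside the guarded section
        } finally {
            permit.release();      // the longest-waiting thread gets the permit next
        }
    }

    public static void main(String[] args) {
        FairAccessDemo demo = new FairAccessDemo();
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    demo.criticalSection();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "worker-" + i).start();
        }
    }
}

Note that fairness only orders the acquire() calls themselves; which thread reaches acquire() first is still up to the OS scheduler.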
As per the JLS rule below:
Each action in a thread happens-before every action in that thread that comes later in the program's order.
In the below case, would clear() always execute before put in a multithreaded environment
private ConcurrentMap<Feature, Boolean> featureMap = new ConcurrentHashMap<>();

public void loadAllConfiguration() {
    featureMap.clear();
    featureMap.put(feature, enabled); // key/value arguments were omitted in the original snippet
}
In the below case, would clear() always execute before put in a multithreaded environment
Yes, in a multithreaded application each thread executes clear() before its own put(). However, when you look at the interaction of multiple threads, that per-thread ordering tells you nothing about how the calls from different threads interleave on the shared ConcurrentHashMap.
For example, due to race conditions, you might see the following sequence of events:
Thread 1 calls clear().
Thread 2 calls clear().
Thread 3 calls clear().
Thread 2 calls put().
Thread 1 calls put().
Thread 3 calls put().
Even though each thread does clear and then put, there is no guarantee that there will only be 1 item in the ConcurrentHashMap if that was the point of your question.
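To make that concrete, here is a hedged sketch (the Feature enum and the thread setup are invented for the example): every thread calls clear() and then put(), yet the map can finish with more than one entry because the calls from different threads interleave.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;

public class ClearPutInterleaving {

    enum Feature { A, B, C }  // hypothetical key type, just for the demo

    private static final ConcurrentMap<Feature, Boolean> featureMap = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch start = new CountDownLatch(1);
        Feature[] features = Feature.values();
        Thread[] threads = new Thread[features.length];

        for (int i = 0; i < threads.length; i++) {
            final Feature feature = features[i];
            threads[i] = new Thread(() -> {
                try {
                    start.await();                        // release all threads at roughly the same time
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                featureMap.clear();                       // happens-before the put below, but only within this thread
                featureMap.put(feature, Boolean.TRUE);
            });
            threads[i].start();
        }

        start.countDown();
        for (Thread t : threads) {
            t.join();
        }
        // Often prints 2 or 3 rather than 1: another thread's put can land
        // between this thread's clear() and put().
        System.out.println("entries after all threads finished: " + featureMap.size());
    }
}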
I'm not super clear on the question but I think:
Each action in a thread happens-before every action in that thread
that comes later in the program's order.
This means that, within the context of a single thread (since clear() and put() are blocking, synchronous calls), the runtime guarantees they will be executed in the order they are invoked.
Based on my limited understanding of java, this should NOT extend to a multithreaded environment. Suppose you have a single concurrent map shared between two threads, and each one of those threads invokes loadAllConfiguration against a shared featureMap.
The threads can execute concurrently, so the operations can be interleaved!
This could result in an execution order of:
**THREAD 1**                 **THREAD 2**
map.clear()
map.put()
                             map.clear()
                             map.put()
or even in both clears being called concurrently and then both puts being applied concurrently.
I haven't used Java much, so I'm not sure exactly what ConcurrentHashMap provides, but I'm assuming it only protects you from data races (one thread writing while another reads) via some internal synchronization. It might still leave you exposed to logical errors, i.e. the clears and puts of different threads being interleaved in unpredictable ways.
I am currently working on an Android project written in Java.
I have a dashboard that queries data from a Cloudant database and renders the data on graphs. The data, however, has to be processed when received.
I have 4 AsyncTasks that process the received data in the overridden doInBackground method simultaneously (or at least they are supposed to). Because the processing was very slow, I tried the line
Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
in each AsyncTask.
However, the 4 AsyncTasks seem to run one after the other. Is that due to changing the priority of the threads to the maximum? When a thread's priority is changed, does it stop all other threads and continue with just that one until it finishes?
From: https://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html
Every thread has a priority. Threads with higher priority are executed
in preference to threads with lower priority. Each thread may or may
not also be marked as a daemon. When code running in some thread
creates a new Thread object, the new thread has its priority initially
set equal to the priority of the creating thread, and is a daemon
thread if and only if the creating thread is a daemon.
(as a note: this information was found by searching "java Thread.MAX_PRIORITY" in google and looking at the top result)
From the documentation:
Every thread has a priority. Threads with higher priority are executed
in preference to threads with lower priority. ... When code running in
some thread creates a new Thread object, the new thread has its
priority initially set equal to the priority of the creating thread.
It's literally just a hint to tell the scheduler roughly what order to execute the threads. It can, and likely will, ignore you.
If you've set the priority of all threads to the maximum, that's effectively the same as leaving them at the default. As you may have to tell your boss, if everything is top priority then nothing is top priority :)
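A rough sketch of that point (the worker names and the trivial loop are made up): every worker bumps itself to MAX_PRIORITY, but since they all end up with the same priority the scheduler is free to interleave them exactly as it would with the defaults.

public class AllMaxPriorityDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            new Thread(() -> {
                // Raising our own priority is only a hint to the scheduler,
                // and it is meaningless if every other thread does the same.
                Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
                for (int step = 0; step < 3; step++) {
                    System.out.println(Thread.currentThread().getName() + " step " + step);
                }
            }, "worker-" + i).start();
        }
    }
}

Run it a few times and the output order will typically differ from run to run.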
There is a strange thing happening on our production box.
Code functionality:
A UI servlet takes a monitor lock on the document object which is being actioned upon by the user and performs some computation on it. The monitor lock is acquired to prevent the same document object from getting modified concurrently by multiple users simultaneously.
Issue Observed in Prod:
A few user actions are timing out.
Log Analysis:
The thread corresponding to the timed-out user actions prints all its logs prior to acquiring the monitor lock on the document object. Then there is a gap of over an hour during which the thread does not surface in the logs at all; it then suddenly becomes active, does the computation, and attempts to send back a response, which obviously errors out because the HTTP request has already timed out.
We have checked the logs and code and can confirm that there is no other thread which had acquired the monitor lock on that particular document object. So the lock was uncontested at the point in question.
What could be the possible issue? Is it just that the thread was put into a runnable state on encountering the synchronized block, and for the next 60-80 minutes the CPU never got a chance to run this particular runnable thread?
Ensure the application code is not messing around with thread priority via the Thread.setPriority() method or the like. If you're using an IDE like IntelliJ and the Java sources are available, and assuming you can run the application and the relevant flow locally on your development machine, you can put a breakpoint in Thread.setPriority() to see whether it is being invoked anywhere. This is an excerpt from Java Concurrency in Practice (Goetz, 2006) on how unpredictable behavior can become if you set thread priorities manually:
10.3.1. Starvation
Starvation occurs when a thread is perpetually denied access to resources it needs in order to make progress; the most commonly starved resource is CPU cycles. Starvation in Java applications can be caused by inappropriate use of thread priorities. It can also be caused by executing nonterminating constructs (infinite loops or resource waits that do not terminate) with a lock held, since other threads that need that lock will never be able to acquire it.

The thread priorities defined in the Thread API are merely scheduling hints. The Thread API defines ten priority levels that the JVM can map to operating system scheduling priorities as it sees fit. This mapping is platform-specific, so two Java priorities can map to the same OS priority on one system and different OS priorities on another. Some operating systems have fewer than ten priority levels, in which case multiple Java priorities map to the same OS priority. Operating system schedulers go to great lengths to provide scheduling fairness and liveness beyond that required by the Java Language Specification. In most Java applications, all application threads have the same priority, Thread.NORM_PRIORITY. The thread priority mechanism is a blunt instrument, and it's not always obvious what effect changing priorities will have; boosting a thread's priority might do nothing or might always cause one thread to be scheduled in preference to the other, causing starvation.

It is generally wise to resist the temptation to tweak thread priorities. As soon as you start modifying priorities, the behavior of your application becomes platform specific and you introduce the risk of starvation. You can often spot a program that is trying to recover from priority tweaking or other responsiveness problems by the presence of Thread.sleep or Thread.yield calls in odd places, in an attempt to give more time to lower priority threads.[5]
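As a complement to the breakpoint approach suggested above, here is a small diagnostic sketch (where and how often you call it is up to you) that lists every live thread whose priority is not the default, using only the standard Thread API:

import java.util.Map;

public class ThreadPriorityDump {

    // Call this from anywhere in the application (for example a debug endpoint
    // or a periodic task) to see whether any thread runs with a non-default priority.
    public static void dump() {
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            if (t.getPriority() != Thread.NORM_PRIORITY) {
                System.out.println("Non-default priority: " + t.getName()
                        + " priority=" + t.getPriority()
                        + " state=" + t.getState());
            }
        }
    }

    public static void main(String[] args) {
        dump();  // on a freshly started JVM this usually reports only JVM-internal threads, if anything
    }
}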
What does thread priority mean? Will a thread with MAX_PRIORITY complete its execution before a thread with MIN_PRIORITY? Will a MAX_PRIORITY thread be given more execution time than a MIN_PRIORITY thread? Or something else entirely?
The javadoc for Thread only says this, "Threads with higher priority are executed in preference to threads with lower priority." That can mean different things depending on what JVM you are running and, more likely, on what operating system you are running.
In the simplest interpretation of "priority", as implemented by some real-time, embedded operating systems; a thread with a lower priority will never get to run when a higher priority thread is waiting to run. The lower priority thread will be immediately preempted by whatever event caused the higher priority thread to become runnable. That kind of absolute priority is easy to implement, but it puts a burden on the programmer to correctly assign priorities to all of the different threads of all of the different processes running in the box. That is why you usually don't see it outside of embedded systems.
Most general-purpose operating systems assume that not all processes are designed to cooperate with one another. They try to be fair, giving an equal share to each thread that wants CPU time. Usually that is accomplished by continually adjusting the thread's true priorities according to some formula that accounts for how much CPU different threads have wanted in the recent past, and how much each got. There usually is some kind of a weighting factor, to let a programmer say that this thread should get a larger "share" than that thread. (e.g., the "nice" value on a Unix-like system.)
Because any practical JVM must rely on the OS to provide thread scheduling, and because there are so many different ways to interpret "priority", Java does not attempt to dictate what "priority" really means.
Suppose you have a program that starts two threads a and b, and b starts another ten threads of its own. Does a receive half of the available "attention" while b and its threads share the other half, or do they all share equally? If the answer is the latter by default, how could you achieve the former? Thanks!
There is a lot of good documentation on this topic. One example is this.
When a Java thread is created, it inherits its priority from the thread that created it. You can also modify a thread's priority at any time after its creation using the setPriority() method. Thread priorities are integers ranging between MIN_PRIORITY and MAX_PRIORITY (constants defined in the Thread class). The higher the integer, the higher the priority.

At any given time, when multiple threads are ready to be executed, the runtime system chooses the "Runnable" thread with the highest priority for execution. Only when that thread stops, yields, or becomes "Not Runnable" for some reason will a lower-priority thread start executing. If two threads of the same priority are waiting for the CPU, the scheduler chooses one of them to run in a round-robin fashion. The chosen thread will run until one of the following conditions is true:
A higher priority thread becomes "Runnable".
It yields, or its run() method exits.
On systems that support time-slicing, its time allotment has expired.
At any given time, the highest priority thread is running. However, this is not guaranteed. The thread scheduler may choose to run a lower priority thread to avoid starvation. For this reason, use priority only to affect scheduling policy for efficiency purposes. Do not rely on thread priority for algorithm correctness.
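A minimal sketch of the inheritance rule quoted above (the names and the priority value are arbitrary): a child thread starts with its creator's priority until someone calls setPriority() on it explicitly, but inheriting a priority says nothing about how CPU time is divided.

public class PriorityInheritanceDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread b = new Thread(() -> {
            System.out.println("b runs at priority " + Thread.currentThread().getPriority());
            // Threads created here inherit b's priority; they do not share b's time slice.
            for (int i = 0; i < 3; i++) {
                Thread child = new Thread(() ->
                        System.out.println(Thread.currentThread().getName()
                                + " inherited priority " + Thread.currentThread().getPriority()));
                child.setName("b-child-" + i);
                child.start();
            }
        }, "b");
        b.setPriority(Thread.MIN_PRIORITY + 2);  // set before start(); the children will report the same value
        b.start();
        b.join();
    }
}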
Does a receive half of the available "attention" while b and its threads share the other half, or do they all share equally?
Neither. The proportion of time received by each thread is unspecified, and there's no reliable way to control it in Java. It is up to the native thread scheduler.
If the answer is the latter by default, how could you achieve the former?
You can't, reliably.
The only thing that you have to influence the relative amounts of time each thread gets to run are thread priorities. Even they are not reliable or predictable. The javadocs simply say that a high priority thread is executed "in preference to" a lower priority thread. In practice, it depends on how the native thread scheduler handles priorities.
For more details: http://docs.oracle.com/javase/7/docs/technotes/guides/vm/thread-priorities.html ... which includes information on how thread priorities are handled on a range of platforms and Java versions.
One cannot say with certainty the order in which the threads will be executed. The thread scheduler works according to its own built-in algorithm, which we cannot change. It picks a thread from the runnable pool (typically the highest-priority one) and makes it run.
We can only suggest the priority with which the scheduler should process our threads.