Difference between thread priorities - java

What does thread priority mean? Will a thread with MAX_PRIORITY complete its execution before a thread which has MIN_PRIORITY? Or will a MAX_PRIORITY thread be given more execution time than a MIN_PRIORITY thread? Or is it something else?

The javadoc for Thread only says this, "Threads with higher priority are executed in preference to threads with lower priority." That can mean different things depending on what JVM you are running and, more likely, on what operating system you are running.
In the simplest interpretation of "priority", as implemented by some real-time, embedded operating systems, a thread with a lower priority will never get to run while a higher priority thread is waiting to run. The lower priority thread is immediately preempted by whatever event causes the higher priority thread to become runnable. That kind of absolute priority is easy to implement, but it puts a burden on the programmer to correctly assign priorities to all of the different threads of all of the different processes running on the box. That is why you usually don't see it outside of embedded systems.
Most general-purpose operating systems assume that not all processes are designed to cooperate with one another. They try to be fair, giving an equal share to each thread that wants CPU time. Usually that is accomplished by continually adjusting the thread's true priorities according to some formula that accounts for how much CPU different threads have wanted in the recent past, and how much each got. There usually is some kind of a weighting factor, to let a programmer say that this thread should get a larger "share" than that thread. (e.g., the "nice" value on a Unix-like system.)
Because any practical JVM must rely on the OS to provide thread scheduling, and because there are so many different ways to interpret "priority", Java does not attempt to dictate what "priority" really means.
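If you want to see what "preference" means on your own JVM/OS combination, a quick experiment is to run two CPU-bound threads at opposite priorities and compare how much work each gets done. The following is only a rough sketch, not a benchmark; on a multi-core machine with idle cores both counts will usually come out similar regardless of priority, unless you pin the process to a single core (for example with taskset on Linux):

    // Sketch: two CPU-bound threads at MIN_PRIORITY and MAX_PRIORITY increment
    // counters for two seconds; the printed counts show how (or whether) this
    // particular JVM/OS combination honours Java priorities.
    public class PriorityDemo {
        static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            long[] counts = new long[2];

            Thread low = new Thread(() -> { while (running) counts[0]++; });
            Thread high = new Thread(() -> { while (running) counts[1]++; });

            low.setPriority(Thread.MIN_PRIORITY);
            high.setPriority(Thread.MAX_PRIORITY);

            low.start();
            high.start();
            Thread.sleep(2000);   // let both threads compete for the CPU
            running = false;
            low.join();
            high.join();

            System.out.println("MIN_PRIORITY iterations: " + counts[0]);
            System.out.println("MAX_PRIORITY iterations: " + counts[1]);
        }
    }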

Related

how does Thread.sleep(delay) react to actual parallel threads

The link to the documentation says: Thread.sleep causes the current thread to suspend execution for a specified period
What does the term current thread mean? I mean, if the processor has only one core it makes sense to call one of the threads the current thread, but if all the threads (say 4 of them) are running individually on separate cores, then which one is the current thread?
The "current thread" is the thread which calls Thread.sleep(delay).
Also if a thread sleeps, it does not block the entire CPU core. Some other thread can be run on the same CPU core while your thread is asleep.
Every single command and method call you make has to be executed by some thread. From that thread's perspective, it is itself the current thread. So in other words: Thread.sleep(delay) pauses the thread that executes the Thread.sleep() call.
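To make that concrete, here is a minimal sketch in which the main thread calls Thread.sleep() and therefore pauses, while a separately started worker thread keeps printing (possibly on the very same core):

    // Sketch: Thread.sleep() pauses only the calling thread; the worker keeps running.
    public class SleepDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                for (int i = 0; i < 5; i++) {
                    System.out.println("worker still running: " + i);
                    try {
                        Thread.sleep(500);       // pauses the worker thread only
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
            worker.start();

            Thread.sleep(2000);                  // pauses the main thread only
            System.out.println("main woke up");
            worker.join();
        }
    }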
Also, keep in mind that multi-threading and multiple cores only have a very distant relationship.
Even before multi-core CPUs were commonplace, pretty much every operating system supported heavy multi-threading (or multi-tasking, which is basically the same thing for the purposes of this discussion).
In modern operating systems this is done with a technique called preemptive multitasking. This basically means that the OS can forcibly pause the currently running process and allow another one to run for a short time, providing the illusion of actual parallel processing.
And since a lot of a given process's time is often spent waiting for external I/O (network, disk, ...), this even lets the CPU be used more efficiently: the time one process would spend waiting for I/O, another process can spend doing actual computation.
As an example at the time of writing this, my laptop has 1311 threads (most of which are probably sleeping and only a handful will actually run and/or wait to run), even though it has only 4 cores.
tl;dr while multiple cores allow more than one thread to actually execute at the exact same time, you can have multi-threading even with a single core and there's very little noticeable difference if you do (besides raw performance, obviously)
The name, "Current thread," was chosen for the convenience of the authors of the operating system, not for the authors of applications that have to run under the operating system.
In the source code for an operating system, it makes sense to have a variable current_thread[cpu_id] that points to a data structure that describes the thread that is running on that cpu_id at any given moment.
From the point-of-view of an application programmer, any system call that is supposed to do something to the "current thread," is going to do it to the thread that makes the call. If a thread that is running on CPU 3 calls Thread.sleep(n), the OS will look up current_thread[3] (i.e., the thread that made the call) and put that thread to sleep.
From the application point-of-view, Thread.sleep(n) is a function that appears to do nothing, and always takes at least n milliseconds to do it.
In general, you should substitute "the caller" or "the calling thread" any time you see "current thread" in any API document.

How long does it take to change a thread's priority in Java?

What I can't find is any statement on whether changing a thread's priority is a costly operation, time-wise. I would like to do it frequently, but if each switch carries a significant time penalty it is probably not worth the trouble.
Any answer here is going to be very OS dependent. I suspect with most Unix variants that the answer is no, it's not costly. It may require some sort of data synchronization but otherwise it is just setting a value on the thread's administrative information. I suspect that there is no rescheduling of the threads as discussed in the comments.
That said, without knowing more about your particular use case, I doubt it is going to be worth the trouble. As I say in the answer listed below, about the only time thread prioritization will make a difference is if all of the threads are completely CPU bound and you want one task or another to get more cycles.
Also, thread priorities are very non-linear and small changes to them may have little to no effect, so any overhead incurred in setting thread priorities may well overwhelm any benefit gained by changing them.
See my answer here:
Guide for working with Linux thread priorities and scheduling policies?
Also, check out this article about Java thread priorities and some real life testing of them under Linux. To quote:
As can be seen, thread priorities 1-8 end up with a practically equal share of the CPU, whilst priorities 9 and 10 get a vastly greater share (though with essentially no difference between 9 and 10). The version tested was Java 6 Update 10.
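If you want an actual number for your own platform, you can simply time the call. The following is only a crude sketch (no serious benchmarking discipline, no JIT warm-up), but it will tell you whether Thread.setPriority() costs nanoseconds or milliseconds on your JVM/OS:

    // Crude sketch: measure the average cost of Thread.setPriority() on this platform.
    public class SetPriorityCost {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(60_000);        // keep the target thread alive
                } catch (InterruptedException ignored) {
                }
            });
            t.start();

            final int iterations = 100_000;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                // alternate between two priorities so each call actually changes something
                t.setPriority(i % 2 == 0 ? Thread.NORM_PRIORITY : Thread.MAX_PRIORITY);
            }
            long elapsed = System.nanoTime() - start;

            System.out.println("average setPriority() cost: " + (elapsed / iterations) + " ns");
            t.interrupt();
            t.join();
        }
    }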
In the case of Windows, a call to SetThreadPriority to change the priority of a ready-to-run thread is a system call that moves the thread from its current priority ready queue to a different priority ready queue, which is more costly than just setting a value in a thread object.
If SetThreadPriority is used to increase the priority of a thread, and if that results in the now higher priority thread preempting a lower priority thread, the preemption occurs at call time, not at the next time slice.
Ready queues are mentioned here:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms682105(v=vs.85).aspx
Context switching related to a priority change is mentioned here: "The following events might require thread dispatching ... A thread’s priority changes, either because of a system service call or because Windows itself changes the priority value." and "Preemption ... a lower-priority thread is preempted when a higher-priority thread becomes ready to run. This situation might occur for a couple of reasons: A higher-priority thread’s wait completes ... A thread priority is increased or decreased." Ready queues are also mentioned: "Windows multiprocessor systems have per-processor dispatcher ready queues"
https://www.microsoftpressstore.com/articles/article.aspx?p=2233328&seqNum=7
I asked about this at MSDN forums. The fourth post agrees with the sequence I mention in the first and third post in this thread:
https://social.msdn.microsoft.com/Forums/en-US/d4d40f9b-bfc9-439f-8a76-71cc5392669f/setthreadpriority-to-higher-priority-is-context-switch-immediate?forum=windowsgeneraldevelopmentissues
In the case of current versions of Linux, run queues indexed by priority were replaced by a red-black tree. Changing a thread's priority would involve removal and reinsertion of a thread object within the red-black tree. Preemption would occur if the thread object is moved sufficiently to the "left" of the red-black tree.
https://www.ibm.com/developerworks/library/l-completely-fair-scheduler
In response to the comments about the app that "examines a full-speed stream of incoming Bluetooth data packets", the receiving thread should be highest priority, hopefully spending most of its time not running while waiting for the reception of a packet. The high priority packets would be queued up to be processed by another thread just lower in priority than the receiving thread. Multiple processing threads could take advantage of multiple cores if needed.
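A sketch of that arrangement is below. The readPacket() and process() methods are placeholders for whatever your Bluetooth stack and packet handling actually look like; the point is the structure: a top-priority receiver that spends most of its time blocked, feeding a BlockingQueue drained by slightly lower-priority workers.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Sketch only: readPacket() and process() are placeholders; the structure is what matters.
    public class ReceiverPipeline {
        private final BlockingQueue<byte[]> packets = new ArrayBlockingQueue<>(1024);

        public void start(int workerCount) {
            Thread receiver = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    byte[] packet = readPacket();      // blocks until data arrives
                    if (!packets.offer(packet)) {
                        // queue full: drop or log; never block the receiver for long
                    }
                }
            }, "bt-receiver");
            receiver.setPriority(Thread.MAX_PRIORITY);
            receiver.setDaemon(true);
            receiver.start();

            for (int i = 0; i < workerCount; i++) {
                Thread worker = new Thread(() -> {
                    try {
                        while (true) {
                            process(packets.take());   // blocks until a packet is queued
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }, "bt-worker-" + i);
                worker.setPriority(Thread.MAX_PRIORITY - 1);
                worker.setDaemon(true);
                worker.start();
            }
        }

        private byte[] readPacket() { return new byte[0]; }   // placeholder
        private void process(byte[] packet) { }               // placeholder
    }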

Can a running thread become runnable on entering an uncontested synchronized block?

There is a strange thing happening on our production box.
Code functionality:
A UI servlet takes a monitor lock on the document object being acted upon by the user and performs some computation on it. The monitor lock is acquired to prevent the same document object from being modified by multiple users concurrently.
Issue Observed in Prod:
A few user actions are timing out.
Log Analysis:
The thread corresponding to a timed-out user action prints all of its log lines prior to acquiring the monitor lock on the document object. Then there is a gap of over an hour where the thread does not surface in the logs, after which it suddenly comes alive, does the computation, and attempts to send back a response, which obviously errors out because the HTTP request has already timed out.
We have checked the logs and code and can confirm that there is no other thread which had acquired the monitor lock on that particular document object. So the lock was uncontested at the point in question.
What could the possible issue be? Is it just that the thread was put into a runnable state on encountering the synchronized block, and for the next 60-80 minutes the CPU never got a chance to run this particular runnable thread?
Ensure the application code is not messing around with thread priority via the Thread.setPriority() method or the like. If you're using an IDE like IntelliJ and the Java sources are available, and assuming you can run the application and the relevant flow locally on your development machine, you can put a breakpoint in Thread.setPriority() to see whether it is being invoked anywhere. This is an excerpt from Java Concurrency in Practice (Goetz, 2006) regarding how unpredictable behavior can be if you try to set thread priority manually:
10.3.1. Starvation
Starvation occurs when a thread is perpetually denied access to resources it needs in order to make progress; the most commonly starved resource is CPU cycles. Starvation in Java applications can be caused by inappropriate use of thread priorities. It can also be caused by executing nonterminating constructs (infinite loops or resource waits that do not terminate) with a lock held, since other threads that need that lock will never be able to acquire it.
The thread priorities defined in the Thread API are merely scheduling hints. The Thread API defines ten priority levels that the JVM can map to operating system scheduling priorities as it sees fit. This mapping is platform-specific, so two Java priorities can map to the same OS priority on one system and different OS priorities on another. Some operating systems have fewer than ten priority levels, in which case multiple Java priorities map to the same OS priority. Operating system schedulers go to great lengths to provide scheduling fairness and liveness beyond that required by the Java Language Specification. In most Java applications, all application threads have the same priority, Thread.NORM_PRIORITY. The thread priority mechanism is a blunt instrument, and it's not always obvious what effect changing priorities will have; boosting a thread's priority might do nothing or might always cause one thread to be scheduled in preference to the other, causing starvation.
It is generally wise to resist the temptation to tweak thread priorities. As soon as you start modifying priorities, the behavior of your application becomes platform specific and you introduce the risk of starvation. You can often spot a program that is trying to recover from priority tweaking or other responsiveness problems by the presence of Thread.sleep or Thread.yield calls in odd places, in an attempt to give more time to lower priority threads.[5]
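Besides a breakpoint, you can also dump the priority of every live thread at runtime and look for anything that has strayed from NORM_PRIORITY; Thread.getAllStackTraces() gives you the set of live threads without any extra libraries. A small sketch (note that some JVM housekeeping threads legitimately run above NORM_PRIORITY, so expect a few hits that are not your code):

    import java.util.Map;

    // Sketch: print every live thread whose priority differs from NORM_PRIORITY,
    // a quick way to spot code that is quietly calling setPriority().
    public class PriorityAudit {
        public static void main(String[] args) {
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                Thread t = e.getKey();
                if (t.getPriority() != Thread.NORM_PRIORITY) {
                    System.out.println(t.getName() + " -> priority " + t.getPriority()
                            + " (state " + t.getState() + ")");
                }
            }
        }
    }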

Correctness of thread priority algorithm

I read that the correctness of a thread-priority-based algorithm is not always guaranteed because it depends on the JVM. Why is that so, and how does it depend on the JVM?
Thanks in advance.
The effect of thread priorities is platform-specific. Playing with these priorities should be avoided, as they can cause liveness problems and their effect is not obvious.
From Java Concurrency in Practice:
The thread priorities defined in the Thread API are merely scheduling hints. The Thread API defines ten priority levels that the JVM can map to operating system scheduling priorities as it sees fit. This mapping is platform-specific, so two Java priorities can map to the same OS priority on one system and different OS priorities on another.
...
The thread priority mechanism is a blunt instrument, and it's not always obvious what effect changing priorities will have; boosting a thread's priority might do nothing or might always cause one thread to be scheduled in preference to the other, causing starvation.
Conclusion:
Avoid the temptation to use thread priorities, since they increase platform dependence and can cause liveness problems. Most concurrent applications can use the default priority for all threads.
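As a concrete illustration of that conclusion: if one thread needs a result produced by another, raising the producer's priority does not guarantee the ordering; express the dependency explicitly instead, for example with join() (or a CountDownLatch). A minimal sketch:

    // Sketch: correctness comes from join(), not from priorities.
    public class JoinInsteadOfPriority {
        static int result;                        // written by the producer, read after join()

        public static void main(String[] args) throws InterruptedException {
            Thread producer = new Thread(() -> result = expensiveComputation());
            producer.start();                     // priority left at the default

            producer.join();                      // happens-before: result is now visible
            System.out.println("result = " + result);
        }

        private static int expensiveComputation() { return 42; }
    }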

Prioritization of threads within threads

Suppose you have a program that starts two threads a and b, and b starts another ten threads of its own. Does a receive half of the available "attention" while b and its threads share the other half, or do they all share equally? If the answer is the latter by default, how could you achieve the former? Thanks!
There is lots of good documentation on this topic; one such resource is this.
When a Java thread is created, it inherits its priority from the thread that created it. You can also modify a thread's priority at any time after its creation using the setPriority() method. Thread priorities are integers ranging between MIN_PRIORITY and MAX_PRIORITY (constants defined in the Thread class). The higher the integer, the higher the priority.
At any given time, when multiple threads are ready to be executed, the runtime system chooses the "Runnable" thread with the highest priority for execution. Only when that thread stops, yields, or becomes "Not Runnable" for some reason will a lower priority thread start executing. If two threads of the same priority are waiting for the CPU, the scheduler chooses one of them to run in a round-robin fashion. The chosen thread will run until one of the following conditions is true:
A higher priority thread becomes "Runnable".
It yields, or its run() method exits.
On systems that support time-slicing, its time allotment has expired.
At any given time, the highest priority thread is running. However, this is not guaranteed. The thread scheduler may choose to run a lower priority thread to avoid starvation. For this reason, use priority only to affect scheduling policy for efficiency purposes. Do not rely on thread priority for algorithm correctness.
Does a receive half of the available "attention" while b and its threads share the other half, or do they all share equally?
Neither. The proportion of time received by each thread is unspecified, and there's no reliable way to control it in Java. It is up to the native thread scheduler.
If the answer is the latter by default, how could you achieve the former?
You can't, reliably.
The only thing you have to influence the relative amounts of time each thread gets to run is thread priority. Even that is not reliable or predictable. The javadocs simply say that a high-priority thread is executed "in preference to" a lower-priority thread. In practice, it depends on how the native thread scheduler handles priorities.
For more details: http://docs.oracle.com/javase/7/docs/technotes/guides/vm/thread-priorities.html ... which includes information on how thread priorities are handled on a range of platforms and Java versions.
One cannot say with certainty in what order the threads will be executed. The thread scheduler works according to its built-in algorithm, which we cannot change: it picks a thread (typically the highest-priority one) from the runnable pool and makes it the running thread.
We can only suggest, via priority, how the scheduler should treat our threads.
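If you still want to nudge the scheduler in the direction the question describes, the only portable lever is to give the threads that b creates a lower priority than a, remembering that a new thread inherits its creator's priority. A hedged sketch of doing this without scattering setPriority() calls is a ThreadFactory for b's worker pool (the class and pool names here are made up for illustration):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ThreadFactory;

    // Sketch: give b's worker pool a lower priority than a, accepting that the
    // actual effect is platform-dependent and may be negligible.
    public class LowPriorityPool {
        public static ExecutorService newLowPriorityPool(int size) {
            ThreadFactory factory = r -> {
                Thread t = new Thread(r, "b-worker");
                t.setPriority(Thread.MIN_PRIORITY);   // hint only; the OS decides
                return t;
            };
            return Executors.newFixedThreadPool(size, factory);
        }
    }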
