JVM Implementation of Thread Work Distribution and Multicore

I am doing some research on language implementations on multicore platforms. Currently, I am trying to figure out a couple of things:
How does a JVM implementation map a java.lang.Thread to an OS native thread?
Take OpenJDK, for instance: I don't even know which parts of the codebase I should look at to read more about this. Is there any document describing how the native features are implemented? Since parts of java.lang.Thread are declared native, I assume that more of the internal machinery is implemented in native code.
Taking this to multicore: how is the mapping done there? How are threads mapped onto different cores so that they run simultaneously? I know there are ExecutorService implementations we can use to take advantage of multiple cores. This leads to a follow-up question: if the OS's native threads are responsible for work distribution and thread scheduling, then is it true to say that what the JVM does through a thread pool and ExecutorService is only creating threads and submitting tasks to them?
I'd be thankful for your answers, and also for letting me know whether I am on the right track with this topic.

Take OpenJDK, for instance: I don't even know which parts of the codebase I should look at to read more about this.
You should start by looking at the parts of the source code that are coded in C++. A C / C++ IDE might help you with the codebase exploration.
Taking this to multicore: how is the mapping done there? How are threads mapped onto different cores so that they run simultaneously?
I'm pretty sure that the operating system takes care of that aspect, not the JVM.
... is it true to say that what the JVM does through a thread pool and ExecutorService is only creating threads and submitting tasks to them?
AFAIK, yes.
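As a minimal sketch of that division of labour (my own illustration, not tied to any particular JVM): the ExecutorService below merely creates a fixed number of worker threads and queues Runnable tasks for them; when and on which core each worker actually runs is decided by the OS scheduler.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // The pool only creates worker threads and queues tasks for them;
        // the OS decides when and on which core each worker actually runs.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 16; i++) {
            final int taskId = i;
            pool.submit(() ->
                System.out.println("task " + taskId + " ran on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```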

Related

Why aren't Java threads both lightweight (like green threads) and multi-core capable? (backed by an internal fixed-size native thread pool)

Back in Java 1.1, all threads ran on a single core (not taking advantage of the machine's multiple cores/CPUs) and were scheduled by the JVM in user space (so-called green threads).
Around Java 1.2 / 1.3 (depending on the underlying OS), this changed and Java Thread objects were mapped to OS threads (pthreads in the case of Linux). This takes full advantage of multiple cores, but on the other hand creating a thread became very expensive in terms of memory (because of the large initial stack size of OS threads), which heavily limits the number of concurrent requests a single machine can handle in the thread-per-request model. This forced server-side architectures to switch to the asynchronous model (the non-blocking I/O package was introduced, AsyncContext was added to the servlet API, etc.), which has been confusing generations of Java server-side devs to this day: at first glance most APIs look as if they were intended for the thread-per-request model, and one needs to read the documentation carefully to find the async capabilities bolted onto them from the side.
Only recently has Project Loom finally aimed to deliver lightweight threads that are backed by a pool of "old-style" Java threads (which in turn map to OS threads), thus combining the advantages: threads that are cheap to create in large quantities, that do utilize multiple cores, and that can be cheaply suspended on blocking operations (such as I/O).
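For reference, here is roughly what that model looks like with the virtual-thread API that Loom eventually shipped (standard since Java 21); this sketch is my own addition and assumes a Java 21+ runtime. Each virtual thread is multiplexed onto a small pool of carrier (platform) threads, and blocking calls park the virtual thread rather than the OS thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        // Each submitted task gets its own virtual thread; the JVM multiplexes
        // all of them onto a small pool of carrier (platform/OS) threads.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                exec.submit(() -> {
                    try {
                        // Blocking call: the virtual thread is parked and the
                        // carrier thread is freed to run other virtual threads.
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } // close() waits for the submitted tasks to finish
    }
}
```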
Why is this happening only now, after 20 years, instead of right away in Java 1.3? That is, why were Java threads made to map 1:1 to OS threads, instead of being backed (executed) by an internal JVM pool of OS threads whose fixed size corresponds to the available CPU cores?
Is it perhaps difficult to implement in the JVM?
It seems no more complex than all the asynchronous programming that Java server-side devs have been forced to do for the last 20 years, and than what C/C++ devs have always been doing, but maybe I'm missing something.
Another possibility is that there is some blocking obstacle in the architectural design of the JVM that prevents it from being implemented this way.
UPDATE:
The Project Loom architecture/design info was updated according to the comments: many thanks!
After some consideration, it seems to me that JIT compilation of Java bytecode to native code may be the reason:
In the model I proposed, a native OS thread switching between Java threads would pick a tuple <thread_stack, thread_instruction_pointer> from its work queue. However, because of the JIT, a Java thread's stack is basically the same thing as the backing OS thread's stack, which cannot simply be swapped out like that, AFAIK.
So as I understand it, the implementation I proposed would only be possible if the JVM interpreted the bytecode each time and kept Java threads' stacks on its heap, which is not the case.

Java threads - manual scheduling [duplicate]

(Duplicate of "What is the JVM thread scheduling algorithm?" — see that question below; its answers apply here as well.)

How to PIN a Java thread to a processor on Linux? (with JNI, native code, linux trick, etc.) [duplicate]

Does anybody know of a way to lock down individual threads within a Java process to specific CPU cores (on Linux)? I've done this in C, but can't find how to do this in Java. My instincts are that this will require a JNI call, but I was hoping someone here might have some insight or might have done it before.
Thanks!
You can't do this in pure Java. But if you really need it, you can use JNI to call native code which does the job. These are good places to start:
http://ovatman.blogspot.com/2010/02/using-java-jni-to-set-thread-affinity.html
http://blog.toadhead.net/index.php/2011/01/22/cputhread-affinity-in-java/
UPD: After some thinking, I've decided to create my own class for this: ThreadAffinity.java. It's JNA-based and very simple, so if you want to use it in production you should probably spend some time making it more stable, but for benchmarking and testing it works well as is.
UPD 2: There is another library for working with thread affinity in Java. It uses the same method as noted previously, but has a different interface.
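To give a rough idea of what such JNA-based helpers do under the hood, here is a minimal sketch of my own (not code taken from the libraries linked above). It assumes a Linux host with at most 64 logical CPUs and relies on the fact that sched_setaffinity(2) with pid 0 applies to the calling thread:

```java
import com.sun.jna.Library;
import com.sun.jna.Native;

public final class ThreadAffinitySketch {

    // Minimal libc binding; cpu_set_t is passed as a 64-bit mask,
    // which covers machines with up to 64 logical CPUs.
    public interface CLib extends Library {
        CLib INSTANCE = Native.load("c", CLib.class);
        int sched_setaffinity(int pid, int cpusetsize, long[] mask);
    }

    /** Pins the calling thread (pid 0) to the given CPU index. */
    public static void pinCurrentThreadTo(int cpu) {
        long[] mask = { 1L << cpu };
        int rc = CLib.INSTANCE.sched_setaffinity(0, Long.BYTES, mask);
        if (rc != 0) {
            throw new IllegalStateException("sched_setaffinity failed, rc=" + rc);
        }
    }

    public static void main(String[] args) {
        pinCurrentThreadTo(0); // keep this thread on CPU 0
        // ... CPU-bound work here will stay on core 0 ...
    }
}
```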
I know it's been a while, but if anyone comes across this thread, here's how I solved this problem. I wrote a script that does the following:
Run jstack -l <pid> on the Java process.
Take the results and find the "nid"s of the threads I want to manually lock down to cores.
Use taskset to bind those threads to specific cores.
You might want to take a look at https://github.com/peter-lawrey/Java-Thread-Affinity/blob/master/src/test/java/com/higherfrequencytrading/affinity/AffinityLockBindMain.java
IMO, this will not be possible unless you use native calls. The JVM is supposed to be platform independent, and any system calls made to achieve this will not result in portable code.
It's not possible (at least with plain Java).
You can use thread pools to limit the number of threads (and therefore cores) used for different types of work (see the sketch after this answer), but there is no way to specify a particular core to use.
There is even the (small) possibility that your Java runtime doesn't support native threading for your OS or hardware. In this case, green threads are used and only one core will be used for the whole JVM.
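Regarding the thread-pool workaround mentioned above, here is a minimal sketch (my own illustration): capping a pool at N threads caps that workload at roughly N cores' worth of parallelism, even though the OS still decides which physical cores those threads run on.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedParallelism {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // At most cores/2 of these tasks run at the same time, so this workload
        // cannot occupy more than about half the machine; which cores it gets
        // is still entirely up to the OS scheduler.
        ExecutorService heavyWork = Executors.newFixedThreadPool(Math.max(1, cores / 2));
        for (int i = 0; i < 100; i++) {
            heavyWork.submit(() -> { /* CPU-bound task */ });
        }
        heavyWork.shutdown();
    }
}
```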

How is a Java multi-threaded program able to use multiple CPU cores?

Could someone please explain how a Java multi-threaded program (e.g. the Tomcat servlet container) is able to use all the CPU cores when the JVM is only a single process on Linux? Is there any good in-depth article that describes the subject in detail?
EDIT #1: I'm not looking for advice on how to implement a multi-threaded program in Java. I'm looking for an explanation of how the JVM internally manages to use multiple cores on Linux/Windows while still being a single process on the OS.
EDIT #2: The best explanation I managed to find is that HotSpot (the Sun/Oracle JVM) implements threads as native threads on Linux using NPTL. So, more or less, each Java thread is a lightweight process (native thread) on Linux. This is clearly visible with the ps -eLf command, which prints not only the process id (PID) but also the native thread id (LWP).
More details can be also found here:
http://www.velocityreviews.com/forums/t499841-java-5-threads-in-linux.html
Distinguishing between Java threads and OS threads?
EDIT #3: Wikipedia has a short but nice entry on NPTL with some further references: http://en.wikipedia.org/wiki/Native_POSIX_Thread_Library
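To see this 1:1 mapping for yourself, a throwaway snippet like the one below (my own addition, not from the original post) starts a few CPU-bound threads; while it runs, ps -eLf shows one LWP per Java thread in the java process, and top -H shows them spread across cores.

```java
public class ObserveNativeThreads {
    public static void main(String[] args) throws InterruptedException {
        // Start a few clearly named, CPU-bound threads.
        for (int i = 0; i < 4; i++) {
            Thread t = new Thread(() -> {
                long x = 0;
                while (true) { x++; }   // busy loop so the thread stays runnable
            }, "busy-worker-" + i);
            t.setDaemon(true);
            t.start();
        }
        Thread.sleep(60_000); // while this sleeps, inspect the process with ps -eLf / top -H
    }
}
```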
The Linux kernel supports threads as first-class citizens. In fact, to the kernel a thread isn't much different from a process, except that it shares an address space with another thread/process.
Some old versions of ps even showed a separate process for each thread by default and newer versions can enable this behavior using the -m flag.
The JVM is a single process with many threads, and each thread can be scheduled on a different CPU core.
When Java software running inside the JVM asks for another thread, the JVM starts another native thread.
That is how the JVM manages to use multiple cores.
If you use the concurrency library and split up your work as much as you can, the JVM should handle the rest.
Take a look at this http://embarcaderos.net/2011/01/23/parallel-processing-and-multi-core-utilization-with-java/
I would start by reading the Concurrency Tutorial.
In particular, it explains the differences (and relationship) between processes and threads.
On the architectures that I'm familiar with, the threads (including JVM-created threads) are managed by the OS. The JVM simply uses the threading facilities provided by the operating system.

What is the JVM thread scheduling algorithm?

I am really curious about how the JVM works with threads!
In my searches on the internet, I found some material about RTSJ, but I don't know if it's the right direction for my answers.
Can someone give me directions, material, articles or suggestions about the JVM scheduling algorithm?
I am also looking for information about the default configuration of Java threads in the scheduler, such as how much time each thread gets in the case of time-slicing.
I appreciate any help, thank you!
There is no single Java Virtual Machine; the JVM is a specification, and there are multiple implementations of it, including OpenJDK and Sun's implementation, among others. I don't know for certain, but I would guess that any reasonable JVM simply uses the underlying threading mechanism provided by the OS, which implies POSIX Threads (pthreads) on UNIX (Mac OS X, Linux, etc.) and Win32 threads on Windows. Typically, those systems use a round-robin strategy by default.
It doesn't. The JVM uses operating system native threads, so the OS does the scheduling, not the JVM.
A while ago I wrote some articles on thread scheduling from the point of view of Java. However, on mainstream platforms, threading behaviour essentially depends on underlying OS threading.
Have a look in particular at my page on what Java thread priority is, which explains how Java's priority levels map to the underlying OS threading priorities, and how in practice this makes threads of different priorities behave on Linux vs Windows. A major difference discussed is that under Linux there's more of a relationship between thread priority and the proportion of CPU allocated to a thread, whereas under Windows this isn't directly the case (see the graphs).
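As a small illustration of the priority mapping discussed there (my own sketch, not code from the linked articles): Java exposes priorities 1 to 10 via Thread.setPriority, and how much difference they actually make is up to the OS scheduler.

```java
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable spin = () -> {
            long count = 0;
            long end = System.nanoTime() + 2_000_000_000L; // spin for ~2 seconds
            while (System.nanoTime() < end) { count++; }
            System.out.println(Thread.currentThread().getName() + " iterations: " + count);
        };

        Thread low  = new Thread(spin, "low-priority");
        Thread high = new Thread(spin, "high-priority");
        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10

        low.start();
        high.start();
        low.join();
        high.join();
        // How different the two counts end up being is platform-specific:
        // the mapping of Java priorities to native priorities, and its effect
        // on CPU share, depends on the OS (and on how many cores are free).
    }
}
```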
I don't have commenting rights, so I'm writing this here...
The JVM creates a pthread (the commonly used threading mechanism; other variants exist) for each corresponding request, but the scheduling is done entirely by the OS acting as host.
While OS scheduling is the preferred approach, it is also possible for a JVM to schedule these threads itself. For example, in Jikes RVM there are options to override the OS's decisions: threads are represented as RVMThread and can be scheduled/manipulated using classes in the org.jikesrvm.scheduler package.
For more reference
