What is the default number of threads running in the JVM?

Recently I've been learning more about threads, and I was wondering why the resource monitor always shows 19 threads running for the Java process.
Now my questions are:
Is this the VM using 19 threads?
If so:
Are you able to access those threads?
Is it possible to use these threads for thread pooling?
Is it possible to decrease the number of threads?
If not:
What is causing the 19 threads to show up?
I created a small .jar (see bottom for source) that would run and create a fixed thread pool of 5 worker threads. I sent tasks to that pool and noticed that after all tasks had been handled, the number of threads Java uses goes back to 19.
Are the threads in the fixed thread pool idle, or have they been removed, so that new threads are created whenever new tasks are submitted?
Sorry for the multiple questions in one post.
Link to source: http://pastebin.com/iXpLbFVF
Image while sending tasks: http://gyazo.com/223d720bf73c1b919fbfe0b69088838a
Image after sending tasks: http://gyazo.com/3147269d90eb2c916373220ef53c0b92
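For reference, a minimal sketch of the kind of test program described in the question (the actual pastebin source may differ; the class name and task count here are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolTest {
    public static void main(String[] args) throws InterruptedException {
        // Fixed pool of 5 worker threads, as in the question.
        ExecutorService pool = Executors.newFixedThreadPool(5);
        for (int i = 0; i < 100; i++) {
            final int task = i;
            pool.submit(() ->
                    System.out.println("task " + task + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        // Wait for all submitted tasks to finish, then observe the thread count again.
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}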

It depends on the JVM version, the JVM vendor and some settings, such as which garbage collector is in place (and how the GC is tuned). Add-ons such as agents or JMX can also change the number of system threads, and of course there are all the threads started by the actual Java program. You can use the jstack tool to list them (most of the system threads have obvious names). They include threads for finalization, GC, the main thread, the GUI threads (if used), the JIT compiler threads and the reference handler.
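As a rough illustration, you can also enumerate those threads from inside the process instead of using jstack (a minimal sketch; the output will vary by JVM version and vendor):

import java.util.Map;

public class ListThreads {
    public static void main(String[] args) {
        // Dumps every live thread the JVM knows about, including system threads
        // such as "Reference Handler", "Finalizer" and the GC/JIT helper threads.
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
        for (Thread t : all.keySet()) {
            System.out.printf("%-30s daemon=%-5b state=%s%n",
                    t.getName(), t.isDaemon(), t.getState());
        }
        System.out.println("Total threads: " + all.size());
    }
}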

Related

How does the JVM spread threads between CPU cores?

Can somebody help me understand how the JVM spreads threads between the available CPU cores? Here is my view of how it works, but please correct me.
So, from the beginning: when the computer is started, a bootstrap thread (usually thread 0 on core 0 of processor 0) starts fetching code from address 0xfffffff0. All the other CPUs/cores are in a special sleep state called Wait-for-SIPI (WFS).
Then, after the OS is loaded, it starts managing processes and scheduling them across the CPUs/cores by sending a special inter-processor interrupt (IPI) over the Advanced Programmable Interrupt Controller (APIC), called a SIPI (Startup IPI), to each thread that is in WFS. The SIPI contains the address from which that thread should start fetching code.
So, for example, the OS starts the JVM by loading the JVM code into memory and pointing one of the CPU cores to its address (using the mechanism described above). After that the JVM, which runs as a separate OS process with its own virtual memory area, can start several threads.
So the question is: how?
Does the JVM use the same mechanism as the OS, i.e. during the time slice that the OS gave to the JVM, can it send a SIPI to the other cores and point them to the address of the task that should be executed in a separate thread? If yes, how is the original program that the OS was running on that core restored afterwards?
I assume this view is not correct, as the task of involving other CPUs/cores should be managed via the OS; otherwise we could interrupt the execution of other OS processes running in parallel on other cores. So if the JVM wants to start a new thread on another CPU/core, it makes an OS call and passes the address of the task to be executed to the OS. The OS schedules the execution as it does for other programs, with the difference that this execution should happen in the same process, so that it can access the same address space as the rest of the JVM's threads.
How is it done? Can somebody describe it in more details?
The OS manages and schedules threads by default. The JVM makes the right calls to the OS to make this happen, but doesn't get involved.
Does JVM use the same mechanism as OS
The JVM uses the OS; it has no idea what actually happens.
Each process has its own virtual address space, again managed by the OS.
I have a library which uses JNA to wrap setaffinity on Linux and Windows. You need to go through the OS for this, as thread scheduling is controlled by the OS, not the JVM.
https://github.com/OpenHFT/Java-Thread-Affinity
Note: in most cases, using affinity either a) doesn't help or b) doesn't help as much as you might think.
We use it to reduce jitter of around 40 to 100 microseconds, which doesn't happen often, but often enough to impact our performance profile. If you want your 99th-percentile latencies to be as low as possible, in the microsecond range, thread affinity is essential. If you are OK with 1 in 100 requests taking 1 ms longer, I wouldn't bother.
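A rough sketch of how the library is typically used (class and method names are taken from the OpenHFT project's documentation; check the README of the version you use for the exact API):

import net.openhft.affinity.AffinityLock;

public class AffinityDemo {
    public static void main(String[] args) {
        // Acquire a free CPU and pin the current thread to it.
        AffinityLock lock = AffinityLock.acquireLock();
        try {
            System.out.println("Pinned to CPU " + lock.cpuId());
            // ... latency-critical work here ...
        } finally {
            lock.release(); // give the CPU back to the pool of assignable CPUs
        }
    }
}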

Java Scheduler Thread Switching

I've read somewhere that in the Java scheduler, thread switching happens after the execution of a certain number of instructions and not after a certain amount of time (like the schedulers used in operating systems), but the reference was missing. I wanted to know if this is correct.
Java used to have a feature called green threads; it was removed in 1.3. For all practical purposes we can assume that thread scheduling is directly influenced by the underlying operating system's process/thread scheduling strategy. In this context, developers need to assume that threads are executed/scheduled in an unpredictable order and should write their code accordingly.
On Linux the scheduling of Java threads is done by the Completely Fair Scheduler (CFS). For each CPU there is a run-queue, and by default there is a 24 ms period in which all threads on a single run-queue should get a chance to run. So if there are 2 threads, each thread gets 12 ms; if there are 3 threads, each thread gets 8 ms.
When tasks have different priorities, things change. There is also a minimum granularity to reduce the context-switching overhead when many threads are running.
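A tiny demonstration of the point that the JVM does not control ordering (the output order below is decided by the OS scheduler and typically changes from run to run):

public class SchedulingDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            final int id = i;
            threads[i] = new Thread(() -> System.out.println("thread " + id + " ran"));
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for all of them; the print order is up to the OS
        }
    }
}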

Java multithreaded: program sporadically starts using more cores than a FixedThreadPool is allowed to

I have an application which just uses Executors.newFixedThreadPool(), and everything runs fine on our development machines (mostly multicore Intels; it also runs fine on a 6-core AMD). But when we run it on our server (Opteron CPUs, 64 cores total) and the thread pool is limited to, say, 4 threads, sporadically something weird happens and the program starts using 48 cores.
There is nothing but a main thread and this ExecutorService, which should be limited to N threads, so there should be no more than N + 1 (main) + X (some Java service threads), but definitely not 48+.
Any suggestions on what might be causing this behavior are highly appreciated.
I'm not posting any code here, because we were not able to reproduce this in any environment other than this server, and there's nothing special about the code. It's just the fixed thread pool, on which Callables are run in batches (each batch no larger than the size of the thread pool) and the results are collected from Futures before submitting the next batch of tasks.
Looks like you're using a parallel garbage collector. See here: http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2009-January/000718.html
From that answer, it looks like you'll have 40 threads of GC, plus your application threads. So that's probably what's happening.
Check this out: http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html -- in particular, set -XX:ParallelGCThreads=n.
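For example, a launch command along these lines would cap the parallel collector at four GC worker threads (yourapp.jar is a placeholder; the exact behaviour depends on JVM version and collector):

java -XX:ParallelGCThreads=4 -jar yourapp.jar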
If it helps ... I had this exact same thing happen to me, except the container monitor killed my process for excess thread usage. Oh HP-UX, how I (don't) miss you.

Java threads, keep same number of threads running

I need to process 80 files of info, and I'm doing it in groups of 8 threads. What I would like to do is to always have 8 threads running (right now I have 8 threads, and after those 8 finish their job, another 8 are started, and so on).
So I would like to know if there is a way to do this:
launch 8 threads;
after 1 thread finishes its job, launch another thread (so at all times I have 8 threads running, until the job is done).
Why not use a thread pool, and in particular a fixed-size thread pool? Configure your thread pool size to be 8 threads, and then submit all your work items as Runnable/Callable objects. The thread pool will execute these using the 8 configured threads.
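A minimal sketch of that approach, assuming the per-file work lives in a hypothetical processFile method and the input directory path is just a placeholder:

import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FileProcessor {
    public static void main(String[] args) throws InterruptedException {
        File[] files = new File("data").listFiles(); // the 80 input files (placeholder path)
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (File f : files) {
            // As soon as one of the 8 threads is free it picks up the next file,
            // so 8 files are always in flight until the queue drains.
            pool.submit(() -> processFile(f));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private static void processFile(File f) {
        System.out.println("processed " + f.getName()); // stand-in for the real work
    }
}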
So, everyone is quick to jump in and tell you to use a thread pool. Sure, that's the right way to achieve what you want. The question is, is it the right thing to want? It's not as simple as throwing a bunch of threads at the problem and everything magically being solved.
You haven't told us the nature of the processing. Are the jobs I/O bound, or CPU bound1? If they are I/O bound, extra threads do nothing for you, because the disk is the bottleneck. If they are CPU bound, the threading might help.
You haven't told us if you have eight cores (or compute units). If you can't guarantee that you'll have them, it might not be best to have eight threads running.
There's a lot to think about. You're increasing the complexity of your solution. Maybe it's getting you what you want, maybe not.
1: Yes, you said you're processing files, but that doesn't tell us enough. Maybe the processing is intensive (think: rendering a video file). Or maybe you're reading the files from a very fast disk (think: SSD or memory-mapped files).
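If the work does turn out to be CPU bound, one common compromise (not part of the original answer) is to size the pool from the hardware rather than hard-coding 8:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SizedPool {
    public static void main(String[] args) {
        // Ask the runtime how many hardware threads are available and size the pool to match.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        System.out.println("pool sized to " + cores + " threads");
        pool.shutdown();
    }
}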

java.lang.OutOfMemoryError: unable to create new native thread

I saw a comment like this:
One place I have seen this problem is if you keep creating threads and, instead of calling start(), call run() directly on the thread object.
This will result in the thread object not getting dereferenced...
So after some time the message "unable to create new native thread" comes up
on the Sun Java Forums
In my application we initially planned to use threads, but later we decided they were no longer needed, so we just call run() instead of start(). Do we need to do manual GC for new threadClass(..)?
My Tomcat startup settings:
-Xms1024m -Xmx1024m -XX:MaxPermSize=450m
Why do you create a Thread in the first place?
Your code should implement the Runnable interface instead.
Then, when you decide that you want to run it in a thread, simply instantiate a Thread with the Runnable as the argument and call start() on the Thread object.
If, instead, you just want to run it in your current thread, simply call run() on your Runnable object.
This has several advantages:
you don't involve any Thread objects as long as you don't care about separate threads
your code is wrapped in a Runnable, which fits better conceptually: you're not writing some special kind of Thread, are you? You're simply writing some code that can be executed/run.
you can easily switch to using an Executor, which further abstracts away the decision
And last but not least, you avoid any potential confusion about whether or not a native thread resource is created.
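A small sketch of the pattern described above (the task class name is made up for illustration):

public class PrintTask implements Runnable {
    @Override
    public void run() {
        System.out.println("running on " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        PrintTask task = new PrintTask();
        task.run();               // runs in the current thread, no native thread created
        new Thread(task).start(); // runs in a separate, newly created native thread
    }
}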
When you call the run() method, no new thread is created, and your objects will be collected by the garbage collector once they are no longer referenced.
Some other part of your code may be creating a lot of threads.
Try using a ThreadPoolExecutor (thread pooling) in your code to limit the threads in your application, and tune your thread pool size accordingly for better performance.
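As an illustration, a bounded pool can be built explicitly along these lines (the sizes are arbitrary examples, not recommendations):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    public static void main(String[] args) {
        // 4 core/max threads, a bounded queue of 100 tasks, and CallerRunsPolicy
        // so that submitters slow down instead of the pool growing without bound.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.submit(() -> System.out.println("task " + id));
        }
        pool.shutdown();
    }
}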
You can also check the following to debug your issue (referenced from the link):
There are a few things to do if you encounter this exception.
Use the lsof -p PID command (Unix platforms) to see how many threads are active for this process.
Determine if there is a maximum number of threads per process defined by the operating system. If the limit is too low for the application, try raising the per-process thread limit.
Examine the application code to determine if there is code that is creating threads or connections (such as LDAP connections) and not destroying them. You could dump the Java threads to see if an excessive number has been created (see the sketch after this list).
If you find that too many connections are opened by the application, make sure that any thread that the application creates is destroyed. An enterprise application (.ear) or web application (.war) runs under a long-running JVM; just because the application is finished does not mean that the JVM process ends. It is imperative that an application free any resources that it allocates.
Another solution would be for the application to use a thread pool to manage the threads needed.
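To dump the thread counts programmatically, a sketch using the standard ThreadMXBean (which you could log periodically from the application) might look like this:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCount {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        System.out.println("live threads:  " + mx.getThreadCount());
        System.out.println("peak threads:  " + mx.getPeakThreadCount());
        System.out.println("total started: " + mx.getTotalStartedThreadCount());
    }
}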
This link describes quite nicely how this error is thrown by the JVM:
http://javaeesupportpatterns.blogspot.ro/2012/09/outofmemoryerror-unable-to-create-new.html
Basically it's very dependent on the OS. On Red Hat Linux 6.5 (and most likely other distros and kernel versions) max_threads = max_processes x 2.
The maximum number of threads depends heavily on the number of allowed processes, and the maximum number of processes in turn depends on the amount of physical memory you have installed.
If you have a look in the limits.conf file (on my RHEL 6.5 it's /etc/security/limits.d/90-nproc.conf), here is an excerpt from the file:
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 1024
root soft nproc unlimited
You'll see that for non-root users it's 1024 (which means 2048 max threads).
To see the max number of threads that your user is allowed to create, run "cat /proc/sys/kernel/threads-max" or "sysctl kernel.threads-max".
To solve an issue like this (at least it worked for me), as root you'll need to increase the max allowed threads:
echo 10000 > /proc/sys/kernel/threads-max
This affects all users, including root. The user needs to log out and then log in again for the settings to take effect.
