I saw a comment like this on the Sun Java Forums:
"One place I have seen this problem is if you keep creating threads and, instead of calling start(), call run() directly on the thread object. This will result in the thread object not getting dereferenced... so after some time the message unable to create new native thread comes up."
In my application we initially planned to use a thread, but later we decided it was not needed anymore, so we just call run() instead of start(). Do we need to do manual GC for new threadClass(..)?
My Tomcat startup settings:
-Xms1024m -Xmx1024m -XX:MaxPermSize=450m
Why do you create a Thread in the first place?
Your code should implement the Runnable interface instead.
Then, when you decide that you want to run it in a thread, simply instantiate a Thread with the Runnable as the argument and call start() on the Thread object.
If, instead, you just want to run it in your current thread, simply call run() on your Runnable object.
This has several advantages:
you don't involve any Thread objects as long as you don't care about separate threads
your code is wrapped in a Runnable, which fits more closely conceptually: you're not writing some special kind of Thread, are you? You simply write some code that can be executed/run.
you can easily switch to using an Executor, which further abstracts away the decision
And last but not least, you avoid any potential confusion about whether or not a native thread resource is created (see the sketch below).
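A minimal sketch of that idea (the class and task here are placeholders, not code from the question):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class RunnableDemo {
        public static void main(String[] args) {
            // A plain task; no Thread object is involved yet.
            Runnable task = () -> System.out.println("running in " + Thread.currentThread().getName());

            // Run it in the current thread: no native thread is created.
            task.run();

            // Run it in a separate thread.
            new Thread(task).start();

            // Or hand it to an Executor and let the pool make the decision.
            ExecutorService pool = Executors.newSingleThreadExecutor();
            pool.submit(task);
            pool.shutdown();
        }
    }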
When you call the run() method directly, no new thread is created, and your objects will be collected by the garbage collector once they are no longer referenced.
Some other part of your code may be creating a lot of threads.
Try using a ThreadPoolExecutor (thread pooling) to limit the number of threads in your application, and tune the pool size for better performance.
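For instance, a bounded pool along these lines keeps the native thread count fixed no matter how many tasks you submit (the pool size of 10 and the dummy task are illustrative only):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class BoundedPoolDemo {
        public static void main(String[] args) throws InterruptedException {
            // At most 10 worker threads, regardless of how many tasks are submitted.
            ExecutorService pool = Executors.newFixedThreadPool(10);
            for (int i = 0; i < 1000; i++) {
                final int id = i;
                pool.submit(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
            }
            pool.shutdown();                                // stop accepting new tasks
            pool.awaitTermination(1, TimeUnit.MINUTES);     // wait for the queued tasks to finish
        }
    }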
You can also check the following (quoted from the referenced link) to debug your issue:
There are a few things to do if you encounter this exception.
Use the lsof -p PID command (Unix platforms) to see how many threads are active for this process.
Determine if there is a maximum number of threads per process defined by the operating system. If the limit is too low for the application, try raising the per-process thread limit.
Examine the application code to determine if there is code that is creating threads or connections (such as LDAP connections) and not destroying them. You could dump the Java threads to see if an excessive number has been created.
If you find that too many connections are opened by the application, make sure that any thread that the application creates is destroyed. An enterprise application (.ear) or web application (.war) runs under a long-running JVM. Just because the application is finished does not mean that the JVM process ends. It is imperative that an application free any resources that it allocates. Another solution would be for the application to use a thread pool to manage the threads needed.
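If you'd rather check the thread count from inside the application than with lsof, a small sketch like this (standard java.lang.management, no extra libraries) prints the JVM's own view:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadCount {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            System.out.println("live threads:   " + mx.getThreadCount());
            System.out.println("daemon threads: " + mx.getDaemonThreadCount());
            System.out.println("peak threads:   " + mx.getPeakThreadCount());
        }
    }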
This link describes quite nicely how this error is thrown by the JVM:
http://javaeesupportpatterns.blogspot.ro/2012/09/outofmemoryerror-unable-to-create-new.html
Basically it's very dependent on the OS. On Red Hat Linux 6.5 (and most likely other distros and kernel versions) max_threads = max_processes x 2.
The maximum number of threads depends heavily on the number of allowed processes, and the maximum number of processes in turn depends on how much physical memory you have installed.
Have a look in the limits.conf file (on my RHEL 6.5 it's /etc/security/limits.d/90-nproc.conf). Excerpt from the file:
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 1024
root soft nproc unlimited
You'll see that for non-root users it's 1024 (which means 2048 max threads).
To see the system-wide maximum number of threads, run "cat /proc/sys/kernel/threads-max" or "sysctl kernel.threads-max".
To solve an issue like this (at least it worked for me), as root you'll need to increase the maximum allowed threads:
echo 10000 > /proc/sys/kernel/threads-max
This affects all users, including root. The user needs to log out and log in again for the settings to take effect.
Related
How do I distinguish running Java threads and native threads?
In Linux there is a parent process for every child process, and they say process 0 is the parent of all processes. Will there be a parent thread of all the forked Java threads?
How do I know which Java thread is related to which OS thread (if a Java thread forks a native thread)?
Is there any naming convention for Java threads and OS threads?
Can a running Java thread be suspended or killed from other Java code?
On Linux, Java threads are implemented with native threads, so a Java program using threads is no different from a native program using threads. A "Java thread" is just a thread belonging to a JVM process.
On a modern Linux system (one using NPTL), all threads belonging to a process have the same process ID and parent process ID, but different thread IDs. You can see these IDs by running ps -eLf. The PID column is the process ID, the PPID column is the parent process ID, and the LWP column is the thread (LightWeight Process) ID. The "main" thread has a thread ID that's the same as the process ID, and additional threads will have different thread ID values.
Older Linux systems may use the "linuxthreads" threading implementation, which is not fully POSIX-compliant, instead of NPTL. On a linuxthreads system, threads have different process IDs.
You can check whether your system is using NPTL or linuxthreads by running the system's C library (libc) as a standalone program and looking under "Available extensions" in its output. It should mention either "Native POSIX Threads Library" or linuxthreads. The path to the C library varies from system to system: it may be /lib/libc.so.6, /lib64/libc.so.6 (on 64-bit RedHat-based systems), or something like /lib/x86_64-linux-gnu/libc.so.6 (on modern Debian-based systems such as Ubuntu).
At the OS level, threads don't have names; those exist only within the JVM.
The pthread_kill() C function can be used to send a signal to a specific thread, which you could use to try to kill that specific thread from outside the JVM, but I don't know how the JVM would respond to it. It might just kill the whole JVM.
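To see the PID/LWP correlation for yourself, a sketch like this (needs Java 9+ for ProcessHandle; the thread name and sleep are arbitrary) prints the process ID and then stays alive long enough for you to run ps -eLf against it:

    public class PidDemo {
        public static void main(String[] args) throws InterruptedException {
            System.out.println("my PID: " + ProcessHandle.current().pid());
            Thread t = new Thread(() -> {
                try { Thread.sleep(60_000); } catch (InterruptedException e) { }
            }, "my-worker");   // this name lives on the Java side; ps shows the thread only as an LWP
            t.start();
            t.join();          // keep the process around while you inspect it with ps -eLf
        }
    }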
There is no standard; this completely depends on the Java implementation which you're using. Also, don't mix up "native threads" and "native processes". A process is an isolated entity which can't see into the address space of other processes. A thread is something which runs in the address space of a native process and which can see into the memory of other threads of the same process.
What you see on Linux is something else: Some versions of Linux create an entry in the process table for each thread of a parent process. These "processes" aren't real processes (in the isolation sense). They are threads which can be listed with the ps command. You can find the process which created them by using the parent PID (PPID).
There is no generic solution how Java threads are mapped to OS threads, if at all. Every JVM implementation can do it in a different way.
There is also a pure Java thread implementation, called green threads. It is used as a fallback if native threads aren't supported or the system isn't multithreaded at all. You won't see any green threads at the OS level.
Can a running Java thread be suspended or killed from other Java code?
If they are running in the same JVM, yes, with stop(). But that's not a good solution and may or may not work. interrupt() allows the thread to safely shut itself down (see the sketch below).
There is no way I know of to kill threads from outside the JVM. If an OS really supports killing of threads, I wouldn't expect the Java application to run correctly afterwards!
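A minimal sketch of that interrupt-based cooperative shutdown (the worker loop is just a stand-in for real work):

    public class InterruptDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        Thread.sleep(100);                       // stand-in for a unit of work
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();      // restore the flag so the loop exits
                    }
                }
                System.out.println("worker shut itself down cleanly");
            });
            worker.start();
            Thread.sleep(1000);
            worker.interrupt();   // ask the thread to stop; it decides when and how
            worker.join();
        }
    }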
Can a running Java thread be suspended or killed from other Java code?
In theory, yes. In practice, the Thread.stop() and Thread.suspend() methods are deprecated because they are unsafe except in very limited situations. The basic problem is that killing or suspending a Java thread is likely to mess up other threads that depend on it, and shared data structures that it might have been in the middle of updating.
If "another Java code" is meant to mean another JVM, then the chances of it working are even less. Even if you figured out how to send the relevant thread signal, the results are completely unpredictable. My bet is that the "target" JVM would crash.
Can somebody help me understand how the JVM spreads threads between the available CPU cores? Here is my view of how it works, but please correct me.
So from the beginning: when the computer is started, the bootstrap thread (usually thread 0 on core 0 of processor 0) starts fetching code from address 0xfffffff0. All the other CPUs/cores are in a special sleep state called Wait-for-SIPI (WFS).
Then, after the OS is loaded, it starts managing processes and scheduling them between CPUs/cores by sending a special inter-processor interrupt (IPI) called a SIPI (Startup IPI) over the Advanced Programmable Interrupt Controller (APIC) to each thread that is in WFS. The SIPI contains the address from which that thread should start fetching code.
So, for example, the OS starts the JVM by loading the JVM code into memory and pointing one of the CPU cores to its address (using the mechanism described above). After that, the JVM, which executes as a separate OS process with its own virtual memory area, can start several threads.
So the question is: how?
Does the JVM use the same mechanism as the OS? During the time slice that the OS gives to the JVM, can it send a SIPI to other cores and point them to the address of the task that should be executed in a separate thread? If yes, how is the original program that the OS was running on that core restored afterwards?
I assume this picture is not correct, and that involving other CPUs/cores has to be managed via the OS; otherwise we could interrupt the execution of OS processes running in parallel on other cores. So if the JVM wants to start a new thread on another CPU/core, it makes some OS call and passes the address of the task to be executed to the OS. The OS schedules the execution as it does for other programs, with the difference that the execution should happen in the same process, so it can access the same address space as the rest of the JVM threads.
How is it done? Can somebody describe it in more detail?
The OS manages and schedules threads by default. The JVM makes the right calls to the OS to make this happen, but doesn't get involved.
Does the JVM use the same mechanism as the OS?
The JVM uses the OS; it has no idea what actually happens.
Each process has its own virtual address space, again managed by the OS.
I have a library which uses JNA to wrap setaffinity on Linux and Windows. You need to do this because thread scheduling is controlled by the OS, not the JVM.
https://github.com/OpenHFT/Java-Thread-Affinity
Note: in most cases, using affinity either a) doesn't help or b) doesn't help as much as you might think.
We use it to reduce jitter of around 40-100 microseconds, which doesn't happen often, but often enough to impact our performance profile. If you want your 99th-percentile latencies to be as low as possible, in the microsecond range, thread affinity is essential. If you are OK with 1 in 100 requests taking 1 ms longer, I wouldn't bother.
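As a rough sketch of how that library is used (this follows the acquire/release pattern from the project's README; treat the exact class name net.openhft.affinity.AffinityLock and its API as assumptions to verify against the version you pull in):

    import net.openhft.affinity.AffinityLock;

    public class AffinityDemo {
        public static void main(String[] args) {
            AffinityLock lock = AffinityLock.acquireLock();   // pin this thread to a reserved CPU
            try {
                // latency-critical work goes here while the thread stays on one core
            } finally {
                lock.release();                               // give the CPU back
            }
        }
    }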
Recently I've been learning more about threads, and I was wondering why the resource monitor always shows 19 threads running for the Java process.
Now my questions are:
Is this the VM using 19 threads?
If so:
Are you able to access those threads?
Is it possible to use these threads for thread pooling?
Is it possible to decrease the amount of threads?
If not:
What is causing the 19 threads to show up?
I created a small .jar (see the bottom for the source) that creates a fixed thread pool of 5 worker threads. I sent tasks to that pool and noticed that after all tasks have been handled, the number of threads Java uses goes back to 19.
Are the threads in the fixed threadpool idle or have they been removed and thus new threads are being created whenever new tasks are submitted?
Sorry for the multiple questions in one post.
Link to source: http://pastebin.com/iXpLbFVF
Image while sending tasks: http://gyazo.com/223d720bf73c1b919fbfe0b69088838a
Image after sending tasks: http://gyazo.com/3147269d90eb2c916373220ef53c0b92
It depends on the JVM version, the JVM vendor and some settings such as which garbage collector is in place (and how the GC is tuned). Add-ons such as agents or JMX can also change the set of running system threads, and of course there are all the threads started by the actual Java program. You can use the jstack program to list them (most of the system threads have obvious names). They include threads for finalisation, GC, the main thread, the GUI threads (if used), JIT compiler threads and the reference handler.
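If you want to see those threads from inside the program rather than with jstack, a quick sketch like this lists every live thread the JVM knows about:

    import java.util.Map;

    public class ListThreads {
        public static void main(String[] args) {
            Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
            for (Thread t : all.keySet()) {
                System.out.printf("%-30s daemon=%-5b state=%s%n",
                        t.getName(), t.isDaemon(), t.getState());
            }
        }
    }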
I want to exit a Java process and free all of its resources before it finishes its normal run, if a certain condition is met. However, I don't want to quit the JVM, as I have other Java programs running at the same time. Does return; do the above, or is there a better way to do it?
Thanks.
There is one JVM process per running Java application. If you exit that application, the process's JVM gets shut down. However, this does not affect other Java processes.
You need to understand the JVM mechanism and clarify the terminology.
Let's use the following as datum for the terminology.
Threads are divisions of concurrently processed flows within a process.
A process is an OS-level unit of execution. The OS manages the processes. A process is terminated by sending a termination signal to the OS; the signal may be sent by the process itself or by another process that has the applicable privilege.
Within a process, you can create process-level threads. Process-level threads are normally facilitated by the process management of the OS, but they are initiated by the process and terminated by the process. Therefore, process-level threads are not the same as processes.
An application is a collection of systems, programs and/or threads that cooperate in various forms. A program or process within an application may terminate without terminating the whole application.
Within the context of JVM terminology, a program may be one of the following.
A program run per JVM process. Each such program consumes one JVM process and is invoked by supplying the classpath of Java bytecode and specifying the main entry point found in the classpath. When you terminate a Java program, the whole JVM process that ran that program also terminates.
A program run per process-level thread. For example, an application run within a Tomcat or Java EE server runs as a thread within the Java EE server process. The Java EE server process is itself a program consuming one JVM process. When you terminate such an application program, the Java EE process does not terminate.
You may initiate process-level threads within a Java program. You may write code that terminates a thread, but that would not terminate the process (unless it is the last and only running thread in the process). The JVM garbage collection takes care of freeing resources, so you do not need to free resources yourself after a process-level thread is terminated.
The above response is simplified for comprehension. Please read up on OS design and threading to facilitate a better understanding of processes and the JVM mechanism.
If the other threads running concurrently are not daemon threads, leaving main will not terminate the VM. The other threads will continue running.
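A quick sketch to see that behaviour (the sleep is arbitrary): as written, the JVM stays up for 10 seconds after main returns; uncomment the setDaemon(true) line and it exits immediately.

    public class DaemonDemo {
        public static void main(String[] args) {
            Thread t = new Thread(() -> {
                try { Thread.sleep(10_000); } catch (InterruptedException e) { }
                System.out.println("background thread done");
            });
            // t.setDaemon(true);   // uncomment: the JVM no longer waits for this thread
            t.start();
            System.out.println("main is about to return; the VM still waits for the non-daemon thread");
        }
    }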
I completely missed the point though.
If you start each program in a separate JVM, calling System.exit() in one of them will not influence the others, they're entirely different processes.
If you're starting them through a single script or something, depending on how it is written, something else could be killing the other processes. Without precise information about how you start these apps, there's really no telling what is going on.
#aix's answer is probably apropos to your question. Each time you run the java command (or the equivalent) you get a different JVM instance. Calling System.exit() in one JVM instance won't cause other JVM instances to exit. (Try it and see!)
It is possible to create a framework in which you do run multiple programs within the same JVM. Indeed this is effectively what you do when you run a "bean shell". The same sort of thing happens when your "programs" are services (or webapps, or whatever you call them) running in some application server framework.
The bad news is that if you do this kind of thing, there is no entirely reliable way to make an individual "program" go away. In particular, if the program is not designed to be cooperative (e.g. if it doesn't check for interrupts), you will have to resort to the deprecated Thread.stop() method and friends. And those methods can have nasty consequences for the JVM and the other programs running in it.
In theory, the solution to that problem is to use Isolates. Unfortunately, I don't think that any mainstream JVMs support Isolates.
Some common use cases that lead to this kind of requirement can be solved with tools like Nailgun or Drip.
Nailgun allows you to run what appear to be multiple independent executions of a command-line program, but they all happen in the same JVM. Therefore the repeated JVM start-up time does not have to be endured. If these executions interact with global state, the JVM will get polluted over time and things will start to break.
Drip will use a new JVM for each execution, but it always keeps a precreated JVM with the correct classpath and options ready. This is less performant, but it can guarantee correctness through isolation.
We've developed a standalone Java program. We've configured a cron schedule on our Linux (RedHat ES 4) box to execute this Java standalone every 10 minutes. Each run may sometimes take more than 1 hour to complete, or may complete within 5 minutes.
The problem/solution I'm looking for: the number of Java standalones executing at any time should not exceed, for example, 5 processes. So before a new Java standalone/process starts, if there are already 5 processes running, the new process should not be started; otherwise this would indirectly start causing OutOfMemoryError problems. How do I control this? I would also like to make the 5-process limit configurable.
Other Information:
I've also configured -Xms and -Xmx heap size settings.
Is there any tool/mechanism by which we can control this?
I also heard about Java Service Wrapper. What is this all about?
You can create 5 empty files (with names "1.lock", ..., "5.lock") and make the app lock one of them in order to execute (or exit if all the files are already locked).
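A sketch of that lock-file idea using java.nio file locks (the file names and the limit of 5 are just the example values from above; the real work is a placeholder):

    import java.io.RandomAccessFile;
    import java.nio.channels.FileLock;

    public class SlotLimiter {
        public static void main(String[] args) throws Exception {
            RandomAccessFile file = null;
            FileLock lock = null;
            for (int i = 1; i <= 5 && lock == null; i++) {
                file = new RandomAccessFile(i + ".lock", "rw");
                lock = file.getChannel().tryLock();   // non-blocking; null if another process holds it
                if (lock == null) {
                    file.close();
                }
            }
            if (lock == null) {
                System.err.println("5 instances already running, exiting");
                return;
            }
            try {
                // ... the real work of the standalone goes here ...
            } finally {
                lock.release();
                file.close();   // the OS also drops the lock if the process dies
            }
        }
    }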
First, I am assuming you are using the words "thread" and "process" interchangeably. Two ideas:
Have the cron job be a script that checks the currently running processes and counts them. If the count is less than the threshold, spawn a new process; otherwise exit. The threshold can be defined in your script.
Have the main method in your Java program check some external resource (a file, database table, etc.) for a count of running processes; if it is below the threshold, increment it and start the process, otherwise exit (this assumes that the simple main method alone will not be enough to cause your OOME problem). You may also need an appropriate locking mechanism on the external resource (though if your job runs only every 10 minutes, this may be overkill). Here you could define the threshold in a .properties file or some other configuration file for your program.
Java Service Wrapper helps you set up a Java program as a Windows service or a *nix daemon. It doesn't really deal with the concurrency issue you are looking at; the closest thing is a config setting that disallows concurrent instances if it's a Windows service.