Is there any way to determine how many threads a specific JVM + Application Server combination can handle? The question is not only about the web application's request-serving threads, but also about background threads.
For all practical purposes this is situational; it really does "depend".
what are the threads doing?
how much memory do they each need?
how much garbage collection is going on?
how much memory have you got?
how many CPUs have you got?
how fast are they?
Java EE App Server applications tend not to create threads themselves. Rather you configure thread pools. I've never yet been in a situation where the ability to create 10 more threads would solve a problem and some app server limitation prevented me from doing so.
Making performance comparisons between different App Servers is very non-trivial, and the answers tend to be brittle, i.e. small changes in the type of work can produce different answers.
Why are you asking the question?
This is really dependent on the particular hardware you are running with (number of CPUs, amount of memory, etc.) and also dependent on the OS (Solaris vs. Windows) since the underlying threading is dependent on the OS-provided thread management. It also depends on the application and app server itself, since the amount of resources each thread consumes is application-dependent.
This really isn't something that you can find out purely from the Java VM. It is more of a hardware/OS limitation than anything specific to the VM. The best way to find out this answer is to test with a large number of threads and see where you start to see a performance dropoff. See also this devx discussion.
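As a rough illustration of that kind of empirical test, a throwaway probe along these lines (purely illustrative, not production code) will show where your particular OS/JVM/stack-size combination stops letting you create threads:

```java
// Illustrative probe: keep starting idle daemon threads until the JVM/OS refuses.
// The number you get depends on stack size (-Xss), available memory and OS limits.
public class ThreadLimitProbe {
    public static void main(String[] args) {
        long count = 0;
        try {
            while (true) {
                Thread t = new Thread(() -> {
                    try {
                        Thread.sleep(Long.MAX_VALUE); // park the thread so it stays alive
                    } catch (InterruptedException ignored) {
                    }
                });
                t.setDaemon(true); // let the JVM exit even though these threads never finish
                t.start();
                count++;
            }
        } catch (Throwable limit) { // typically OutOfMemoryError: unable to create new native thread
            System.out.println("Managed to start " + count + " threads before: " + limit);
        }
    }
}
```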
I am working on a platform that hosts small Java applications, all of which currently use a single thread, living inside a Docker engine, consuming data from a Kafka server and logging to a central DB.
Now I need to put another Java application onto this platform. The app at hand uses multithreading relatively heavily. I have already tested it inside a Docker container and it works perfectly there, so I'm ready to deploy it on the platform, where it would be scaled manually; that is, some human would define the number of containers to be started, each of them containing an instance of this app.
My Architect has an objection, saying that "In a distributed environment we never use multithreading". So now, I have to refactor my application eliminating any thread related logic from it, making it single threaded. I requested a more detailed reasoning from him, but he yells "If you are not aware of this principle, you have no place near Java".
Is it really a mistake to use a multithreaded Java application in a distributed system - a simple cluster with ten or twenty physical machines, each hosting a number of virtual machines, which then run Docker containers with Java applications inside them?
Honestly, I don't see the problem of multithreading inside a container.
Is it really a mistake or somehow "forbidden"?
Thanks.
When you write for example a web application that will run in a Java EE application server, then normally you should not start up your own threads in your web application. The application server will manage threads, and will allocate threads to process incoming requests on the server.
However, there is no hard rule or reason why it is never a good idea to use multi-threading in a distributed environment.
There are advantages to making applications single-threaded: the code will be simpler and you won't have to deal with difficult concurrency issues.
But "in a distributed environment we never use multithreading" is not necessarily always true and "if you are not aware of this principle, you have no place near Java" sounds arrogant and condescending.
I guess he only tells you this because using a single thread eliminates multithreading and data-ordering issues.
There is nothing wrong with multithreading though.
Distributed systems usually have tasks that are heavily I/O bound.
If I/O calls are blocking in your system
The only way to achieve concurrency within the process is spawning new threads to do other useful work. (Multi-threading).
The caveat with this approach is that, if there are too many threads in flight, the operating system will spend too much time context switching between threads, which is wasteful work.
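As a bare-bones illustration of that first approach (blockingFetch below is a hypothetical stand-in for whatever blocking I/O your system performs):

```java
import java.util.Arrays;
import java.util.List;

public class ThreadPerCall {
    // Hypothetical stand-in for a blocking I/O call (database, HTTP, etc.).
    static String blockingFetch(String url) {
        return "response from " + url;
    }

    public static void main(String[] args) {
        List<String> urls = Arrays.asList("http://a.example", "http://b.example");
        // One thread per blocking call gives concurrency, but every in-flight call
        // now costs a thread -- which is exactly the caveat about having too many of them.
        for (String url : urls) {
            new Thread(() -> System.out.println(blockingFetch(url))).start();
        }
    }
}
```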
If I/O calls are Non-Blocking in your system
Then you can avoid the Multi-threading approach and use a single thread to service all your requests. (read about event-loops or Java's Netty Framework or NodeJS)
The upside of the single-threaded approach
The OS does not do any wasteful thread context switching.
You will NOT run into any concurrency problems like deadlocks or race conditions.
The downside is that
It is often harder to code/think in a non-blocking fashion
You typically end up using more memory in the form of blocking queues.
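As a minimal illustration of the non-blocking, single event-loop style mentioned above, here is a sketch of an echo server using Netty 4 (your handlers would obviously differ):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class SingleThreadedEchoServer {
    public static void main(String[] args) throws InterruptedException {
        // A single event-loop thread services every connection; no thread per client.
        EventLoopGroup group = new NioEventLoopGroup(1);
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(group)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                @Override
                                public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                    ctx.writeAndFlush(msg); // echo the bytes back without blocking
                                }
                            });
                        }
                    });
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```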
What? We use RxJava and Spring Reactor pretty heavily in our application and it works just fine. You can't work with threads across two JVMs anyway, so just make sure that your logic works as you expect on a single JVM.
I currently have an application running with Jetty (version 8.1.3). I would like to create an additional version for a different client environment on the same server.
Is there a risk of memory overhead on the server, or of other problems? The two applications use the same database.
"Is there a risk of memory overhead on the server?"
From the Jetty standpoint, unlikely to be a risk, it generally occupies a very small footprint when compared to the applications deployed into it.
From your application standpoint, only you can determine that. You must work out your application's memory needs, and what they may scale to, in order to make this determination. You need to establish a high-water mark for your application's memory needs, double that and round up a bit, and then decide whether you have both the processing power and memory available to do it. Remember your thread requirements as well: double the connection pooling (or are you sharing the pool via server-wide JNDI pools?), check whether your database is going to be fine with that, the number of open files allowed on the server, and so on.
So, long story short, there is no definitive yes-or-no answer available from a site like Stack Overflow on this; it depends too much on your specific application and the amount of traffic you have. Knowing that information, however, will let you decide with confidence whether you can do this or not.
I am doing web crawling on a server with 32 virtual processors using Java. How can I make full use of these processors? I've seen some suggestions about multi-threaded programming, but I wonder how that could ensure all processors are taken advantage of, since we can do multi-threaded programming on a single-processor machine as well.
There is no simple answer to this ... except the way to ensure all processors are used is to use multi-threading the right way. (Note: that is a circular answer!)
Basically, the way to get effective use of multiple processors is to:
ensure that there is work that can be done in parallel, and
reduce / eliminate contention points that force one thread to wait while another thread does something.
This is difficult enough when you are doing simple computation. For a web crawler, you've got the additional problems that the threads will be competing for network and (possibly) remote server bandwidth, and they will typically be attempting to put their results into a shared data structure or database.
That's about all that can be said at this level of generality ...
And as #veer correctly points out, you can't "ensure" it.
... but using a load of threads will surely be quicker wall-time-wise because all the miserable network latency will happen in parallel ...
Actually, if you go overboard, a load of threads can reduce throughput because of contention. Just throwing lots of threads at the problem is rarely a good idea.
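As a rough sketch of putting that advice together for a crawler: size a pool from Runtime.getRuntime().availableProcessors() (over-provisioned a little, since crawling is I/O bound), and have the workers write into a thread-safe structure instead of sharing mutable state. The fetch(url) method below is a hypothetical placeholder for the real page download.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CrawlerSketch {
    // Hypothetical placeholder for the actual page download.
    static String fetch(String url) {
        return "<html>...</html>";
    }

    public static void main(String[] args) throws InterruptedException {
        int cpus = Runtime.getRuntime().availableProcessors();
        // I/O-bound work can use more threads than CPUs; the multiplier is something to tune, not a rule.
        ExecutorService pool = Executors.newFixedThreadPool(cpus * 4);

        Queue<String> results = new ConcurrentLinkedQueue<>(); // thread-safe shared sink
        List<String> seeds = Arrays.asList("http://a.example", "http://b.example");
        for (String url : seeds) {
            pool.submit(() -> results.add(fetch(url)));
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Fetched " + results.size() + " pages");
    }
}
```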
A computer or a program is only as fast as the slowest link in its processing chain. Just increasing the CPU capacity is not going to guarantee a drastic performance improvement. Leaving aside other issues like your cache size, RAM, etc., there are two basic kinds of approach to your question about how to take advantage of all your processors:
[1] Using JIT (just-in-time) compiler/interpreter technology such as Java/.NET. I don't know much about Java, but the .NET jitter is definitely designed to take advantage of all the available processors on the machine. In fact, this very feature makes a jitter stand out against static language compilers like C/C++: because the jitter "knows" that it is sitting on 32 processors, it is in a much better position to take advantage of them than a program statically compiled on any other machine (provided you have written robust multi-threading code for it!).
[2] Programming in C/C++. This is the classic approach. If you compile your code on the same machine with 32 CPUs, and take proper care in your program such as memory-management, handling pointers, etc. the C/C++ program will be the most optimal and will perform better than its CLR/JVM counterpart (as it runs without the extra overhead of a garbage-collector or a VM).
But keep in mind that writing robust code is much easier in .NET/Java than in C/C++. So, if you are not a "hard-core" programmer, I would suggest going with the former approach. Also remember to handle your multiple threads with care, for example by locking variables when multiple threads try to change the same variables. However, excessive locking might make your code hang if a variable behaves unexpectedly.
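To make the locking point concrete, here is a small sketch: an unsynchronized counter shared between threads can lose updates, while a synchronized block or an AtomicLong keeps it correct.

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterExample {
    private long unsafeCount = 0;                            // updates from several threads can be lost
    private final AtomicLong safeCount = new AtomicLong();   // atomic read-modify-write, no lock needed
    private final Object lock = new Object();
    private long lockedCount = 0;

    void incrementUnsafe() {
        unsafeCount++;                  // read-modify-write: a race between threads
    }

    void incrementAtomic() {
        safeCount.incrementAndGet();    // safe without explicit locking
    }

    void incrementLocked() {
        synchronized (lock) {           // only one thread at a time inside this block
            lockedCount++;
        }
    }
}
```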
Processor management is implemented natively by the virtual machine you are using, i.e. the JVM. You can have a look at the Java HotSpot VM Options to optimize your machine if you are using the Java HotSpot VM. If you are using a third-party VM, your vendor may help you with tuning it for your requirements.
Application performance, as far as design goes, practically depends on you.
If you would like to monitor your threads and memory usage to optimize your application, you can use any VM monitoring tools available to date. The Java virtual machine (JVM) has built-in instrumentation that enables you to monitor and manage it using JMX.
For details you can check Platform Monitoring and management using JMX. For third party VMs you have to contact the vendor I guess.
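If you just want a quick look at thread and heap numbers from inside the application itself, the java.lang.management MXBeans expose the same data that JMX consoles read:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class VmStats {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        System.out.println("Live threads:   " + threads.getThreadCount());
        System.out.println("Peak threads:   " + threads.getPeakThreadCount());
        System.out.println("Heap used (MB): "
                + memory.getHeapMemoryUsage().getUsed() / (1024 * 1024));
    }
}
```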
I have a legacy in-house business application which is running in one JVM, and there are many performance issues with it, more specifically regarding heap usage and running concurrent threads. At its core it is a scheduling application: the user can schedule a task from the front end, and when the time arrives the task gets fired. All the code is home-grown and we are not using any third-party scheduler for scheduling purposes. My goal now is to enhance the performance of the application, and there are some options I can try, like using a scheduling mechanism such as Quartz, or distributing the application across different JVMs. The challenge I have here is that I have never been exposed to this kind of situation of re-architecting an application, so I'm not sure where to start. I know SO is not the right place to ask this type of question, but I'm not sure how to approach it, and any help/suggestions would be highly appreciated.
From reading your post I don't get the impression that you've really grasped what the underlying cause of your performance problems are. The first step in addressing any such problem should be to identify the cause before proposing a solution. I'd begin by asking some pretty high level questions.
How many concurrent tasks/threads do you currently execute?
Are the jobs CPU or IO bound?
What software stack is the app running on?
What hardware is the app running on?
By distributing the application across multiple JVMs you will invariably add complexity, which is fine, provided it's a valid and required solution.
I suggest you exercise the application with a realistic workload, so that the server is busy, and profile it to find the CPU, memory and resource bottlenecks.
IMHO: Separating JVMs might be an option if you are using more than 1-8 GB of heap AND Full GC times are an issue. If you are using much less than that, it's unlikely to help.
DON'T jump to any conclusions about what the solution should be until you have a very good understanding of the problem, or you can end up spending a lot of time optimising the wrong things and possibly making them worse.
I'm working on an application which interacts with hundreds of devices across a network. The type of work being performed requires a lot of concurrent threads (mostly because each device interaction requires network I/O and does so separately, but for other reasons as well). At the moment, we're in the area of requiring about 20-30 threads per device being interacted with.
A simple calculation puts this at thousands of threads, even up to 10,000 threads. If we put aside the CPU penalty for thread-switching, etc., how many threads can Java 5 running on CentOS 64-bit handle? Is this just a matter of RAM or is there anything else we should consider?
Thanks!
In such a situation it's always recommended to use thread pooling.
Thread pools address two different problems: they usually provide improved performance when executing large numbers of asynchronous tasks, due to reduced per-task invocation overhead, and they provide a means of bounding and managing the resources, including threads, consumed when executing a collection of tasks. Each ThreadPoolExecutor also maintains some basic statistics, such as the number of completed tasks.
ThreadPoolExecutor is the class you should be using.
http://www.javamex.com/tutorials/threads/ThreadPoolExecutor.shtml
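A minimal sketch of that, with a hypothetical device-handling task; the pool and queue sizes here are illustrative, not recommendations:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DevicePool {
    public static void main(String[] args) {
        // Core pool of 50 threads; if the bounded work queue fills up, the pool may grow to 200,
        // and threads beyond the core are retired after 60s of idleness.
        // CallerRunsPolicy throttles the submitter instead of rejecting tasks when everything is full.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                50, 200, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(500),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int device = 0; device < 1_000; device++) {
            final int id = device;
            executor.submit(() -> {
                // Hypothetical placeholder for the real network interaction with device 'id'.
                System.out.println("handled device " + id);
            });
        }
        executor.shutdown();
    }
}
```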
I think up to 65k threads is OK with Java; the only thing you need to consider is stack space. Linux by default allocates 48k per thread/process as stack space, which is wasteful for Java (which doesn't have stack-allocated objects and hence uses much less stack space). This will easily use 500 megs for 10k threads.
If this is really an absolute requirement, you might want to have a look at a language that's specifically built to deal with this level of concurrency, such as Erlang.
Like others are suggesting, you should use NIO. We had an app that used a lot of threads (e.g. 1,000), though far fewer than you are planning, and it was already very inefficient. If you have to use THAT many threads, it's definitely time to consider using NIO.
For networking, if your apps are using HTTP, one very easy tool would be Async-HTTP-client, by two very well-known authors in this field.
If you use a different protocol, building on Async-HTTP-client's underlying implementation (Netty) would be recommended.
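For reference, a minimal sketch against the current org.asynchttpclient coordinates (the answer may have targeted the older com.ning package, so treat the exact imports as an assumption; the final get() is only there to keep the demo simple):

```java
import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Dsl;
import org.asynchttpclient.ListenableFuture;
import org.asynchttpclient.Response;

public class AsyncFetch {
    public static void main(String[] args) throws Exception {
        try (AsyncHttpClient client = Dsl.asyncHttpClient()) {
            // execute() returns immediately; the request is driven by Netty's event-loop threads,
            // so many requests in flight do not each need their own thread.
            ListenableFuture<Response> future = client.prepareGet("http://example.com/").execute();
            Response response = future.get(); // block only to print the result in this demo
            System.out.println("status " + response.getStatusCode());
        }
    }
}
```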