WorkManager and a high workload - java

I'm working on an application which interacts with hundreds of devices across a network. The type of work being performed requires a lot of concurrent threads (mostly because each task involves separate network interaction, but for other reasons as well). At the moment, we're in the area of requiring about 20-30 threads per device being interacted with.
A simple calculation puts this at thousands of threads, even up to 10,000 threads. If we put aside the CPU penalty for thread-switching, etc., how many threads can Java 5 running on CentOS 64-bit handle? Is this just a matter of RAM or is there anything else we should consider?
Thanks!

In such a situation it's always recommended to use thread pooling.
Thread pools address two different problems: they usually provide improved performance when executing large numbers of asynchronous tasks, due to reduced per-task invocation overhead, and they provide a means of bounding and managing the resources, including threads, consumed when executing a collection of tasks. Each ThreadPoolExecutor also maintains some basic statistics, such as the number of completed tasks.
ThreadPoolExecutor is the class you should be using.
http://www.javamex.com/tutorials/threads/ThreadPoolExecutor.shtml
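As a hedged illustration, here is a minimal sketch of bounding the work with a fixed pool; the pool size of 200 and the talkToDevice method are placeholders you would replace with your own sizing and per-device logic:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DevicePoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // Bound the total number of threads instead of creating 20-30 per device.
        ExecutorService pool = Executors.newFixedThreadPool(200);
        for (int device = 0; device < 500; device++) {
            final int id = device;
            pool.submit(new Runnable() {
                public void run() {
                    talkToDevice(id); // stand-in for the per-device network work
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private static void talkToDevice(int id) {
        // open the connection, exchange data, close ...
    }
}

Queued tasks simply wait until a pool thread frees up, so the thread count stays bounded no matter how many devices you enqueue.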

I think up to 65k threads is OK with Java; the main thing you need to consider is stack space. Linux by default allocates 48k per thread/process as stack space, which is wasteful for Java (which doesn't have stack-allocated objects and hence uses much less stack). At that rate you will easily use 500 MB of stack for 10k threads.
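If stack space is the limit, one knob is the per-thread stack size. A minimal sketch (the 256k figure is only an example, and the stackSize argument is documented as a hint that some JVMs ignore):

public class SmallStackThreads {
    public static void main(String[] args) {
        Runnable deviceTask = new Runnable() {
            public void run() {
                // per-device network interaction would go here
            }
        };
        // Request a smaller stack for this thread; the JVM-wide default can
        // also be lowered with the -Xss flag (e.g. java -Xss256k ...).
        Thread worker = new Thread(null, deviceTask, "device-worker", 256 * 1024);
        worker.start();
    }
}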

If this is really an absolute requirement, you might want to have a look at a language that's specifically built to deal with this level of concurrency, such as Erlang.

Like others are suggesting, you should use NIO. We had an app that used a lot of threads (e.g. 1,000, which is still far fewer than you are planning), and it was already very inefficient. If you have to use THAT many threads, it's definitely time to consider NIO.
For networking, if your apps are using HTTP, one very easy tool would be Async-HTTP-Client, written by two well-known authors in this field.
If you use a different protocol, its underlying implementation (Netty) would be a good choice.
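For illustration, a hedged sketch assuming the newer org.asynchttpclient 2.x artifact (older releases lived under com.ning.http.client, and this needs a more recent JVM than the Java 5 mentioned in the question); the URL is a placeholder:

import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Dsl;

public class AsyncGetSketch {
    public static void main(String[] args) throws Exception {
        try (AsyncHttpClient client = Dsl.asyncHttpClient()) {
            // No thread sits blocked while the device responds; the callback
            // runs on the client's internal (Netty) I/O threads.
            client.prepareGet("http://device.example.local/status")
                  .execute()
                  .toCompletableFuture()
                  .thenAccept(response -> System.out.println(response.getStatusCode()))
                  .join(); // only so this demo's main() waits for the result
        }
    }
}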

Related

Non-blocking vs blocking Java server with JDBC calls

Our gRPC service needs to handle 1000 QPS, and each request requires a sequence of operations, including one that reads data from the DB using JDBC. Handling a single request takes at most 50 ms.
Our application can be written in two ways:
Option 1 - Classic one blocking thread per request: we can create a large thread pool (~200) and simply assign one thread per request and have that thread block while it waits for the DB.
Option 2 - Having each request handled in a truly non-blocking fashion: this would require us to use a non-blocking MySQL client, which I'm not sure exists, but for now let's assume it does.
My understanding is that non-blocking approach has these pros and cons:
Pro: Reduces the number of threads required, and as such reduces the memory footprint.
Pro: Saves some OS overhead, since it doesn't need to give CPU time to threads waiting for I/O.
Con: For a large application (where each task subscribes a callback to the previous task), it requires splitting a single request across multiple threads, creating a different kind of overhead. And if the same request gets executed on multiple physical cores, it potentially adds overhead, as data might not be available in the L1/L2 core caches.
Question 1: Even though non-blocking applications seem to be the new cool thing, my understanding is that for an application that isn't memory-bound and where creating more threads isn't a problem, it's not clear that writing a non-blocking application is actually more CPU-efficient than writing a blocking one. Is there any reason to believe otherwise?
Question 2: My understanding is also that JDBC connections are blocking, so even if we make the rest of our application non-blocking, we lose all the benefit because of the JDBC client, and in that case Option 1 is most likely better?
For question 1, you are correct -- non-blocking is not inherently better (and with the arrival of Virtual Threads, it's about to become a lot worse in comparison to good old thread-per-request). At best, you could look at the tools you are working with and do some performance testing with a small scale example. But frankly, that is down to the tool, not the strategy (at least, until Virtual Threads get here).
For question 2, I would strongly encourage you to choose the solution that works best with your tool/framework. Staying within your ecosystem will allow you to make more flexible moves when the time comes to optimize.
But all things equal, I would strongly encourage you to stick with thread-per-request, since you are working with Java. Ignoring Virtual Threads, thread-per-request allows you to work with and manage simple, blocking, synchronous code. You don't have to deal with callbacks or tracing the logic through confusing and piecemeal logs. Simply make a thread per request, let it block where it does, and then let your scheduler handle which thread should have the CPU core at any given time.
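To make the thread-per-request option concrete, here is a minimal sketch under the question's own assumptions (a ~200-thread pool, blocking JDBC); the DataSource, SQL, and table/column names are illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

public class BlockingRequestHandler {
    private final ExecutorService pool = Executors.newFixedThreadPool(200);
    private final DataSource dataSource;

    public BlockingRequestHandler(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void handle(long requestId) {
        pool.submit(() -> {
            // The thread simply blocks on the JDBC call; the OS scheduler runs
            // other request threads while this one waits for the DB.
            try (Connection c = dataSource.getConnection();
                 PreparedStatement ps = c.prepareStatement("SELECT state FROM orders WHERE id = ?")) {
                ps.setLong(1, requestId);
                try (ResultSet rs = ps.executeQuery()) {
                    // build and send the gRPC response here ...
                }
            } catch (Exception e) {
                // map to a gRPC error status ...
            }
        });
    }
}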
Pro: Saves some OS overhead, since it doesn't need to give CPU time to threads waiting for I/O.
It's not just the CPU time for waiting threads, but also the overhead of switching between threads competing for the CPU. As you have more threads, more of them will be in a runnable state, and the CPU time must be spread between them. Each switch also has its own cost (saving and restoring thread state, cache disruption).
Con: For a large application (where each task subscribes a callback to the previous task), it requires splitting a single request across multiple threads, creating a different kind of overhead. And if the same request gets executed on multiple physical cores, it potentially adds overhead, as data might not be available in the L1/L2 core caches.
This also happens with the “classic” approach since blocking calls will cause the CPU to switch to a different thread, and, as stated before, the CPU will even have to switch between runnable threads to share the CPU time as their number increases.
Question 1: […] for an application that isn't memory-bound and where creating more threads isn't a problem
In the current state of Java, creating more threads is always going to be a problem at some point. With the thread-per-request model, it depends on how many requests you have in parallel: 1,000 is probably OK; 10,000… maybe not.
it's not clear that writing a non-blocking application is actually more CPU-efficient than writing a blocking one. Is there any reason to believe otherwise?
It is not just a question of efficiency, but also scalability. For the performance itself, this would require proper load testing. You may also want to check Is non-blocking I/O really faster than multi-threaded blocking I/O? How?
Question 2: My understanding is also that JDBC connections are blocking, so even if we make the rest of our application non-blocking, we lose all the benefit because of the JDBC client, and in that case Option 1 is most likely better?
JDBC is indeed a synchronous API. Oracle was working on ADBA as an asynchronous equivalent, but they discontinued it, considering that Project Loom will make it irrelevant. R2DBC provides an alternative which supports MySQL. Spring even supports reactive transactions.
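As a sketch only (the connection URL, table, and column names are placeholders, and it assumes the R2DBC SPI, an R2DBC MySQL driver, and Project Reactor on the classpath), a reactive query might look like this:

import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import reactor.core.publisher.Flux;

public class R2dbcSketch {
    public static void main(String[] args) {
        ConnectionFactory factory =
                ConnectionFactories.get("r2dbc:mysql://user:secret@localhost:3306/shop");

        // Open a connection, run the query, emit one String per row, then close.
        Flux.usingWhen(
                factory.create(),
                connection -> Flux.from(connection.createStatement("SELECT state FROM orders").execute())
                        .flatMap(result -> result.map((row, meta) -> row.get("state", String.class))),
                connection -> connection.close())
            .doOnNext(System.out::println)
            .blockLast(); // only so this demo's main() doesn't exit before rows arrive
    }
}

No thread is parked waiting on MySQL here; rows are pushed to the subscriber as they arrive.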

java concurrency - Is Instruction Level Parallelism (ILP) used under the hood

Concurrency in Java and similar languages is achieved through threads, i.e. task-level parallelism. But under the hood, does the hardware or runtime also use ILP to achieve the best performance?
A little further elaboration: in a multi-core processor (say 4 cores per system) with multiple hardware threads (say 2 per core, i.e. 8 hardware threads per system in total), a Java thread is executed on one of those (8, in this case) hardware threads. But if the system determines that all or several of the other hardware threads are doing nothing but sitting idle, can the hardware or runtime do any legal re-orderings and execute work on other threads on the same or other cores, and fetch the results back (or into main memory)?
What I'm wondering is whether the Java implementation allows this, or whether it is entirely up to the hardware to handle it independently, without the JVM even knowing about it.
It's a little unclear what you're asking, but I don't think it has much to do with Java.
I think you're talking about (at least) two different things:
"ILP" is generally used to refer to a set of techniques that occur within a single core (such as pipelining and branch prediction), and has little to do with threading or multi-core. These techniques are transparent implementation details of the CPU, and typically not exposed in a way that you (or the runtime) can interact with directly.
Threads are swapped on and off cores by the kernel scheduler if they become blocked (and even if they're not, to ensure fairness).

Blocking I/O vs NIO for connecting to four different devices in Android

I am writing an Android application to simultaneously upgrade the firmware of four or five medical devices. Should I go with the traditional blocking I/O, thread-per-connection approach, or the non-blocking NIO approach?
The program is already working fine for upgrading one device at a time.
What would have greater overhead here: Java NIO overhead or context-switching overhead?
Any help would be greatly appreciated.
If the number of devices is four or five, and isn't going to grow by at least two orders of magnitude any time soon, I'd suggest you stick with blocking I/O and the thread-per-connection model, due to its simplicity and the fact that you are unlikely to see any performance improvement from NIO in this case; a performance drop is actually quite possible. Thread switching works quite well even for many more tasks than 4 or 5, and the programming model is much simpler. If the communication code uses sockets, there may be an advantage in using NIO, though, because Selectors allow you to detect some kinds of broken connections better than I/O streams. Even in that case, however, I would recommend using NIO in a thread-per-client model, as it will probably save you a lot of work on the code.
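A minimal sketch of the thread-per-device approach (the device addresses and the upgrade routine are placeholders for your existing single-device code):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FirmwareUpgrader {
    public static void main(String[] args) {
        List<String> devices = Arrays.asList("10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14");

        // One blocking thread per device; for 4-5 devices the overhead is negligible.
        ExecutorService pool = Executors.newFixedThreadPool(devices.size());
        for (final String address : devices) {
            pool.submit(new Runnable() {
                public void run() {
                    upgradeFirmware(address);
                }
            });
        }
        pool.shutdown();
    }

    private static void upgradeFirmware(String address) {
        // open a plain blocking socket, stream the firmware image, read the ack ...
        // (the code that already works for a single device goes here)
    }
}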
Java NIO overhead or context-switching overhead
That's the wrong question. NIO doesn't have much in the way of overhead, but you have to implement the 'context switching' yourself in your application; you can't just pretend it has gone away with NIO, and your code that schedules between channels may not be as efficient as the operating system's thread scheduler. The real question is whether the saving in threads, and more specifically thread stacks, has any significance versus the greatly increased complexity of NIO code. Unless you are planning to service tens of thousands of connections, it generally doesn't.
It's worth noting that the NIO model using selectors originated prior to threads, i.e. when the choice was between more complex code and more processes. These days there is a significant school of thought that holds that you should use blocking I/O and threads in almost all circumstances. There is a study and a paper out there somewhere, by I think Peter Lawrey, but I don't have the citation to hand.

Java thread per connection model vs NIO

Is the non-blocking Java NIO still slower than your standard thread per connection asynchronous socket?
In addition, if you were to use threads per connection, would you just create new threads or would you use a very large thread pool?
I'm writing an MMORPG server in Java that should be able to scale to 10,000 clients easily given powerful enough hardware, although the maximum number of clients is 24,000 (which I believe is impossible to reach with the thread-per-connection model because of a 15,000-thread limit in Java).
From a three-year-old article (namely, this document: http://www.mailinator.com/tymaPaulMultithreaded.pdf), I've heard that blocking I/O with a thread-per-connection model was still 25% faster than NIO, but can the same still be achieved today? Java has changed a lot since then, and I've heard the results were questionable when compared against real-life scenarios because the VM used was not Sun Java.
Also, because it is an MMORPG server with many concurrent users interacting with each other, will the use of synchronization and thread-safety practices decrease performance to the point where a single-threaded NIO selector serving 10,000 clients will be faster? (All the work doesn't necessarily have to be processed on the thread with the selector; it can be processed on worker threads, like how MINA/Netty works.)
Thanks!
NIO benefits should be taken with a grain of salt.
In an HTTP server, most connections are keep-alive connections and are idle most of the time. It would be a waste of resources to pre-allocate a thread for each.
For an MMORPG, things are very different. I guess connections are constantly busy receiving instructions from users and sending the latest system state back. A thread is needed most of the time for each connection.
If you use NIO, you'll have to constantly re-assign threads to connections. It may be an inferior solution to the simple fixed thread-per-connection solution.
The default thread stack size is pretty large (1/4 MB?); it's the major reason why there can only be a limited number of threads. Try reducing it and see if your system can support more.
However, if your game is indeed very "busy", it's your CPU that you need to worry about the most. NIO or not, it's really hard to handle thousands of hyperactive gamers on one machine.
There are actually 3 solutions:
Multiple threads
One thread and NIO
Both solutions 1 and 2 at the same time
The best thing to do for performance is to have a small, limited number of threads and multiplex network events onto these threads with NIO as new messages come in over the network.
Using NIO with one thread is a bad idea for a few reasons:
If you have multiple CPUs or cores, you will be idling resources since you can only use one core at a time if you only have one thread.
If you have to block for some reason (maybe to do a disk access), your CPU is idle when it could be handling another connection while you're waiting for the disk.
One thread per connection is a bad idea because it doesn't scale. Let's say you have:
10 000 connections
2 CPUs with 2 cores each
only 100 threads will be blocked at any given time
Then you can work out that you only need 104 threads. Any more and you're wasting resources managing extra threads that you don't need. There is a lot of bookkeeping under the hood needed to manage 10 000 threads. This will slow you down.
This is why you combine the two solutions. Also, make sure your VM is using the fastest system calls. Every OS has its own system calls for high-performance network I/O. Make sure your VM is using the latest and greatest; I believe this is epoll() on Linux.
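For illustration, here is a minimal plain-java.nio sketch of that combination (the port, buffer size, and handleMessage method are placeholders): one selector thread multiplexes all connections and hands completed reads to a small worker pool.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SelectorServerSketch {
    public static void main(String[] args) throws IOException {
        // Small worker pool for game logic; the selector thread only does I/O.
        ExecutorService workers =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(4096); // fresh buffer per read
                    int read = client.read(buffer);
                    if (read > 0) {
                        buffer.flip();
                        // Hand the bytes to the worker pool and go straight back
                        // to multiplexing the other connections.
                        workers.submit(() -> handleMessage(buffer));
                    } else if (read < 0) {
                        key.cancel();
                        client.close();
                    }
                }
            }
        }
    }

    private static void handleMessage(ByteBuffer message) {
        // decode and apply the game logic for one message ...
    }
}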
In addition, if you were to use threads per connection, would you just create new threads or would you use a very large thread pool?
It depends on how much time you want to spend optimizing. The quickest solution is to create resources like threads and strings when needed, and let the garbage collector reclaim them when you're done with them. You can get a performance boost by having a pool of resources: instead of creating a new object, you ask the pool for one, and return it to the pool when you're done. This adds the complexity of concurrency control. It can be optimized further with advanced concurrency techniques such as non-blocking algorithms; newer versions of the Java API have a few of these for you. You could spend the rest of your life doing these optimizations on just one program. What the best solution is for your specific application is probably a question that deserves its own post.
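To illustrate the pooling idea, here is a deliberately simple, unbounded sketch (the SimplePool class is hypothetical; a production pool would bound its size and manage resource lifecycle), built on a non-blocking queue:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

public class SimplePool<T> {
    private final ConcurrentLinkedQueue<T> free = new ConcurrentLinkedQueue<>();
    private final Supplier<T> factory;

    public SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    public T borrow() {
        T item = free.poll();               // lock-free: backed by a non-blocking queue
        return item != null ? item : factory.get();
    }

    public void release(T item) {
        free.offer(item);
    }

    public static void main(String[] args) {
        SimplePool<StringBuilder> pool = new SimplePool<>(StringBuilder::new);
        StringBuilder sb = pool.borrow();
        sb.setLength(0);                    // reset any leftover state before reuse
        sb.append("hello");
        pool.release(sb);
    }
}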
If you're willing to spend any amount of money on powerful enough hardware, why limit yourself to one server? Google doesn't use one server; they don't even use one datacenter of servers.
A common misconception is that NIO allows non-blocking I/O and is therefore the only model worth benchmarking. If you benchmark blocking NIO, you can get it 30% faster than old I/O, i.e. if you use the same threading model and compare just the I/O models.
For a sophisticated game, you are far more likely to run out of CPU before you hit 10K connections. Again it is simpler to have a solution which scales horizontally. Then you don't need to worry about how many connections you can get.
How many users can reasonably interact with each other? 24? In that case you have 1,000 independent groups interacting. You won't have that many cores in one server.
How much money per user are you intending to spend on server(s)? You can buy a 12-core server with 64 GB of memory for less than £5,000. If you place 2,500 users on this server, you have spent £2 per user.
EDIT: I have a reference http://vanillajava.blogspot.com/2010/07/java-nio-is-faster-than-java-io-for.html which is mine. ;) I had this reviewed by someone who is a GURU of Java Networking and it broadly agreed with what he had found.
If you have busy connections, which means they constantly send you data and you send data back, you may use non-blocking I/O in conjunction with Akka.
Akka is an open-source toolkit and runtime simplifying the construction of concurrent and distributed applications on the JVM. Akka supports multiple programming models for concurrency, but it emphasizes actor-based concurrency, with inspiration drawn from Erlang. Language bindings exist for both Java and Scala.
Akka's logic is non-blocking, so it's perfect for asynchronous programming. Using Akka actors, you can avoid per-connection thread overhead.
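A minimal sketch using Akka's classic Java actor API (the ConnectionActor class and the raw byte[] message are illustrative; a real server would feed it decoded packets from the network layer):

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class ConnectionActor extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(byte[].class, packet -> {
                    // decode the packet and update game state here; no thread is
                    // parked on this connection between messages
                })
                .build();
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("game-server");
        ActorRef connection = system.actorOf(Props.create(ConnectionActor.class), "player-1");
        connection.tell(new byte[] {1, 2, 3}, ActorRef.noSender());
    }
}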
But if your socket streams block more often, I suggest using blocking I/O in conjunction with Quasar.
Quasar is an open-source library for simple, lightweight JVM concurrency, which implements true lightweight threads (AKA fibers) on the JVM. Quasar fibers behave just like plain Java threads, except they have virtually no memory and task-switching overhead, so that you can easily spawn hundreds of thousands of fibers – or even millions – in a single JVM. Quasar also provides channels for inter-fiber communications modeled after those offered by the Go language, complete with channel selectors. It also contains a full implementation of the actor model, closely modeled after Erlang.
Quasar's logic is blocking, so you may spawn, say, 24,000 fibers waiting on different connections. One of the positive points about Quasar is that fibers can interact with plain threads very easily. Quasar also has integrations with popular libraries such as the Apache HTTP client, JDBC, and Jersey, so you can get the benefits of fibers in many parts of your project.
You may see a good comparison between these two frameworks here.
As most of you are saying that the server is bound to be CPU-limited before 10k concurrent users are reached, I suppose it is better for me to use a threaded, blocking (N)IO approach, considering that for this particular MMORPG, receiving several packets per second from each player is not uncommon and might bog down a selector if one were used.
Peter raised an interesting point that blocking NIO is faster than the old libraries, while irreputable mentioned that for a busy MMORPG server it would be better to use threads because of how many instructions are received per player. I wouldn't count on too many players going idle in this game, so it shouldn't be a problem for me to have a bunch of non-running threads. I've come to realize that synchronization is still required even when using a framework based on NIO, because such frameworks use several worker threads running at the same time to process packets received from clients. Context switching may prove to be expensive, but I'll give this solution a try. It's relatively easy to refactor my code so that I could use an NIO framework if I find there is a bottleneck.
I believe my question has been answered. I'll just wait a little bit more in order to receive even more insight from more people. Thank you for all your answers!
EDIT: I've finally chosen my course of action. I actually was indecisive and decided to use JBoss Netty and allow the user to switch between either oio or nio using the classes
org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
org.jboss.netty.channel.socket.oio.OioServerSocketChannelFactory;
Quite nice that Netty supports both!
You might get some inspiration from the former Sun-sponsored project, now named Red Dwarf.
The old website at http://www.reddwarfserver.org/ is down.
Github to the rescue: https://github.com/reddwarf-nextgen/reddwarf
If you make client-side network calls, most likely you just need plain socket I/O.
If you are creating server-side technology, then NIO helps you separate the network I/O part from the fulfillment/processing work.
I/O threads are configured as 1 or 2 for network I/O. Worker threads handle the actual processing (ranging from 1 to N, based on machine capabilities).
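As a hedged sketch of that split using the Netty 4 bootstrap API (note the io.netty package names differ from the older org.jboss.netty classes mentioned above; the thread counts and port are illustrative):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyServerSketch {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1); // accepts connections
        EventLoopGroup ioGroup = new NioEventLoopGroup(2);   // network I/O
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(bossGroup, ioGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // add codec/handlers here; hand heavy processing off to a
                            // separate worker pool so the I/O threads stay free
                        }
                    });
            bootstrap.bind(9000).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            ioGroup.shutdownGracefully();
        }
    }
}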

Can the thread per request model be faster than non-blocking I/O?

I remember 2 or 3 years ago reading a couple of articles where people claimed that modern threading libraries were getting so good that thread-per-request servers would not only be easier to write than non-blocking servers but would be faster, too. I believe this was even demonstrated in Java with a JVM that mapped Java threads to pthreads (i.e. the Java NIO overhead was more than the context-switching overhead).
But now I see all the "cutting edge" servers use asynchronous libraries (Java nio, epoll, even node.js). Does this mean that async won?
Not in my opinion. If both models are well implemented (this is a BIG requirement) I think that the concept of NIO should prevail.
At the heart of a computer are cores. No matter what you do, you cannot parallelize your application more than you have cores. i.e. If you have a 4 core machine, you can ONLY do 4 things at a time (I'm glossing over some details here, but that suffices for this argument).
Expanding on that thought, if you ever have more threads than cores, you have waste. That waste takes two forms. First is the overhead of the extra threads themselves. Second is the time spent switching between threads. Both are probably minor, but they are there.
Ideally, you have a single thread per core, and each of those threads is running at 100% processing speed on its core. Task switching wouldn't occur in the ideal case. Of course there is the OS, but if you take a 16-core machine and leave 2-3 threads for the OS, then the remaining 13-14 go towards your app. Those threads can switch what they are doing within your app, such as when they are blocked by I/O requirements, but they don't have to pay that cost at the OS level. Write it right into your app.
An excellent example of this scaling is seen in SEDA http://www.eecs.harvard.edu/~mdw/proj/seda/ . It showed much better scaling under load than a standard thread-per-request model.
My personal experience is with Netty. I had a simple app. I implemented it well in both Tomcat and Netty. Then I load-tested it with hundreds of concurrent requests (upwards of 800, I believe). Eventually Tomcat slowed to a crawl and exhibited extremely bursty/laggy behavior, whereas the Netty implementation simply increased response time but maintained incredibly high overall throughput.
Please note, this hinges on solid implementation. NIO is still getting better with time. We are learning how to tune our server OSes to work better with it, as well as how to implement our JVMs to better leverage the OS functionality. I don't think a winner can be declared yet, but I believe NIO will be the eventual winner, and it's doing quite well already.
It is faster as long as there is enough memory.
When there are too many connections, most of which are idle, NIO can save threads and therefore memory, and the system can handle a lot more users than the thread-per-connection model.
CPU is not a direct factor here. With NIO, you effectively need to implement a threading model yourself, which is unlikely to be faster than JVM's threads.
In either choice, memory is the ultimate bottleneck. When load increases and the memory used approaches the max, GC will be very busy, and the system often demonstrates the symptom of 100% CPU.
Some time ago I found a rather interesting presentation providing some insight into "why the old thread-per-client model is better". It even includes measurements. However, I'm still thinking it through. In my opinion, the best answer to this question is "it depends", because most (if not all) engineering decisions are trade-offs.
Like that presentation said - there's speed and there's scalability.
One scenario where thread-per-request will almost certainly be faster than any async solution is when you have a relatively small number of clients (e.g. <100), but each client is very high volume, e.g. a realtime app where no more than 100 clients are sending/generating 500 messages a second each. The thread-per-request model will certainly be more efficient than any async event-loop solution. Async scales better when there are many requests/clients because it doesn't waste cycles waiting on many client connections, but when you have few high-volume clients with little waiting, it's less efficient.
From what I've seen, the authors of Node and Netty both recognize that these frameworks are primarily meant to address scalability situations with high volumes/many connections, rather than being faster for a smaller number of high-volume clients.
