How to avoid tight thread loop? - java

I have a Java program with multiple sockets that occasionally have data that needs to be read and processed, but there are indeterminate stretches of time during which there is no data to read. I need a good way to constantly check whether there is data in the sockets and process it. Assigning one thread per socket is not a good idea, since there could be too many sockets, which would use too much memory.
Currently, I have a couple of threads, each assigned to service its own list of sockets. If there is nothing to read in any of the sockets, the thread sleeps one second and then loops. If there is something to read in any of the sockets, it loops without waiting and iterates through the sockets again.
The reason I do this is that I don't want to use too many resources when there is nothing to read, and the one-second delay is not a problem. The only downside is that there is no flexibility for sockets to jump threads, so the worst-case scenario is that a single thread is overloaded with work while the other threads are doing nothing.
Another idea I've had: create a thread pool, queue up all the sockets to be serviced, and re-add them once they have been serviced. But then there is no good way to know when none of the sockets need servicing, so that the threads could take a break and free up CPU cycles.
Is there a good way to assign tasks to threads without overloading the computer when there is nothing to do?
Ideally, an event would be triggered each time data becomes available on a socket, but as far as I know there is no way to do this, so I must poll the sockets.
To reiterate, I do not want a one to one relationship between socket and thread.

there could be too many sockets, which would use too much memory.
You can handle 1,000 to 10,000 sockets this way. Memory is much cheaper than it was when NIO was introduced 12 years ago, and threads are more efficient and scalable than they used to be.
I have a couple of threads, each assigned to service its own list of sockets. If there is nothing to read in any of the sockets, sleep one second, then loop.
I use a pause which busy-waits for a short period, then yields, and finally sleeps for an escalating period of time.
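A minimal sketch of what such an escalating pause might look like (an illustrative helper only, not the author's actual code):

public class Pauser {
    private int idleCount = 0;

    // Call this each time a polling pass found no data.
    public void pause() throws InterruptedException {
        idleCount++;
        if (idleCount < 100) {
            Thread.onSpinWait();        // busy-wait briefly (Java 9+; use a no-op loop on older JVMs)
        } else if (idleCount < 200) {
            Thread.yield();             // let other runnable threads have a turn
        } else {
            // escalate the sleep from 1 ms up to a 20 ms cap
            long sleepMs = Math.min(20, (idleCount - 200) / 10 + 1);
            Thread.sleep(sleepMs);
        }
    }

    // Call this as soon as any socket had data, so the next idle period starts cheap again.
    public void reset() {
        idleCount = 0;
    }
}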
You can use Selectors, but these are not simple to use correctly. In this situation I would use a library like Netty, or at the very least read the code it uses.
The only downside is that there is no flexibility for sockets to jump threads, so the worst-case scenario is that a single thread is overloaded with work while the other threads are doing nothing.
This is where using a thread per socket is better.
I must poll the sockets.
You can use Selectors, but these are single-threaded, and switching sockets between selectors is not simple.
I would reconsider using more threads for simplicity.
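For reference, a minimal single-threaded Selector loop looks roughly like this (a sketch only; writes and error handling are omitted, and the port number is arbitrary):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SelectorLoop {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);

        while (true) {
            selector.select();                       // blocks until a channel is ready; no busy loop
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = client.read(buffer);
                    if (n == -1) {                   // client closed the connection
                        key.cancel();
                        client.close();
                    } else {
                        buffer.flip();
                        // ... process buffer contents here ...
                    }
                }
            }
        }
    }
}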

Related

Reasonable thread count in game networking

I'm just curious as to what a reasonable number of threads is for a simple 2D MMO in Java. Is it reasonable to have two threads per connection, one for the input stream and one for the output stream? The reason I ask is that I use a blocking method on the input stream, and a workaround seems unnecessarily complex if I were to try to get around it without adding threads.
This is mostly for my own edification; I don't expect to have 5 million people playing it ever, or even 5, but I'm wondering what a good scalable solution is, and if this is reasonable for a small server (<30 connections).
Considering that one thread (input) is going to be blocking most of the time anyway, two threads per connection is more than reasonable.
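As an illustration, a per-connection reader thread plus a writer thread fed by a queue could look like this (a sketch under the blocking-I/O assumption; the class and method names are made up for the example):

import java.io.*;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Connection {
    private final Socket socket;
    private final BlockingQueue<String> outbound = new LinkedBlockingQueue<>();

    public Connection(Socket socket) {
        this.socket = socket;
    }

    public void start() {
        // Reader thread: blocks on readLine() until the client sends something.
        new Thread(() -> {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    handle(line);
                }
            } catch (IOException ignored) { }
        }, "reader").start();

        // Writer thread: blocks on the queue until there is something to send.
        new Thread(() -> {
            try (PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                while (true) {
                    out.println(outbound.take());
                }
            } catch (IOException | InterruptedException ignored) { }
        }, "writer").start();
    }

    public void send(String message) {
        outbound.add(message);
    }

    private void handle(String message) {
        // ... game-specific processing goes here ...
    }
}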

Socket vs SocketChannel

I am trying to understand SocketChannels, and NIO in general. I know how to work with regular sockets and how to make a simple thread-per-client server (using the regular blocking sockets).
So my questions:
What is a SocketChannel?
What extra do I get when working with a SocketChannel instead of a Socket?
What is the relationship between a channel and a buffer?
What is a selector?
The first sentence in the documentation is "A selectable channel for stream-oriented connecting sockets." What does that mean?
I have also read this documentation, but somehow I am not getting it...
A Socket is a blocking input/output device. It makes the thread that is using it block on reads, and potentially also block on writes if the underlying buffer is full. Therefore, you have to create a bunch of different threads if your server has a bunch of open Sockets.
A SocketChannel is a non-blocking way to read from sockets, so that you can have one thread communicate with a bunch of open connections at once. This works by adding a bunch of SocketChannels to a Selector, then looping on the selector's select() method, which can notify you if sockets have been accepted, received data, or closed. This allows you to communicate with multiple clients in one thread and not have the overhead of multiple threads and synchronization.
Buffers are another feature of NIO that allows you to access the underlying data from reads and writes to avoid the overhead of copying data into new arrays.
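To make the channel/buffer relationship concrete, a typical read cycle looks like this (a sketch; the channel is assumed to be a connected SocketChannel):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class ReadCycle {
    // Reads whatever is currently available on the channel and consumes it.
    static void readOnce(SocketChannel channel, ByteBuffer buf) throws IOException {
        int n = channel.read(buf);       // fill the buffer from the socket; may read 0 bytes
        if (n > 0) {
            buf.flip();                  // switch the buffer from filling to draining
            while (buf.hasRemaining()) {
                byte b = buf.get();      // consume the bytes that were read
                // ... process b ...
            }
            buf.compact();               // make the buffer ready to be filled again
        }
    }
}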
By now NIO is so old that few remember what Java was like before 1.4, which is what you need to know in order to understand the "why" of NIO.
In a nutshell, up to Java 1.3, all I/O was of the blocking type. And worse, there was no analog of the select() system call to multiplex I/O. As a result, a server implemented in Java had no choice but to employ a "one-thread-per-connection" service strategy.
The basic point of NIO, introduced in Java 1.4, was to make the functionality of traditional UNIX-style multiplexed non-blocking I/O available in Java. If you understand how to program with select() or poll() to detect I/O readiness on a set of file descriptors (sockets, usually), then you will find the services you need for that in NIO: you will use SocketChannels for non-blocking I/O endpoints, and Selectors for fdsets or pollfd arrays. Servers with threadpools, or with threads handling more than one connection each, now become possible. That's the "extra".
A Buffer is the kind of byte array you need for non-blocking socket I/O, especially on the output/write side. If only part of a buffer can be written immediately, with blocking I/O your thread will simply block until the entirety can be written. With non-blocking I/O, your thread gets a return value of how much was written, leaving it up to you to handle the left-over for the next round. A Buffer takes care of such mechanical details by explicitly implementing a producer/consumer pattern for filling and draining, it being understood that your threads and the OS kernel will not be in sync.
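A minimal sketch of that fill/drain pattern on the write side (illustrative only; assumes a non-blocking SocketChannel):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class WriteCycle {
    // Tries to drain the buffer; returns true only if everything was written.
    static boolean writeOnce(SocketChannel channel, ByteBuffer buf) throws IOException {
        channel.write(buf);              // writes as much as the socket will currently accept
        if (buf.hasRemaining()) {
            return false;                // left-over bytes: retry on the next round
        }
        buf.clear();                     // fully drained; ready to be filled again
        return true;
    }
}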
Even though you are using SocketChannels, it's necessary to employ a thread pool to process the channels.
Think about the scenario where you use only one thread, responsible both for polling select() and for processing the SocketChannels selected from the Selector: if one channel takes 1 second to process and there are 10 channels in the queue, you have to wait 10 seconds before the next poll, which is intolerable. So there should be a thread pool for channel processing.
In this sense, I don't see a tremendous difference from the thread-per-client blocking-sockets pattern. The major difference is that in the NIO pattern the task is smaller; it's more like thread-per-task, and tasks could be read, write, business processing, etc. For more detail, take a look at Netty's implementation of NioServerSocketChannelFactory, which uses one Boss thread to accept connections and dispatches tasks to a pool of Worker threads for processing.
If you really fancy a single thread, the bottom line is that you should at least have pooled I/O threads, because I/O operations are often orders of magnitude slower than instruction-processing cycles, and you would not want that one precious thread to be blocked by I/O. This is exactly what NodeJS does: one thread accepts connections, and all I/O is asynchronous and processed in parallel by a back-end pool of I/O threads.
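A rough sketch of that split, where one selector thread does the reads and a worker pool does the processing (names like handleRequest are placeholders, not from the answer):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SelectorWithWorkers {
    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    // Called on the single selector thread for every readable channel.
    void onReadable(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        ByteBuffer buf = ByteBuffer.allocate(4096);
        int n = channel.read(buf);              // the read itself is fast, so do it here
        if (n == -1) {
            key.cancel();
            channel.close();
            return;
        }
        buf.flip();
        byte[] request = new byte[buf.remaining()];
        buf.get(request);
        // Hand the slow part (decoding, business processing) to the worker pool.
        workers.submit(() -> handleRequest(channel, request));
    }

    private void handleRequest(SocketChannel channel, byte[] request) {
        // ... placeholder for business processing and writing a response ...
    }
}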
is the old style thread-per-client dead?
I don't think so. NIO programming is complex, and multithreading is not inherently evil. Keep in mind that modern operating systems and CPUs have become better and better at multitasking, so the overhead of multithreading becomes smaller over time.

Making sense of Java's non blocking I/O

Let's say I'm running a server and set the client SocketChannels that I accept to non-blocking, reading them through a thread pool's threads. But what does that buy me? I still need to read the full client request before processing it, which means I need to make multiple read calls.
I've also come across articles saying that threads should block naturally so that other threads get a chance to run. However, this won't happen in the aforementioned case, as these threads will not block.
So how would non-blocking IO be efficient? How do I make sense of all this? Is there some multi-core CPU angle to it, perhaps? But how?
EDIT: found a pretty good link that explains it programmatically:
http://rox-xmlrpc.sourceforge.net/niotut/
The problem with using blocking IO starts when you want to scale your server program. You'd have to hold a blocking thread per request, and many, many requests will introduce many, many threads. That can make life hard for a server application that serves thousands or more of IO-involving concurrent requests.
With NIO's non-blocking IO, this request-to-thread coupling is unnecessary. You can use any thread to complete the IO operation of any request. This lets you use a pooling pattern for your IO-handling threads and significantly decrease the thread-creation and management overhead. On the other hand, you'd have to work harder to maintain data consistency, but that is the price of scalability.
Unless you want to use busy waiting (which sounds unlikely), if you want to go non-blocking you usually use a small number of threads (maybe only one) and a Selector.
If you are going to use blocking IO, that is when you dedicate one or two threads per connection.

Is this a good multi-threaded server design?

I have a server for a client-server game (ideally the basis for small MMO) and I am trying to determine the best way to organize everything. Here is an overview of what I have:
[server start]
load/create game state
start game loop on new thread
start listening for udp packets on new thread
while not closing
listen for new tcp connection
create new tcp client
start client's tcp listener on new thread
save game state
exit
[game loop]
sleep n milliseconds // Should I sleep here or not?
update game state
send relevant udp packet updates to client
every second
remove timed out clients
[listen for udp]
on receive, send to correct tcp client to process
[listen for tcp] (1 for each client)
manage tcp packets
Is this a fair design for managing the game state, tcp connections, and send/receive udp packets for state updates? Any comments or problems?
I am most interested in the best way to do the game loop. I know I will have issues if I have a large number of clients, because I am spawning a new thread for each new client.
That looks like a reasonable start to the design. It wouldn't be bad to go beyond 10 clients (per your comment) in terms of scaling the number of threads. As long as the threads are mostly waiting and not actually processing something you can easily have thousands of them before things start to break down. I recall hitting some limits with a similar design over 5 years ago, and it was around 7000 threads.
Looks like a good design. If I were you I would use an existing NIO framework like Netty.
Just google Java NIO frameworks and you'll find several frameworks with examples and documentation.
Update
Thanks, but I prefer to do it all on my own. This is only a side project of mine, and much of its purpose is to learn.
IMHO you'll learn more by using an existing framework and focusing on the game design than by doing everything yourself from scratch.
Next time you'll know how to design the game, and that makes it easier to design the IO framework too.
I'd say that, especially in Java, threads are more for convenience than for performance (there are only oh-so-many cores, and I guess you'd like to keep some control over which threads have priority).
You haven't said anything about the volume of message exchange or the number of clients, but you might want to consider the more natural approach of having a single thread handle the network IO, dispatching incoming requests in a tight loop and mapping the ACTUAL work (i.e. requests that you can't handle in the tight loop) to a thread pool. IIRC, in such a setup the limiting factor will be the maximum number of TCP connections you can wait for input on simultaneously in a single thread.
For increased fun, compare this with a solution in a language which has more lightweight threads, like Erlang.
Of course, as @WhiteFang34 said, you won't need such a solution until you do some serious simulation/use of your code. Related techniques (and frameworks like Netty, where you could find inspiration) might also be found in this question here.
There is a certain cost to thread creation, mostly the per-thread stack.

Java NIO Threading issue with SocketChannel.write()

Sometimes, while sending a large amount of data via SocketChannel.write(), the underlying TCP buffer gets filled up, and I have to continually re-try the write() until the data is all sent.
So, I might have something like this:
public void send(ByteBuffer bb, SocketChannel sc) throws IOException, InterruptedException {
    sc.write(bb);
    while (bb.remaining() > 0) {
        Thread.sleep(10);   // back off briefly, then retry the partial write
        sc.write(bb);
    }
}
The problem is that the occasional issue with a large ByteBuffer and an overflowing underlying TCP buffer means that this call to send() will block for an unexpected amount of time. In my project, there are hundreds of clients connected simultaneously, and one delay caused by one socket connection can bring the whole system to a crawl until this one delay with one SocketChannel is resolved. When a delay occurs, it can cause a chain reaction of slowing down in other areas of the project, and having low latency is important.
I need a solution that will take care of this TCP buffer overflow issue transparently and without causing everything to block when multiple calls to SocketChannel.write() are needed. I have considered putting send() into a separate class extending Thread so it runs as its own thread and does not block the calling code. However, I am concerned about the overhead necessary in creating a thread for EACH socket connection I am maintaining, especially when 99% of the time, SocketChannel.write() succeeds on the first try, meaning there's no need for the thread to be there. (In other words, putting send() in a separate thread is really only needed if the while() loop is used -- only in cases where there is a buffer issue, perhaps 1% of the time) If there is a buffer issue only 1% of the time, I don't need the overhead of a thread for the other 99% of calls to send().
I hope that makes sense... I could really use some suggestions. Thanks!
Prior to Java NIO, you had to use one Thread per socket to get good performance. This is a problem for all socket based applications, not just Java. Support for non-blocking IO was added to all operating systems to overcome this. The Java NIO implementation is based on Selectors.
See The definitive Java NIO book and this On Java article to get started. Note however, that this is a complex topic and it still brings some multithreading issues into your code. Google "non blocking NIO" for more information.
The more I read about Java NIO, the more it gives me the willies. Anyway, I think this article answers your problem...
http://weblogs.java.net/blog/2006/05/30/tricks-and-tips-nio-part-i-why-you-must-handle-opwrite
It sounds like this guy has a more elegant solution than the sleep loop.
Also I'm fast coming to the conclusion that using Java NIO by itself is too dangerous. Where I can, I think I'll probably use Apache MINA which provides a nice abstraction above Java NIO and its little 'surprises'.
You don't need the sleep(), as the write will either return immediately or block.
You could have an executor to which you pass the write if it doesn't complete the first time (see the sketch below).
Another option is to have a small pool of threads to perform the writes.
However, the best option for you may be to use a Selector (as has been suggested) so you know when a socket is ready to perform another write.
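A minimal sketch of the executor hand-off idea (illustrative only; a single-threaded writer executor keeps the queued writes in order):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncWriter {
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    public void send(ByteBuffer bb, SocketChannel sc) throws IOException {
        sc.write(bb);                       // fast path: usually completes on the first try
        if (bb.hasRemaining()) {
            // slow path (rare): hand the left-over bytes to the writer thread
            writer.submit(() -> {
                try {
                    while (bb.hasRemaining()) {
                        if (sc.write(bb) == 0) {
                            Thread.sleep(1);    // socket buffer still full; back off briefly
                        }
                    }
                } catch (IOException | InterruptedException e) {
                    // close/cleanup would go here
                }
            });
        }
    }
}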
For hundreds of connections, you probably don't need to bother with NIO. Good old fashioned blocking sockets and threads will do you.
With NIO, you can register interest in OP_WRITE for the selection key, and you will get notified when there is room to write more data.
There are a few things you need to do, assuming you already have a loop using Selector.select() to determine which sockets are ready for I/O.
Set the socket channel to non-blocking after you've created it, sc.configureBlocking(false);
Write (possibly parts of) the buffer and check if there's anything left. The buffer itself takes care of current position and how much is left.
Something like:
sc.write(bb);
if (bb.remaining() == 0) {
    // we're done with this buffer; remove it from the select set if there's nothing else to send
} else {
    // do other stuff / return to the select loop and wait for the next chance to write
}
Get rid of your while loop that sleeps
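Putting those steps together, OP_WRITE handling inside a select loop might look like this (a sketch; it assumes the pending buffer is stored as the key's attachment):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

public class OpWriteHandling {
    // Called when we have data to send on a non-blocking channel.
    static void queueWrite(SelectionKey key, ByteBuffer bb) throws IOException {
        SocketChannel sc = (SocketChannel) key.channel();
        sc.write(bb);                                   // try to write immediately
        if (bb.hasRemaining()) {
            key.attach(bb);                             // remember the left-over bytes
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        }
    }

    // Called from the select loop when the key reports isWritable().
    static void onWritable(SelectionKey key) throws IOException {
        SocketChannel sc = (SocketChannel) key.channel();
        ByteBuffer bb = (ByteBuffer) key.attachment();
        sc.write(bb);
        if (!bb.hasRemaining()) {
            key.attach(null);                           // drained: stop watching OP_WRITE
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}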
I am facing some of the same issues right now:
- If you have a small number of connections but large transfers, I would just create a thread pool and let the writes block in the writer threads.
- If you have a lot of connections, then you could use full Java NIO, register OP_WRITE on your accept()ed sockets, and then wait for the selector to signal them.
The O'Reilly Java NIO book has all this.
Also:
http://www.exampledepot.com/egs/java.nio/NbServer.html?l=rel
Some research online has led me to believe NIO is overkill unless you have a lot of incoming connections. Otherwise, if it's just a few large transfers, then just use a write thread; it will probably respond more quickly. A number of people have issues with NIO not responding as quickly as they want. Since your write thread is on its own, blocking won't hurt you.
