Make InputStream non-blocking - java

Currently I have a server that listens for connections (it is a basic highscore server for a mobile game I have), loops through the connections every 1000ms and listens for any incoming data.
public void readData(Connection c) throws IOException {
    PacketBuffer readBuffer = new PacketBuffer(Server.PACKET_CAPACITY);
    int packetSize = c.getIn().read();
    c.getIn().mark(packetSize);
    byte[] buffer = new byte[packetSize];
    c.getIn().read(buffer, 0, buffer.length);
    readBuffer.setBuffer(buffer);
    packetHandler.addProcess(c, readBuffer);
}
I use my own PacketBuffer and I need to find a way so that c.getIn().read() (that is my connection's InputStream) doesn't block. Currently the socket is set to a 500ms timeout, and my server runs fine that way. My problem is that if someone writes their own program to connect and hack their own highscores, or to DDoS the server, it will become clogged with a bunch of useless connections that each block the thread for 500ms when the connection isn't writing.

You could try something like this. Every time readData gets called it will check to see if bytes are available to read. I used a while loop here because you want it to process all the data it can before the thread sleeps again. This will ensure messages don't get backed up, as they would if it read only one packet every x milliseconds.
public void readData(Connection c) throws IOException {
    while (c.getIn().available() > 0) {
        int packetSize = c.getIn().read();
        c.getIn().mark(packetSize);
        byte[] buffer = new byte[packetSize];
        c.getIn().read(buffer, 0, buffer.length);
        PacketBuffer readBuffer = new PacketBuffer(Server.PACKET_CAPACITY);
        readBuffer.setBuffer(buffer);
        packetHandler.addProcess(c, readBuffer);
    }
}
I don't know why you are using the mark method; it looks problematic to me.
You also really need to use a readFully() style method (see DataInputStream) which won't return until it's definitely read the full byte array. Regular reads can always "return short", even when the sender has sent the full data block (due to network packet sizing etc).
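For example, here is a sketch of that read loop using DataInputStream.readFully(), reusing the Connection, PacketBuffer, Server, and packetHandler names from the question (DataInputStream is only a wrapper, so re-wrapping the stream on each call loses nothing):

public void readData(Connection c) throws IOException {
    DataInputStream in = new DataInputStream(c.getIn());
    while (in.available() > 0) {
        int packetSize = in.readUnsignedByte(); // one-byte length prefix, as in the original
        byte[] buffer = new byte[packetSize];
        in.readFully(buffer);                   // doesn't return until all packetSize bytes arrive
        PacketBuffer readBuffer = new PacketBuffer(Server.PACKET_CAPACITY);
        readBuffer.setBuffer(buffer);
        packetHandler.addProcess(c, readBuffer);
    }
}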

There are two classic ways to implement servers in Java.
The first and oldest way is to use a read/write thread pair for each connected client. This is okay for smaller servers without a lot of connected clients, as each client requires two threads to manage it. It doesn't scale very well to a lot of concurrent clients.
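A minimal sketch of that pattern (the port number is arbitrary, and readLoop/writeLoop are placeholders for the per-client read and write threads):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerClientServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(8800);     // arbitrary port
        while (true) {
            Socket client = server.accept();              // blocks until a client connects
            new Thread(() -> readLoop(client)).start();   // dedicated read thread
            new Thread(() -> writeLoop(client)).start();  // dedicated write thread
        }
    }

    static void readLoop(Socket client) {
        // placeholder: read and dispatch requests from this one client
    }

    static void writeLoop(Socket client) {
        // placeholder: take outgoing messages from a queue and write them
    }
}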
The second and newer way is to use java.nio.ServerSocketChannel, java.nio.SocketChannel, and java.nio.Selector. These three classes allow you to manage all IO operations for every client you have connected in a single thread. Here is an example of how to implement a very basic server using the java.nio package.
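A minimal selector-based echo server, as a sketch (arbitrary port; a real server would need per-connection buffers and error handling):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8800));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                 // a new client is connecting
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {            // a client has sent data
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n == -1) {
                        client.close();                   // client disconnected
                    } else {
                        buf.flip();
                        client.write(buf);                // echo it back
                    }
                }
            }
        }
    }
}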
A much better way to implement a server would be to use a third-party framework. A great library that I have used in the past is Netty. It handles all the nitty-gritty details of sockets and provides a fairly clean and simple api that scales well.

You can't. InputStreams are blocking. Period. It's hard to see how non-blocking mode would actually solve the problem you mention. I suggest a redesign is in order.

Related

How many threads is good for a datagram receiver?

I created a send-receive datagram system for a game I have created in Java (and LWJGL).
However, these datagrams often got dropped. That was because the server was waiting for various IO operations and other processing to finish in the main loop while new datagrams were being sent to it (which it was obviously not listening for).
To combat this, I have kept my main thread with the while (true) loop that catches datagrams, but instead of doing the processing in the main thread, I branch out into different threads.
Like this:
ArrayList<RecieveThread> threads = new ArrayList<RecieveThread>();

public void run(){
    while (true){
        //System.out.println("Waiting!");
        byte[] data = new byte[1024];
        DatagramPacket packet = new DatagramPacket(data, data.length);
        try {
            socket.receive(packet);
        } catch (IOException e) {
            e.printStackTrace();
        }
        //System.out.println("Recieved!");
        String str = new String(packet.getData());
        str = str.trim();
        if (threads.size() < 50){
            RecieveThread thr = new RecieveThread();
            thr.packet = packet;
            thr.str = str;
            threads.add(thr);
            thr.start();
        } else {
            boolean taskProcessed = false;
            for (RecieveThread thr : threads){
                if (!thr.nextTask){
                    thr.packet = packet;
                    thr.str = str;
                    thr.nextTask = true;
                    taskProcessed = true;
                    break;
                }
            }
            if (!taskProcessed){
                System.out.println("[Warning] All threads full! Defaulting to main thread!");
                process(str, packet);
            }
        }
    }
}
That creates a new thread for every incoming datagram until it hits 50 threads, at which point it hands the packet to one of the existing threads that is waiting for its next task. If all threads are busy processing, it defaults to the main thread.
So my question is this: how many threads is a good amount? I don't want to overload anybody's system (the same code will also be run on players' clients), but I also don't want to increase system packet loss.
Also, are separate threads even a good idea? Does anybody have a better way of doing this?
Edit: Here is my RecieveThread class (class is 777 lines long):
String str;
DatagramPacket packet;
boolean nextTask = true;

public void run(){
    while (true){
        ////System.out.println("CLIENT: " + str);
        //BeforeGame
        while (!nextTask){
            //Nothing
        }
        <Insert processing code here that you neither know about, nor care to know about, nor is relevant to the issue. Still, I pastebinned it below>
    }
}
Full receiving code
First and foremost, any system that uses datagrams (e.g. UDP) for communication has to be able to cope with dropped requests. They will happen. The best you can do is reduce the typical drop rate to something that is acceptable. But you also need to recognize that if your application can't cope with lost datagrams, then it should not be using datagrams. Use regular sockets instead.
Now to the question of how many threads to use. The answer is "it depends".
On the one hand, if you don't have enough threads, there could be unused hardware capacity (cores) that could be used at peak times ... but isn't.
If you have too many threads running (or runnable) at a time, they will be competing for resources at various levels:
competition for CPU
competition for memory bandwidth
contention on locks and shared memory.
All of these things (and associated 2nd order effects) can reduce throughput ... relative to the optimal ... if you have too many threads.
If your request processing involves talking to databases or servers on other machines, then you need enough threads to allow something else to happen while waiting for responses.
As a rule of thumb, if your requests are independent (minimal contention on shared data) and exclusively in-memory (no databases or external service requests) then one worker thread per core is a good place to start. But you need to be prepared to tune (and maybe re-tune) this.
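As a concrete starting point, here is a sketch applying that rule of thumb to the receive loop from the question, reusing its process(str, packet) method (the pool sizing is an assumption to tune, not a prescription):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Before the receive loop: size the worker pool to the core count, then tune.
int cores = Runtime.getRuntime().availableProcessors();
ExecutorService workers = Executors.newFixedThreadPool(cores);

// Inside the receive loop, instead of starting a thread per datagram:
final String message = str;            // local copies so the lambda can capture them
final DatagramPacket p = packet;
workers.submit(() -> process(message, p));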
Finally, there is the problem of dealing with overload. On the one hand, if the overload situation is transient, then queuing is a reasonable strategy ... provided that the queue doesn't get too deep. On the other hand, if you anticipate overload to be common, then the best strategy is to drop requests early.
However, there is a secondary problem. A dropped request will probably entail the client noticing that it hasn't gotten a reply in a given time, and resending the request. And that can lead to worse problems; i.e. the client resending a request before the server has actually dropped it ... which can lead to the same request being processed multiple times, and a catastrophic drop in effective throughput.
Note that the same thing can happen if you have too many threads and they get bogged down due to resource contention.
Probably just one thread, assuming you have one DatagramSocket. You could always spawn a processData thread from the thread that reads the DatagramSocket. Like people said in the comments it's up to you, but usually one is good.
Edit:
Also look into mutexes if you do this.
This seems like the same kind of question as the debate over whether to use NginX or Apache. Have you ever read about NginX and the C10K problem? If not, read about it here. There is no "correct" answer for questions like this one. As the others have highlighted, this question is about the needs (aspects) of your application's environment. Remember that we have many web frameworks: every framework solves the same problem, which is serving HTML documents, but each uses a different way to do the task.

continuous data through a socket

Let's say I had a socket that needed to send continuous data at random (but small) intervals, say about 20 objects a second, for any span of time.
I am foreseeing possible issues that I am not sure how to handle.
1) If I send one object at a time as in example A, may they still arrive in bunches? Would it be better to do as in example B?
2) Would the thread receiving the data possibly try to read the data before an entire object was sent, thus splitting the data and making another issue I will have to look out for?
Pseudocode for sending the data might look like this:
EXAMPLE A

void run()
{
    get socket outputstream
    while (isBroadcasting)
    {
        if (myQueue.isEmpty() == false)
            send first object in queue through outputstream
        Thread.sleep(25);
    }
}
EXAMPLE B

void run()
{
    get socket outputstream
    while (isBroadcasting)
    {
        while (myQueue.isEmpty() == false)
            send all objects in queue through outputstream
        Thread.sleep(25);
    }
}
and finally read it like this:

void run()
{
    get socket inputstream
    while (isReceiving)
    {
        get object(s) from inputstream and publish them to main thread
        Thread.sleep(25);
    }
}
3) Would this be a viable solution? Is it ok to keep the streams open at both ends, looping and writing/reading data until finished?
1) If I send one object at a time as in example A, may they still arrive in bunches? Would it be better to do as in example B?
Depending on the type of network you are sending them over, they may definitely still arrive in bunches at the receiving end. In addition, if you are using TCP, the application of Nagle's algorithm may introduce more bunching and apparent delay.
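If that added latency matters for your use case, Nagle's algorithm can be disabled per socket (a sketch; socket here is your connected java.net.Socket):

socket.setTcpNoDelay(true); // disable Nagle's algorithm so small writes go out immediately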
2) Would the thread receiving the data possibly try to read the data before an entire object was sent, thus splitting the data and making another issue I will have to look out for?
This is very dependent on your implementation details, and is impossible to answer with the pseudocode provided.
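That said, a common way to rule out split reads is to frame each object with a length prefix and read with readFully(). A sketch, where serialize(obj) is a hypothetical stand-in for whatever encoding you use:

// Sender: length-prefix each serialized object
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
byte[] payload = serialize(obj);  // hypothetical: your own encoding step
out.writeInt(payload.length);
out.write(payload);
out.flush();

// Receiver: read back exactly one framed object
DataInputStream in = new DataInputStream(socket.getInputStream());
byte[] frame = new byte[in.readInt()];
in.readFully(frame);              // blocks until the whole object has arrived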
Is it ok to keep the streams open at both ends, looping and writing/reading data until finished?
Yes, it is perfectly reasonable to use a TCP connection like this for a long period of time. However, your application must be willing to reconnect if the connection is lost for some reason.

A non-blocking server with java.io

Everybody knows that Java IO is blocking, and Java NIO is non-blocking. In IO you will have to use the thread-per-client pattern; in NIO you can use one thread for all clients.
Now my question follows: is it possible to make a non-blocking design using only the Java IO API (not NIO)?
I was thinking about a pattern like this (obviously very simplified):
List<Socket> li;

for (Socket s : li) {
    InputStream in = s.getInputStream();
    byte[] data = new byte[in.available()];
    in.read(data);
    // processData(data); (decoding packets, encoding outgoing packets)
}
Also note that the client will always be ready for reading data.
What are your opinions on this? Will this be suitable for a server that should hold at least a few hundred clients without major performance issues?
It is possible but pointless. There is no select() in java.net, so you are reduced to polling the sockets, which implies sleeping between polls, and you can't tell how long to sleep for, so you will sleep for longer than necessary, so you will waste time, add latency, etc; or else you must sleep for stupidly short intervals and so consume pointless CPU.
For a mere few hundred clients there is no possible objection to the conventional use of a thread per connection.
I don't know what 'the client will always be ready for reading data' means. You can't tell that from the server, and if it isn't ready, writes to it can block, which will upset your applecart completely.

Why should I use NIO for TCP multiplayer gaming instead of simple sockets (IO) - or: where is the block?

I'm trying to create a simple multiplayer game for Android devices. I have been thinking about the netcode and read now a lot of pages about Sockets. The Android application will only be a client and connect only to one server.
Almost everywhere (here too) you get the recommendation to use NIO or a framework which uses NIO, because of the "blocking".
I'm trying to understand what the problem of a simple socket implementation is, so I created a simple test to try it out:
My main application:
[...]
Socket clientSocket = new Socket( "127.0.0.1", 2593 );
new Thread(new PacketReader(clientSocket)).start();
PrintStream os = new PrintStream( clientSocket.getOutputStream() );
os.println( "kakapipipopo" );
[...]
The PacketReader Thread:
class PacketReader implements Runnable
{
    Socket m_Socket;
    BufferedReader m_Reader;

    PacketReader(Socket socket) throws IOException
    {
        m_Socket = socket;
        m_Reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    public void run()
    {
        char[] buffer = new char[200];
        int count = 0;
        while (true)
        {
            try {
                count = m_Reader.read(buffer, 0, 200);
            } catch (IOException e) {
                e.printStackTrace();
                return;
            }
            if (count == -1) return; // stream closed
            String message = new String(buffer, 0, count);
            Gdx.app.log("keks", message);
        }
    }
}
I couldn't reproduce the blocking problems I was supposed to get. I thought the read() function would block my application so I couldn't do anything - but everything worked just fine.
I have been thinking: What if I just create an input and an output buffer in my application and create two threads which will write and read to the socket from my two buffers? Would this work?
If yes - why does everyone recommend NIO? Somewhere in the normal IO way a block must happen, but I can't find it.
Are there maybe any other benefits of using NIO for Android multiplayer gaming? I thought that NIO seems more complex, and therefore maybe less suited to a mobile device, but maybe the simple socket way is worse for a mobile device.
I would be very happy if someone could tell me where the problem happens. I'm not scared of NIO, but at least I would like to find out why I'm using it :D
Greetings
-Thomas
The blocking is that read() will block the current thread until it can read data from the socket's input stream. Thus, you need a thread dedicated to that single TCP connection.
What if you have more than 10k client devices connected to your server? You need at least 10k threads to handle all the client devices (assuming each device maintains a single TCP connection) no matter whether they are active or not. You have too much overhead from context switches and other multi-threading costs even if only 100 of them are active.
NIO uses a selector model to handle those clients, meaning you don't need a dedicated thread for each TCP connection to receive data. You just select the active connections (those which have data already received) and process those. You can control how many threads should be maintained on the server side.
EDIT
This answer does not exactly address what the OP asked. For the client side it's fine, because the client is going to connect to just one server. It does, though, give some generic idea about blocking and non-blocking IO.
I know this answer is coming 3 years later, but it is something which might help someone in the future.
In a Blocking Socket model, if data is not available for reading or if the server is not ready for writing, then the network thread will wait on a request to read from or write to a socket until it either gets or sends the data or times out. In other words, the program may halt at that point for quite some time if it can't proceed. To cancel this out we can create a thread per connection which can handle requests from each client concurrently. If this approach is chosen, the scalability of the application can suffer, because in a scenario where thousands of clients are connected, having thousands of threads can eat up all the memory of the system; threads are expensive to create and can affect the performance of the application.
In a Non-Blocking Socket model, the request to read or write on a socket returns immediately whether or not it was successful, in other words, asynchronously. This keeps the network thread busy. It is then our task to decide whether to try again or consider the read/write operation complete. Having this creates an event driven approach towards communication where we can create threads when needed and which leads to a more scalable system.

Best way to block on a socket for data

What is the efficient way of blocking on a socket for data after opening it? The method I used is to call read on the input stream (this is a blocking call that waits till some data is written to the socket).
//Socket creation
SocketForCommunication = new Socket();
InetSocketAddress sAddr = new InetSocketAddress("hostName", 8800);
SocketForCommunication.connect(sAddr, 10000);
is = new DataInputStream(SocketForCommunication.getInputStream());
os = new DataOutputStream(SocketForCommunication.getOutputStream());

//Waiting on socket using read method for data
while (true)
{
    int data = is.read();
    if (data == HEADER_START)
    {
        processPackage(is);
    }
}
The problem here is that read can time out. Is there a way to register a callback that gets called when data is available to read on the socket?
The socket will time out by default, but you can change this if you really want to. See the Socket.setSoTimeout() call (a timeout of zero means "indefinite").
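Applied to the connection code from the question, that would look like this (a sketch):

SocketForCommunication.connect(sAddr, 10000); // 10s connect timeout, as in the question
SocketForCommunication.setSoTimeout(0);       // 0 = no read timeout; read() blocks until data arrives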
N.B. Even if you specify a zero timeout, your O/S may or may not actually let you keep a socket open indefinitely. For example, idle sockets may get closed after a certain amount of time. In some environments, e.g. shared web hosting, it's not uncommon for a housekeeping routine to run (say) once a day and shut down idle sockets.
And of course, stuff happens on networks. Either way, you shouldn't rely on the socket staying open indefinitely...
You want to use the java.nio (non blocking IO) system. While its not callback driven, it allows you much more flexibility in handling IO.
http://rox-xmlrpc.sourceforge.net/niotut/index.html
