Best way to block on a socket for data - java

What is an efficient way of blocking on a socket for data after opening it? The method I used is to call read on the input stream (a blocking call that waits until some data is written to the socket).
// Socket creation
SocketForCommunication = new Socket();
InetSocketAddress sAddr = new InetSocketAddress("hostName", 8800);
SocketForCommunication.connect(sAddr, 10000);
is = new DataInputStream(SocketForCommunication.getInputStream());
os = new DataOutputStream(SocketForCommunication.getOutputStream());

// Waiting on the socket for data using the read method
while (true)
{
    int data = is.read();
    if (data == HEADER_START)
    {
        processPackage(is);
    }
}
The problem here is that read can time out. Is there a way to register a callback that gets called when data is available to read on the socket?

The socket will time out by default, but you can change this if you really want to. See the Socket.setSoTimeout() call (a timeout of zero means "indefinite").
N.B. Even if you specify a zero timeout, your OS may or may not actually let you keep a socket open indefinitely. For example, idle sockets may get closed after a certain amount of time. In some environments, e.g. shared web hosting, it's not uncommon for a housekeeping routine to run (say) once a day and shut down idle sockets.
And of course, stuff happens on networks. Either way, you shouldn't rely on the socket staying open indefinitely...
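As a concrete illustration, here is a minimal sketch of a blocking read with setSoTimeout(0). Everything runs over loopback, and the header value 0x7F and the throwaway server thread are invented for the demo; they are not part of the original question's protocol:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingReadDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {   // ephemeral port
            // Stand-in peer: accepts one connection and sends a single header byte.
            Thread peer = new Thread(() -> {
                try (Socket s = server.accept();
                     OutputStream out = s.getOutputStream()) {
                    out.write(0x7F);   // hypothetical HEADER_START value
                    out.flush();
                } catch (IOException ignored) {
                }
            });
            peer.start();

            Socket socket = new Socket();
            socket.connect(new InetSocketAddress("localhost", server.getLocalPort()), 10000);
            socket.setSoTimeout(0);    // 0 = block indefinitely on read()
            DataInputStream in = new DataInputStream(socket.getInputStream());
            int data = in.read();      // blocks here until the peer writes a byte
            System.out.println(data == 0x7F);
            socket.close();
            peer.join();
        }
    }
}
```

With a non-zero timeout instead, the same read() would throw SocketTimeoutException once the interval elapses with no data.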

You want to use the java.nio (non-blocking I/O) system. While it's not callback driven, it allows you much more flexibility in handling I/O.
http://rox-xmlrpc.sourceforge.net/niotut/index.html

Related

How can I get notified of client disconnects?

ServerSocket serverSocket = new ServerSocket(portNumber);
Socket socket = serverSocket.accept();
try (
    BufferedReader in = new BufferedReader(
        new InputStreamReader(
            socket.getInputStream()));
) {
    while (in.readLine() != null) {
        // do something
    }
    System.out.println("reach me if you can");
    socket.close();
}
Writing my Server/Client software, I tried to implement functionality to show number of current connections. But I realized that my server never gets the message when a client abruptly terminates; it just keeps waiting at in.readLine(). How should I ensure that a Thread created to handle a specific connection is not left running while the connection is dead?
It is a general TCP problem that the machine on one end of a connection can go away without any notification to the machine on the other end. Machines are not supposed to do that, but it isn't always under their control. The usual way for one end to avoid waiting forever for data in such a case, and / or to avoid being loaded down with dead connections, is to employ a timeout.
The general problem is bigger than you described, but you should be able to solve the particular part you asked about by invoking setSoTimeout() on the socket some time before you try to read from it. The socket will then throw an exception if your read attempt blocks for longer than the time you specify. This setting persists for the lifetime of the socket (or until you set a different value), so you can apply it immediately after accepting the connection if you wish.
Be aware, however, that a sufficiently long period of simple client inactivity can also cause a timeout.
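A sketch of that idea: the accepted socket gets a deliberately short 200 ms timeout (a real server would pick something much longer), and a client that connects but never writes stands in for the abruptly dead peer:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutDetectDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // Client connects and then goes silent, simulating a dead connection.
            Socket client = new Socket("localhost", server.getLocalPort());
            Socket conn = server.accept();
            conn.setSoTimeout(200);   // read calls now fail after 200 ms of silence
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            try {
                in.readLine();        // no data ever arrives
                System.out.println("got data");
            } catch (SocketTimeoutException e) {
                // Here you would close the socket and let the handler thread exit.
                System.out.println("timed out");
            }
            conn.close();
            client.close();
        }
    }
}
```

The handler thread regains control on the timeout instead of waiting forever at readLine().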

Make InputStream non-blocking

Currently I have a server that listens for connections (it is a basic highscore server for a mobile game I have), loops through the connections every 1000ms and listens for any incoming data.
public void readData(Connection c) throws IOException {
    PacketBuffer readBuffer = new PacketBuffer(Server.PACKET_CAPACITY);
    int packetSize = c.getIn().read();
    c.getIn().mark(packetSize);
    byte[] buffer = new byte[packetSize];
    c.getIn().read(buffer, 0, buffer.length);
    readBuffer.setBuffer(buffer);
    packetHandler.addProcess(c, readBuffer);
}
I use my own PacketBuffer and I need to find a way so that c.getIn().read() (that is my connection's InputStream) doesn't block. Currently the socket is set to a 500ms timeout, and my server runs fine that way. My problem is that if someone writes their own program to connect, to try to hack their own highscores or to DDoS the server, it will become cluttered with a bunch of useless connections that each block the thread for 500ms when the connection isn't writing.
You could try something like this. Every time readData gets called it will check whether bytes are available to read. I used a while loop here because you want it to process all the data it can before the thread sleeps again. This ensures messages don't get backed up, as they would if only one were read every x milliseconds.
public void readData(Connection c) throws IOException {
    while (c.getIn().available() > 0) {
        int packetSize = c.getIn().read();
        c.getIn().mark(packetSize);
        byte[] buffer = new byte[packetSize];
        c.getIn().read(buffer, 0, buffer.length);
        PacketBuffer readBuffer = new PacketBuffer(Server.PACKET_CAPACITY);
        readBuffer.setBuffer(buffer);
        packetHandler.addProcess(c, readBuffer);
    }
}
I don't know why you are using the mark method. It looks problematic to me.
You also really need to use a readFully() style method (see DataInputStream) which won't return until it's definitely read the full byte array. Regular reads can always "return short", even when the sender has sent the full data block (due to network packet sizing etc).
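To make the short-read point concrete, here is a small sketch of readFully() on a length-prefixed packet like the one above. A ByteArrayInputStream stands in for the socket stream, and the payload bytes are invented for the demo:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class ReadFullyDemo {
    public static void main(String[] args) throws IOException {
        // Simulated packet: first byte is the payload size, then the payload.
        byte[] packet = {4, 10, 20, 30, 40};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(packet));

        int packetSize = in.read();        // length prefix: 4
        byte[] buffer = new byte[packetSize];
        // Unlike read(buffer, 0, len), readFully() does not return until the
        // whole array is filled; it throws EOFException if the stream ends early.
        in.readFully(buffer);
        System.out.println(packetSize + " " + buffer[3]);
    }
}
```

A plain read(buffer, 0, buffer.length) could legally return after filling only part of the array, leaving the packet handler with a truncated payload.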
There are two classic ways to implement servers in java.
The first and oldest way is to use a read/write thread pair for each connected client. This is okay for smaller servers without a lot of connected clients as each client requires two threads to manage it. It doesn't scale very well to a lot of concurrent clients.
The second and newer way is to use java.nio.ServerSocketChannel, java.nio.SocketChannel, and java.nio.Selector. These three classes allow you to manage all IO operations for every client you have connected in a single thread. Here is an example of how to implement a very basic server using the java.nio package.
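A minimal single-threaded sketch of that selector loop: a toy echo server over loopback, exercised by an ordinary blocking client. A real server would keep per-channel buffers and handle partial reads and writes; the message "ping" and the one-shot shutdown are purely for the demo:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorEchoDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("localhost", 0));
        server.configureBlocking(false);                  // required before register()
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // Ordinary blocking client in another thread, just to exercise the server.
        Thread client = new Thread(() -> {
            try (Socket s = new Socket("localhost", port)) {
                s.getOutputStream().write("ping".getBytes());
                byte[] reply = new byte[4];
                new DataInputStream(s.getInputStream()).readFully(reply);
                System.out.println("client got: " + new String(reply));
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        client.start();

        // One thread services every connection: block in select(), then
        // handle whichever keys are ready.
        boolean done = false;
        while (!done) {
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    if (ch.read(buf) > 0) {
                        buf.flip();
                        ch.write(buf);                    // echo it back
                    }
                    ch.close();
                    done = true;                          // demo handles one message
                }
            }
            selector.selectedKeys().clear();
        }
        client.join();
        server.close();
    }
}
```

Note that the accept and the read are both serviced by the same thread; adding more clients adds registered channels, not threads.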
A much better way to implement a server would be to use a third-party framework. A great library that I have used in the past is Netty. It handles all the nitty-gritty details of sockets and provides a fairly clean and simple api that scales well.
You can't. InputStreams are blocking. Period. It's hard to see how non-blocking mode would actually solve the problem you mention. I suggest a redesign is in order.

Serving Multiple Clients

I wonder whether it is really possible for a simple server program using a server socket to handle multiple clients simultaneously.
I am creating a server program that needs to handle multiple clients on the same port number. But the problem is that the program will only serve one client at a time, and in order for it to serve the other client, the first connection has to be terminated.
Here is the code:
try {
    ServerSocket server = new ServerSocket(PORT);
    System.out.println("Server Running...");
    while (true) {
        Socket socket = server.accept();
        System.out.println("Connection from: " + socket.getInetAddress());
        Scanner in = new Scanner(socket.getInputStream());
        PrintWriter output = new PrintWriter(socket.getOutputStream());
    }
} catch (Exception e) {
    System.out.println(e);
}
Is there any Java code that could be added here in order for the program to serve multiple clients simultaneously?
The code you've posted doesn't actually do anything with the client connection, which makes it hard to help you. But yes, it's entirely possible for a server to handle multiple concurrent connections.
My guess is that the problem is that you're doing everything on a single thread, with synchronous IO. That means that while you're waiting for data from the existing client, you're not accepting new connections. Typically a server takes one of two approaches:
Starting a thread (or reusing one from a thread pool) when it accepts a connection, and letting that thread deal with that connection exclusively.
Using asynchronous IO to do everything on a single thread (or a small number of threads).
The latter approach can be more efficient, but it is also significantly more complicated. I'd suggest you use a "thread per connection" approach to start with. While you're experimenting, you can simply start a brand new Thread each time a client connects - but for production use, you'd use an ExecutorService or something similar, in order to reuse threads.
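A sketch of the thread-pool variant. The pool size, messages, and the two throwaway clients are invented for the demo; the point is that accept() hands each connection to a worker and immediately loops back to accept the next one:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Scanner;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPerClientDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // threads get reused
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();

            // Two concurrent demo clients.
            for (int i = 0; i < 2; i++) {
                final int id = i;
                new Thread(() -> {
                    try (Socket s = new Socket("localhost", port);
                         PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(s.getInputStream()))) {
                        out.println("hello " + id);
                        System.out.println(in.readLine());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }).start();
            }

            // Accept loop: hand each connection to the pool, keep accepting.
            for (int i = 0; i < 2; i++) {
                Socket socket = server.accept();
                pool.execute(() -> {
                    try (Scanner in = new Scanner(socket.getInputStream());
                         PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                        out.println("echo: " + in.nextLine());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
            }
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Both clients are served concurrently: neither has to wait for the other's connection to terminate, which is exactly the behaviour missing from the code in the question.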
Note that depending on what kind of server you're building, all of this may well be done for you in third party libraries.

How to approach the dispatching of open sockets to threads in Java?

We need to create a ListeningDispatcher that accepts connections on port P from N clients (for now they are local, so they are identified just by a port, but it could be an address later). My approach would be to put an accept() call, retrieve the Socket, start a new thread, and let it handle the message from the Socket. So if we have N clients in our distributed system (broadcast based, with a logical token ring), I would keep N threads, with N Sockets.
My mate argues that this would keep too many threads open, and that it's better to start a new thread on every new connection but, instead of keeping the thread running, to close the socket and stop the thread after the message is received. This way, we would use fewer threads, but we would have to create a new Socket for every message.
I think this would degrade the communication because it takes time to open a new Socket.
Consider that the system must be scalable and has a heavy communication part, because every event is broadcast to every client.
Note: we can't use ThreadPools
My approach would be to put a .accept() call, retrieve the Socket,
start a new thread and let it handle the message from the Socket.
Don't start a new thread. Use a thread pool and reuse threads.
This way, we would use fewer threads but we would have to create a new
Socket for every message
For each client you use a different client socket, obtained via accept(). This sentence does not make sense.
So you have to make a decision between fewer threads and fewer new connections. Your 'mate' might want to ponder why HTTP keep-alive was invented. It was invented to re-use connections where possible rather than pay the cost of creating new ones.

Why should I use NIO for TCP multiplayer gaming instead of simple sockets (IO) - or: where is the block?

I'm trying to create a simple multiplayer game for Android devices. I have been thinking about the netcode and read now a lot of pages about Sockets. The Android application will only be a client and connect only to one server.
Almost everywhere (here too) you get the recommendation to use NIO or a framework which uses NIO, because of the "blocking".
I'm trying to understand what the problem of a simple socket implementation is, so I created a simple test to try it out:
My main application:
[...]
Socket clientSocket = new Socket("127.0.0.1", 2593);
new Thread(new PacketReader(clientSocket)).start();
PrintStream os = new PrintStream(clientSocket.getOutputStream());
os.println("kakapipipopo");
[...]
The PacketReader Thread:
class PacketReader implements Runnable
{
    Socket m_Socket;
    BufferedReader m_Reader;

    PacketReader(Socket socket) throws IOException
    {
        m_Socket = socket;
        m_Reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    public void run()
    {
        char[] buffer = new char[200];
        int count = 0;
        while (true)
        {
            try {
                count = m_Reader.read(buffer, 0, 200);
                if (count == -1) break;   // stream closed
                String message = new String(buffer, 0, count);
                Gdx.app.log("keks", message);
            } catch (IOException e) {
                break;
            }
        }
    }
}
I couldn't reproduce the blocking problems I was expecting. I thought the read() function would block my application so that I couldn't do anything - but everything worked just fine.
I have been thinking: What if I just create a input and output buffer in my application and create two threads which will write and read to the socket from my two buffers? Would this work?
If yes - why does everyone recommend NIO? Somewhere in the normal IO way there must be a block happening, but I can't find it.
Are there maybe any other benefits of using NIO for Android multiplayer gaming? I thought that NIO seems more complex, and therefore maybe less suited to a mobile device, but maybe the simple socket way is worse for a mobile device.
I would be very happy if someone could tell me where the problem happens. I'm not scared of NIO, but at least I would like to find out why I'm using it :D
Greetings
-Thomas
The blocking is this: read() will block the current thread until it can read data from the socket's input stream. Thus, you need a thread dedicated to that single TCP connection.
What if you have more than 10k client devices connected to your server? You need at least 10k threads to handle all the client devices (assuming each device maintains a single TCP connection), no matter whether they are active or not. You pay too much in context switches and other multi-threading overhead even if only 100 of them are active.
NIO uses a selector model to handle those clients, meaning you don't need a dedicated thread for each TCP connection to receive data. You just select all active connections (those which have data already received) and process those active connections. You can control how many threads should be maintained on the server side.
EDIT
This answer doesn't exactly answer what the OP asked. For the client side it's fine, because the client is going to connect to just one server. Still, my answer gives some generic idea about blocking and non-blocking IO.
I know this answer is coming 3 years later but this is something which might help someone in future.
In a blocking socket model, if data is not available for reading, or if the server is not ready for writing, then the network thread will wait on a request to read from or write to a socket until it either gets or sends the data, or
times out. In other words, the program may halt at that point for quite some time if it can't proceed. To counter this we can create a thread per connection, which can handle requests from each client concurrently. If this approach is chosen, the scalability of the application can suffer: in a scenario where thousands of clients are connected, having thousands of threads can eat up all the memory of the system, as threads are expensive to create and can affect the performance of the application.
In a non-blocking socket model, the request to read or write on a socket returns immediately whether or not it was successful; in other words, asynchronously. This keeps the network thread free. It is then our task to decide whether to try again or to consider the read/write operation complete. This creates an event-driven approach to communication, where we create threads only when needed, which leads to a more scalable system.
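A tiny illustration of the immediate return, over loopback only: the peer connects but never sends anything, so a non-blocking read() comes back with 0 instead of parking the thread:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingReadDemo {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("localhost", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        SocketChannel client = SocketChannel.open(new InetSocketAddress("localhost", port));
        client.configureBlocking(false);      // reads now return immediately
        SocketChannel peer = server.accept(); // peer connected, but never writes

        ByteBuffer buf = ByteBuffer.allocate(16);
        int n = client.read(buf);             // no data yet: returns 0, does not block
        System.out.println(n);                // the thread is free to do other work

        peer.close();
        client.close();
        server.close();
    }
}
```

The same read() on a blocking channel would have parked the thread until the peer sent something or the connection closed.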
(A diagram in the original answer illustrated the difference between the blocking and non-blocking socket models.)