Java SocketChannel write blocked while reading - java

I am trying to use SocketChannel.write and SocketChannel.read at the same time in two different threads (Android API Level 25).
I configured the SocketChannel in blocking mode.
For reading, I created an endless loop to read everything from server:
// Make socketChannel.read() return after 500ms at most
socketChannel.socket().setSoTimeout(500);
while (!SHUTDOWN) {
    int read = socketChannel.read(buffer);
    if (read > 0) {
        // Do something nice
        break;
    }
}
And for writing, I write data every 10 seconds.
The problem is, I found that sometimes the writing operations were blocked while reading.
But if I make the reading thread sleep for a short period in each loop, e.g. 100ms, this problem won't appear anymore.
It looks like the reading thread is blocking the writing thread.
AFAIK, TCP connections offer bidirectional operation at the same time. Can anyone help explain this?

As explained in TCP Wikipedia - Flow Control:
TCP uses an end-to-end flow control protocol to avoid having the
sender send data too fast for the TCP receiver to receive and process
it reliably. Having a mechanism for flow control is essential in an
environment where machines of diverse network speeds communicate. For
example, if a PC sends data to a smartphone that is slowly processing
received data, the smartphone must regulate the data flow so as not to
be overwhelmed.
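One thing worth noting: setSoTimeout() only affects reads made through the socket's input stream; SocketChannel.read() in blocking mode ignores it, so the 500 ms cap in the question's loop never takes effect and the loop busy-spins. If a bounded wait is the goal, a Selector on a non-blocking channel gives you one without spinning. A rough sketch (class and helper names are mine, not from the question; the loopback server exists only to make the demo self-contained):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class BoundedRead {

    // Wait at most `millis` for readable data, then read whatever arrived.
    // Returns "" if the timeout expired with nothing to read.
    static String readWithTimeout(SocketChannel channel, Selector selector,
                                  long millis) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(64);
        if (selector.select(millis) > 0) {      // blocks at most `millis` ms
            selector.selectedKeys().clear();
            channel.read(buffer);               // non-blocking: data is ready
        }
        buffer.flip();
        return StandardCharsets.UTF_8.decode(buffer).toString();
    }

    public static void main(String[] args) throws IOException {
        // Loopback server, only so the demo runs on its own.
        ServerSocketChannel server = ServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        client.configureBlocking(false);        // required to use a Selector
        Selector selector = Selector.open();
        client.register(selector, SelectionKey.OP_READ);

        SocketChannel peer = server.accept();
        peer.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));

        System.out.println(readWithTimeout(client, selector, 500));
        peer.close(); client.close(); selector.close(); server.close();
    }
}
```

A writer thread on the same channel is then unaffected, because nothing spins on the channel between selects.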

Related

Java serial port IO reading

I am stuck on a strange issue while reading data from a serial port in Java.
I read data from the serial port via a polling method in a thread, which works fine, but I also have a requirement to write data to the serial port and read an ACK back. Writing to the serial port succeeds, but I am not able to read the ACK. So there are two read operations: one in the polling thread and one in the main thread.
When I have data to write, I pause the polling thread using a flag and try to read from the serial port in the main thread once the write is done, but no data comes back. If I instead disable the main thread's read after the write and re-enable the polling thread, I do see the ACK data from the serial port.
Can anyone suggest what is going wrong with this serial read? It is not a buffered read/write operation.
I strongly recommend using only one dedicated thread for reading the serial port. The most reliable solution used to be an interrupt handler shoveling all received data into a thread-safe state machine. Trying to read the serial port from multiple threads is asking for trouble: serial port IO doesn't care that you "paused your thread", and the data may already have been fetched and lost due to a context switch.
So simply keep reading what comes in and if ACK is expected and obtained, inform the main thread via semaphore. In a dirty brutally simplified pseudocode:
Main thread loop:
{
    serialReaderThread.isAckExpected = true
    sendWriteCommand();
    ackReceivedSemaphore.wait();
}

Serial reader thread loop:
{
    readData();
    if( isAckExpected && data == ack ) {
        mainThread.ackReceivedSemaphore.notify();
        isAckExpected = false
    }
}
You need to set isAckExpected before sending the write command, because if your serial peer is fast enough, you might get the response back before your sendWriteCommand even returns.
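In real Java this maps naturally onto a java.util.concurrent.Semaphore plus an AtomicBoolean for the flag. A rough sketch of the pattern (class and method names are mine; the serial port is left abstract as a Reader/Writer pair, since the actual port API varies):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Writer;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class AckWaiter {
    private final Semaphore ackReceived = new Semaphore(0);
    private final AtomicBoolean ackExpected = new AtomicBoolean(false);

    // The dedicated reader thread: the only place the port is ever read.
    public void startReader(BufferedReader portIn) {
        Thread reader = new Thread(() -> {
            try {
                String line;
                while ((line = portIn.readLine()) != null) {
                    if (ackExpected.get() && line.equals("ACK")) {
                        ackExpected.set(false);
                        ackReceived.release();   // wake the waiting writer
                    }
                    // any other data goes to normal processing here
                }
            } catch (IOException ignored) {
                // port closed; reader thread exits
            }
        });
        reader.setDaemon(true);
        reader.start();
    }

    // Arm the flag *before* writing, then block until the ACK arrives
    // (or give up after a timeout). Returns true if the ACK was seen.
    public boolean writeAndWaitForAck(Writer portOut, String command)
            throws IOException, InterruptedException {
        ackExpected.set(true);
        portOut.write(command + "\n");
        portOut.flush();
        return ackReceived.tryAcquire(2, TimeUnit.SECONDS);
    }
}
```

Setting ackExpected before the write is exactly the ordering point above: a fast peer may answer before write() even returns.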
You should not have different threads attempting to read from the serial port. The correct architecture is to have the single thread do the reading and distribute the incoming data to interested clients via multiple queues.
You would have a "normal read processing" thread that is given data by the read thread. When you need to do the write/ack sequence, the thread doing the write/ack would temporarily register itself with the read thread and divert the data stream.
You still have to deal with any interleaving of data (i.e. normal data being received after the write request but before the ack is received), but that is up to your application.
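The fan-out in that design can be as small as a list of per-client queues owned by the single reader thread. A sketch (names are illustrative, not from the answer):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class ReadDistributor {
    // Each interested client owns one queue; the reader thread fans out to all.
    private final List<BlockingQueue<String>> clients = new CopyOnWriteArrayList<>();

    // Called by a client (e.g. the write/ack sequence) to divert the stream.
    public BlockingQueue<String> register() {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        clients.add(queue);
        return queue;
    }

    public void unregister(BlockingQueue<String> queue) {
        clients.remove(queue);
    }

    // Called only from the single thread that owns the serial port.
    public void onDataReceived(String data) {
        for (BlockingQueue<String> queue : clients) {
            queue.offer(data);
        }
    }
}
```

Clients consume with queue.take() and never touch the port itself; interleaved "normal" data still shows up in every registered queue, which is the application-level concern mentioned above.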

How to write an UDP server that will service n concurrent requests from different clients?

I am connecting 10 devices to a LAN, all of them have a udp server that goes like:
while(true){
    serverSocket.receive(receivePacket);
    dostuff(receivePacket);
}
serverSocket.close();
Now let's assume 9 of the devices try to initiate a connection to the 10th device simultaneously. How can I accept all 9, instead of just the first one, which would then block the socket until the server completes its computation? Should I start a thread that takes care of dostuff()? Would that let me handle all of the simultaneous requests?
A basic design would have one thread responsible for accepting incoming requests (up to your desired limit) and handing them off to worker/request-handler threads. When each worker thread finishes, it updates a shared counter to let the main thread know it can take on a new request. This requires a degree of synchronization, but it can be pretty fun.
Here's the idea:
serverThread:
    while true:
        serverLock.acquire()
        if numberOfRequests < MAX_REQUESTS:
            packet = socket.receive()
            numberOfRequests++
            requestThread(packet).run()
        else
            serverMonitor.wait(serverLock)
        serverLock.release()

requestThread:
    // handle packet
    serverLock.acquire()
    numberOfRequests--
    serverMonitor.pulse()
    serverLock.release()
You'll want to make sure the synchronization is all correct; this is just to give you an idea of what you can start with. When you get the hang of it, you'll be able to make optimizations and enhancements. One particular enhancement, which also lends itself to a limited number of requests, is called a ThreadPool.
Regardless, the basic structure is much the same for most servers: a main thread responsible for handing off requests to worker threads. It's a neat and simple abstraction.
You can use threads to solve that problem. Since Java already has an API for this, you can just submit tasks to an executor; take a look at the Executor interface. Here is another useful link that could help: blocking queue.
Use a relatively large thread pool, since UDP doesn't require a response.
The main method runs as a listener, and the thread pool does the rest of the heavy lifting.
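Putting those suggestions together: a minimal sketch of a listener thread handing packets to an ExecutorService pool (dostuff is passed in as a callback; all names are illustrative):

```java
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class UdpWorkerServer {

    // One listener thread only receives; the pool runs the slow work.
    public static Thread start(DatagramSocket socket, int workers,
                               Consumer<String> dostuff) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        Thread listener = new Thread(() -> {
            byte[] buf = new byte[1024];
            try {
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);     // cheap: just dequeue the datagram
                    // Copy the payload out *before* the buffer is reused.
                    String message = new String(packet.getData(), 0,
                            packet.getLength(), StandardCharsets.UTF_8);
                    pool.execute(() -> dostuff.accept(message)); // slow work off-thread
                }
            } catch (IOException closed) {
                pool.shutdown();                // socket.close() ends the loop
            }
        });
        listener.start();
        return listener;
    }
}
```

Because the listener does nothing but receive and enqueue, nine devices sending at once are drained quickly from the OS receive buffer; datagrams are lost only if that buffer overflows, so keeping the listener trivial matters at least as much as the pool size.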

Why should I use NIO for TCP multiplayer gaming instead of simple sockets (IO) - or: where is the block?

I'm trying to create a simple multiplayer game for Android devices. I have been thinking about the netcode and read now a lot of pages about Sockets. The Android application will only be a client and connect only to one server.
Almost everywhere (here too) you get the recommendation to use NIO or a framework which uses NIO, because of the "blocking".
I'm trying to understand what the problem of a simple socket implementation is, so I created a simple test to try it out:
My main application:
[...]
Socket clientSocket = new Socket( "127.0.0.1", 2593 );
new Thread(new PacketReader(clientSocket)).start();
PrintStream os = new PrintStream( clientSocket.getOutputStream() );
os.println( "kakapipipopo" );
[...]
The PacketReader Thread:
class PacketReader implements Runnable
{
    Socket m_Socket;
    BufferedReader m_Reader;

    PacketReader(Socket socket) throws IOException
    {
        m_Socket = socket;
        m_Reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    public void run()
    {
        char[] buffer = new char[200];
        try
        {
            int count;
            while ((count = m_Reader.read(buffer, 0, 200)) != -1)
            {
                String message = new String(buffer, 0, count);
                Gdx.app.log("keks", message);
            }
        }
        catch (IOException e)
        {
            // connection closed or lost
        }
    }
}
I couldn't reproduce the blocking problems I expected. I thought the read() call would block my application so that I couldn't do anything else, but everything worked just fine.
I have been thinking: What if I just create a input and output buffer in my application and create two threads which will write and read to the socket from my two buffers? Would this work?
If yes, why does everyone recommend NIO? Somewhere in the plain IO approach a block must happen, but I can't find it.
Are there maybe other benefits of using NIO for Android multiplayer gaming? I thought NIO seems more complex, and therefore maybe less suited to a mobile device, but maybe the simple socket approach is worse for a mobile device.
I would be very happy if someone could tell me where the problem happens. I'm not scared of NIO, but at least I would like to find out why I'm using it :D
Greetings
-Thomas
The blocking is that read() will block the current thread until it can read data from the socket's input stream. Thus you need a thread dedicated to that single TCP connection.
What if you have more than 10k client devices connected to your server? You need at least 10k threads to handle all the client devices (assuming each device maintains a single TCP connection), whether they are active or not. You pay too much in context switches and other multi-threading overhead even if only 100 of them are active.
NIO uses a selector model to handle those clients, meaning you don't need a dedicated thread per TCP connection to receive data. You just select all the active connections (those which already have data received) and process them. You can control how many threads are maintained on the server side.
EDIT
This answer doesn't exactly address what the OP asked: for the client side a blocking socket is fine, because the client connects to just one server. Still, it gives a generic idea of blocking versus non-blocking IO.
I know this answer is coming 3 years later, but it might help someone in the future.
In a blocking socket model, if data is not available for reading, or if the peer is not ready for more data to be written, then the network thread waits on the read or write call until it either gets or sends the data or times out. In other words, the program may halt at that point for quite some time if it can't proceed. To get around this we can create a thread per connection, handling requests from each client concurrently. If this approach is chosen, the scalability of the application can suffer: in a scenario where thousands of clients are connected, having thousands of threads can eat up all the memory of the system, since threads are expensive to create, and can hurt the performance of the application.
In a non-blocking socket model, a request to read or write on a socket returns immediately, whether or not it succeeded; in other words, it is asynchronous. The network thread is never parked, and it is then our task to decide whether to try again or to consider the read/write operation complete. This leads to an event-driven approach to communication, where threads are created only when needed, and to a more scalable system.
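The "returns immediately" behaviour is easy to demonstrate on a loopback connection (my own example, not from the answer): with the channel in non-blocking mode, read() returns 0 right away when no data has arrived, instead of parking the thread.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingRead {

    // Performs one read() on a connection whose peer never writes.
    // In non-blocking mode this returns 0 immediately instead of blocking.
    public static int readNothing() throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        SocketChannel peer = server.accept();   // peer sends nothing

        client.configureBlocking(false);        // the crucial line
        int n = client.read(ByteBuffer.allocate(64));

        peer.close(); client.close(); server.close();
        return n;                               // 0 = "nothing yet, try later"
    }
}
```

In blocking mode the same read() would sit in the call until the peer sent something, which is exactly the per-connection-thread cost described above.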

implementing keepalives with Java

I am building a client-server application where I have to implement a keepalive mechanism in order to detect whether the client has crashed. I have separate threads on both the client and server side. The client thread sends a "ping" and then sleeps for 3 seconds, while the server reads from a BufferedInputStream and checks whether a ping was received: if so, it resets a ping counter to zero; otherwise it increments the counter by 1. The server thread then sleeps for 3 seconds, and if the ping counter reaches 3 it declares the client dead.
The problem is that when the server reads the input stream, it's a blocking call, and it blocks until the next ping is received, no matter how delayed it is, so the server never detects a missed ping.
Any suggestions for how I can check the stream without blocking when there is nothing incoming?
Thanks,
Java 1.4 introduced the idea of non-blocking I/O, represented by the java.nio package. This is probably what you need.
See this tutorial for how to use non-blocking I/O.
Also, assuming this isn't homework or a learning exercise, then I recommend using a more robust protocol framework such as Apache Mina or JBoss Netty, rather than building this stuff from scratch. See this comparison between them, and why you'd want to use them.
You can have a separate monitoring thread which monitors all the blocking connections. When a connection receives anything, it can reset a counter (I would treat any packet as being as good as a heartbeat). Your monitoring thread increments this counter each time it runs, and when the counter reaches a limit (i.e. because it wasn't reset back to zero) you can close the connection. You only need one such thread. The thread blocking on the connection you just closed will get an IOException, waking it up.
On the other side, a heartbeat can be triggered whenever a packet has not been sent for some period of time. This means a busy connection doesn't send any heartbeats; it shouldn't need to.
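One possible shape for that single monitoring thread, using a scheduled task and a per-connection idle counter (a sketch with invented names; anything Closeable stands in for the connection):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ConnectionWatchdog {
    private final Map<Closeable, Integer> idleCounts = new ConcurrentHashMap<>();
    private final int limit;
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    public ConnectionWatchdog(int limit, long periodMillis) {
        this.limit = limit;
        timer.scheduleAtFixedRate(this::sweep, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
    }

    public void watch(Closeable connection) {
        idleCounts.put(connection, 0);
    }

    // Call from the reader thread whenever *any* packet arrives on a connection.
    public void packetReceived(Closeable connection) {
        idleCounts.replace(connection, 0);
    }

    // Runs once per period: age every connection, close the ones over the limit.
    private void sweep() {
        idleCounts.replaceAll((conn, count) -> count + 1);
        idleCounts.forEach((conn, count) -> {
            if (count >= limit) {
                idleCounts.remove(conn);
                try {
                    conn.close();   // a read() blocked on this connection
                } catch (IOException ignored) { }   // now throws, waking its thread
            }
        });
    }
}
```

Closing the connection is what turns the blocked read() into an IOException, as described above, so no thread is ever stuck waiting on a dead client.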

Java sockets: can I write a TCP server with one thread?

From what I read about Java NIO and non-blocking [Server]SocketChannels, it should be possible to write a TCP server that sustains several connections using only one thread - I'd make a Selector that waits for all relevant channels in the server's loop.
Is that right, or am I missing some important detail? What problems can I encounter?
(Background: The TCP communication would be for a small multiplayer game, so max. 10-20 simultaneous connections. Messages will be sent about every few seconds.)
Yes, you are right. The problems you can encounter are mostly around processing time: if handling a message takes too long, you'd have to move that processing to another thread, so that it doesn't stall the networking thread and cause noticeable delay.
Another detail: channels are all about "moving" data. If the data you wish to send is ready, you can move it to a networking channel. The copying/buffering/etc. is all done by the NIO implementation.
Your single-threaded "networking thread" is only steering the connection, but not throttling it (read: weird analogy with a car).
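For completeness, the selector loop the question describes might look roughly like this (an illustrative echo server, not production code; real game logic replaces the echo):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleThreadServer {

    // One thread, one Selector, any number of connections.
    public static void serve(ServerSocketChannel server) throws IOException {
        server.configureBlocking(false);
        Selector selector = Selector.open();
        server.register(selector, SelectionKey.OP_ACCEPT);
        ByteBuffer buffer = ByteBuffer.allocate(1024);

        while (server.isOpen()) {
            selector.select();                  // the only place we ever wait
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {       // a new player connected
                    SocketChannel player = server.accept();
                    player.configureBlocking(false);
                    player.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {  // a player sent a message
                    SocketChannel player = (SocketChannel) key.channel();
                    buffer.clear();
                    int n = player.read(buffer);
                    if (n == -1) {              // player disconnected
                        key.cancel();
                        player.close();
                        continue;
                    }
                    buffer.flip();
                    player.write(buffer);       // echo; game logic goes here
                }
            }
        }
    }
}
```

At 10-20 connections with a message every few seconds, this one thread is idle almost all the time, which is exactly the scenario where either approach works fine.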
The basic multithreaded approach is easier to design and implement than a single threaded NIO. Performance gain isn't noticeable in a small multiplayer game server/client, especially if a message is only sent every few seconds.
Brian Agnew said:
This all works well when the server-side processing
for each client is negligible. However a multi-threaded
approach will scale much better.
I beg to disagree. A one-client-one-thread approach will exhaust memory much faster than if you handle multiple clients per thread as you won't need a full stack per client. See the C10K paper for more on the topic: http://www.kegel.com/c10k.html
Anyway, if there won't be more than 20 clients, just use whatever is easiest to code and debug.
Yes you can. See this example for an illustration on how to do this.
The important section is this:
for (;;) { // Loop forever, processing client connections
    // Wait for a client to connect
    SocketChannel client = server.accept();
    // Build response string, wrap, and encode to bytes (elided)
    client.write(response);
    client.close();
}
This all works well when the server-side processing for each client is negligible. However a multi-threaded approach will scale much better.
