I built a TCP server based on Apache MINA 2.0.4 and have some problems writing back to the client.
We have some TCP clients that can handle only one message at a time, with a maximum buffer size of 256 bytes. When I send two or more messages (each under 256 bytes) to a client, they arrive in one or two big blocks that the client can't handle, instead of as separate messages.
I tried sessionConfig.setTcpNoDelay(true/false) with no success, as well as sessionConfig.setSendBufferSize(256).
In the message response encoder I also tried to flush the output:
int capacity = 256;
IoBuffer buffer = IoBuffer.allocate(capacity, false);
buffer.setAutoExpand(false);
buffer.setAutoShrink(true);
buffer.putShort(type);
buffer.putShort(length);
buffer.put(gmtpMsg.getMessage().getBytes());
buffer.flip();
out.write(buffer);
out.flush();
And in the thread responsible for sending the messages, I tried waiting for each message to be written:
for (Entry<Long, OutgoingMessage> outgoingMsg : outgoingMsgs.entrySet()) {
WriteFuture future = session.write(outgoingMsg.getValue());
future.awaitUninterruptibly();
}
All this fails miserably; the only thing that works is a ridiculous 500 ms sleep between session writes, which is hardly acceptable.
Anyone see what I am doing wrong?
After reading a bit more about the TCP protocol, and especially https://stackoverflow.com/a/6614586/1280034, it is clear that the problem is on the client side: the clients do not handle the incoming byte stream correctly.
Since we can't rebuild the clients, my only solution is to delay each outgoing message by approx. 500 ms. To do so I created an extra queue in charge of writing to the clients, so that the server can continue its normal job.
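For reference, a minimal sketch of such a delaying queue, assuming MINA's IoSession API; the class name, the queue type, and the fixed 500 ms interval are illustrative rather than taken from the original code:

import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.mina.core.session.IoSession;

// One instance per slow client session.
public class ThrottledWriter {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final LinkedBlockingQueue<Object> pending = new LinkedBlockingQueue<>();
    private final IoSession session;

    public ThrottledWriter(IoSession session) {
        this.session = session;
        // Drain at most one message every 500 ms so the client sees
        // each message as its own burst on the wire.
        scheduler.scheduleWithFixedDelay(this::writeNext, 0, 500, TimeUnit.MILLISECONDS);
    }

    public void enqueue(Object message) {
        pending.offer(message); // called from the server's normal processing path
    }

    private void writeNext() {
        Object msg = pending.poll();
        if (msg != null) {
            session.write(msg).awaitUninterruptibly(); // finish one write per tick
        }
    }
}

The server just calls enqueue() and keeps running at full speed; only the wire towards the slow client is throttled.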
I have a Java WebSocket web application. The WebSocket endpoint interacts with mobile clients. In one of the use cases, the web application needs to write 10 MB or more to the WebSocket output stream. Following is the code which writes to the output stream:
if (webSocSession.isOpen()) {
webSocSession.getBasicRemote().sendBinary(byteBuffer);
byteBuffer.clear();
}
At times I get the following exception when writing to the WebSocket:
IOException writing to web-socket
java.io.IOException: java.net.SocketTimeoutException: Write timeout
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendMessageBlock(WsRemoteEndpointImplBase.java:324)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendMessageBlock(WsRemoteEndpointImplBase.java:259)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendBytes(WsRemoteEndpointImplBase.java:131)
at org.apache.tomcat.websocket.WsRemoteEndpointBasic.sendBinary(WsRemoteEndpointBasic.java:43)
at test.web.websocket.LIMSEndpoint$SocketWorker.writeToWebSocket(LIMSEndpoint.java:1188)
at test.web.websocket.LIMSEndpoint$SocketWorker.run(LIMSEndpoint.java:1127)
Caused by: java.net.SocketTimeoutException: Write timeout
at org.apache.tomcat.util.net.SocketWrapperBase.vectoredOperation(SocketWrapperBase.java:1458)
at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:1376)
at org.apache.tomcat.util.net.SocketWrapperBase.write(SocketWrapperBase.java:1347)
at org.apache.tomcat.websocket.server.WsRemoteEndpointImplServer.doWrite(WsRemoteEndpointImplServer.java:93)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.writeMessagePart(WsRemoteEndpointImplBase.java:509)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendMessageBlock(WsRemoteEndpointImplBase.java:311)
I tried setting the following send timeout property to 0 (infinite write timeout), but it does not help. The following Apache Tomcat doc helped: https://tomcat.apache.org/tomcat-8.5-doc/web-socket-howto.html
org.apache.tomcat.websocket.BLOCKING_SEND_TIMEOUT
The WebSocket session max idle timeout has been set to 0 (never time out):
Session.setMaxIdleTimeout(0)
Any help in this regard would be greatly appreciated.
I would suspect that the client's receive buffer fills up and it cuts the connection. I know this happens the other way around, when a client sends large data to an Apache server and the server isn't configured with an extended binary buffer. See: org.apache.tomcat.websocket.binaryBufferSize
Perhaps you can get around the problem by sending the data in smaller chunks; sendBinary has a partial-message variant.
https://tomcat.apache.org/tomcat-10.0-doc/websocketapi/index.html?jakarta/websocket/Session.html
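For instance, a minimal sketch of chunked sending with the javax.websocket API; the helper name and the 64 KB chunk size are assumptions to illustrate the idea:

import java.io.IOException;
import java.nio.ByteBuffer;
import javax.websocket.Session;

public class ChunkedSender {
    // Send a large payload as a sequence of partial binary frames.
    static void sendChunked(Session session, byte[] payload) throws IOException {
        final int chunkSize = 64 * 1024; // assumption: tune for your clients
        for (int offset = 0; offset < payload.length; offset += chunkSize) {
            int len = Math.min(chunkSize, payload.length - offset);
            boolean isLast = offset + len >= payload.length;
            // sendBinary(ByteBuffer, boolean) marks all but the last frame as partial
            session.getBasicRemote().sendBinary(
                    ByteBuffer.wrap(payload, offset, len), isLast);
        }
    }
}

Each partial frame then only has to fit through the socket within the blocking send timeout on its own, which is much easier for a 64 KB chunk than for a single 10 MB message.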
Deployment environment:
I have created a TCP server in Java running on Windows 10. My TCP client program is written in VC++ and runs on Windows 7 (I don't have any control over this part of the code; it is a black box to me).
My TCP server code is like this:
Socket s = ss.accept();
s.setReceiveBufferSize(2000);
s.setSendBufferSize(2000);
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
Following is the TCP connection handler snippet:
InputStream incomingPacketBuffer = this.clientSocket.getInputStream();
OutputStream outgoingPacketBuffer = this.clientSocket.getOutputStream();
int bufferLen=0;
byte inBuffer[] = new byte[this.clientSocket.getReceiveBufferSize()];
byte outBuffer[] = new byte[this.clientSocket.getSendBufferSize()];
while(this.clientSocket.isConnected())
{
bufferLen = incomingPacketBuffer.read(inBuffer);
if(bufferLen>0)
{
outBuffer = (byte[]) this.packetHandlerModule.invoke(this.packetHandler,Arrays.copyOf(inBuffer, bufferLen));
}
if(outBuffer != null)
{
if(this.clientSocket.isConnected())
{
outgoingPacketBuffer.write(outBuffer);
outgoingPacketBuffer.flush();
}
}
}
this.clientSocket.close();
The communication is packet-based, and the protocol/parsing is handled by packetHandler.
Two more variants I've tried:
I have tried to close the socket as and when a reply is sent back to the client. That is, after receiving one packet of data, I reply to the client and close the connection.
I used inputStream.available before using the read method.
The problem I face:
Most of the time the TCP server replies to incoming packets within a second. But if the server receives a packet after some idle time, it doesn't reply to it. Sometimes the reply is not transmitted even while active communication is going on. Secondly, the isConnected function returns true even after the client has closed the connection.
Debugging attempts:
I used Tera Term to send packets and checked; the behavior is the same. As long as I send packets one after another, there is no issue. Once one packet gets no reply, every packet sent after that also gets no reply from the server.
When I press Ctrl+C in the server console, all the packets sent from Tera Term are processed by the TCP server and replies are sent back. After this the server works properly for some time.
I checked the packet flow with Wireshark. When the replies are sent back normally, each is sent along with the ACK of the client request (SYN, SYN+ACK, ACK, PSH, PSH+ACK, FIN, FIN+ACK, ACK). When the reply gets stuck (which may not be the right term; the server is blocked in inputStream.available or inputStream.read), only an ACK packet is sent by the server (SYN, SYN+ACK, ACK, PSH, ACK).
I checked many forums and other Stack Exchange threads and learned about Nagle's algorithm, that the application must take care of packetization in TCP, and that TCP may deliver 10+10 bytes as 8+12 or 15+5 or in any such manner. The server code takes care of packetization, and setKeepAlive is set to true (there is no problem when a packet is sent from the server).
Problem in short: "At times, the TCP read call is blocked for a long duration even when there are incoming packets. When Ctrl+C is pressed, they get processed."
PS: I just started posting queries on Stack Exchange, so kindly let me know if there are any issues with the way I formulated the query.
PPS: Sorry for such a long post.
UPDATE
The comment from EJB helped me to identify the peer disconnect.
I set up another server with Ubuntu 16.04 as the operating system. It has been 3 days; the Windows system had the issue occasionally, while Ubuntu 16.04 never stalled.
Some things to consider:
The TCP buffer sizes are usually at least 8 KB, and I don't think you can shrink them to 2000 bytes; even if you can, I don't think it's a good idea.
The size of the byte[] doesn't really matter much above about 2 KB; you may as well pick a value.
You don't need to create the buffer more than once.
So in short I would try:
Socket s = ss.accept();
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
and
try {
InputStream in = this.clientSocket.getInputStream();
OutputStream out = this.clientSocket.getOutputStream();
int bufferLen = 0;
byte[] buffer = new byte[2048];
while ((bufferLen = in.read(buffer)) > 0) {
out.write(buffer, 0, bufferLen); // not buffered so no need to flush
}
} finally {
this.clientSocket.close();
}
At times, the TCP read call is blocked for a long duration even when there are incoming packets.
I would write a test Java client to check that this is not due to behaviour in Java.
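A minimal sketch of such a test client; the host, port, payload, and idle gap are illustrative:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TestClient {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("localhost", 9876)) { // illustrative host/port
            s.setTcpNoDelay(true);
            OutputStream out = s.getOutputStream();
            InputStream in = s.getInputStream();
            byte[] reply = new byte[2048];
            for (int i = 0; i < 3; i++) {
                out.write("ping".getBytes(StandardCharsets.US_ASCII)); // illustrative payload
                out.flush();
                int n = in.read(reply); // blocks until the server answers
                if (n < 0) {
                    System.out.println("server closed the connection");
                    break;
                }
                System.out.println("reply " + i + ": "
                        + new String(reply, 0, n, StandardCharsets.US_ASCII));
                Thread.sleep(30_000); // idle gap, to reproduce the "after idle time" case
            }
        }
    }
}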
I need to build a UDP server which can handle ~10,000 requests/sec. I started with the code below, to test whether a Java socket can handle that number of requests.
I am bombarding the server for one minute with ~9,000 requests per second.
Total number of requests sent from the client : 596951
and in the tcpdump output I see
90640 packets captured
175182 packets received by filter
84542 packets dropped by kernel
UDP server code:
LinkedTransferQueue<DatagramPacket> queue = new LinkedTransferQueue<>();
try (DatagramSocket socket = new DatagramSocket(port)) {
System.out.println("Udp Server started at port :" + port);
while (true) {
byte[] buffer = new byte[1024];
DatagramPacket incomingDatagramPacket = new DatagramPacket(buffer, buffer.length);
try {
socket.receive(incomingDatagramPacket);
queue.add(incomingDatagramPacket); // hand the packet off for processing
} catch (IOException e) {
e.printStackTrace();
continue;
}
}
} catch (SocketException e) {
e.printStackTrace();
}
What is the probable cause of the kernel dropping packets in a program this simple? How can I reduce it? Any other implementation?
From this link, reading the comments, loss of packets for the UDP protocol can always happen, even between the network and the java socket.receive method.
Note: I still have to figure out the anomalies in the tcpdump packet counts, but quite a number of packets are dropped.
The anomaly in the tcpdump output turned out to be lack of buffer space. To know the number of packets received, I am using iptraf-ng, which gives the number of packets received per port :)
Multi-threading
Your code sample does nothing after a packet is received. If that is really the case, multi-threading can't help you.
However, if that's just for testing and your actual application needs to do something with the received packet, you need to push the packet to another thread (or a pool of them) and go immediately back to listening for the next packet; see the sketch after this note.
Basically, you need to minimize the time between two calls to socket.receive().
Note: this is not the only multi-threading model available for this case.
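A minimal sketch of that hand-off, assuming a fixed pool of four workers; the port and the hypothetical handle() method are illustrative:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UdpReceiver {
    // Placeholder for the real packet processing.
    static void handle(DatagramPacket packet) { /* parse, store, reply... */ }

    public static void main(String[] args) throws IOException {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        try (DatagramSocket socket = new DatagramSocket(9876)) { // illustrative port
            while (true) {
                byte[] buffer = new byte[1024];                  // fresh buffer per datagram
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);                          // keep this loop as tight as possible
                workers.submit(() -> handle(packet));            // hand off, then receive again
            }
        }
    }
}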
Buffer size
Increase the buffer size with socket.setReceiveBufferSize, which maps to the SO_RCVBUF socket option:
Increasing SO_RCVBUF may allow the network implementation to buffer multiple packets when packets arrive faster than are being received using receive(DatagramPacket).
However, this is just a hint:
The SO_RCVBUF option is used by the network implementation as a hint to size the underlying network I/O buffers.
You could also, if your setup allows it, go directly to the OS and change the size of the buffer, as sketched below.
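A minimal sketch of both knobs; the port and the 4 MB figure are illustrative:

import java.net.DatagramSocket;
import java.net.SocketException;

public class BufferSizeDemo {
    public static void main(String[] args) throws SocketException {
        DatagramSocket socket = new DatagramSocket(9876); // illustrative port
        socket.setReceiveBufferSize(4 * 1024 * 1024);     // request 4 MB; treated as a hint
        // On Linux the grant is capped by the OS limit, which you can raise with:
        //   sysctl -w net.core.rmem_max=4194304
        // Read the value back to see what you actually got.
        System.out.println("Effective SO_RCVBUF: " + socket.getReceiveBufferSize());
    }
}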
Irrelevant
Note: Read this only if you are not sure that the packet size is less than 1024 bytes.
Your packet buffer size seems low for generic packets, which can lead to bugs: if a datagram is larger than your buffer there will be no error; the overflowing bytes are silently discarded.
EDIT:
Other Multi-threading model
Note: this is an idea; I don't know if it actually works.
3 threads:
Thread A: handles packets
Thread B1: receives packets
Thread B2: receives packets
Init:
Atomic counter set to 0
B1 is receiving, B2 is waiting.
While loop of B1:
while counter > 0, wait
counter += 1
receive the packet
counter -= 1
wake up B2
push the packet to A's queue
Same for B2.
This is the thread diagram (| marks where a packet is received):
B1 [--------|---]              [--------|---]
B2              [--------|---]              [--------|---]
Instead of using threads, can you check the possibility of using the NIO2 APIs here, by using AsynchronousDatagramChannel?
Help link:
https://www.ibm.com/developerworks/library/j-nio2-1/index.html
The actual number of packets that can be handled depends on the CPUs of your server and the target server, the network connection between them, and your actual program. If you need a high-performance networking solution in Java you can use Coral Reactor: http://www.coralblocks.com/index.php/the-simplicity-of-coralreactor/
One disadvantage of UDP is that it does not come with the reliable-delivery guarantees provided by TCP.
The UDP protocol's mcast_recv_buf_size and ucast_recv_buf_size configuration attributes are used to specify the amount of receive buffer space.
It depends upon the OS you are using to run your program. Default maximum UDP buffer sizes for different OSes:
Operating System    Default Max UDP Buffer (in bytes)
Linux               131071
Windows             No known limit
Solaris             262144
FreeBSD             262144
AIX                 1048576
So UDP load handling depends upon the machine as well as the OS configuration.
Hi, let me get straight to the problem. I have a big JSON packet that the server sends to this client once the client is authenticated.
But the packet comes back in a weird way, like it's split or something. Example:
The JSON should be:
Received: {"UserID":1,"PlayerID":2,"EXP":0,"Lvl":1,"Coins":0,"ItemSlots":30}
When it comes through:
Received: {"UserID":1,"PlayerID":2,"EXP":0,"Lvl":1,"Coins":0,
Received: "ItemSlots":30}
Why does it split the packet or something when it comes to the client, and how can I fix this anyway?
Java Receive Code:
private class ClientThread extends Thread {
public void run() {
try {
while (selector.select() > 0) {
for (SelectionKey sk : selector.selectedKeys()) {
selector.selectedKeys().remove(sk);
if (sk.isReadable()) {
SocketChannel sc = (SocketChannel)sk.channel();
ByteBuffer buff = ByteBuffer.allocate(1024);
String content = "";
// read whatever is currently available on the channel
while (sc.read(buff) > 0) {
buff.flip();
content += charset.decode(buff);
buff.clear();
}
System.out.println("Received: " + content);
sk.interestOps(SelectionKey.OP_READ);
}
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
Thanks, have a wonderful day.
Hi, let me get straight to the problem. I have a big JSON packet that the server sends to this client once the client is authenticated.
You mean you have a big JSON message. Packets are things that network protocols use to exchange information.
But the packet comes back in a weird way, like it's split or something. Example:
Unless you're looking at the wire, you aren't looking at packets. You're looking at the bytes you got from your end of the TCP connection.
The JSON should be:
Recieved: {"UserID":1,"PlayerID":2,"EXP":0,"Lvl":1,"Coins":0,"ItemSlots":30}
When it comes through:
Recieved: {"UserID":1,"PlayerID":2,"EXP":0,"Lvl":1,"Coins":0,
Recieved: "ItemSlots":30}
Excellent. You got the same bytes. Now make a JSON parser that figures out where the message ends and parses it.
Why does it split the packet or something when it comes to the client
It splits the message into packets because that's how TCP gets the message to the other side. TCP is not a message protocol and it doesn't know or care what the application considers to be a message -- that's the application's job.
and how can I fix this anyway?
Write a JSON parser that figures out where the messages end. You haven't implemented any code to receive a complete JSON message over TCP yet, so it won't work until you do.
TL;DR: If you want an application-level message protocol, you need to implement one. TCP is not one.
The TCP protocol does not maintain message boundaries. It is not guaranteed that what the server sends in one call is received as-is by the client in one read, and vice versa.
If the server sends 1000 bytes of data, the client application can receive them across multiple recv calls or in a single recv. TCP does not guarantee any particular behaviour. A "split" can happen; it is up to the application to handle the data coming in multiple chunks and coalesce it into one unit of application data for further processing. One sees this particularly with large data sizes.
It looks like you've got a non-blocking socket channel, meaning that the while (sc.read(buff) > 0) { loop terminates because sc.read(buff) returns 0 once only a portion of the sent data has arrived.
Why does it split the packet or something when it comes to the client
Most likely the data is being split into two or more packets.
and how can I fix this anyway?
Keep filling your buffer until the socket is closed by the server (read returns -1 rather than 0 in that case). You need to maintain a separate buffer per channel. If the server doesn't close its end after sending the data, you'll need to delineate the messages in some other way; you could prefix the JSON blob with a size header, for instance, as sketched below.
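To illustrate the size-header idea, a minimal sketch over plain blocking streams (the same framing works with NIO, just with more bookkeeping; the method names are illustrative):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class LengthPrefixedFraming {
    // Sender: write a 4-byte length, then the JSON bytes.
    static void sendMessage(OutputStream out, String json) throws IOException {
        byte[] body = json.getBytes(StandardCharsets.UTF_8);
        DataOutputStream dout = new DataOutputStream(out);
        dout.writeInt(body.length);
        dout.write(body);
        dout.flush();
    }

    // Receiver: read exactly the advertised number of bytes before parsing.
    static String readMessage(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int length = din.readInt();
        byte[] body = new byte[length];
        din.readFully(body); // blocks until the whole message has arrived
        return new String(body, StandardCharsets.UTF_8);
    }
}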
I have a server application which receives requests and forwards them over a Unix domain socket. This works perfectly under reasonable usage, but when I run load tests with a few thousand requests I get a Broken Pipe error.
I am using Java 7 with junixsocket to send the requests. I have lots of concurrent requests, but a thread pool of 20 workers does the writing to the Unix domain socket, so there is no issue of too many concurrent open connections.
For each request I open, send, and close the connection to the Unix domain socket.
What could cause a Broken Pipe on Unix domain sockets?
UPDATE:
Putting a code sample if required:
byte[] mydata = new byte[1024];
//fill the data with bytes ...
AFUNIXSocketAddress socketAddress = new AFUNIXSocketAddress(new File("/tmp/my.sock"));
Socket socket = AFUNIXSocket.connectTo(socketAddress);
OutputStream out = new BufferedOutputStream(socket.getOutputStream());
InputStream in = new BufferedInputStream(socket.getInputStream());
out.write(mydata);
out.flush(); //The Broken Pipe occurs here, but only after a few thousand times
//read the response back...
out.close();
in.close();
socket.close();
I have a thread pool of 20 workers, and they do the above concurrently (so up to 20 concurrent connections to the same Unix domain socket), each one opening, sending and closing. This works fine for a load test with a burst of 10,000 requests, but when I add a few thousand more I suddenly get this error, so I am wondering whether it's coming from some OS limit.
Keep in mind that this is a Unix Domain Socket, not a network TCP socket.
'Broken pipe' means you have written to a connection that had already been closed by the other end. It is detected somewhat asynchronously due to buffering. It basically means you have an error in your application protocol.
From the Linux Programmer's Manual (similar language is also in the socket man page on Mac):
The communications protocols which implement a SOCK_STREAM ensure that data is not lost or duplicated. If a piece of data for which the peer protocol has buffer space cannot be successfully transmitted within a reasonable length of time, then the connection is considered to be dead. When SO_KEEPALIVE is enabled on the socket the protocol checks in a protocol-specific manner if the other end is still alive. A SIGPIPE signal is raised if a process sends or receives on a broken stream; this causes naive processes, which do not handle the signal, to exit.
In other words, if data gets stuck in a stream socket for too long, you'll end up with a SIGPIPE (surfaced in Java as an IOException with the message "Broken pipe"). It's plausible that you would end up with this if the receiving end can't keep up with your load test.
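If that is the case, one pragmatic mitigation is to retry a failed request after a short backoff; a sketch, where sendRequest() is a hypothetical stand-in for the open-write-read-close cycle shown above:

import java.io.IOException;

public class RetryingSender {
    // Hypothetical stand-in for the open-write-read-close cycle from the question.
    static byte[] sendRequest(byte[] payload) throws IOException {
        throw new UnsupportedOperationException("replace with the junixsocket calls");
    }

    // Retry when the peer dropped the connection, backing off a little each time.
    static byte[] sendWithRetry(byte[] payload, int maxAttempts)
            throws IOException, InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                return sendRequest(payload);
            } catch (IOException e) {        // e.g. "Broken pipe"
                if (attempt >= maxAttempts) throw e;
                Thread.sleep(50L * attempt); // give the peer time to drain its backlog
            }
        }
    }
}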