Deployment environment:
I have created a TCP server using Java on Windows 10. My TCP client program is written in VC++ and runs on Windows 7 (I don't have any control over this part of the code; it is a black box to me).
My TCP server code is like this:
Socket s = ss.accept();
s.setReceiveBufferSize(2000);
s.setSendBufferSize(2000);
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
Following is the TCP connection handler snippet:
InputStream incomingPacketBuffer = this.clientSocket.getInputStream();
OutputStream outgoingPacketBuffer = this.clientSocket.getOutputStream();
int bufferLen = 0;
byte inBuffer[] = new byte[this.clientSocket.getReceiveBufferSize()];
byte outBuffer[] = new byte[this.clientSocket.getSendBufferSize()];
while (this.clientSocket.isConnected())
{
    bufferLen = incomingPacketBuffer.read(inBuffer);
    if (bufferLen > 0)
    {
        outBuffer = (byte[]) this.packetHandlerModule.invoke(this.packetHandler, Arrays.copyOf(inBuffer, bufferLen));
    }
    if (outBuffer != null)
    {
        if (this.clientSocket.isConnected())
        {
            outgoingPacketBuffer.write(outBuffer);
            outgoingPacketBuffer.flush();
        }
    }
}
this.clientSocket.close();
The communication is packet-based and the protocol/parsing is handled by packetHandler.
Two more variants I've tried:
I have tried to close the socket as soon as a reply is sent back to the client. That is, after receiving one packet of data, I reply to the client and close the connection.
I used inputStream.available() before calling the read method.
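Roughly, the available()-based variant looked like this (a simplified sketch of the idea, reusing the fields from the handler above; the 10 ms back-off is arbitrary):
while (this.clientSocket.isConnected()) {
    if (incomingPacketBuffer.available() > 0) {
        bufferLen = incomingPacketBuffer.read(inBuffer);
        // ... same packetHandler invocation and reply writing as in the loop above ...
    } else {
        Thread.sleep(10); // back off briefly when no data is reported (InterruptedException handled by the caller)
    }
}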
The problem I face:
Most of the time the TCP server replies to incoming packets within a second. If the server receives a packet after some idle time, it doesn't reply to that packet. Sometimes the reply is not transmitted even when active communication is going on. Secondly, the isConnected function returns true even after the client socket has closed the connection.
Debugging attempts:
I used teraterm to send packets and checked it; the behavior is the same. As long as I send packets one after another, I don't have an issue. Once one packet fails to get a reply, every packet sent after that also gets no reply from the server.
When I press Ctrl+C in the server console, all the packets sent from teraterm are processed by the TCP server and replies are sent back. After this the server works properly for some time.
I checked the packet flow with Wireshark. When the replies are sent back normally, the reply is sent along with the ACK of the client request (SYN, SYN+ACK, ACK, PSH, PSH+ACK, FIN, FIN+ACK, ACK). When the reply gets stalled (maybe not the right term; it is stuck in inputStream.available or inputStream.read), only an ACK packet is sent by the server (SYN, SYN+ACK, ACK, PSH, ACK).
I checked many forums and other threads on Stack Exchange and learned about Nagle's algorithm, and that the application must take care of message framing in TCP (TCP may deliver 10+10 bytes as 8+12 or 15+5 or in any such manner). The server code takes care of the framing, and setKeepAlive is set to true (there is no problem when a packet is sent from the server).
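For illustration only, the kind of framing I mean is conceptually a length-prefixed read; this is a simplified, hypothetical sketch and not the actual protocol header used here:
// Hypothetical length-prefixed framing over a TCP stream (2-byte big-endian length header assumed).
DataInputStream din = new DataInputStream(this.clientSocket.getInputStream());
int length = din.readUnsignedShort(); // blocks until both header bytes have arrived
byte[] payload = new byte[length];
din.readFully(payload);               // blocks until the complete message has arrived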
Problem in short: "At times, the TCP read call gets blocked for a long duration even when there are incoming packets. When Ctrl+C is pressed, they get processed."
PS: I just started posting questions on Stack Exchange, so kindly let me know if there are any issues with the way I've formulated the query.
PPS: Sorry for such a long post.
UPDATE
The comment from EJB helped me to identify the peer disconnect.
I made another setup with Ubuntu 16.04 as the operating system for the server. It has been 3 days; the Windows system had the issue occasionally, while the Ubuntu 16.04 system never stalled.
Some things to consider:
the TCP buffer sizes are usually at least 8 KB and I don't think you can shrink them to 2000 bytes, or if you can, I don't think it's a good idea.
the size of the byte[] doesn't really matter much beyond about 2 KB; you may as well pick a value.
you don't need to create the buffers more than once.
So in short I would try:
Socket s = ss.accept();
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
and
try {
    InputStream in = this.clientSocket.getInputStream();
    OutputStream out = this.clientSocket.getOutputStream();
    int bufferLen = 0;
    byte[] buffer = new byte[2048];
    while ((bufferLen = in.read(buffer)) > 0) {
        out.write(buffer, 0, bufferLen); // not buffered so no need to flush
    }
} finally {
    this.clientSocket.close();
}
At times, TCP read call is getting blocked for a long duration even when there is incoming packets.
I would write a test Java client to verify that this is not due to behaviour in Java.
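For example, a minimal test client along these lines (host, port and payload are placeholders):
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class TestClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9000)) { // placeholder host and port
            socket.setTcpNoDelay(true);
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();
            out.write("hello".getBytes());                    // placeholder payload
            out.flush();
            byte[] reply = new byte[2048];
            int n = in.read(reply);                           // block until the server replies
            System.out.println("read " + n + " bytes");
        }
    }
}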
Related
I need to build a UDP server which can handle ~10,000 requests/sec. I started with the code below to test whether a Java socket can handle that number of requests.
I am bombarding the server for a minute with ~9000 requests,
Total number of requests sent from the client : 596951
and in the tcp dump I see
90640 packets captured
175182 packets received by filter
84542 packets dropped by kernel
UDP Server code :
try (DatagramSocket socket = new DatagramSocket(port)) {
    System.out.println("Udp Server started at port :" + port);
    while (true) {
        byte[] buffer = new byte[1024];
        DatagramPacket incomingDatagramPacket = new DatagramPacket(buffer, buffer.length);
        try {
            socket.receive(incomingDatagramPacket);
            LinkedTransferQueue.add(incomingDatagramPacket);
        } catch (IOException e) {
            e.printStackTrace();
            continue;
        }
    }
} catch (SocketException e) {
    e.printStackTrace();
}
What is the probable cause of the kernel dropping packets in a program this simple?
How can I reduce it? Is there any other implementation I should use?
From this link, reading the comments, loss of packets with the UDP protocol can always happen, even between the network and the Java socket.receive method.
Note: I still have to figure out the anomalies in the tcpdump capture, but quite a number of packets are being dropped.
The anomaly in the tcpdump is the lack of buffer space. In order to know the number of packets received, I am using iptraf-ng, which gives the number of packets received per port :)
Multi-threading
Your code sample does nothing after a packet is received. If that is the case, multi-threading can't help you.
However, if that's just for testing and your actual application needs to do something with the received packet, you need to push the packet to another Thread (or a pool of them) and go immediately back to listening for the next packet.
Basically you need to minimize the time between two calls of the socket.receive().
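For example, something along these lines (a sketch assuming Java 8+; the pool size is arbitrary and handle() is a placeholder for your packet processing):
ExecutorService pool = Executors.newFixedThreadPool(4);          // arbitrary pool size
try (DatagramSocket socket = new DatagramSocket(port)) {
    while (true) {
        byte[] buffer = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        socket.receive(packet);                                  // keep this loop as tight as possible
        pool.submit(() -> handle(packet));                       // hand the packet to a worker thread (handle() is a placeholder)
    }
}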
Note: this is not the only multi-threading model available for this case.
Buffer size
Increase the buffer size with socket.setReceiveBufferSize, which maps to the SO_RCVBUF socket option:
Increasing SO_RCVBUF may allow the network implementation to buffer multiple packets when packets arrive faster than are being received using receive(DatagramPacket).
However, this is just a hint:
The SO_RCVBUF option is used by the network implementation as a hint to size the underlying network I/O buffers.
You could also, if your setup allows it, go directly to the OS and change the size of the buffer.
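For example (4 MB is just an illustration; the OS may cap the value, so it is worth reading back what was actually granted):
DatagramSocket socket = new DatagramSocket(port);
socket.setReceiveBufferSize(4 * 1024 * 1024);                             // request 4 MB; only a hint
System.out.println("actual SO_RCVBUF: " + socket.getReceiveBufferSize()); // what the OS really granted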
Irrelevant
Note: Read this only if you are not sure that the packet size is less than 1024 bytes.
Your packet buffer size seems low for generic packets, which can lead to bugs: if a packet is larger than your buffer there will be no error; the overflowing bytes are simply discarded.
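If you are not sure about the maximum packet size, a defensive option is to size the buffer at the largest possible UDP payload, for example:
byte[] buffer = new byte[65507];               // maximum UDP payload over IPv4
DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
socket.receive(packet);
int actualLength = packet.getLength();         // bytes actually received, not buffer.length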
EDIT:
Other Multi-threading model
Note: This is an idea, I don't know if it actually works.
3 Threads:
Thread A: handling packets
Thread B1: receive packets
Thread B2: receive packets
Init:
Atomic counter set to 0
B1 is receiving, B2 is waiting.
While loop of the B1:
while counter > 0 wait
counter += 1
received the packet
counter -= 1
wake up the B2
push the packet to A's queue
Same for B2.
This is the thread diagram (the | marks the point in each cycle where the packet has been received):
B1 [--------|---] [--------|---]
B2 [--------|---] [--------|---]
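A simplified sketch of the idea (two receiver threads sharing one socket and a single handler thread; the explicit counter/wake-up coordination described above is left out, since the kernel already delivers each datagram to exactly one blocked receive call). This assumes Java 8+:
final DatagramSocket socket = new DatagramSocket(port);             // 'port' as in the question
final LinkedTransferQueue<DatagramPacket> queue = new LinkedTransferQueue<>();

Runnable receiver = () -> {                                         // B1 and B2 both run this
    while (true) {
        byte[] buffer = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        try {
            socket.receive(packet);                                 // each datagram goes to exactly one receiver
            queue.add(packet);                                      // push the packet to A's queue
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
};
new Thread(receiver, "B1").start();
new Thread(receiver, "B2").start();

new Thread(() -> {                                                  // A: handle packets
    while (true) {
        try {
            DatagramPacket packet = queue.take();
            // ... process the packet ...
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
}, "A").start();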
Instead of using threads, can you check the possibility of using the NIO2 APIs here, by using AsynchronousDatagramChannel?
Help link:
https://www.ibm.com/developerworks/library/j-nio2-1/index.html
The actual number of packets that can be handled depends on the CPUs of your machine and the target server, the network connection between them, and your actual program. If you need a high-performance solution for networking in Java you can use CoralReactor: http://www.coralblocks.com/index.php/the-simplicity-of-coralreactor/
One disadvantage of UDP is that it does not come with the reliable delivery guarantees provided by TCP.
The UDP protocol's mcast_recv_buf_size and ucast_recv_buf_size configuration attributes are used to specify the size of the receive buffers.
It depends upon the OS you are using to run your program. Default maximum UDP buffer sizes for different operating systems are:
Operating System    Default Max UDP Buffer (in bytes)
Linux               131071
Windows             No known limit
Solaris             262144
FreeBSD             262144
AIX                 1048576
So UDP load handling depends upon machine as well as OS configuration.
I'm implementing an FTP program using UDP in Java (TCP is not an option), but I'm having trouble grasping the basics of how it's supposed to work.
As I understand, it's connectionless, so I should just have one server thread running which processes every request by any client.
Where I'm getting confused is during the actual file transfer. If the server is in the middle of a loop sending datagrams containing bits of a requested file to the client, and is waiting for an ACK from the client, but instead of that receives a completely different request from a different client, how am I supposed to handle that?
I know I could jump out of the loop to handle it, but then if the initial expected packet finally arrives, how can I pick up where I left off?
A UDP server works similarly to a TCP server in many respects. The major difference is that you will not receive an acknowledgement that your packets were received. You still have to know which client you are sending to, so use the DatagramSocket class. This is the Oracle tutorial for UDP: http://docs.oracle.com/javase/tutorial/networking/datagrams/index.html. It has a pretty good example in it. The significant part is getting the address and port of the original client, and returning your packets to that client:
InetAddress address = packet.getAddress();
int port = packet.getPort();
new DatagramPacket(buf, buf.length, address, port);
You could start a new thread on the server side for sending the bits every time a client sends a request. The thread would save the return address and port of the client, and die when the file send was done.
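A rough sketch of that idea (Java 8+ assumed; the actual chunking and ACK handling of the file transfer is omitted):
DatagramSocket serverSocket = new DatagramSocket(2000);             // arbitrary server port
while (true) {
    byte[] buf = new byte[1024];
    DatagramPacket request = new DatagramPacket(buf, buf.length);
    serverSocket.receive(request);
    final InetAddress clientAddress = request.getAddress();         // remember who asked
    final int clientPort = request.getPort();
    new Thread(() -> {
        // send the requested file in chunks to clientAddress:clientPort,
        // waiting for ACKs on this thread's own socket, then let the thread die
    }).start();
}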
So I am sending audio over UDP via multicast.
The sender is sending a raw audio UDP packet every 10 ms. Unfortunately, every now and then it misses a packet, so what I did was try to time the send/receive so that I can work out whether I have missed one.
Here is what I currently have:
prevReceived = System.currentTimeMillis();
socket.receive(recv);
long messageReceived = System.currentTimeMillis();
long dateDiff = messageReceived - prevReceived; // gap between this receive and the previous timestamp
if (dateDiff > 20) {
    // ... missed packet: add the previous packet ...
}
The problem I am having is that sometimes the Java MulticastSocket receive method takes 70 ms to return a message, but when I check with Microsoft Network Monitor the sender is still sending messages every 10 ms.
So I was wondering if there is a way to see whether the MulticastSocket object has any pending packets: socket.count() or something.
Or does the DatagramPacket carry the time it was received from the socket, e.g. something like recv.timestamp()?
So far I have not found anything and cannot work out why it takes 70 ms to process the message when Microsoft Network Monitor sees it arriving every 10 ms.
Apache MINA can count your datagram packets.
http://mina.apache.org/mina/userguide/ch2-basics/sample-udp-server.html
http://mina.apache.org/mina/userguide/user-guide-toc.html
I have a server application which receives requests and forwards them over a Unix domain socket. This works perfectly under reasonable usage, but when I do load tests with a few thousand requests I get a Broken Pipe error.
I am using Java 7 with junixsocket to send the requests. I have lots of concurrent requests, but I have a thread pool of 20 workers which is writing to the unix domain socket, so there is no issue of too many concurrent open connections.
For each request I am opening, sending and closing the connection with the Unix Domain Socket.
What could cause a Broken Pipe on Unix domain sockets?
UPDATE:
Here is a code sample, if required:
byte[] mydata = new byte[1024];
// fill the data with bytes ...
AFUNIXSocketAddress socketAddress = new AFUNIXSocketAddress(new File("/tmp/my.sock"));
Socket socket = AFUNIXSocket.connectTo(socketAddress);
OutputStream out = new BufferedOutputStream(socket.getOutputStream());
InputStream in = new BufferedInputStream(socket.getInputStream());
out.write(mydata);
out.flush(); // the Broken Pipe occurs here, but only after a few thousand times
// read the response back ...
out.close();
in.close();
socket.close();
I have a thread pool of 20 workers, and they are doing the above concurrently (so up to 20 concurrent connections to the same Unix domain socket), with each one opening, sending and closing. This works fine for a load test of a burst of 10,000 requests, but when I add a few thousand more I suddenly get this error, so I am wondering whether it's coming from some OS limit.
Keep in mind that this is a Unix Domain Socket, not a network TCP socket.
'Broken pipe' means you have written to a connection that had already been closed by the other end. It is detected somewhat asynchronously due to buffering. It basically means you have an error in your application protocol.
From the Linux Programmer's Manual (similar language is also in the socket man page on Mac):
The communications protocols which implement a SOCK_STREAM ensure that data is not lost or duplicated. If a piece of data for which the peer protocol has buffer space cannot be successfully transmitted within a reasonable length of time, then the connection is considered to be dead. When SO_KEEPALIVE is enabled on the socket the protocol checks in a protocol-specific manner if the other end is still alive. A SIGPIPE signal is raised if a process sends or receives on a broken stream; this causes naive processes, which do not handle the signal, to exit.
In other words, if data gets stuck in a stream socket for too long, you'll end up with a SIGPIPE. It's reasonable that you would end up with this if you can't keep up with your load test.
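On the writing side this typically surfaces like the sketch below (reusing the variables from the question's snippet; it doesn't prevent the race, it only makes the peer's close visible earlier):
byte[] response = new byte[1024];
try {
    out.write(mydata);
    out.flush();
    int n = in.read(response);          // -1 means the server closed its end cleanly
    if (n == -1) {
        // stop using this connection; the peer is gone
    }
} catch (IOException e) {
    // a write after the peer has closed shows up here, typically as "Broken pipe"
}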
I built a TCP server based on Apache MINA 2.0.4, and have some problems writing back to the client.
We have some TCP clients that can handle only one message at a time, with a buffer size of 256 bytes max. When I send 2+ messages (each < 256 bytes) to the client, they arrive in one or two big blocks that the client can't handle, instead of 2+ separate messages.
I tried to set sessionConfig.setTcpNoDelay(true/false); with no success, as well as sessionConfig.setSendBufferSize( 256 );.
In the message response encoder I also tried to flush the output:
int capacity = 256;
IoBuffer buffer = IoBuffer.allocate(capacity, false);
buffer.setAutoExpand(false);
buffer.setAutoShrink(true);
buffer.putShort(type);
buffer.putShort(length);
buffer.put(gmtpMsg.getMessage().getBytes());
buffer.flip();
out.write(buffer);
out.flush();
And in the thread responsible for sending the messages, I tried to wait for each message to be written:
for (Entry<Long, OutgoingMessage> outgoingMsg : outgoingMsgs.entrySet()) {
    WriteFuture future = session.write(outgoingMsg.getValue());
    future.awaitUninterruptibly();
}
All this fails miserably, and the only working solution is a ridiculous 500 ms sleep between the session writes, which is hardly acceptable.
Anyone see what I am doing wrong?
After reading a bit more on the TCP protocol, and especially https://stackoverflow.com/a/6614586/1280034, it is clear that the problem is on the client side, which does not handle the packets correctly.
Since we can't rebuild the clients, my only solution is to delay each outgoing message by approx. 500 ms. To do so I created an extra queue in charge of writing to the clients, in order to let the server continue its normal job.
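The delayed-write queue looks roughly like this (a sketch assuming Java 8+; pendingReplies and session are placeholders for the per-client queue and the MINA IoSession, and 500 ms is the empirically found delay):
// One queue of pending replies per client session.
final BlockingQueue<Object> pendingReplies = new LinkedBlockingQueue<>();

// A single scheduled task drains the queue, writing at most one message every 500 ms,
// so the rest of the server never has to sleep between writes.
ScheduledExecutorService writer = Executors.newSingleThreadScheduledExecutor();
writer.scheduleWithFixedDelay(() -> {
    Object message = pendingReplies.poll();
    if (message != null) {
        session.write(message);         // 'session' is the MINA IoSession of this client
    }
}, 0, 500, TimeUnit.MILLISECONDS);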