I'm creating a program on my Android phone to send the output of the camera to a server on the same network. Here is my Java code:
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera cam) {
        try {
            socket = new Socket("XXX.XXX.XXX.XXX", 3000);
            out = socket.getOutputStream();
            out.write(data);
            socket.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        camera.addCallbackBuffer(data);
    }
});
The server is a NodeJS server:
time = 0
video_server.on 'connection', (socket) ->
  buffer = []
  socket.on 'data', (data) ->
    buffer.push data
  socket.on 'end', ->
    new_time = (new Date()).getTime()
    fps = Math.round(1000/(new_time - time)*100)/100
    console.log fps
    time = new_time
    stream = fs.createWriteStream 'image.jpg'
    stream.on 'close', ->
      console.log 'Image saved.', fps
    stream.write data for data in buffer
    stream.end()
My terminal is showing about 1.5 fps (5 Mbps). I know very little about network programming, but there should definitely be enough bandwidth: each image is 640x480 pixels at 1.5 bytes per pixel, which at 18 fps is about 66 Mbps. The local network should easily be able to handle this, yet my Android debugger is printing a lot of "Connection refused" messages.
Any help on fixing my bad network practices would be great. (I'll get to image compression in a little bit -- but right now I need to optimize this step).
You've designed the system so that it has to do many times more work than it should have to do. You're requiring a connection to be built up and torn down for each frame transferred. That is not only killing your throughput, but it can also run you out of resources.
With a sane design, all that would be required to transfer a frame is to send and receive the frame data. With your design, for each frame, a TCP connection has to be built up (3 steps), the frame data has to be sent and received, and the TCP connection has to be torn down. Worse, the receiver cannot know it has received all of the frame data until the connection shutdown occurs. So this cannot be hidden in the background.
Design a sane protocol and the problems will go away.
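For instance, a single persistent connection with a 4-byte length prefix per frame lets the receiver find frame boundaries without any teardown. A minimal sketch of the sending side (the frameOut field and openStream() method are illustrative names, not the original code):

private DataOutputStream frameOut; // opened once, e.g. when the preview starts

void openStream() throws IOException {
    Socket socket = new Socket("XXX.XXX.XXX.XXX", 3000);
    frameOut = new DataOutputStream(new BufferedOutputStream(socket.getOutputStream()));
}

public void onPreviewFrame(byte[] data, Camera cam) {
    try {
        frameOut.writeInt(data.length); // 4-byte big-endian frame length
        frameOut.write(data);           // frame payload
        frameOut.flush();
    } catch (IOException e) {
        e.printStackTrace();
    }
    cam.addCallbackBuffer(data);        // hand the buffer back for reuse
}

On the Node side, the 'data' handler would then accumulate bytes and slice complete frames off using the length prefix, instead of waiting for 'end'.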
Is this working at all? I do not see where you are binding to port 3000 on the server.
In any case, if this is a video stream, you should probably be using UDP instead of TCP. With UDP, packets may be dropped, but for a video stream this will probably not be noticeable. UDP communication requires much less overhead than TCP due to the number of messages exchanged: TCP does a lot of acking to make sure each piece of data reaches its destination, while UDP doesn't care and thus sends fewer packets. In my experience, UDP-based code is generally less complex than TCP-based code.
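For illustration only (not the poster's code, and glossing over the fact that a full 640x480 preview frame is larger than a single ~64 KB datagram and would have to be chunked and reassembled), the UDP send side would look like this:

DatagramSocket udp = new DatagramSocket();
InetAddress server = InetAddress.getByName("XXX.XXX.XXX.XXX"); // same placeholder address
// one datagram-sized chunk of the frame; a real sender loops over the whole frame
byte[] chunk = Arrays.copyOfRange(data, 0, Math.min(data.length, 60000));
udp.send(new DatagramPacket(chunk, chunk.length, server, 3000));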
Deployment environment:
I have created a TCP server in Java running on Windows 10. My TCP client program is written in VC++ and runs on Windows 7 (I don't have any control over this part of the code; it is a black box to me).
My TCP server code is like this:
Socket s = ss.accept();
s.setReceiveBufferSize(2000);
s.setSendBufferSize(2000);
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
Following is the TCP connection handler snippet:
InputStream incomingPacketBuffer = this.clientSocket.getInputStream();
OutputStream outgoingPacketBuffer = this.clientSocket.getOutputStream();
int bufferLen = 0;
byte inBuffer[] = new byte[this.clientSocket.getReceiveBufferSize()];
byte outBuffer[] = new byte[this.clientSocket.getSendBufferSize()];
while (this.clientSocket.isConnected())
{
    bufferLen = incomingPacketBuffer.read(inBuffer);
    if (bufferLen > 0)
    {
        outBuffer = (byte[]) this.packetHandlerModule.invoke(this.packetHandler, Arrays.copyOf(inBuffer, bufferLen));
    }
    if (outBuffer != null)
    {
        if (this.clientSocket.isConnected())
        {
            outgoingPacketBuffer.write(outBuffer);
            outgoingPacketBuffer.flush();
        }
    }
}
this.clientSocket.close();
The communication is packet-based, and the protocol/parsing is handled by packetHandler.
Two more variants I've tried:
I have tried to close the socket as and when a reply is sent back to the client. That is, after receiving one packet of data, I reply to the client and close the connection.
I used inputStream.available before using the read method.
The problem I face:
Most of the time the TCP server replies to incoming packets within a second. If the server receives a packet after some idle time, it doesn't reply to it. Sometimes the reply is not transmitted even while active communication is going on. Secondly, the isConnected function returns true even after the client socket has closed the connection.
Debugging attempts:
I used teraterm to send packets and checked it; the behavior is the same. As long as I send packets one after another, there is no issue. But once one packet fails to get a reply, every packet sent after that also gets no reply from the server.
When I press Ctrl+C in the server console, all the packets sent from teraterm are processed by the TCP server and replies are sent back. After this, the server works properly for some duration.
I checked the packet flow with Wireshark. When the replies are sent back normally, each reply is sent along with the ACK of the client request (SYN, SYN+ACK, ACK, PSH, PSH+ACK, FIN, FIN+ACK, ACK). When the reply gets stalled (may not be the right term; it is stuck in inputStream.available or inputStream.read), only an ACK packet is sent by the server (SYN, SYN+ACK, ACK, PSH, ACK).
I checked many forums and other threads on Stack Exchange and learned about Nagle's algorithm, and that the application must take care of packetization in TCP: TCP may deliver 10+10 bytes as 8+12 or 15+5 or in any such manner. The server code takes care of packetization, and setKeepAlive is set to true (there is no problem when a packet is sent from the server).
Problem in short: "At times, the TCP read call is blocked for a long duration even when there are incoming packets. When Ctrl+C is pressed, they get processed."
PS: I just started posting questions on Stack Exchange, so kindly let me know if there are any issues with the way I have formulated the question.
PPS: Sorry for such a long post.
UPDATE
The comment from EJB helped me to identify the peer disconnect.
I made another setup with Ubuntu 16.04 as the server operating system. It has been 3 days: the Windows system had the issue occasionally, while the Ubuntu 16.04 one never stalled.
Some things to consider:
- TCP buffer sizes are usually at least 8 KB, and I don't think you can shrink them to 2000 bytes; even if you can, I don't think it's a good idea.
- The size of the byte[] doesn't really matter much above about 2 KB; you may as well pick a value.
- You don't need to create a buffer more than once.
So in short, I would try:
Socket s = ss.accept();
s.setTcpNoDelay(true);
s.setKeepAlive(true);
new TcpConnectionHandler(s,this.packetHandler);
and
try {
    InputStream in = this.clientSocket.getInputStream();
    OutputStream out = this.clientSocket.getOutputStream();
    int bufferLen = 0;
    byte[] buffer = new byte[2048];
    while ((bufferLen = in.read(buffer)) > 0) {
        out.write(buffer, 0, bufferLen); // not buffered so no need to flush
    }
} finally {
    this.clientSocket.close();
}
At times, the TCP read call is blocked for a long duration even when there are incoming packets.
I would write a test Java client to check that this is not due to behaviour in Java.
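A minimal test client sketch (host, port and payload are placeholders; it assumes the server replies to each packet):

try (Socket s = new Socket("localhost", 4000)) {
    s.setTcpNoDelay(true);
    OutputStream out = s.getOutputStream();
    InputStream in = s.getInputStream();
    out.write("ping".getBytes(StandardCharsets.US_ASCII));
    out.flush();
    byte[] reply = new byte[2048];
    int n = in.read(reply); // blocks until the server replies or closes
    System.out.println("got " + n + " bytes");
}

If a client like this never stalls against your server, the problem is more likely in the VC++ client's sending pattern than on the Java side.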
I need to build a UDP server which can handle ~10,000 requests/sec. I started with the code below, to test whether a Java socket can handle that number of requests.
I am bombarding the server for a minute with ~9000 requests per second,
Total number of requests sent from the client : 596951
and in the tcpdump I see:
90640 packets captured
175182 packets received by filter
84542 packets dropped by kernel
UDP server code:
// assumes a queue field such as:
// LinkedTransferQueue<DatagramPacket> queue = new LinkedTransferQueue<>();
try (DatagramSocket socket = new DatagramSocket(port)) {
    System.out.println("Udp Server started at port :" + port);
    while (true) {
        byte[] buffer = new byte[1024];
        DatagramPacket incomingDatagramPacket = new DatagramPacket(buffer, buffer.length);
        try {
            socket.receive(incomingDatagramPacket);
            queue.add(incomingDatagramPacket);
        } catch (IOException e) {
            e.printStackTrace();
            continue;
        }
    }
} catch (SocketException e) {
    e.printStackTrace();
}
What is the probable cause of the kernel dropping packets in a program this simple?
How can I reduce it? Is there another implementation I should use?
From this link, reading the comments: loss of packets with the UDP protocol can always happen, even between the network and the java socket.receive method.
Note: I still have to figure out the anomalies in the tcpdump packet counts, but quite a number of packets are being dropped.
The anomaly in the tcpdump was the lack of buffer space. To know the number of packets received, I am using iptraf-ng, which gives the number of packets received per port :)
Multi-threading
Your code sample does nothing after a packet is received. If that is really the case, multi-threading can't help you.
However, if that's just for testing and your actual application needs to do something with the received packet, you need to push the packet to another thread (or a pool of them) and go back immediately to listening for the next packet.
Basically, you need to minimize the time between two calls to socket.receive().
Note: this is not the only multi-threading model available for this case.
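A minimal sketch of that hand-off (the queue, the pool and the process() call are illustrative, not from the question):

ExecutorService pool = Executors.newFixedThreadPool(4);
BlockingQueue<DatagramPacket> queue = new LinkedBlockingQueue<>();

// worker: drains the queue and does the actual application work
pool.submit(() -> {
    while (true) {
        DatagramPacket p = queue.take(); // blocks until a packet is available
        process(p);                      // hypothetical handler
    }
});

// receiver: nothing but receive() and an O(1) enqueue between iterations
while (true) {
    byte[] buf = new byte[1024]; // fresh buffer each time, since the packet is handed off
    DatagramPacket p = new DatagramPacket(buf, buf.length);
    socket.receive(p);
    queue.add(p);
}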
Buffer size
Increase the buffer size with socket.setReceiveBufferSize, which maps to the SO_RCVBUF socket option:
Increasing SO_RCVBUF may allow the network implementation to buffer multiple packets when packets arrive faster than are being received using receive(DatagramPacket).
However, this is just a hint:
The SO_RCVBUF option is used by the network implementation as a hint to size the underlying network I/O buffers.
You could also, if your setup allows it, go directly to the OS and change the size of the buffer.
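For example (the 4 MB request is arbitrary, and the OS may silently cap it, e.g. at net.core.rmem_max on Linux):

DatagramSocket socket = new DatagramSocket(port);
socket.setReceiveBufferSize(4 * 1024 * 1024); // request a 4 MB kernel buffer
System.out.println("Effective SO_RCVBUF: " + socket.getReceiveBufferSize());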
Irrelevant
Note: Read this only if you are not sure that the packet size is less than 1024 bytes.
Your packet buffer size seems low for generic packets, which can lead to bugs: if a datagram is larger than your buffer, there will be no error; the overflowing bytes are silently discarded.
EDIT:
Other Multi-threading model
Note: This is an idea, I don't know if it actually works.
3 Threads:
Thread A: handling packets
Thread B1: receive packets
Thread B2: receive packets
Init:
Atomic counter set to 0
B1 is receiving, B2 is waiting.
While loop of B1:
while counter > 0 wait
counter += 1
received the packet
counter -= 1
wake up the B2
push the packet to A's queue
Same for B2.
This is the thread diagram (the | marks the point where the packet is received):
B1 [--------|---] [--------|---]
B2 [--------|---] [--------|---]
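One possible (untested) translation of this idea into Java, using a Semaphore as the counter so that only one thread sits in receive() while the other hands its packet off; all names here are illustrative:

DatagramSocket socket = new DatagramSocket(9876);        // hypothetical port
LinkedTransferQueue<DatagramPacket> queue = new LinkedTransferQueue<>();
Semaphore receivePermit = new Semaphore(1);              // the "counter"

Runnable receiver = () -> {
    try {
        while (true) {
            receivePermit.acquire();     // wait while the other thread is receiving
            byte[] buf = new byte[1024];
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            socket.receive(p);
            receivePermit.release();     // wake up the other receiver
            queue.add(p);                // push the packet to A's queue
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
};
new Thread(receiver, "B1").start();
new Thread(receiver, "B2").start();
// thread A drains 'queue' and handles the packets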
Instead of using threads, you could look into using the NIO2 APIs here, via AsynchronousDatagramChannel.
Helpful link:
https://www.ibm.com/developerworks/library/j-nio2-1/index.html
The actual number of packets that can be handled depends on the CPUs of your server and the target server, the network connection between them, and your actual program. If you need a high-performance networking solution in Java, you can use Coral Reactor: http://www.coralblocks.com/index.php/the-simplicity-of-coralreactor/
One disadvantage of UDP is that it does not come with the reliable delivery guarantees provided by TCP.
The UDP protocol's mcast_recv_buf_size and ucast_recv_buf_size configuration attributes are used to specify the receive buffer sizes.
It depends on the OS you are using to run your program. Default maximum UDP buffer sizes for different OSes:

Operating System    Default Max UDP Buffer (in bytes)
Linux               131071
Windows             No known limit
Solaris             262144
FreeBSD             262144
AIX                 1048576
So UDP load handling depends on the machine as well as on the OS configuration.
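A quick way to check what your particular machine and OS give you by default (a small sketch, not from the table above):

try (DatagramSocket s = new DatagramSocket()) {
    System.out.println("Default SO_RCVBUF: " + s.getReceiveBufferSize() + " bytes");
}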
I am trying to build a bandwidth-testing tool, kind of like iperf but in Java. However, at slightly higher bandwidths (starting at about 30-40 Mb/s) I seem to get more packet loss than expected, and I was hoping someone could point out an optimization or a mistake that is causing me to miss packets.
This is the receiving code, which hands off queues of 2000 entries to another class that gathers metrics; it only passes the relevant information from each packet. It uses NIO.
while (data.isRunning())
{
    if (channel.receive(buf) != null)
    {
        int j = buf.array().length;
        //add the packet's important information to the queue
        packet_info.add(new PacketInfoContainer(buf.getLong(j-12), System.nanoTime(), buf.getInt(j-4)));
        // if we have 2000 packets' worth of information, time to handle it!
        if (packet_info.size() == 2000)
        {
            Runnable r1;
            //if this is running on the client side, do it this way so that we can calculate progress
            if (client_side)
            {
                if (data_con.isUserRequestStop())
                {
                    System.out.println("supposed to quit");
                    data.stopTest();
                    break;
                }
                if (packets_expected > 0)
                {
                    total_packets_received += 1000;
                    setChanged();
                    notifyObservers("update_progress" + Integer.toString((int) (((double) total_packets_received / (double) packets_expected) * 1000)));
                }
                r1 = new PacketHandler(packet_info, results, buffer_size, client);
            }
            //server side, no nonsense
            else
            {
                r1 = new PacketHandler(packet_info, results, buffer_size);
            }
            pool.submit(r1);
            packet_info = new LinkedList<PacketInfoContainer>();
        }
    }
    buf.clear();
}
UDP does not work very well... maybe you can use TCP and check the OS's TCP stats to see retransmissions:
netstat -s
You can use a CharacterGenerator, change the BufferedOutputStream to 64 KB, and remove os.flush() to speed things up and test...
It won't allow me to comment yet, so here I go.
You shouldn't be seeing dropped packets until the wire limit is hit. I suggest isolating the dropped-packets problem and using tools to figure out whether you have a hardware/environment problem before spending lots of time looking at your code.
https://serverfault.com/questions/561107/how-to-find-out-the-reasons-why-the-network-interface-is-dropping-packets
Have you tried running iperf in UDP mode and then checking the dropped-packet statistics? https://iperf.fr/iperf-doc.php
bmon is a neat tool that will show you carrier errors, drops, and FIFO error stats.
I have a server application which receives requests and forwards them on a Unix domain socket. This works perfectly under reasonable usage, but when I do load tests with a few thousand requests I get a Broken Pipe error.
I am using Java 7 with junixsocket to send the requests. I have lots of concurrent requests, but a thread pool of 20 workers does the writing to the Unix domain socket, so there is no issue of too many concurrent open connections.
For each request I am opening, sending and closing the connection with the Unix Domain Socket.
What is the reason that could cause a Broken Pipe on Unix Domain Sockets?
UPDATE:
Putting a code sample if required:
byte[] mydata = new byte[1024];
//fill the data with bytes ...

AFUNIXSocketAddress socketAddress = new AFUNIXSocketAddress(new File("/tmp/my.sock"));
Socket socket = AFUNIXSocket.connectTo(socketAddress);
OutputStream out = new BufferedOutputStream(socket.getOutputStream());
InputStream in = new BufferedInputStream(socket.getInputStream());

out.write(mydata);
out.flush(); //The Broken Pipe occurs here, but only after a few thousand times

//read the response back...

out.close();
in.close();
socket.close();
I have a thread pool of 20 workers, and they do the above concurrently (so up to 20 concurrent connections to the same Unix domain socket), each one opening, sending and closing. This works fine for a load-test burst of 10,000 requests, but when I add a few thousand more I suddenly get this error, so I am wondering whether it comes from some OS limit.
Keep in mind that this is a Unix Domain Socket, not a network TCP socket.
'Broken pipe' means you have written to a connection that had already been closed by the other end. It is detected somewhat asynchronously due to buffering. It basically means you have an error in your application protocol.
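A small sketch of that asynchrony (assuming a hypothetical peer at localhost:12345 that closes the connection immediately after accepting): the first write after the peer closes usually appears to succeed because it only lands in the local socket buffer, and the failure only surfaces on a later write.

try (Socket s = new Socket("localhost", 12345)) { // hypothetical peer that closes at once
    OutputStream out = s.getOutputStream();
    out.write(new byte[1024]); // often "succeeds": the data is only buffered locally
    out.flush();
    Thread.sleep(100);         // give the peer's close/RST time to arrive
    out.write(new byte[1024]); // this one fails: java.net.SocketException: Broken pipe
    out.flush();
} catch (IOException | InterruptedException e) {
    e.printStackTrace();
}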
From the Linux Programmer's Manual (similar language is also in the socket man page on Mac):
The communications protocols which implement a SOCK_STREAM ensure that data is not lost or duplicated. If a piece of data for which the peer protocol has buffer space cannot be successfully transmitted within a reasonable length of time, then the connection is considered to be dead. When SO_KEEPALIVE is enabled on the socket the protocol checks in a protocol-specific manner if the other end is still alive. A SIGPIPE signal is raised if a process sends or receives on a broken stream; this causes naive processes, which do not handle the signal, to exit.
In other words, if data gets stuck in a stream socket for too long, you'll end up with a SIGPIPE. It's reasonable that you would end up with this if you can't keep up with your load test.
I have a Java TCP game server that uses java.net.ServerSocket, and everything runs just fine, but recently my ISP did some kind of upgrade where, if you send two packets very quickly on the same TCP connection, they close it by force.
This is why a lot of my players get disconnected randomly when there is a lot of in-game traffic (when there is a good chance that the server will send 2 packets at the same time to the same person).
Here is an example of what I mean:
If I do something like this, my ISP will close the connection, for no reason, on both the client and the server side:
tcpOut.print("Hello.");
tcpOut.flush();
tcpOut.print("How are you?");
tcpOut.flush();
But it will work just fine if I do something like this:
tcpOut.print("Hello.");
tcpOut.flush();
Thread.sleep(200);
tcpOut.print("How are you?");
tcpOut.flush();
Or this:
tcpOut.print("Hello.");
tcpOut.print("How are you?");
tcpOut.flush();
This only started a couple of weeks ago when they (the ISP) made some changes to the service and the network. Using Wireshark, I noticed that you have to leave at least ~150 ms between two packets on the same TCP connection, or else it gets closed.
1) Do you know what this is called? Does it even have a name? Is it legal?
Now I have to rewrite my game server knowing that I use a method called: send(PrintWriter out, String packetData);
2) Is there any easy way to ask Java to buffer the data before it sends it to clients, or to wait 150 ms before each send, without having to rewrite the whole thing? I did some googling but I can't find anything that deals with this problem. Any tips or information would be really appreciated. By the way, speed optimisation is crucial. Thank you.
If your ISP imposes such quality-of-service policies and you have no way to negotiate them, I propose you enforce those rules on your side too, with TCP/IP stack QoS configuration.
A flush marks your TCP packet as urgent (URG flag) so that it is sent whatever the buffer/TCP window state is. Now you have to tell your operating system, or any network equipment on the line, to either
ignore (or simply reset) the urgent flag when the previous packet was sent within the last 150 ms, buffering if necessary, or
delay the delivery of consecutive urgent packets to honor the 150 ms constraint.
Probably some expensive software for Windows exists to do this. Personally, I think putting a Linux box as a router between your Windows workstations and the modem, with the appropriate QoS settings in iptables and qdisc, will do the trick.
You may create a Writer wrapper implementation that keeps track of the last flush call's timestamp. A quick implementation is to add a wait to honor the 150 ms delay between two consecutive flushes:
public class ControlledFlushWriter extends Writer {

    private long enforcedDelay = 150;
    private long lastFlush = 0;
    private final Writer delegated;

    public ControlledFlushWriter(Writer writer, long flushDelay) {
        this.delegated = writer;
        this.enforcedDelay = flushDelay;
    }

    /* simple delegation for the other abstract methods... */
    public void write(char[] cbuf, int off, int len) throws IOException {
        this.delegated.write(cbuf, off, len);
    }

    public void close() throws IOException {
        this.delegated.close();
    }

    public void flush() throws IOException {
        long now = System.currentTimeMillis();
        if (now < lastFlush + enforcedDelay) {
            try {
                Thread.sleep(lastFlush + enforcedDelay - now);
            } catch (InterruptedException e) {
                // probably prefer to give up flushing
                // instead of risking a connection reset!
                return;
            }
        }
        lastFlush = System.currentTimeMillis();
        this.delegated.flush();
    }
}
It should now be enough to wrap your existing PrintWriter in this ControlledFlushWriter to work around your ISP's QoS without rewriting your whole application.
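For example (names are illustrative; it assumes your send() helper is handed this wrapped writer):

Writer raw = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8);
PrintWriter tcpOut = new PrintWriter(new ControlledFlushWriter(raw, 150));

tcpOut.print("Hello.");
tcpOut.flush();               // goes out immediately
tcpOut.print("How are you?");
tcpOut.flush();               // held back until 150 ms have passed since the last flush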
After all, it sounds reasonable to prevent a connection from flagging every one of its packets as urgent... In such conditions, it is difficult to implement fair QoS link sharing.