I'm using an ordinary serial port on a PC to send and receive data in a Java application. The PC runs Windows XP SP3 with Java 1.6.0. Here is the code:
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.util.concurrent.ArrayBlockingQueue;
// Open the serial port.
CommPortIdentifier portId;
SerialPort serialPort;
portId = CommPortIdentifier.getPortIdentifier("COM1");
serialPort = (SerialPort) portId.open("My serial port", 1000 /* 1 second timeout */);
serialPort.setSerialPortParams(115200, SerialPort.DATABITS_8, SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
// Set up input and output streams which will be used to receive and transmit data on the UART.
InputStream input;
OutputStream output;
input = serialPort.getInputStream();
output = serialPort.getOutputStream();
// Wrap the input and output streams in buffers to improve performance. 1024 is the buffer size in bytes.
input = new BufferedInputStream(input, 1024);
output = new BufferedOutputStream(output, 1024);
// Sync connection.
// Validate connection.
// Start Send- and Receive threads (see below).
// Send a big chunk of data.
To send data I've set up a thread that takes packets from a queue (an ArrayBlockingQueue) and sends them on the UART. Receiving works the same way in reverse. Other parts of the application can simply insert packets into the send queue and then poll the receive queue to get the reply.
private class SendThread extends Thread {
    public void run() {
        try {
            SendPkt pkt = SendQueue.take();
            // Register Time1.
            output.write(pkt.data);
            output.flush();
            // Register Time2.
            // Put the data length and Time2-Time1 into an array.

            // Receive Acknowledge.
            ResponsePkt RspPkt = new ResponsePkt();
            RspPkt.data = receive(); // This function calls "input.read" and checks for errors.
            ReceiveQueue.put(RspPkt);
        } catch (IOException e) { ... }
          catch (InterruptedException e) { ... } // take()/put() can be interrupted
    }
}
Each send packet is at most 256 bytes, which should take 256 * 8 bits / 115200 bits/s ≈ 17.8 ms to transfer.
I put the Time2-Time1 measurements, i.e. the send times, in an array and check them later. It turns out that a transfer of 256 bytes sometimes takes 15 ms, which seems good since it's close to the theoretical minimum (though I'm not sure why it's faster in practice than in theory). However, the problem is that a transfer of 256 bytes sometimes takes 32 ms, i.e. twice as long as needed. What could be causing this?
/Henrik
A (Windows) PC is not a real-time machine. That means that whenever your application has to access a hardware layer, it may be delayed. You have no control over this, and there is no fixed amount of time between entering your function and exiting your function, due to how the system (kernel) works.
Most Linux machines behave the same way. There are other tasks (applications) running in the background which consume processing power, so your application might be moved around a bit in the process queue before the real data is sent.
Even in the sending process there might be delays between each byte being sent. This is all handled by the kernel/hardware layer, and your software can't change that.
If you do need real-time execution, then you'll have to look for a real-time operating system.
This line sums it up pretty nicely:
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter.
With an RTOS this jitter is known/defined, whereas with a normal OS it is unknown/undefined.
Do you measure the time with System.nanoTime()?
The Windows clock resolution used by System.currentTimeMillis() is around 15 ms by default, so perhaps each transfer really takes about 20 ms, but some get spread over two clock ticks instead of one.
See System.currentTimeMillis vs System.nanoTime for more info.
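For example, a rough sketch of timing the same send with both clocks (reusing the output stream and pkt variable from the question's SendThread; the printout format is just for illustration):
// Time the same write with both clocks. System.nanoTime() is a monotonic,
// high-resolution timer; System.currentTimeMillis() advances in coarse ticks
// (~15 ms by default on Windows), so short intervals snap to 0, 15 or 31 ms.
long wallStart = System.currentTimeMillis();
long nanoStart = System.nanoTime();

output.write(pkt.data);
output.flush();

long wallMs = System.currentTimeMillis() - wallStart;
long nanoMs = (System.nanoTime() - nanoStart) / 1000000L;
System.out.println("currentTimeMillis: " + wallMs + " ms, nanoTime: " + nanoMs + " ms");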
Related
I need to build a UDP server that can handle ~10,000 requests/sec. I started with the code below to test whether a Java socket can handle that number of requests.
I bombarded the server with ~9000 requests per second for one minute,
Total number of requests sent from the client : 596951
and in the tcpdump I see:
90640 packets captured
175182 packets received by filter
84542 packets dropped by kernel
UDP server code:
// "LinkedTransferQueue" below is presumably a queue instance (e.g. LinkedTransferQueue<DatagramPacket>) declared elsewhere.
try (DatagramSocket socket = new DatagramSocket(port)) {
    System.out.println("Udp Server started at port :" + port);
    while (true) {
        byte[] buffer = new byte[1024];
        DatagramPacket incomingDatagramPacket = new DatagramPacket(buffer, buffer.length);
        try {
            socket.receive(incomingDatagramPacket);
            LinkedTransferQueue.add(incomingDatagramPacket);
        } catch (IOException e) {
            e.printStackTrace();
            continue;
        }
    }
} catch (SocketException e) {
    e.printStackTrace();
}
What is the probable cause of the kernel dropping packets in a program this simple? How can I reduce it? Is there any other implementation I should try?
From this link, reading the comments: loss of packets with the UDP protocol can always happen, even between the network and the Java socket.receive method.
Note: I still have to figure out the anomalies in the tcpdump packet counts, but quite a number of packets are being dropped.
The anomalies in the tcpdump turned out to be due to lack of buffer space. To count the number of packets received, I am now using iptraf-ng, which gives the number of packets received per port :)
Multi-threading
Your code sample does nothing after a packet is received. If that is really the case, multi-threading can't help you.
However, if that's just for testing and your actual application needs to do something with the received packet, you need to push the packet to another thread (or a pool of them) and go immediately back to listening for the next packet.
Basically you need to minimize the time between two calls to socket.receive().
Note: this is not the only multi-threading model available for this case.
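For illustration, here is a minimal sketch of that model, assuming Java 8+: one thread does nothing but receive and hand packets off to a queue, and a small worker pool does the processing. The port, pool size and handlePacket() are placeholders, not anything from the question:
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.concurrent.*;

// Sketch only: queue type, pool size and handlePacket() are placeholders.
public class UdpReceiver {
    public static void main(String[] args) throws IOException {
        int port = 9876; // assumed port for the example
        BlockingQueue<DatagramPacket> queue = new LinkedTransferQueue<>();
        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            workers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        handlePacket(queue.take()); // processing happens off the receive thread
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        try (DatagramSocket socket = new DatagramSocket(port)) {
            while (true) {
                byte[] buffer = new byte[1024];
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet); // keep the gap between receive() calls minimal
                queue.offer(packet);    // hand off immediately and loop back
            }
        }
    }

    private static void handlePacket(DatagramPacket packet) {
        // placeholder for the real processing
    }
}
The key point is that the receive loop never does anything heavier than an offer() between two receive() calls.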
Buffer size
Increase the buffer size with socket.setReceiveBufferSize, which maps to the SO_RCVBUF socket option:
Increasing SO_RCVBUF may allow the network implementation to buffer multiple packets when packets arrive faster than are being received using receive(DatagramPacket).
However, this is just a hint:
The SO_RCVBUF option is used by the network implementation as a hint to size the underlying network I/O buffers.
You could also, if your setup allows it, go directly to the OS and change the size of the buffer.
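As a sketch, the Java side of that might look like the following; the 4 MB value is arbitrary, and it is worth reading the effective size back because the OS may cap the request:
// Ask for a larger kernel receive buffer before entering the receive loop.
DatagramSocket socket = new DatagramSocket(port);
socket.setReceiveBufferSize(4 * 1024 * 1024);   // only a hint to the OS
System.out.println("Effective SO_RCVBUF: " + socket.getReceiveBufferSize());
On Linux, for instance, the granted size is limited by the net.core.rmem_max sysctl, so raising the buffer usually means raising that limit as well.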
Irrelevant
Note: Read this only if you are not sure that the packet size is less than 1024 bytes.
Your packet buffer size seems low for generic packets, which can lead to bugs: if a packet is larger than your buffer there will be no error; the overflowing bytes are simply discarded.
EDIT:
Other Multi-threading model
Note: This is an idea, I don't know if it actually works.
3 Threads:
Thread A: handling packets
Thread B1: receive packets
Thread B2: receive packets
Init:
Atomic counter set to 0
B1 is receiving, B2 is waiting.
B1's loop:
while counter > 0: wait
counter += 1
receive the packet
counter -= 1
wake up B2
push the packet to A's queue
Same for B2.
This is the thread diagram (the | marks the point where a packet has been received):
B1 [--------|---] [--------|---]
B2 [--------|---] [--------|---]
Instead of using threads, can you check the possibility of using the NIO.2 APIs here, e.g. AsynchronousDatagramChannel?
Help link:
https://www.ibm.com/developerworks/library/j-nio2-1/index.html
The actual number of packets that can be handled depends on the CPU of your machine and of the target server, the network connection between them, and your actual program. If you need a high-performance networking solution in Java you can use CoralReactor: http://www.coralblocks.com/index.php/the-simplicity-of-coralreactor/
One disadvantage of UDP is that it does not come with the reliable delivery guarantees provided by TCP.
The UDP protocol's mcast_recv_buf_size and ucast_recv_buf_size configuration attributes are used to specify the size of the receive buffers.
It depends on the OS you are using to run your program. The default maximum UDP buffer sizes for different operating systems are:
Operating System    Default Max UDP Buffer (in bytes)
Linux               131071
Windows             No known limit
Solaris             262144
FreeBSD             262144
AIX                 1048576
So UDP load handling depends on the machine as well as the OS configuration.
The program is supposed to send a byte from my Android app to a WiFi access point; this byte is then interpreted by a hardware device.
I can send a byte to the client and it receives the byte correctly (along with a few extra bytes, I don't know why, maybe because of the protocol). The hardware filters out the protocol bytes and catches only the correct one.
Here is how I send it (the byte is created previously in another method, but it is correct):
public static void sendByte(Byte data) throws IOException {
    DataOutputStream output;
    Socket client;
    client = new Socket("1.2.3.4", 2000);
    output = new DataOutputStream(client.getOutputStream());
    output.write(data);
    output.close();
    client.close();
    Log.w("INFO", "Data sent");
}
When I send the byte, the hardware changes the color of a light, and that works correctly.
I also added these lines:
StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
StrictMode.setThreadPolicy(policy);
Up to here there is no problem.
Then I also want to receive bytes from that hardware. Imagine that someone changes the color of that light: I want to know about it. So I created a receiving method:
public static String readByte() throws IOException {
    InputStream input;
    DataInputStream iData;
    String data = null;
    try {
        byte[] bytes = {0, 0, 0, 0, 0, 0, 0, 0};
        ServerSocket server = new ServerSocket(2000); // a ServerSocket is needed here: a plain Socket has no accept()
        Socket client = server.accept();
        input = client.getInputStream();
        iData = new DataInputStream(input);
        iData.skip(0); // no-op as written; skip(7) would skip the protocol bytes mentioned below
        iData.read(bytes, 0, 8);
        data = bytesToHex(bytes); // A simple method that converts the bytes to hex; this method is correct
        Log.w("READ", "" + data);
        input.close();
        client.close();
        server.close();
    } catch (IOException e) {
        Log.w("ERROR", "No es pot conectar"); // "Cannot connect"
    }
    return data;
}
Here I create a server that accepts a connection from the client to get the data (I don't know if creating a server is even necessary). The problem is that I always receive the same 7 bytes; I used skip(7) to skip those protocol bytes, but after that I don't receive any more bytes.
I know that the hardware sends bytes through the WiFi network I'm connected to, but I can't catch them.
_________TO SUM UP_________
I think the problem is that I can't catch the bytes because the hardware simply sends them and my Android app doesn't store them anywhere. I would like to read the bytes right when the hardware sends them, or something like that. I've searched everywhere for methods and object attributes and I can't find a solution to this :(
Thanks for your attention.
I also wanted to know whether a ServerSocket is strictly necessary or not.
When your Java program sends data to the hardware, the Java program is the client and the hardware is the server.
When the hardware sends data to your Java program, the hardware is the client and your Java program is the server.
So, in your Java program, you need two threads: a client thread (with a Socket to send bytes) and a server thread (with a ServerSocket, let's say on port 8999) to receive bytes.
For the hardware to send you the bytes, it needs the following details.
1. The IP address or host name of the device on which your Java program runs.
2. The port on which the ServerSocket is listening (8999 in our case).
If your hardware manual says it can send data to you, then I am sure you can configure these details somewhere. (I am not sure what your hardware is, so look at its manual for how to configure them.)
Sometimes the device can send the response on the same Socket you opened. (Again, I don't know how this is done in your hardware; refer to its manual.) In that case, keep your socket open and read from it in a separate thread.
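As an illustration only, the receiving side could be sketched like this (port 8999 and the 8-byte message length are assumptions taken from the discussion above, bytesToHex is the method from the question, and error handling is kept minimal):
// Server thread: waits for the hardware to connect back and push its bytes.
Thread receiver = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            ServerSocket server = new ServerSocket(8999); // port the hardware is configured to send to
            while (!Thread.currentThread().isInterrupted()) {
                Socket client = server.accept();          // blocks until the hardware connects
                DataInputStream in = new DataInputStream(client.getInputStream());
                byte[] bytes = new byte[8];
                in.readFully(bytes);                      // waits until all 8 bytes have arrived
                Log.w("READ", bytesToHex(bytes));
                client.close();
            }
            server.close();
        } catch (IOException e) {
            Log.w("ERROR", "Could not receive", e);
        }
    }
});
receiver.start();
The sending code from the question can stay as it is; it simply runs on its own (client) thread.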
I've been searching for an answer to my problem, but none of the solutions so far have helped me solve it. I'm working on an app that communicates with another device that works as a server. The app sends queries to the server and receives appropriate responses to dynamically create fragments.
In the first implementation the app sent the query and then waited for the answer in a single thread. But that solution wasn't satisfactory, since the app did not receive any feedback from the server. The server admin said he was receiving the queries; however, he hinted that the device was sending the answer back very quickly and that the app probably wasn't listening yet by the time the answer arrived.
So what I am trying to achieve is to create separate threads: one for listening and one for sending the query. The listening thread would start before anything is sent to the server, to ensure the app does not miss the server's response.
Implementing this hasn't been successful so far. I've tried writing and running separate Runnable classes and AsyncTasks, but the listener never received an answer, and at some points one of the threads didn't even execute. Here is the code for the AsyncTask listener:
@Override
protected String doInBackground(String... params) {
    int bufferLength = 28;
    String masterIP = "192.168.1.100";
    try {
        Log.i("TCPQuery", "Listening for ReActor answers ...");
        Socket tcpSocket = new Socket();
        SocketAddress socketAddress = new InetSocketAddress(masterIP, 50001);
        try {
            tcpSocket.connect(socketAddress);
            Log.i("TCPQuery", "Is socket connected: " + tcpSocket.isConnected());
        } catch (IOException e) {
            e.printStackTrace();
        }
        while (true) {
            Log.i("TCPQuery", "Listening ...");
            try {
                Log.i("TCPQuery", "Waiting for ReActor response ...");
                byte[] buffer = new byte[bufferLength];
                tcpSocket.getInputStream().read(buffer);
                Log.i("TCPQuery", "Received message " + Arrays.toString(buffer) + " from ReActor.");
            } catch (Exception e) {
                e.printStackTrace();
                Log.e("TCPQuery", "An error occurred receiving the message.");
            }
        }
    } catch (Exception e) {
        Log.e("TCP", "Error", e);
    }
    return "";
}
And this is how the tasks are called:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
    listener.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, "");
    sender.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, "");
}
else {
    listener.execute();
    sender.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
}
How exactly would you approach this problem? If this code is not sufficient I would be glad to post more.
This is because Android's AsyncTasks are executed serially on a single background thread by default, no matter how many you create. So if you really want two threads running at the same time, I suggest you use the standard java.util.concurrent tools rather than AsyncTask. As explained in the documentation:
AsyncTask is designed to be a helper class around Thread and Handler
and does not constitute a generic threading framework. AsyncTasks
should ideally be used for short operations (a few seconds at the
most.) If you need to keep threads running for long periods of time,
it is highly recommended you use the various APIs provided by the
java.util.concurrent package such as Executor, ThreadPoolExecutor and
FutureTask.
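As a sketch of that suggestion (listenForResponses() and sendQuery() are placeholders for the bodies of the existing listener and sender tasks), two plain threads from an Executor can run the listener and the sender at the same time:
// Two real threads: submit the listener first, then fire the query.
ExecutorService pool = Executors.newFixedThreadPool(2);

pool.submit(new Runnable() {
    @Override
    public void run() {
        listenForResponses();   // placeholder: the listener's current doInBackground() body
    }
});

pool.submit(new Runnable() {
    @Override
    public void run() {
        sendQuery();            // placeholder: the sender's current doInBackground() body
    }
});

pool.shutdown();                // accept no new tasks; the two submitted ones keep running
Submission order alone does not guarantee the listener's socket is connected before the query goes out, so in practice you may also want to signal readiness, for example with a CountDownLatch.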
Look, this is a TCP connection, so you don't need to worry about data loss. It is a port-to-port connection and it does not send end of stream (-1) while the connection is open. What you do have to care about is the read logic, because you cannot assume a single read receives the whole message. The TCP read method is a blocking call: if no data is available it blocks, and otherwise it may return fewer bytes than you asked for. And since you are using an Android device, the amount of data available at any moment can vary with your device's network. So you have two options:
1) Make your buffer size dynamic. First check how much data is available using is.available() and size your buffer from that. If the available size is zero, sleep for a while and then check again.
2) Set a timeout on the input stream (Socket.setSoTimeout). This really works: the read returns the data that is available, and if no data arrives within the timeout period a timeout exception is thrown.
Try changing your code accordingly.
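For option 2, a rough sketch using the tcpSocket and bufferLength from the question (the 5-second timeout is arbitrary, and checked exceptions are assumed to be handled by the surrounding try block):
tcpSocket.setSoTimeout(5000);                     // read() now fails with SocketTimeoutException after 5 s
InputStream in = tcpSocket.getInputStream();
byte[] buffer = new byte[bufferLength];
int total = 0;
try {
    while (total < bufferLength) {
        int n = in.read(buffer, total, bufferLength - total);
        if (n == -1) break;                       // peer closed the connection
        total += n;                               // accumulate until the full 28-byte message is in
    }
    Log.i("TCPQuery", "Received " + total + " bytes: " + Arrays.toString(buffer));
} catch (SocketTimeoutException e) {
    Log.e("TCPQuery", "Timed out after receiving " + total + " bytes");
}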
I'm creating a program on my Android phone to send the output of the camera to a server on the same network. Here is my Java code:
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera cam) {
        try {
            socket = new Socket("XXX.XXX.XXX.XXX", 3000);
            out = socket.getOutputStream();
            out.write(data);
            socket.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        camera.addCallbackBuffer(data);
    }
});
The server is a NodeJS server:
time = 0
video_server.on 'connection', (socket) ->
  buffer = []
  socket.on 'data', (data) ->
    buffer.push data
  socket.on 'end', ->
    new_time = (new Date()).getTime()
    fps = Math.round(1000/(new_time - time)*100)/100
    console.log fps
    time = new_time
    stream = fs.createWriteStream 'image.jpg'
    stream.on 'close', ->
      console.log 'Image saved.', fps
    stream.write data for data in buffer
    stream.end()
My terminal is showing about 1.5 fps (5 Mbps). I know very little about network programming, but I do know there should definitely be enough bandwidth. Each image is 640x480x1.5 at 18 fps, which is about 63 Mbps. The local network should easily be able to handle this, but my debugger in Android is giving me a lot of "Connection refused" messages.
Any help on fixing my bad network practices would be great. (I'll get to image compression in a little bit -- but right now I need to optimize this step).
You've designed the system so that it has to do many times more work than it should have to do. You're requiring a connection to be built up and torn down for each frame transferred. That is not only killing your throughput, but it can also run you out of resources.
With a sane design, all that would be required to transfer a frame is to send and receive the frame data. With your design, for each frame, a TCP connection has to be built up (3 steps), the frame data has to be sent and received, and the TCP connection has to be torn down. Worse, the receiver cannot know it has received all of the frame data until the connection shutdown occurs. So this cannot be hidden in the background.
Design a sane protocol and the problems will go away.
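For example, one sane approach is to open the connection once and prefix every frame with its length, so the receiver knows exactly where each frame ends without needing a connection teardown. A rough sketch of the Android side (the field names and the openStream() helper are made up for illustration):
// Open once (e.g. when the preview starts) and reuse for every frame.
private Socket socket;
private DataOutputStream out;

private void openStream() throws IOException {
    socket = new Socket("XXX.XXX.XXX.XXX", 3000);
    out = new DataOutputStream(new BufferedOutputStream(socket.getOutputStream()));
}

public void onPreviewFrame(byte[] data, Camera cam) {
    try {
        out.writeInt(data.length);   // 4-byte length prefix marks the frame boundary
        out.write(data);             // frame payload
        out.flush();
    } catch (IOException e) {
        e.printStackTrace();
    }
    camera.addCallbackBuffer(data);
}
The server then reads the 4-byte length first and collects exactly that many bytes per frame, instead of waiting for an 'end' event.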
Is this working at all? I do not see where you are binding to port 3000 on the server.
In any case, if this is a video stream, you should probably be using UDP instead of TCP. With UDP, packets may be dropped, but for a video stream this will probably not be noticeable. UDP communication requires much less overhead than TCP due to the number of messages exchanged: TCP does a lot of "acking" to make sure each piece of data reaches its destination, while UDP doesn't care and thus sends fewer packets. In my experience, UDP-based code is generally less complex than TCP-based code.
_ryan
I'm investigating a performance problem with Jetty 6.1.26. Jetty appears to use Transfer-Encoding: chunked, and depending on the buffer size used, this can be very slow when transferring locally.
I've created a small Jetty test application with a single servlet that demonstrates the issue.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.nio.SelectChannelConnector;
import org.mortbay.jetty.servlet.Context;
public class TestServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        final int bufferSize = 65536;
        resp.setBufferSize(bufferSize);
        OutputStream outStream = resp.getOutputStream();
        FileInputStream stream = null;
        try {
            stream = new FileInputStream(new File("test.data"));
            int bytesRead;
            byte[] buffer = new byte[bufferSize];
            while( (bytesRead = stream.read(buffer, 0, bufferSize)) > 0 ) {
                outStream.write(buffer, 0, bytesRead);
                outStream.flush();
            }
        } finally {
            if( stream != null )
                stream.close();
            outStream.close();
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server();
        SelectChannelConnector ret = new SelectChannelConnector();
        ret.setLowResourceMaxIdleTime(10000);
        ret.setAcceptQueueSize(128);
        ret.setResolveNames(false);
        ret.setUseDirectBuffers(false);
        ret.setHost("0.0.0.0");
        ret.setPort(8080);
        server.addConnector(ret);
        Context context = new Context();
        context.setDisplayName("WebAppsContext");
        context.setContextPath("/");
        server.addHandler(context);
        context.addServlet(TestServlet.class, "/test");
        server.start();
    }
}
In my experiment, I'm using a 128MB test file that the servlet returns to the client, which connects using localhost. Downloading this data using a simple test client written in Java (using URLConnection) takes 3.8 seconds, which is very slow (yes, it's 33MB/s, which doesn't sound slow, except that this is purely local and the input file was cached; it should be much faster).
Now here's where it gets strange. If I download the data with wget, which is an HTTP/1.0 client and therefore doesn't support chunked transfer encoding, it only takes 0.1 seconds. That's a much better figure.
Now when I change bufferSize to 4096, the Java client takes 0.3 seconds.
If I remove the call to resp.setBufferSize entirely (which appears to use a 24KB chunk size), the Java client now takes 7.1 seconds, and wget is suddenly equally slow!
Please note I'm not in any way an expert with Jetty. I stumbled across this problem while diagnosing a performance problem in Hadoop 0.20.203.0 with reduce task shuffling, which transfers files using Jetty in a manner much like the reduced sample code, with a 64KB buffer size.
The problem reproduces both on our Linux (Debian) servers and on my Windows machine, and with both Java 1.6 and 1.7, so it appears to depend solely on Jetty.
Does anyone have any idea what could be causing this, and if there's something I can do about it?
I believe I have found the answer myself, by looking through the Jetty source code. It's actually a complex interplay of the response buffer size, the size of the buffer passed to outStream.write, and whether or not outStream.flush is called (in some situations). The issue is with the way Jetty uses its internal response buffer, and how the data you write to the output is copied to that buffer, and when and how that buffer is flushed.
If the size of the buffer used with outStream.write is equal to the response buffer (I think a multiple also works), or less and outStream.flush is used, then performance is fine. Each write call is then flushed straight to the output, which is fine. However, when the write buffer is larger and not a multiple of the response buffer, this seems to cause some weirdness in how the flushes are handled, causing extra flushes, leading to bad performance.
In the case of chunked transfer encoding, there's an extra kink in the cable. For all but the first chunk, Jetty reserves 12 bytes of the response buffer to contain the chunk size. This means that in my original example with a 64KB write and response buffer, the actual amount of data that fit in the response buffer was only 65524 bytes, so again, parts of the write buffer were spilling into multiple flushes. Looking at a captured network trace of this scenario, I see that the first chunk is 64KB, but all subsequent chunks are 65524 bytes. In this case, outStream.flush makes no difference.
When using a 4KB buffer I was seeing fast speeds only when outStream.flush was called. It turns out that resp.setBufferSize will only increase the buffer size, and since the default size is 24KB, resp.setBufferSize(4096) is a no-op. However, I was now writing 4KB pieces of data, which fit in the 24KB buffer even with the reserved 12 bytes, and were then flushed as 4KB chunks by the outStream.flush call. However, when the call to flush is removed, the buffer is allowed to fill up, and because 24 is a multiple of 4, the 12 reserved bytes again cause part of a write to spill over into the next chunk.
In conclusion
It seems that to get good performance with Jetty, you must either:
Call setContentLength (so no chunked transfer encoding is used) and write with a buffer that's the same size as the response buffer (see the sketch further below), or
When using chunked transfer encoding, use a write buffer that's at least 12 bytes smaller than the response buffer size, and call flush after each write.
Note that the performance of the "slow" scenario is still such that you'll likely only see the difference on the local host or very fast (1Gbps or more) network connection.
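For the first option, a sketch of how the test servlet from the question could be adjusted (this assumes the file fits in the int taken by setContentLength, i.e. is smaller than 2 GB, which holds for the 128MB test file):
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    final int bufferSize = 65536;
    resp.setBufferSize(bufferSize);

    File file = new File("test.data");
    // Announce the size up front so Jetty can skip chunked transfer encoding.
    resp.setContentLength((int) file.length());

    OutputStream outStream = resp.getOutputStream();
    FileInputStream stream = new FileInputStream(file);
    try {
        byte[] buffer = new byte[bufferSize];   // same size as the response buffer
        int bytesRead;
        while ((bytesRead = stream.read(buffer, 0, bufferSize)) > 0) {
            outStream.write(buffer, 0, bytesRead);
        }
    } finally {
        stream.close();
        outStream.close();
    }
}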
I guess I should file issue reports against Hadoop and/or Jetty for this.
Yes, Jetty will default to Transfer-Encoding: chunked if the size of the response cannot be determined.
If you know what the size of the response is going to be, then in this case you need to call resp.setContentLength(135*1000*1000*1000); instead of
resp.setBufferSize();
actually, setting resp.setBufferSize is immaterial.
Before opening the OutputStream, that is before this line:
OutputStream outStream = resp.getOutputStream();
you need to call
resp.setContentLength(135*1000*1000*1000);
(i.e. place that call above the line shown).
Give it a spin and see if that works.
Those are my guesses from theory.
This is pure speculation, but I'm guessing this is some sort of Garbage Collector issue. Does the performance of the Java client improve when you run the JVM with more heap like...
java -Xmx128m
I don't recall the JVM switch to turn on GC logging, but figure that out and see if GC kicks in just as you are getting into your doGet.
My 2 cents.