I'm trying to write a proxy application for Android.
I created a ServerSocket that listens on localhost on a given port.
When a browser requests a site, I open a new thread for the accepted Socket and read its InputStream.
The problem: the read() call is too slow. It takes up to one second.
I don't think the browser's output stream is that slow.
public Request readRequest() throws IOException {
    int length;
    byte[] buffer = new byte[8192];
    long startTime = System.nanoTime(); // measurement starts
    // read() returns -1 at end of stream, not 0
    while ((length = in.read(buffer)) != -1) {
        Log.d("PERFORMANCE", "read() needs " + (System.nanoTime() - startTime) / 1000000 + " ms for: " + length + " bytes"); // measurement ends
        Request request = Request.parse(buffer, length);
        if (request != null) {
            // the bytes read so far contain a complete Request
            return request;
        }
        // request is incomplete -> read more
        startTime = System.nanoTime();
    }
    return null;
}
I thought it might be a scheduling problem, so I already tried increasing the priority of the current thread. It slightly improved the speed.
Is there another way to decrease the idle time or latency?
What about NDK/JNI?
First, a note:
int length;
should IMO be:
long length = 0L;
A solution to your problem could be to increase the buffer size in byte[] buffer = new byte[8192];.
Also, Request request = Request.parse(buffer, length); could be what is slowing everything down.
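A further thought: since read() blocks until the browser sends more data, the delay may simply be the loop calling read() again after the request is already complete. For a request without a body, the blank line ending the headers marks the end of the request, so one option is to stop reading at that point instead of blocking on an idle socket. A rough sketch (HeaderReader and readHeaders are illustrative names, not a real API):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch: consume the stream only up to the CRLFCRLF that
// ends the HTTP request headers, so we never block waiting for bytes
// the browser is not going to send.
public class HeaderReader {

    public static byte[] readHeaders(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int state = 0; // how many bytes of \r\n\r\n have been matched so far
        int b;
        while ((b = in.read()) != -1) {
            out.write(b);
            if (b == '\r') {
                state = (state == 2) ? 3 : 1;
            } else if (b == '\n') {
                if (state == 3) {
                    return out.toByteArray(); // blank line: end of headers
                }
                state = (state == 1) ? 2 : 0;
            } else {
                state = 0;
            }
        }
        return out.toByteArray(); // connection closed before headers ended
    }
}
```

Reading one byte at a time is only reasonable if the stream is wrapped in a BufferedInputStream; otherwise each read() is a system call.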
Related
I have a client that sends chunked data, and my server is expected to read that data. On the server I am using Tomcat 7.0.42 and expect this data to be loaded via an existing servlet.
I searched Google for examples that read chunked data, but unfortunately I haven't stumbled upon any.
I found a few references to the ChunkedInputStream provided by Apache HttpClient and the ChunkedInputFilter provided by Tomcat, but I couldn't find any decent examples of how best to use them.
If any of you have experience with reading/parsing chunked data, please share some pointers.
Java version used: 1.7.0_45
In my existing servlet code I have been handling simple POST requests via NIO. Now, if a client has set the transfer encoding to chunked, I need to handle that specifically, so I have forking code in place, something like the following:
inputStream = httpServletRequest.getInputStream();
if ("chunked".equals(getRequestHeader(httpServletRequest, "Transfer-Encoding"))) {
    // Need to process chunked data
} else {
    // normal request data
    if (inputStream != null) {
        int contentLength = httpServletRequest.getContentLength();
        if (contentLength <= 0) {
            return new byte[0];
        }
        ReadableByteChannel channel = Channels.newChannel(inputStream);
        byte[] postData = new byte[contentLength];
        ByteBuffer buf = ByteBuffer.allocateDirect(contentLength);
        int numRead = 0;
        int counter = 0;
        while (numRead >= 0) {
            buf.rewind();
            numRead = channel.read(buf);
            buf.rewind();
            for (int i = 0; i < numRead; i++) {
                postData[counter++] = buf.get();
            }
        }
        return postData;
    }
}
So, as you can see, the normal-request case relies on the Content-Length being available, while for chunked encoding it is not present; hence an alternative process is needed to handle the chunked data.
Thanks,
Vicky
See HTTP/1.1 Chunked Transfer Coding.
Your servlet will be served chunks of variable size. You'll get the size of each chunk in its first line. The protocol is quite simple, so you could implement it yourself.
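To show how simple the framing is, here is a minimal sketch of a decoder: each chunk starts with a hex size on its own line, followed by that many bytes and a CRLF, and a size of 0 ends the body. ChunkedDecoder is a made-up name for illustration, not part of Tomcat or HttpClient, and this skips niceties like limits and strict CRLF validation:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Minimal chunked-transfer-coding decoder sketch.
public class ChunkedDecoder {

    // Reads one CRLF-terminated line as ASCII text.
    private static String readLine(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) {
            if (b == '\n') break;
            if (b != '\r') sb.append((char) b);
        }
        return sb.toString();
    }

    public static byte[] decode(InputStream in) throws IOException {
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        while (true) {
            // Chunk header: hex size, optionally followed by ";extensions".
            String sizeLine = readLine(in);
            int semi = sizeLine.indexOf(';');
            if (semi >= 0) sizeLine = sizeLine.substring(0, semi);
            int chunkSize = Integer.parseInt(sizeLine.trim(), 16);
            if (chunkSize == 0) {
                // Last chunk; optional trailers end with an empty line.
                while (!readLine(in).isEmpty()) { }
                return body.toByteArray();
            }
            // Read exactly chunkSize bytes; read() may return fewer per call.
            byte[] chunk = new byte[chunkSize];
            int off = 0;
            while (off < chunkSize) {
                int n = in.read(chunk, off, chunkSize - off);
                if (n < 0) throw new IOException("Unexpected end of chunk");
                off += n;
            }
            body.write(chunk, 0, chunkSize);
            readLine(in); // consume the CRLF that terminates the chunk data
        }
    }
}
```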
The following NIO-based code worked for me:
ReadableByteChannel channel = Channels.newChannel(chunkedInputStream);
// content length is not known up front, hence an initial size
int bufferLength = 2048;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ByteBuffer byteBuffer = ByteBuffer.allocate(bufferLength);
int numRead = 0;
while (numRead >= 0) {
    // Read bytes from the channel
    numRead = channel.read(byteBuffer);
    if (numRead > 0) {
        // only the first numRead bytes of the backing array are valid
        baos.write(byteBuffer.array(), 0, numRead);
    }
    byteBuffer.clear();
}
return baos.toByteArray();
I'm downloading video files that are larger than the memory space Android apps are given. When they're on the device, MediaPlayer handles them quite nicely, so their overall size isn't the issue.
The problem is that if they exceed the relatively small number of megabytes that a byte[] can hold, I get the dreaded OutOfMemoryError as I download them.
My intended solution is to write the incoming byte stream straight to the SD card. However, I'm using the Apache Commons library, and the way I'm doing it tries to read the entire video before handing it back to me.
My code looks like this:
HttpClient client = new HttpClient();
PostMethod filePost = new PostMethod(URL_PATH);
client.setConnectionTimeout(timeout);
byte[] ret;
try {
    if (nvpArray != null)
        filePost.setRequestBody(nvpArray);
} catch (Exception e) {
    Log.d(TAG, "download failed: " + e.toString());
}
try {
    responseCode = client.executeMethod(filePost);
    Log.d(TAG, "statusCode>>>" + responseCode);
    ret = filePost.getResponseBody();
    ....
I'm curious what another approach would be to get the byte stream one byte at a time and just write it out to disk as it comes.
You should be able to use the getResponseBodyAsStream method of your PostMethod object and stream it to a file. Here's an untested example:
InputStream inputStream = filePost.getResponseBodyAsStream();
FileOutputStream outputStream = new FileOutputStream(destination);
// Per your question the buffer is set to 1 byte, but you should be able to use
// a larger buffer.
byte[] buffer = new byte[1];
int bytesRead;
while ((bytesRead = inputStream.read(buffer)) != -1) {
    outputStream.write(buffer, 0, bytesRead);
}
outputStream.close();
inputStream.close();
Hi, I have created a server socket and read a byte array from the socket via getInputStream(), but the read() call is not exiting after the end of the data is reached. Below is my code.
class imageReciver extends Thread {
    private ServerSocket serverSocket;
    InputStream in;

    public imageReciver(int port) throws IOException {
        serverSocket = new ServerSocket(port);
    }

    public void run() {
        try {
            Socket server = serverSocket.accept();
            in = server.getInputStream();
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            byte[] buffer = new byte[1024];
            while (true) {
                int s = in.read(buffer); // Not exiting from here
                if (s < 0) break;
                baos.write(buffer, 0, s);
            }
            server.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
If I send 2048 bytes from the client, I expect the line in.read(buffer) to return -1 after reading two times, but it waits there for a third read. How can I solve this?
Thanks in advance....
Your server will need to close the connection, basically. If you're trying to send multiple "messages" over the same connection, you'll need some way to indicate the size/end of a message - e.g. length-prefixing or using a message delimiter. Remember that you're using a stream protocol - the abstraction is just that this is a stream of data; it's up to you to break it up as you see fit.
See the "network packets" section in Marc Gravell's IO blog post for more information.
EDIT: Now that we know that you have an expected length, you probably want something like this:
int remainingBytes = expectedBytes;
while (remainingBytes > 0) {
int bytesRead = in.read(buffer, 0, Math.min(buffer.length, remainingBytes));
if (bytesRead < 0) {
throw new IOException("Unexpected end of data");
}
baos.write(buffer, 0, bytesRead);
remainingBytes -= bytesRead;
}
Note that this will also avoid overreading, i.e. if the server starts sending the next bit of data, we won't read into that.
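If you control both ends of the protocol, the length-prefixing mentioned above can be as simple as a 4-byte length before each message; DataInputStream.readFully then does the read-until-complete loop for you. A small sketch under that assumption (Framing and its method names are illustrative):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of length-prefixed framing over a stream: the sender writes a
// 4-byte big-endian length before each message, so the receiver knows
// exactly how many bytes belong to it and never over- or under-reads.
public class Framing {

    public static void writeMessage(OutputStream out, byte[] msg) throws IOException {
        DataOutputStream dos = new DataOutputStream(out);
        dos.writeInt(msg.length); // 4-byte length prefix
        dos.write(msg);
        dos.flush();
    }

    public static byte[] readMessage(InputStream in) throws IOException {
        DataInputStream dis = new DataInputStream(in);
        int len = dis.readInt();
        byte[] msg = new byte[len];
        dis.readFully(msg); // loops internally until len bytes arrive
        return msg;
    }
}
```

In real code you would also sanity-check len against a maximum before allocating.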
If I send 2048 bytes, the line 'in.read(buffer)' should return -1 after reading two times.
You are mistaken on at least two counts here. If you send 2048 bytes, the line 'in.read(buffer)' should execute an indeterminate number of times, to read a total of 2048 bytes, and then block. It should only return -1 when the peer has closed the connection.
Is there any possibility to upload a file (for example, an image) to a site and calculate the transfer rate?
I have some code that downloads an image from a specified URL and calculates the transfer rate, using the java.net.URL class, something like:
long startTime = System.currentTimeMillis(); // start time
System.out.println("Connecting site...\n");
System.out.println("Downloading......");
URL url = new URL("http://....");
url.openConnection();
InputStream reader = url.openStream();
FileOutputStream writer = new FileOutputStream("D:/imagine.jpg");
byte[] buffer = new byte[153600];
int totalBytesRead = 0;
int bytesRead = 0;
while ((bytesRead = reader.read(buffer)) > 0) {
    writer.write(buffer, 0, bytesRead);
    totalBytesRead += bytesRead;
}
long endTime = System.currentTimeMillis(); // end of download
long elapsedTime = (endTime - startTime) / 1000; // from milliseconds to seconds
System.out.println("ElapsedTime is " + elapsedTime + " s");
double kilobytes = totalBytesRead / 1024.0; // file size in KB
System.out.println("File size: " + kilobytes + " KB");
System.out.println("Speed: " + kilobytes / elapsedTime + " KB/s");
writer.close();
reader.close();
I need something easy and useful. Thank you.
Yes you can, but it is not simple.
POSTing a file to a server is not implemented in plain Java's URLConnection; you have to implement the protocol yourself.
Or you can use org.apache.commons.httpclient:
http://www.theserverside.com/news/1365153/HttpClient-and-FileUpload
I recommend the Apache FileUpload library.
You can implement a progress bar too. See this.
Regards
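Whichever HTTP library you use, the rate measurement itself is just counting bytes against elapsed time while you pump the file into the request's output stream. A hedged sketch (TransferRate and copyAndMeasure are made-up names; the OutputStream would be the connection's request body stream in a real upload):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copies a stream while counting bytes and elapsed time, returning the
// average throughput in KB/s. Works the same for uploads and downloads:
// only the direction of the two streams changes.
public class TransferRate {

    public static double copyAndMeasure(InputStream in, OutputStream out) throws IOException {
        long start = System.nanoTime();
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        // Guard against division by zero for very small transfers.
        if (seconds == 0) seconds = 1e-9;
        return (total / 1024.0) / seconds;
    }
}
```

For a live progress bar you would report `total` from inside the loop instead of only at the end.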
I am writing a Java TCP client that talks to a C server.
I have to alternate sends and receives between the two.
Here is my code.
1. The server sends the length of the binary msg (len) to the client (Java).
2. The client sends an "ok" string.
3. The server sends the binary data, and the client allocates a byte array of len bytes to receive it.
4. The client again sends back an "ok".
Step 1 works: I get the len value. However, the client then gets "send blocked" while the server waits to receive data.
Can anybody take a look?
In the try block I have defined:
Socket echoSocket = new Socket("192.168.178.20", 2400);
OutputStream os = echoSocket.getOutputStream();
InputStream ins = echoSocket.getInputStream();
BufferedReader br = new BufferedReader(new InputStreamReader(ins));
String fromPU = null;
if ((fromPU = br.readLine()) != null) {
    System.out.println("Pu returns as=" + fromPU);
    len = Integer.parseInt(fromPU.trim());
    System.out.println("value of len from PU=" + len);

    byte[] str = "Ok\n".getBytes();
    os.write(str, 0, str.length);
    os.flush();

    byte[] buffer = new byte[len];
    int bytes;
    StringBuilder curMsg = new StringBuilder();
    bytes = ins.read(buffer);
    System.out.println("bytes=" + bytes);
    curMsg.append(new String(buffer, 0, bytes));
    System.out.println("ciphertext=" + curMsg);

    os.write(str, 0, str.length);
    os.flush();
}
UPDATED:
Here is my code. At the moment, there is no blocking on receive or send on either side. However, both with BufferedReader and with DataInputStream, I am unable to send the "ok" message: at the server end, I get a large number of bytes instead of the 2 bytes for "ok".
Socket echoSocket = new Socket("192.168.178.20", 2400);
OutputStream os = echoSocket.getOutputStream();
InputStream ins = echoSocket.getInputStream();
BufferedReader br = new BufferedReader(new InputStreamReader(ins));
DataInputStream dis = new DataInputStream(ins);
DataOutputStream dos = new DataOutputStream(os);
if ((fromPU = dis.readLine()) != null) {
    //if ((fromPU = br.readLine()) != null) {
    System.out.println("PU Server returns length as=" + fromPU);
    len = Integer.parseInt(fromPU.trim());

    byte[] str = "Ok".getBytes();
    System.out.println("str.length=" + str.length);
    dos.writeInt(str.length);
    if (str.length > 0) {
        dos.write(str, 0, str.length);
        System.out.println("sent ok");
    }

    byte[] buffer = new byte[len];
    int bytes;
    StringBuilder curMsg = new StringBuilder();
    bytes = ins.read(buffer);
    System.out.println("bytes=" + bytes);
    curMsg.append(new String(buffer, 0, bytes));
    System.out.println("binarytext=" + curMsg);

    dos.writeInt(str.length);
    if (str.length > 0) {
        dos.write(str, 0, str.length);
        System.out.println("sent ok");
    }
}
Using a BufferedReader around a stream and then trying to read binary data from the stream is a bad idea. I wouldn't be surprised if the server has actually sent all the data in one go, and the BufferedReader has read the binary data as well as the line that it's returned.
Are you in control of the protocol? If so, I suggest you change it to send the length of data as binary (e.g. a fixed 4 bytes) so that you don't need to work out how to switch between text and binary (which is basically a pain).
If you can't do that, you'll probably need to just read a byte at a time to start with until you see the byte representing \n, then convert what you've read into text, parse it, and then read the rest as a chunk. That's slightly inefficient (reading a byte at a time instead of reading a buffer at a time) but I'd imagine the amount of data being read at that point is pretty small.
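That byte-at-a-time approach can be sketched as follows; MixedReader and its method names are illustrative, not an existing API. The key point is that no Reader ever touches the stream, so no binary bytes get buffered away:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Read the text header one byte at a time up to '\n', then read exactly
// `len` bytes of binary payload from the same raw stream.
public class MixedReader {

    // Returns the next line without its trailing CR/LF.
    public static String readTextLine(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            if (b != '\r') sb.append((char) b);
        }
        return sb.toString();
    }

    public static byte[] readBinary(InputStream in, int len) throws IOException {
        byte[] data = new byte[len];
        new DataInputStream(in).readFully(data); // loops until len bytes arrive
        return data;
    }
}
```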
Several thoughts:
len = Integer.parseInt(fromPU.trim());
You should check the given size against a maximum that makes some sense. Your server is unlikely to send a two gigabyte message to the client. (Maybe it will, but there might be a better design. :) You don't typically want to allocate however much memory a remote client asks you to allocate. That's a recipe for easy remote denial of service attacks.
BufferedReader br = new BufferedReader(new InputStreamReader(ins));
/* ... */
bytes =ins.read(buffer);
Maybe your BufferedReader has sucked in too much data? (Does the server wait for the Ok before continuing?) Are you sure that you're allowed to read from the underlying InputStreamReader object after attaching a BufferedReader object?
Note that TCP is free to deliver your data in ten byte chunks over the next two weeks :) -- because encapsulation, differing hardware, and so forth makes it very difficult to tell the size of packets that will eventually be used between two peers, most applications that are looking for a specific amount of data will instead populate their buffers using code somewhat like this (stolen from Advanced Programming in the Unix Environment, an excellent book; pity the code is in C and your code is in Java, but the principle is the same):
ssize_t                         /* Read "n" bytes from a descriptor */
readn(int fd, void *ptr, size_t n)
{
    size_t  nleft;
    ssize_t nread;
    char   *p = ptr;            /* can't do arithmetic on a void pointer */

    nleft = n;
    while (nleft > 0) {
        if ((nread = read(fd, p, nleft)) < 0) {
            if (nleft == n)
                return (-1);    /* error, return -1 */
            else
                break;          /* error, return amount read so far */
        } else if (nread == 0) {
            break;              /* EOF */
        }
        nleft -= nread;
        p += nread;
    }
    return (n - nleft);         /* return >= 0 */
}
The point to take away is that filling your buffer might take one, ten, or one hundred calls to read(), and your code must be resilient against slight changes in network capabilities.
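A Java counterpart of that readn() loop, for readers following along in this thread's language (ReadN is an illustrative name; java.io.DataInputStream.readFully does much the same but throws EOFException instead of returning a short count):

```java
import java.io.IOException;
import java.io.InputStream;

// Java equivalent of the C readn() above: keep calling read() until n
// bytes have arrived or the stream ends, returning the count actually read.
public class ReadN {

    public static int readN(InputStream in, byte[] buf, int n) throws IOException {
        int total = 0;
        while (total < n) {
            int r = in.read(buf, total, n - total);
            if (r < 0) break; // EOF: return what we have so far
            total += r;
        }
        return total;
    }
}
```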