TCP transfer in Java is VERY slow

I am writing a program to saturate a link for performance testing in my networking lab. I tried different things: changing the send and receive buffers, creating a file and reading it, and creating a long array and sending it through the socket all at once with OutputStream.write(byte[]).
The array is 1,000,000 bytes long. When I sniff the network traffic, the sniffer shows packets carrying "Data (1460 bytes)", which makes me suppose that I'm not sending byte by byte.
The bandwidth used is about 8% of the 100 Mbps link.
I'm posting only the relevant code; there is some interaction between client and server which I don't think is relevant:
Client:
int car = 0;
do {
    car = is.read();
    //System.out.println(car);
    contador++;
} while (car != 104);
Server:
byte dades[] = new byte[1000000];
FileInputStream fis = null;
try {
    FileOutputStream fos = new FileOutputStream("1MB.txt");
    fos.write(dades);
    fos = null;
    File f = new File("1MB.txt");
    fis = new FileInputStream(f);
    step = 0;
    correcte = true;
    sck = srvSock.accept();
    sck.setSendBufferSize(65535);
    sck.setReceiveBufferSize(65535);
    os = sck.getOutputStream();
    is = sck.getInputStream();
}
...
BufferedInputStream bis = new BufferedInputStream(fis);
bis.read(dades);
for (int i = 0; i < 100; i++) {
    os.write(dades);
}
This shows the latest idea I tried: create a file from a one-million-byte array, then read that file and write it to the socket. Before that, I was sending the byte array directly.
Another thing that makes me believe this is not byte-by-byte sending: on a quad-core computer the client uses 25% CPU and about 8% of the bandwidth, while on an old single-core computer (AMD Athlon) it uses 100% of the CPU and only 4% of the bandwidth. The server is not nearly as CPU intensive.
Any ideas??? I feel a little lost right now...
Thanks!!!

Perhaps it's related to the fact that the client reads data byte by byte, which can force the TCP flow-control algorithm to limit the transmission bandwidth:
int car = 0;
do {
    car = is.read();
    //System.out.println(car);
    contador++;
} while (car != 104);
Try reading the data into an array instead, or use a BufferedInputStream:
byte[] buf = new byte[65536];
int size = 0;
boolean stop = false;
while (!stop && (size = is.read(buf)) != -1) {
    for (int i = 0; i < size; i++) {
        if (buf[i] == 104) {
            stop = true;
            break;
        }
    }
}
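For completeness, here is a minimal sketch of the BufferedInputStream variant, assuming is is the socket's InputStream and 104 is still the end marker. The per-call cost drops because each read() is served from an in-memory buffer rather than hitting the socket directly:
// Wrap the socket stream; reads are served from a 64 kB in-memory
// buffer, so only one system call is made per buffer refill.
BufferedInputStream bin = new BufferedInputStream(is, 65536);
int car;
long contador = 0;
do {
    car = bin.read(); // still byte-at-a-time in the code, but buffered underneath
    contador++;
} while (car != 104 && car != -1); // also stop on end of stream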

Related

Java socket InputStream.read() not behaving as expected

I've read many tutorials and posts about the Java InputStream and reading data. I've established a client and server implementation, but I'm having weird issues where reading a variable-length "payload" from the client is not consistent.
What I'm trying to do is transfer up to 100 kB max in one single logical payload. I have verified that the TCP stack is not sending one massive 100 kB packet from the client. I have played about with different read forms as per previous questions about InputStream reading, but I've nearly torn my hair out trying to get it to dump the correct data.
Let's say, for example, the client is sending a 70 kB payload.
The first observation I've made is that if I step through the code line by line from a breakpoint, it works fine: I get the exact same count in the outbound byte[]. When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
Timing problems?
The second observation is that when the "inbuffer" size is set to 4096, for example, this odd behaviour occurs. Setting the "inbuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
Please understand I don't like the way I've had to get this to work, and I'm not happy with the solution.
What experiences or problems have you had or seen that might help me fix this code to be more reliable and easier to read?
public void listenForResponses() {
    isActive = true;
    try {
        // apparently read() doesn't return -1 on socket-based streams
        // if big stuff comes through, TCP packets are segmented, but the InputStream
        // does something odd and doesn't return the correct raw data.
        // this is a workaround to accept variable-length payloads into one byte[] buffer
        byte[] inBuffer = new byte[1];
        byte[] buffer = null;
        int bytesRead = 0;
        byte[] finalbuffer = new byte[0];
        boolean isMultichunk = false;
        InputStream istrm = currentSession.getInputStream();
        while ((bytesRead = istrm.read(inBuffer)) > -1 && isActive) {
            buffer = new byte[bytesRead];
            buffer = Arrays.copyOfRange(inBuffer, 0, bytesRead);
            int available = istrm.available();
            if (available < 1) {
                if (!isMultichunk) {
                    finalbuffer = buffer;
                } else {
                    finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
                }
                notifyOfResponse(deserializePayload(finalbuffer));
                finalbuffer = new byte[0];
                isMultichunk = false;
            } else {
                if (!isMultichunk) {
                    isMultichunk = true;
                    finalbuffer = new byte[0];
                }
                finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
            }
        }
    } catch (IOException e) {
        Logger.consoleOut("PayloadReadThread: " + e.getMessage());
        currentSession = null;
    }
}
InputStream is working as designed.
if I flow through the code line by line initiated from a break point, it will work fine, i get the exact same count in the outbound byte[].
That's because stepping through the code is slower, so more data arrives between reads, enough to fill your buffer.
When free running the byte[] will be different sizes every time i run the code with practically the same payload.
That's because InputStream.read() is contracted to block until at least one byte has been transferred, or EOS or an exception occurs. See the Javadoc. There's nothing in there about filling the buffer.
second observation is that when the "inbuffer" size is set to 4096 for example this odd behaviour occurs. setting the "inbuffer" size to 1 presents the correct behaviour i.e. i get the correct payload size.
That's the correct behaviour in the case of a 1 byte buffer for exactly the same reason given above. It's not the 'correct behaviour' for any other size.
NB Your copy loop is nonsense. available() has few correct uses, and this isn't one of them.
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
NB (2) read() does indeed return -1 on socket-based streams, but only when the peer has shutdown or closed the connection.
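Since the underlying problem here is variable-length payloads, the usual fix is to frame each message explicitly: send a length prefix, then read exactly that many bytes with DataInputStream.readFully(), which blocks until the whole buffer is filled. A minimal sketch, where the 100 kB sanity cap and the method name are illustrative, not from the original code:
// Reads one length-prefixed message: a 4-byte big-endian length,
// then exactly that many payload bytes.
static byte[] readMessage(InputStream in) throws IOException {
    DataInputStream din = new DataInputStream(in);
    int length = din.readInt();              // the sender must writeInt() first
    if (length < 0 || length > 100 * 1024) { // sanity cap, e.g. 100 kB
        throw new IOException("Bad message length: " + length);
    }
    byte[] payload = new byte[length];
    din.readFully(payload);                  // blocks until all bytes have arrived
    return payload;
}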

Preventing high memory usage when using Sockets with NIO

I'm developing a client-server architecture for file exchange, for my own purposes. Everything works great except memory usage. After I had sent some files, I realized my application's memory management isn't very effective: when I tried to send some videos (about 900 MB), my client's and server's memory usage was about 1.5 GB.
I used NetBeans's Profiler and it said that the problem is byte array.
//Client side
FileInputStream f = new FileInputStream(file);
FileChannel ch = f.getChannel();
ByteBuffer bb = ByteBuffer.allocate(8192 * 32);
int nRead = 0;
while ((nRead = ch.read(bb)) != -1) {
    if (nRead == 0) {
        continue;
    }
    bb.position(0);
    bb.limit(nRead);
    send.writeObject(Arrays.copyOfRange(bb.array(), 0, nRead));
    send.flush();
    bb.clear();
}
f.close();
ch.close();
bb.clear();
send.writeObject(0xBB);
send.flush();
//Server side
FileOutputStream fos = new FileOutputStream(file);
FileChannel fco = fos.getChannel();
ByteBuffer buffer = ByteBuffer.allocate(8192 * 32);
do {
    Object received = download.readObject();
    if (received instanceof byte[]) {
        byte[] bytes = (byte[]) received;
        buffer.put(bytes);
        buffer.flip();
        buffer.position(0);
        buffer.limit(bytes.length);
        fco.write(buffer);
        buffer.clear();
    } else if (received instanceof Integer) {
        Integer tempx = (Integer) received;
        state = (byte) (tempx & (0xFF));
    }
} while (received != (byte) 0xBB);
fco.close();
fos.close();
Is there any way to fix it? I mean, is it possible to free the used memory? Limiting the ByteBuffer doesn't work properly, so I've limited the byte array taken from the buffer instead. I haven't attached the whole code, because working with files is the problem.
[Profiler screenshot of the client's memory usage: http://i.stack.imgur.com/ouTDk.png]
Your buffers are 8192 times 32. If you have memory problems, make them smaller. You don't need them that big for network purposes. It's also a strange way to write 256k.
Don't create pointless copies of byte arrays. ObjectOutputStream.writeUnshared() will do what you need there.
I would strongly suggest getting rid of the serialization and just copying the bytes. The code becomes much simpler and you have fewer copies of the data, especially at the receiving end.
It's not a direct solution - but you should use try-with-resources blocks on all your streams. That will prevent any possible resource leaks that may be making your situation worse.
try (FileOutputStream fos = new FileOutputStream(file)) {
    // Do stuff here; fos is automatically closed when you leave the block
}
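For comparison, here is a minimal sketch of the serialization-free approach suggested above: send the file length once, then stream raw bytes, so neither side ever holds more than one small buffer. The socket and file variables are assumptions, not from the original code:
// Client side: length prefix, then raw bytes through a plain stream.
try (FileInputStream fin = new FileInputStream(file);
     DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
    out.writeLong(file.length());
    byte[] buf = new byte[8192];
    int n;
    while ((n = fin.read(buf)) != -1) {
        out.write(buf, 0, n); // write only the bytes actually read
    }
    out.flush();
}

// Server side: read exactly the advertised number of bytes into the file.
try (DataInputStream in = new DataInputStream(socket.getInputStream());
     FileOutputStream fos = new FileOutputStream(file)) {
    long remaining = in.readLong();
    byte[] buf = new byte[8192];
    while (remaining > 0) {
        int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
        if (n == -1) throw new EOFException("Connection closed mid-transfer");
        fos.write(buf, 0, n);
        remaining -= n;
    }
}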

How can I buffer my Java input/output/file streams properly?

I'm writing an application that needs to send a file over the network. I've only been taught how to use the standard java.net and java.io classes so far (in my first year of college), so I have no experience with java.nio and Netty and all those nice things. I've got a working server/client setup using the Socket and ServerSocket classes along with BufferedInput/OutputStreams and buffered file streams, as follows:
The server:
public class FiletestServer {

    static ServerSocket server;
    static BufferedInputStream in;
    static BufferedOutputStream out;

    public static void main(String[] args) throws Exception {
        server = new ServerSocket(12354);
        System.out.println("Waiting for client...");
        Socket s = server.accept();
        in = new BufferedInputStream(s.getInputStream(), 8192);
        out = new BufferedOutputStream(s.getOutputStream(), 8192);
        File f = new File("test.avi");
        BufferedInputStream fin = new BufferedInputStream(new FileInputStream(f), 8192);
        System.out.println("Sending to client...");
        byte[] b = new byte[8192];
        while (fin.read(b) != -1) {
            out.write(b);
        }
        fin.close();
        out.close();
        in.close();
        s.close();
        server.close();
        System.out.println("done!");
    }
}
And the client:
public class FiletestClient {

    public static void main(String[] args) throws Exception {
        System.out.println("Connecting to server...");
        Socket s;
        if (args.length < 1) {
            s = new Socket("", 12354);
        } else {
            s = new Socket(args[0], 12354);
        }
        System.out.println("Connected.");
        BufferedInputStream in = new BufferedInputStream(s.getInputStream(), 8192);
        BufferedOutputStream out = new BufferedOutputStream(s.getOutputStream(), 8192);
        File f = new File("test.avi");
        System.out.println("Receiving...");
        FileOutputStream fout = new FileOutputStream(f);
        byte[] b = new byte[8192];
        while (in.read(b) != -1) {
            fout.write(b);
        }
        fout.close();
        in.close();
        out.close();
        s.close();
        System.out.println("Done!");
    }
}
At first I was using no buffering and writing each int from in.read(). That got me about 200 kB/s transfer according to my network monitor gadget on Windows 7. I then changed it as above but used 4096-byte buffers and got the same speed, except the file received was usually a couple of kilobytes bigger than the source file, and that is what my problem is. I changed the buffer size to 8192 and I now get about 3.7-4.5 MB/s transfer over wireless to my laptop, which is plenty fast enough for now, but I still have the problem of the file getting slightly bigger (which would cause it to fail an MD5/SHA hash test) when it is received.
So my question is: what is the proper way of buffering to get decent speeds and end up with exactly the same file on the other side? Getting it to go a bit faster would be nice too, but I'm happy with the speed for now. I'm assuming a bigger buffer is better up to a point; I just need to find what that point is.
You are ignoring the size of data actually read.
while (in.read(b) != -1) {
    fout.write(b);
}
will always write 8192 bytes even if only one byte is read. Instead I suggest using
for (int len; (len = in.read(b)) > 0; ) {
    fout.write(b, 0, len);
}
Your buffers are the same size as your byte[] so they are not really doing anything at the moment.
The MTU for most networks is around 1500 bytes, and on slower networks (up to about 1 Gb) you see a performance improvement with buffers up to about 2 KB. 8 KB is fine as well; larger than that is unlikely to help.
If you actually want to make it 'so perfect', you should take a look at the try-with-resources statement and the java.nio package (or any NIO-derived libraries).
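Putting both suggestions together, here is a minimal sketch of the client's receiving loop, honouring the read count and letting try-with-resources handle cleanup (assuming s is the connected Socket from the original code):
try (BufferedInputStream in = new BufferedInputStream(s.getInputStream(), 8192);
     FileOutputStream fout = new FileOutputStream("test.avi")) {
    byte[] b = new byte[8192];
    int len;
    while ((len = in.read(b)) > 0) {
        // Write only the bytes actually read, so the received file
        // matches the source byte for byte.
        fout.write(b, 0, len);
    }
}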

Java InputStream wait for data.

I'm developing a server-client application and I have a problem with waiting for input data on the input stream.
I have a thread dedicated to reading input data. Currently it uses a while loop to spin until data is available. (N.B. the protocol is as follows: send the size of the packet, say N, as an int, then send N bytes.)
public void run() {
    //some initialization
    InputStream inStream = sock.getInputStream();
    byte[] packetData;
    //some more stuff
    while (!interrupted) {
        while (inStream.available() == 0);
        packetData = new byte[inStream.read()];
        while (inStream.available() < packetData.length);
        inStream.read(packetData, 0, packetData.length);
        //send packet for processing in other thread
    }
}
It works, but busy-waiting in a while loop like this is IMO a bad idea. I could use Thread.sleep(x) to prevent the loop from continuously consuming resources, but there surely must be a better way.
Also, I cannot simply rely on InputStream.read to block the thread, as part of the data may be sent by the server with delays. I have tried, but it always resulted in unexpected behaviour.
I'd appreciate any ideas :)
You can use DataInputStream.readFully()
DataInputStream in = new DataInputStream(sock.getInputStream());
//some more stuff
while (!interrupted) {
    // readInt allows lengths of up to 2 GB instead of being limited to 255 bytes.
    byte[] packetData = new byte[in.readInt()];
    in.readFully(packetData);
    //send packet for processing in other thread
}
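Assuming you also control the server, the matching sender side of this int-length protocol is a sketch like:
DataOutputStream out = new DataOutputStream(sock.getOutputStream());
out.writeInt(packetData.length); // 4-byte length prefix, read back by readInt()
out.write(packetData);           // then exactly that many payload bytes
out.flush();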
I prefer to use blocking NIO which supports re-usable buffers.
SocketChannel sc = ...; // left elided in the original; a connected SocketChannel
ByteBuffer bb = ByteBuffer.allocateDirect(1024 * 1024); // off-heap memory.
while (!Thread.currentThread().isInterrupted()) {
    readLength(bb, 4);
    int length = bb.getInt(0);
    if (length > bb.capacity())
        bb = ByteBuffer.allocateDirect(length);
    readLength(bb, length);
    bb.flip();
    // process buffer.
}

static void readLength(ByteBuffer bb, int length) throws EOFException {
    bb.clear();
    bb.limit(length);
    while (bb.remaining() > 0 && sc.read(bb) > 0);
    if (bb.remaining() > 0) throw new EOFException();
}
As UmNyobe said, available() is meant to be used if you don't want to block, since the default behaviour is blocking.
Just use the normal read to read whatever is available, but only hand the packet off for processing in the other thread once you have packetData.length bytes in your buffer, as sketched below.
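Here is a minimal sketch of that idea under the original one-byte-length protocol; the plain read() blocks instead of busy-waiting, and the packet is handed off only once it is complete:
DataInputStream in = new DataInputStream(sock.getInputStream());
int expected = in.readUnsignedByte(); // original protocol: single length byte
byte[] packetData = new byte[expected];
int filled = 0;
while (filled < expected) {
    int n = in.read(packetData, filled, expected - filled); // blocks; no spin loop
    if (n == -1) throw new EOFException("Stream ended mid-packet");
    filled += n;
}
// packetData is now complete; hand it to the processing thread here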

Trying to upload in chunks

I am trying to accomplish a large file upload on a BlackBerry. I am successfully able to upload a file, but only if I read the file and upload it 1 byte at a time. For large files I think this is hurting performance; I want to be able to read and write something more like 128 kB at a time. If I try to initialise my buffer to anything other than 1, then I never get a response back from the server after writing everything.
Any ideas why I can upload using only 1 byte at a time?
z.write(boundaryMessage.toString().getBytes());
DataInputStream fileIn = fc.openDataInputStream();
boolean isCancel = false;
byte[] b = new byte[1];
int num = 0;
int left = buffer;
while (fileIn.read(b) > -1)
{
    num += b.length;
    left = buffer - num * 1;
    Log.info(num + "WRITTEN");
    if (isCancel == true)
    {
        break;
    }
    z.write(b);
}
z.write(endBoundary.toString().getBytes());
It's a bug in BlackBerry OS that appeared in OS 5.0 and persists in OS 6.0. If you try using a multi-byte read before OS 5.0, it will work fine; OS 5.0 and later produce the behaviour you have described.
You can also get around the problem by creating a secure connection, as the bug doesn't manifest itself for secure sockets, only plain sockets.
Most input streams aren't guaranteed to fill a buffer on every read. (DataInputStream has a special method for this, readFully(), which will throw an EOFException if there aren't enough bytes left in the stream to fill the buffer.) And unless the file is a multiple of the buffer length, no stream will fill the buffer on the final read. So, you need to store the number of bytes read and use it during the write:
while (!isCancel)
{
    int n = fileIn.read(b);
    if (n < 0)
        break;
    num += n;
    Log.info(num + "WRITTEN");
    z.write(b, 0, n);
}
Your loop isn't correct. You should take care of the return value from read: it returns how many bytes were actually read, and that isn't always the same as the buffer size.
Edit:
This is how you usually write a loop that does what you want to do:
OutputStream z = null; // Shouldn't be null
InputStream in = null; // Shouldn't be null
byte[] buffer = new byte[1024 * 32];
int len = 0;
while ((len = in.read(buffer)) > -1) {
    z.write(buffer, 0, len);
}
Note that you might want to use buffered streams instead of unbuffered streams.
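For example, with standard java.io buffered streams (rawIn and rawOut stand in for the underlying file and network streams, and the 32 kB size is just an assumption to tune):
InputStream in = new BufferedInputStream(rawIn, 32 * 1024);   // buffers file reads
OutputStream z = new BufferedOutputStream(rawOut, 32 * 1024); // coalesces small writes
byte[] buffer = new byte[32 * 1024];
int len;
while ((len = in.read(buffer)) > -1) {
    z.write(buffer, 0, len);
}
z.flush(); // push out anything still sitting in the buffer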
