Trying to upload in chunks - Java

I am trying to accomplish a large file upload on a BlackBerry. I am successfully able to upload a file, but only if I read the file and upload it 1 byte at a time. For large files I think this is hurting performance. I want to be able to read and write something more like 128 KB at a time. If I try to initialize my buffer to anything other than 1, I never get a response back from the server after writing everything.
Any ideas why I can upload using only 1 byte at a time?
z.write(boundaryMessage.toString().getBytes());
DataInputStream fileIn = fc.openDataInputStream();
boolean isCancel = false;
byte[] b = new byte[1];
int num = 0;
int left = buffer;
while (fileIn.read(b) > -1)
{
    num += b.length;
    left = buffer - num * 1;
    Log.info(num + "WRITTEN");
    if (isCancel == true)
    {
        break;
    }
    z.write(b);
}
z.write(endBoundary.toString().getBytes());

It's a bug in BlackBerry OS that appeared in OS 5.0 and persists in OS 6.0. If you try a multi-byte read before OS 5.0, it will work fine; OS 5.0 and later produce the behavior you have described.
You can also get around the problem by creating a secure connection, as the bug doesn't manifest itself for secure sockets, only plain sockets.
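For reference, a minimal sketch of the secure-connection workaround on Java ME / BlackBerry (the host and port are placeholders, and the multipart body is written exactly as before):

import javax.microedition.io.Connector;
import javax.microedition.io.SecureConnection;
import java.io.OutputStream;

// ssl:// yields a SecureConnection instead of a plain SocketConnection
SecureConnection conn = (SecureConnection) Connector.open("ssl://example.com:443");
OutputStream z = conn.openOutputStream();
// ... write boundaryMessage, the file bytes, and endBoundary as in the question ...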

Most input streams aren't guaranteed to fill a buffer on every read. (DataInputStream has a special method for this, readFully(), which will throw an EOFException if there aren't enough bytes left in the stream to fill the buffer.) And unless the file is a multiple of the buffer length, no stream will fill the buffer on the final read. So, you need to store the number of bytes read and use it during the write:
while (!isCancel)
{
    int n = fileIn.read(b);
    if (n < 0)
        break;
    num += n;
    Log.info(num + "WRITTEN");
    z.write(b, 0, n);
}

Your loop isn't correct. You should take care of the return value from read. It returns how many bytes were actually read, and that isn't always the same as the buffer size.
Edit:
This is how you usually write loops that do what you want to do:
OutputStream z = null; // Shouldn't be null
InputStream in = null; // Shouldn't be null
byte[] buffer = new byte[1024 * 32];
int len = 0;
while ((len = in.read(buffer)) > -1) {
    z.write(buffer, 0, len);
}
Note that you might want to use buffered streams instead of unbuffered streams.
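For example, wrapping the raw streams is a one-line change each; a minimal sketch, reusing fileIn and z from the question above:

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

InputStream in = new BufferedInputStream(fileIn); // fileIn as in the question
OutputStream out = new BufferedOutputStream(z);   // z as in the question
byte[] buffer = new byte[1024 * 32];
int len;
while ((len = in.read(buffer)) > -1) {
    out.write(buffer, 0, len);
}
out.flush(); // flush the buffered output before expecting a server response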

Related

Getting a fixed number of TCP packets in Java

I am acquiring thousands of TCP packets. I currently read them one packet at a time, but I want to read them in whole sequences of 128 packets. At the moment, I use
s = new Socket(ip, port);
byte[] buffer = new byte[some_length];
stream = s.getInputStream();
stream.read(buffer);
Precisely, each ordered sequence of 128 packets corresponds to one image (that will be reconstructed afterwards). By the way, the first byte of each TCP packet corresponds to a number between 1 and 128, so I can use these numbers as landmarks.
Is there a way, each time I get the first byte of a packet set to 1, to read the packets in sequences of 128 without having to code a dedicated loop (one that would call stream.read(buffer) 128 times)?
You state in the comments that every packet is exactly 2048 bytes long. The exact number isn't important; what matters is that the length is fixed.
There are different methods of reading fixed-length packets:
Using InputStream.read in a loop
A call to InputStream.read may not fill the buffer fully; it may read only 1 byte, even if you requested more. To counter this, you need to read in a loop.
public byte[] readImage(InputStream in, int imageLength) throws IOException {
    byte[] out = new byte[imageLength];
    int read;
    for (int i = 0; i < imageLength; i += read) {
        read = in.read(out, i, imageLength - i);
        if (read < 0)
            throw new EOFException();
    }
    return out;
}
In the loop above, we first allocate a byte array of the required size, then call in.read with our byte array, the current index, and the number of bytes still missing. This way, we are sure we never return a half-read packet to our caller.
Using DataInput
Instead of manually reinventing the wheel, you can also use DataInput.readFully to read the byte array fully. This is easy:
byte[] image = new byte[imageLength];
DataInput in = new DataInputStream(inStream);
in.readFully(image);
Here's how I proceed:
DataInputStream dis = new DataInputStream(stream);
byte[] buffer = new byte[len];
// skip ahead until the first packet of an image (marker byte == 1)
while (buffer[0] != 1) {
    dis.readFully(buffer);
}
byte[] tmpBuffer = new byte[len];
byte[] finalBuffer = new byte[nb_line * len];
int count_lines = 0;
while (true) {
    dis.readFully(tmpBuffer);
    // drop the marker byte, keep the payload of each line
    System.arraycopy(tmpBuffer, 1, finalBuffer, (count_lines + 1) * rows, rows);
    count_lines++;
    if (count_lines == 127)
        break;
}

Java socket InputStream.read() not behaving as expected

I've read many tutorials and posts about the Java InputStream and reading data. I've established a client and server implementation, but I'm having weird issues where reading a variable-length "payload" from the client is not consistent.
What I'm trying to do is transfer up to 100 kB max in one single logical payload. Now, I have verified that the TCP stack is not sending one massive 100 kB packet from the client. I have played about with different read forms as per previous questions about InputStream reading, but I've nearly torn my hair out trying to get it to dump the correct data.
Let's say, for example, the client is sending a 70 kB payload.
Now, the first observation is that if I step through the code line by line from a breakpoint, it works fine; I get the exact same count in the outbound byte[]. When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
Timing problems?
The second observation is that this odd behaviour occurs when the "inbuffer" size is set to 4096, for example. Setting the "inbuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
Please understand I don't like the way I've had to get this to work, and I'm not happy with the solution.
What experiences or problems have you had that might help me make this code more reliable and easier to read?
public void listenForResponses() {
    isActive = true;
    try {
        // Apparently read() doesn't return -1 on socket-based streams.
        // If big stuff comes through, TCP packets are segmented, but the InputStream
        // does something odd and doesn't return the correct raw data.
        // This is a workaround to accept variable-length payloads into one byte[] buffer.
        byte[] inBuffer = new byte[1];
        byte[] buffer = null;
        int bytesRead = 0;
        byte[] finalbuffer = new byte[0];
        boolean isMultichunk = false;
        InputStream istrm = currentSession.getInputStream();
        while ((bytesRead = istrm.read(inBuffer)) > -1 && isActive) {
            buffer = new byte[bytesRead];
            buffer = Arrays.copyOfRange(inBuffer, 0, bytesRead);
            int available = istrm.available();
            if (available < 1) {
                if (!isMultichunk) {
                    finalbuffer = buffer;
                }
                else {
                    finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
                }
                notifyOfResponse(deserializePayload(finalbuffer));
                finalbuffer = new byte[0];
                isMultichunk = false;
            }
            else {
                if (!isMultichunk) {
                    isMultichunk = true;
                    finalbuffer = new byte[0];
                }
                finalbuffer = ConcatTools.ByteArrayConcat(finalbuffer, buffer);
            }
        }
    } catch (IOException e) {
        Logger.consoleOut("PayloadReadThread: " + e.getMessage());
        currentSession = null;
    }
}
InputStream is working as designed.
if I step through the code line by line from a breakpoint, it works fine; I get the exact same count in the outbound byte[].
That's because stepping through the code is slower, so more data arrives between reads, enough to fill your buffer.
When free-running, the byte[] will be a different size every time I run the code with practically the same payload.
That's because InputStream.read() is contracted to block until at least one byte has been transferred, or EOS or an exception occurs. See the Javadoc. There's nothing in there about filling the buffer.
The second observation is that this odd behaviour occurs when the "inbuffer" size is set to 4096, for example. Setting the "inbuffer" size to 1 produces the correct behaviour, i.e. I get the correct payload size.
That's the correct behaviour in the case of a 1-byte buffer, for exactly the same reason given above. It's not the 'correct behaviour' for any other size.
NB Your copy loop is nonsense. available() has few correct uses, and this isn't one of them.
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
NB (2) read() does indeed return -1 on socket-based streams, but only when the peer has shutdown or closed the connection.
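If you control both ends, the usual cure is to frame each payload with a length prefix so the reader knows exactly how many bytes belong to it. A minimal sketch of such a protocol (the writeInt/readFully pairing is an assumption about a protocol you would define yourself, not something the code above already does):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class Framing {
    // Sender: prefix every payload with its 4-byte length.
    static void sendPayload(OutputStream out, byte[] payload) throws IOException {
        DataOutputStream dout = new DataOutputStream(out);
        dout.writeInt(payload.length);
        dout.write(payload);
        dout.flush();
    }

    // Receiver: read the length, then block until exactly that many bytes arrive.
    static byte[] receivePayload(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int length = din.readInt();
        byte[] payload = new byte[length];
        din.readFully(payload);
        return payload;
    }
}

This removes any dependence on available() or on how TCP happens to segment the stream.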

Input Stream only returning 1 byte

I am using the Java Comm library to try to accomplish a simple read/write to a serial port. I am able to write to the port successfully and catch the return input from the input stream, but when I read from the input stream I am only able to read 1 byte (when I know there should be 11 returned).
I can write to the port successfully using PuTTY and am receiving the correct return String there. I am pretty new to Java, buffers, and serial I/O, and think maybe I'm missing some obvious syntax or some understanding of how data is returned to the InputStream. Could someone help me? Thanks!
case SerialPortEvent.DATA_AVAILABLE:
    System.out.println("Data available..");
    byte[] readBuffer = new byte[11];
    try {
        System.out.println("We trying here.");
        while (inputStream.available() > 0) {
            int numBytes = inputStream.read(readBuffer, 1, 11);
            System.out.println("Number of bytes read:" + numBytes);
        }
        System.out.println(new String(readBuffer));
    } catch (IOException e) {
        System.out.println(e);
    }
    break;
}
This code returns the following output:
Data available..
We trying here.
Number of bytes read:1
U
As the documentation states
Reads up to len bytes of data from the input stream into an array of bytes. An attempt is made to read as many as len bytes, but a smaller number may be read.
This behavior is perfectly legal. I would also expect that a SerialPortEvent.DATA_AVAILABLE event does not guarantee that all data is available; it's potentially just 1 byte, and you get that event 11 times.
Things you can try:
1) Keep reading until you have all your bytes. E.g., wrap your InputStream in a DataInputStream and use readFully; that's the simplest way around the behavior of the regular read method. It will fail with an EOFException if the InputStream signals end of stream before providing enough bytes.
DataInputStream din = new DataInputStream(in);
byte[] buffer = new byte[11];
din.readFully(buffer);
// either results in an exception or 11 bytes read
2) Read them as they come and append them to some buffer. Once you have all of them, take the content of the buffer as the result.
private StringBuilder readBuffer = new StringBuilder();

public void handleDataAvailable(InputStream in) throws IOException {
    int value;
    // reading just one byte at a time
    while ((value = in.read()) != -1) {
        readBuffer.append((char) value);
    }
}
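With approach 2) you still have to decide when a frame is complete. A small sketch, assuming the fixed 11-byte response from the question (handleMessage is a hypothetical callback, not part of any API):

if (readBuffer.length() >= 11) {
    String message = readBuffer.substring(0, 11);
    readBuffer.delete(0, 11);   // keep any bytes of the next frame
    handleMessage(message);     // hypothetical: process one complete 11-byte frame
}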
Some notes:
inputStream.read(readBuffer, 1, 11)
Indices start at 0 and if you want to read 11 bytes into that buffer you have to specify
inputStream.read(readBuffer, 0, 11)
It would otherwise try to put the 11th byte at the 12th index, which will not work.

IndexOutOfBoundsException when reading and writing standard I/O

I'm new to Java and currently doing some experiments with it. I wrote a little program that reads and writes the standard I/O streams, but I keep getting out-of-range exceptions. Here is my code:
int BLOCKSIZE = 128 * 1024;
InputStream inStream = new BufferedInputStream(System.in);
OutputStream outStream = new BufferedOutputStream(System.out);
byte[] buffer = new byte[BLOCKSIZE];
int bytesRead = 0;
int writePos = 0;
int readPos = 0;
while ((bytesRead = inStream.read(buffer, readPos, BLOCKSIZE)) != -1) {
    outStream.write(buffer, writePos, BLOCKSIZE);
    readPos += bytesRead;
    writePos += BLOCKSIZE;
    buffer = new byte[BLOCKSIZE];
}
Here is the exception thrown:

Exception in thread "main" java.lang.IndexOutOfBoundsException
at java.io.BufferedInputStream.read(BufferedInputStream.java:327)
at JavaPigz.main(JavaPigz.java:73)

Line 73 is the inStream.read(...) statement. Basically, I want to read one 128 KB chunk from stdin, write it to stdout, then go back and read another 128 KB chunk, and so on. The same exception is also thrown for outStream.write().
I did some debugging, and it looks like BufferedInputStream buffers at most a 64 KB chunk at a time. I don't know if this is true.
Edit: I also tried doing
InputStream inStream = new BufferedInputStream(System.in, BLOCKSIZE);
to specify the size of the buffered chunk I want, but it turns out it keeps giving a size of 64 KB no matter what is specified.
You're increasing your readPos (and writePos) in your loop. The subsequent reads start at that offset into your buffer and attempt to write BLOCKSIZE bytes into it ... which won't fit, thus giving you an index out of bounds error.
The way you have that loop written, readPos and writePos should always be 0, especially since you're creating a new buffer every time. That being said ... you really don't want to do that; you want to re-use the buffer. It looks like you're just trying to read from the input stream and write it to the output stream ...
while ((bytesRead = inStream.read(buffer, readPos, BLOCKSIZE)) != -1) {
    outStream.write(buffer, writePos, bytesRead);
}
Your readPos and writePos correspond to the array, not to the stream:
set them to 0 and leave them at 0;
in your write call, set the third parameter to bytesRead instead of BLOCKSIZE.
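Putting both answers together, a minimal self-contained sketch of the corrected program might look like this (class name JavaPigz borrowed from the stack trace):

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class JavaPigz {
    public static void main(String[] args) throws IOException {
        final int BLOCKSIZE = 128 * 1024;
        InputStream inStream = new BufferedInputStream(System.in);
        OutputStream outStream = new BufferedOutputStream(System.out);
        byte[] buffer = new byte[BLOCKSIZE]; // reused across iterations
        int bytesRead;
        // read fills at most BLOCKSIZE bytes starting at offset 0;
        // write exactly the number of bytes actually read
        while ((bytesRead = inStream.read(buffer, 0, BLOCKSIZE)) != -1) {
            outStream.write(buffer, 0, bytesRead);
        }
        outStream.flush(); // push out anything still buffered
    }
}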

TCP transfer in Java is VERY slow

I am writing a program to saturate a link for performance testing in my networking lab. I have tried different things, from changing send and receive buffers, to creating a file and reading it, to creating a long array and sending it through the socket all at once with OutputStream.write(byte[]).
The array is 1,000,000 positions long. When I sniff the network traffic, according to the sniffer, the packets have "Data (1460 bytes)", which makes me suppose that I'm not sending byte by byte.
The bandwidth used is about 8% of the 100 Mbps.
I'm posting just the relevant code, as there is some interaction between client and server which I don't think is relevant:
Client:
int car = 0;
do {
    car = is.read();
    //System.out.println(car);
    contador++;
} while (car != 104);
Server:
byte dades[] = new byte[1000000];
FileInputStream fis = null;
try {
    FileOutputStream fos = new FileOutputStream("1MB.txt");
    fos.write(dades);
    fos = null;
    File f = new File("1MB.txt");
    fis = new FileInputStream(f);
    step = 0;
    correcte = true;
    sck = srvSock.accept();
    sck.setSendBufferSize(65535);
    sck.setReceiveBufferSize(65535);
    os = sck.getOutputStream();
    is = sck.getInputStream();
}
...
BufferedInputStream bis = new BufferedInputStream(fis);
bis.read(dades);
for (int i = 0; i < 100; i++) {
    os.write(dades);
}
In this case I posted my latest idea: create a file from a million-byte array, then read the file and write it to the socket; before this idea I was sending the byte array directly.
Another thing which makes me believe this is not byte-by-byte sending: on a quad-core computer the client uses 25% CPU and around 8% of the bandwidth, while on an old single-core computer (AMD Athlon) it uses 100% of the CPU and just 4% of the bandwidth. The server is not so CPU-intensive.
Any ideas??? I feel a little lost right now...
Thanks!!!
Perhaps it's related to the fact that the client reads data byte by byte, which can force the flow-control algorithm to limit the transmission bandwidth:
int car = 0;
do {
    car = is.read();
    //System.out.println(car);
    contador++;
} while (car != 104);
Try reading the data into an array instead, or use a BufferedInputStream:
byte[] buf = new byte[65536];
int size = 0;
boolean stop = false;
while (!stop && (size = is.read(buf)) != -1) {
    for (int i = 0; i < size; i++) {
        if (buf[i] == 104) {
            stop = true;
            break;
        }
    }
}
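The BufferedInputStream variant keeps the byte-by-byte logic but amortizes the underlying socket reads; a sketch, reusing sck and contador from the question:

import java.io.BufferedInputStream;
import java.io.InputStream;

InputStream is = new BufferedInputStream(sck.getInputStream(), 65536); // sck as in the question
int car = 0;
do {
    car = is.read(); // served from a 64 KB buffer instead of one syscall per byte
    contador++;
} while (car != 104);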
