Getting a fixed number of TCP packets in Java

I am acquiring thousands of TCP packets. I read them one packet at a time, but I want to read them in whole sequences of 128 packets. For the moment, I use
s = new Socket(ip, port);
byte[] buffer = new byte[some_length];
stream = s.getInputStream();
stream.read(buffer);
Precisely, each ordered sequence of 128 packets corresponds to one image (that will be reconstructed afterwards). The first byte of each TCP packet is a number between 1 and 128, so I can use these numbers as markers.
Is there a way, each time the first byte of a packet is 1, to read the packets as a sequence of 128 without having to code a dedicated loop (a loop that would call stream.read(buffer); 128 times)?

You state in the comments that every packet is exactly 2048 bytes long. The exact number isn't important; what matters is that the length is fixed.
There are different methods of reading fixed length packets:
Using InputStream.read in a loop
A call to InputStream.read may not fill the buffer fully; it may read only 1 byte, even if you requested more. To counter this, you need to read in a loop.
public byte[] readImage(InputStream in, int imageLength) throws IOException {
    byte[] out = new byte[imageLength];
    // keep reading until the whole buffer has been filled
    for (int i = 0; i < imageLength; ) {
        int read = in.read(out, i, imageLength - i);
        if (read < 0)
            throw new EOFException();
        i += read;
    }
    return out;
}
In the loop above, we first allocate a byte array of the required size, then we keep calling in.read with our byte array and the current index until the array is full. This way, we are sure we never return a half-read packet to our caller.
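A possible usage, assuming the fixed 2048-byte packet size mentioned in the comments (the call below reads a single packet, so treat it as a sketch):
// read one fixed-size 2048-byte packet with the helper above
byte[] packet = readImage(stream, 2048);
int marker = packet[0] & 0xFF;   // first byte: position of the packet within the image (1..128)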
Using DataInput
Instead of manually reinventing the wheel, you can also use DataInput.readFully to read the byte array fully. This is easy:
byte[] image = new byte[imageLength];
DataInput in = new DataInputStream(stream);
in.readFully(image);

Here's how I proceed:
DataInputStream dis = new DataInputStream(stream);
byte[] buffer = new byte[len];
// skip ahead until we hit the packet whose first byte is 1 (start of an image)
while (buffer[0] != 1) {
    dis.readFully(buffer);
}
int rows = len - 1;                      // payload per packet, after the 1-byte marker
byte[] tmpBuffer = new byte[len];
byte[] finalBuffer = new byte[nb_line * rows];
// copy the payload of the first packet (line 0)
System.arraycopy(buffer, 1, finalBuffer, 0, rows);
int count_lines = 0;
// read the remaining 127 packets of the image
while (true) {
    dis.readFully(tmpBuffer);
    System.arraycopy(tmpBuffer, 1, finalBuffer, (count_lines + 1) * rows, rows);
    count_lines++;
    if (count_lines == 127)
        break;
}
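Since every packet has a fixed length, an alternative (a minimal sketch, assuming the len and nb_line variables above and that the read position is at the start of an image, i.e. the packet whose first byte is 1 has not yet been consumed) is to read the whole 128-packet image with a single readFully and strip the marker bytes afterwards:
// read one whole image (128 fixed-size packets) in a single call
byte[] raw = new byte[128 * len];
dis.readFully(raw);
// drop the 1-byte marker at the start of each packet
byte[] imageBytes = new byte[128 * (len - 1)];
for (int p = 0; p < 128; p++) {
    System.arraycopy(raw, p * len + 1, imageBytes, p * (len - 1), len - 1);
}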

Related

Send int on socket as byte[] then recast to int does not work

I'm trying to serialize objects between an NIO SocketChannel and a blocking IO Socket. Since I can't use Serializable/writeObject on NIO, I thought I would serialize the object into a ByteArrayOutputStream and then send the array length followed by the array.
The sender function is
public void writeObject(Object obj) throws IOException {
    ByteArrayOutputStream serializedObj = new ByteArrayOutputStream();
    ObjectOutputStream writer = new ObjectOutputStream(serializedObj);
    writer.writeUnshared(obj);
    ByteBuffer size = ByteBuffer.allocate(4).putInt(serializedObj.toByteArray().length);
    this.getSocket().write(size);
    this.getSocket().write(ByteBuffer.wrap(serializedObj.toByteArray()));
}
and the receiver is:
public Object readObject() {
    try {
        // read total packet size
        byte[] dimension = new byte[4];
        int byteRead = 0;
        while (byteRead < 4) {
            byteRead += this.getInputStream().read(dimension, byteRead, 4 - byteRead);
        }
        int size = ByteBuffer.wrap(dimension).getInt(); /* (*) */
        System.out.println(size);
        byte[] object = new byte[size];
        while (size > 0) {
            size -= this.getInputStream().read(object);
        }
        InputStream in = new ByteArrayInputStream(object, 0, object.length);
        ObjectInputStream ois = new ObjectInputStream(in);
        Object res = ois.readUnshared();
        ois.close();
        return res;
    } catch (IOException | ClassNotFoundException e) {
        return null;
    }
}
The problem is that size (*) is always equal to -1393754107, while serializedObj.toByteArray().length in my test is 316.
I don't understand why the casting doesn't work properly.
this.getSocket().write(size);
this.getSocket().write(ByteBuffer.wrap(serializedObj.toByteArray()));
If the result of getSocket() is a SocketChannel in non-blocking mode, the problem is here. You aren't checking the result of write(). In non-blocking mode it can write less than the number of bytes remaining in the ByteBuffer; indeed it can write zero bytes.
So you aren't writing all the data you think you're writing, so the other end overruns and reads the next length word as part of the data being written, and reads part of the next data as the next length word, and gets a wrong answer. I'm surprised it didn't barf earlier. In fact it probably did, but your deplorable practice of ignoring IOExceptions masked it. Don't do that. Log them.
So you need to loop until all requested data has been written, and if any write() returns zero you need to select on OP_WRITE until it fires, which adds a considerable complication into your code as you have to return to the select loop while remembering that there is an outstanding ByteBuffer with data remaining to be written. And when you get the OP_WRITE and the writes complete you have to deregister interest in OP_WRITE, as it's only of interest after a write() has returned zero.
NB There is no casting in your code.
The problem was that write() always returned 0. This happened because the buffer wasn't flipped before write().
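For illustration, here is a minimal sketch of the flipped-and-looped write on the sender side (this assumes a blocking-mode SocketChannel; channel and payload are placeholder names, not from the original code):
// sketch: write a length-prefixed payload over a blocking SocketChannel
void writeLengthPrefixed(SocketChannel channel, byte[] payload) throws IOException {
    ByteBuffer size = ByteBuffer.allocate(4).putInt(payload.length);
    size.flip();                                  // switch the buffer from filling to draining
    while (size.hasRemaining()) {                 // a single write() may not send everything
        channel.write(size);
    }
    ByteBuffer data = ByteBuffer.wrap(payload);   // wrap() leaves the buffer ready for reading
    while (data.hasRemaining()) {
        channel.write(data);
    }
}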

How to send a Java byte[] on a TCP socket to a C++ char[] on the server?

I want to send a byte[] array from a Java client to a server that receives the data in C++. The byte array contains characters and integers that have been converted to bytes (it's a WAV header). The server doesn't receive the values correctly. How can I send the byte[] so that the server socket can write it to a char[]? I am using the following code:
Client.java:
//Some example values in byte[]
byte[] bA = new byte[44];
bA[0]='R';
...
bA[4]=(byte)(2048 & 0xff);
...
bA[16] = 16;
....
//Write byte[] on socket
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.write(bA,0,44);
Server.cpp
int k = 0, n = 0;
char buffer[100];
ofstream wav("out.wav", ios::out | ios::binary);
while (k < 44) {   // receive 44 values
    memset(buffer, 0, 100);
    n = recv(sock, buffer, 100, 0);
    k += n;
    buffer[99] = '\0';
    wav.write(buffer, n);
}
One issue I see is that if you receive 100 characters, you're corrupting the data with this line:
buffer[99] = '\0';
If there is a character other than NUL at that position, you've corrupted the data. Since the data is binary, there is no need to null-terminate the buffer. Remove that line from your loop.
Instead, rely on the return value of recv to determine the number of characters to copy to the stream. Which brings up another point: you're not checking whether recv returns an error.

Input Stream only returning 1 byte

I am using the Java Comm library to try to accomplish a simple read/write to a serial port. I am able to successfully write to the port and catch the return input from the input stream, but when I read from the input stream I am only able to read 1 byte (when I know there should be 11 returned).
I can write to the port successfully using PuTTY and receive the correct return String there. I am pretty new to Java, buffers and serial I/O, and think maybe there is some obvious syntax or detail of how data is returned to the InputStream that I am missing. Could someone help me? Thanks!
case SerialPortEvent.DATA_AVAILABLE:
    System.out.println("Data available..");
    byte[] readBuffer = new byte[11];
    try {
        System.out.println("We trying here.");
        while (inputStream.available() > 0) {
            int numBytes = inputStream.read(readBuffer, 1, 11);
            System.out.println("Number of bytes read:" + numBytes);
        }
        System.out.println(new String(readBuffer));
    } catch (IOException e) {
        System.out.println(e);
    }
    break;
}
This code returns the following output:
Data available..
We trying here.
Number of bytes read:1
U
As the documentation states
Reads up to len bytes of data from the input stream into an array of bytes. An attempt is made to read as many as len bytes, but a smaller number may be read.
This behavior is perfectly legal. I would also expect that a SerialPortEvent.DATA_AVAILABLE does not guarantee that all data is available. It's potentially just 1 byte and you get that event 11 times.
Things you can try:
1) Keep reading until you have all your bytes. E.g. wrap your InputStream into a DataInputStream and use readFully, that's the simplest way around the behavior of the regular read method. This might fail if the InputStream does not provide any more bytes and signals end of stream.
DataInputStream din = new DataInputStream(in);
byte[] buffer = new byte[11];
din.readFully(buffer);
// either results in an exception or 11 bytes read
2) Read them as they come and append them to some buffer. Once you have all of them, take the content of the buffer as the result.
private StringBuilder readBuffer = new StringBuilder();

public void handleDataAvailable(InputStream in) throws IOException {
    int value;
    // reading just one byte at a time
    while ((value = in.read()) != -1) {
        readBuffer.append((char) value);
    }
}
Some notes:
inputStream.read(readBuffer, 1, 11)
Indices start at 0 and if you want to read 11 bytes into that buffer you have to specify
inputStream.read(readBuffer, 0, 11)
Otherwise it would try to put the 11th byte past the end of the buffer, which will not work.
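Combining these notes with option 1, a corrected version of the event handler might look like the sketch below (inputStream is the serial port's stream from the question; treat this as an illustration, not tested against the Java Comm library):
case SerialPortEvent.DATA_AVAILABLE:
    try {
        byte[] readBuffer = new byte[11];
        // block until all 11 expected bytes have arrived
        new DataInputStream(inputStream).readFully(readBuffer);
        System.out.println(new String(readBuffer));
    } catch (IOException e) {
        System.out.println(e);
    }
    break;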

IndexOutOfBoundsException when reading from and writing to standard I/O

I'm new to Java and currently doing some experiments with it.
I wrote a little program that reads from and writes to the standard I/O streams, but I keep getting out-of-range exceptions. Here is my code:
int BLOCKSIZE = 128 * 1024;
InputStream inStream = new BufferedInputStream(System.in);
OutputStream outStream = new BufferedOutputStream(System.out);
byte[] buffer = new byte[BLOCKSIZE];
int bytesRead = 0;
int writePos = 0;
int readPos = 0;
while ((bytesRead = inStream.read(buffer, readPos, BLOCKSIZE)) != -1) {
    outStream.write(buffer, writePos, BLOCKSIZE);
    readPos += bytesRead;
    writePos += BLOCKSIZE;
    buffer = new byte[BLOCKSIZE];
}
Here is the exception thrown:
Exception in thread "main" java.lang.IndexOutOfBoundsException
at java.io.BufferedInputStream.read(BufferedInputStream.java:327)
at JavaPigz.main(JavaPigz.java:73)
Line 73 is the inStream.read(...) statement. Basically I want to read one 128 KB chunk from stdin, write it to stdout, then go back and read another 128 KB chunk, and so on. The same exception is also thrown for outStream.write().
I did some debugging, and it looks like BufferedInputStream buffers at most a 64 KB chunk at once. I don't know whether this is true. Thank you.
Edit: I also tried doing
InputStream inStream = new BufferedInputStream(System.in, BLOCKSIZE);
to specify the size of the buffered chunk I want. But it turns out it keeps giving a size of 64 KB no matter what is specified.
You're increasing your readPos (and writePos) in your loop. The subsequent reads start at that offset in your buffer and attempt to read BLOCKSIZE bytes into it ... which won't fit, thus giving you an index out of bounds error.
The way you have that loop written, readPos and writePos should always be 0, especially since you're creating a new buffer every time. That being said ... you really don't want to do that; you want to re-use the buffer. It looks like you're just trying to read from the input stream and write it to the output stream ...
while ((bytesRead = inStream.read(buffer, readPos, BLOCKSIZE)) != -1) {
    outStream.write(buffer, writePos, bytesRead);
}
Your readPos and writePos correspond to the array, not to the stream: set them to 0 and leave them at 0, and in your write call pass bytesRead instead of BLOCKSIZE as the third parameter. A corrected sketch follows below.
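Putting those notes together, a minimal corrected sketch of the copy loop (an illustration with the same variable names, not the original poster's code):
int BLOCKSIZE = 128 * 1024;
InputStream inStream = new BufferedInputStream(System.in);
OutputStream outStream = new BufferedOutputStream(System.out);
byte[] buffer = new byte[BLOCKSIZE];
int bytesRead;
// reuse the same buffer and always read/write from offset 0
while ((bytesRead = inStream.read(buffer, 0, BLOCKSIZE)) != -1) {
    outStream.write(buffer, 0, bytesRead);   // write only what was actually read
}
outStream.flush();                           // push anything still buffered to stdout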

Trying to upload in chunks

I am trying to accomplish a large file upload on a BlackBerry. I am able to upload a file successfully, but only if I read the file and upload it 1 byte at a time. For large files I think this is hurting performance. I want to be able to read and write something more like 128 KB at a time. If I try to initialise my buffer to anything other than 1, I never get a response back from the server after writing everything.
Any ideas why I can upload using only 1 byte at a time?
z.write(boundaryMessage.toString().getBytes());
DataInputStream fileIn = fc.openDataInputStream();
boolean isCancel = false;
byte[] b = new byte[1];
int num = 0;
int left = buffer;
while (fileIn.read(b) > -1)
{
    num += b.length;
    left = buffer - num * 1;
    Log.info(num + "WRITTEN");
    if (isCancel == true)
    {
        break;
    }
    z.write(b);
}
z.write(endBoundary.toString().getBytes());
It's a bug in BlackBerry OS that appeared in OS 5.0, and persists in OS 6.0. If you try using a multi-byte read before OS 5, it will work fine. OS5 and later produce the behavior you have described.
You can also get around the problem by creating a secure connection, as the bug doesn't manifest itself for secure sockets, only plain sockets.
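For reference, a secure socket on BlackBerry/J2ME is opened through the Generic Connection Framework with an ssl:// URL; a one-line sketch (host and port are placeholders):
SecureConnection sc = (SecureConnection) Connector.open("ssl://" + host + ":" + port);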
Most input streams aren't guaranteed to fill a buffer on every read. (DataInputStream has a special method for this, readFully(), which will throw an EOFException if there aren't enough bytes left in the stream to fill the buffer.) And unless the file is a multiple of the buffer length, no stream will fill the buffer on the final read. So, you need to store the number of bytes read and use it during the write:
while (!isCancel)
{
    int n = fileIn.read(b);
    if (n < 0)
        break;
    num += n;
    Log.info(num + "WRITTEN");
    z.write(b, 0, n);
}
Your loop isn't correct. You should take care of the return value from read. It returns how many bytes that were actually read, and that isn't always the same as the buffer size.
Edit:
This is how you usually write loops that do what you want to do:
OutputStream z = null;  // shouldn't be null
InputStream in = null;  // shouldn't be null
byte[] buffer = new byte[1024 * 32];
int len = 0;
while ((len = in.read(buffer)) > -1) {
    z.write(buffer, 0, len);
}
Note that you might want to use buffered streams instead of unbuffered streams.
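For example, a minimal sketch of wrapping the raw streams in buffered ones (the connection objects are placeholders, not from the original code):
InputStream in = new BufferedInputStream(fileConnection.openInputStream());
OutputStream out = new BufferedOutputStream(socketConnection.openOutputStream());
// ... copy loop as above ...
out.flush();   // remember to flush a BufferedOutputStream when done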
