In C#, I use the SerialPort Read function like so:
byte[] buffer = new byte[100000];
int bytesRead = serial.Read(buffer, 0, 100000);
In Processing, I use readBytes like so:
byte[] buffer = new byte[100000];
int bytesRead = serial.readBytes(buffer);
In Processing, I get incorrect byte values when I loop over the buffer array filled by readBytes. When I use the regular read function instead, I get the proper values, but it doesn't let me grab the data into a byte array. What am I doing wrong in the Processing version of the code that leads to the wrong values in the buffer array?
I print out the data the same way in both versions:
for (int i = 0; i < bytesRead; i++) {
    println(buffer[i]);
}
C# Correct Output:
Processing Incorrect Output:
Java bytes are signed, so any unsigned value over 127 wraps around to a negative number.
A quick solution is to do
int anUnsignedByte = (int) aSignedByte & 0xff;
to each of your bytes.
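As a minimal sketch of that fix (plain Java, but the same loop works unchanged in Processing), masking each byte with 0xff recovers the unsigned value:

```java
public class UnsignedBytes {
    public static void main(String[] args) {
        // (byte) 170 and (byte) 255 are the signed views of 0xAA and 0xFF
        byte[] buffer = { (byte) 170, (byte) 255, 42 };
        for (int i = 0; i < buffer.length; i++) {
            int unsigned = buffer[i] & 0xff;   // mask off the sign extension
            System.out.println(buffer[i] + " -> " + unsigned);
        }
    }
}
```

The first line prints `-86 -> 170`: same bits, but interpreted as an int in the 0..255 range after the mask.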
I'm trying to get a grip on java.nio and got stuck reading from a binary file, previously written as the single line "on on on748".
I use try-with-resources, so I'm sure the file and channel are fine.
The ByteBuffer is declared and allocated with 12, that being the channel size.
Here the problem starts: I can read the byte array with a for-each loop and a char cast, but with a plain for loop I can't find any method to address the numbers.
I've tried a second buffer with .get(xx, 8, 2), but I don't know how to turn that byte[] of length 2 into an int.
try (FileInputStream file = new FileInputStream("data.dat");
     FileChannel channel = file.getChannel()) {
    ByteBuffer buffer = ByteBuffer.allocate(12);
    channel.read(buffer);
    byte[] xx = buffer.array();
    System.out.println(xx.length);
    for (byte z : xx) {
        System.out.println((char) z);
    }
    for (int i = 0; i < xx.length; i++) {
        if (i < 8)
            System.out.print((char) xx[i]);
        if (i >= 8)
            System.out.println((int) xx[i]);
    }
}
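As for turning the two bytes into an int: one option, sketched below under the assumption that the data is big-endian (ByteBuffer's default order), is to read a numeric value directly from the buffer at an offset instead of copying into a second array. The sample bytes 0x02 0xEC are a made-up stand-in for the last two bytes of the file:

```java
import java.nio.ByteBuffer;

public class TwoBytesToInt {
    public static void main(String[] args) {
        byte[] xx = { 0x02, (byte) 0xEC };        // 0x02EC == 748 in big-endian
        ByteBuffer buffer = ByteBuffer.wrap(xx);
        int value = buffer.getShort(0) & 0xffff;  // read 2 bytes, treat as unsigned
        System.out.println(value);                // prints 748
    }
}
```

getShort(0) reads two bytes starting at absolute position 0; masking with 0xffff keeps the result positive even when the top bit is set.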
How should we understand the defined length of a byte array?
For instance, in this example we define the length of the byte array to be 100.
What if the data that has to be written to the byte array is longer than 100 bytes?
The same goes for the result variable. I don't understand how these lengths work, or how to choose a proper length for a byte array when you don't know how big your data will be.
try {
// Encode a String into bytes
String inputString = "blahblahblah";
byte[] input = inputString.getBytes("UTF-8");
// Compress the bytes
byte[] output = new byte[100];
Deflater compresser = new Deflater();
compresser.setInput(input);
compresser.finish();
int compressedDataLength = compresser.deflate(output);
compresser.end();
// Decompress the bytes
Inflater decompresser = new Inflater();
decompresser.setInput(output, 0, compressedDataLength);
byte[] result = new byte[100];
int resultLength = decompresser.inflate(result);
decompresser.end();
// Decode the bytes into a String
String outputString = new String(result, 0, resultLength, "UTF-8");
} catch(java.io.UnsupportedEncodingException ex) {
// handle
} catch (java.util.zip.DataFormatException ex) {
// handle
}
And in this example, the byte array used as input is actually called a buffer; how should we understand that?
Here, when you call compresser.deflate(output) you cannot know the size needed for output unless you know how this method works. But this is not a problem, since output is meant as a buffer.
So you should call deflate multiple times and write the output into another object, such as an OutputStream, like this:
byte[] buffer = new byte[1024];
while (!deflater.finished()) {
int count = deflater.deflate(buffer);
outputStream.write(buffer, 0, count);
}
Same goes for inflating.
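A sketch of the inflating side under the same pattern (the 1024-byte buffer size is just an illustrative choice; a ByteArrayOutputStream collects the chunks, so no output size has to be guessed up front):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflateLoop {
    public static void main(String[] args) throws DataFormatException {
        byte[] input = "blahblahblah".getBytes(StandardCharsets.UTF_8);

        // Compress chunk by chunk into a growable stream
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        while (!deflater.finished()) {
            int count = deflater.deflate(buffer);
            compressed.write(buffer, 0, count);
        }
        deflater.end();

        // Decompress the same way: loop until finished(), no size guess needed
        Inflater inflater = new Inflater();
        inflater.setInput(compressed.toByteArray());
        ByteArrayOutputStream decompressed = new ByteArrayOutputStream();
        while (!inflater.finished()) {
            int count = inflater.inflate(buffer);
            decompressed.write(buffer, 0, count);
        }
        inflater.end();

        System.out.println(new String(decompressed.toByteArray(), StandardCharsets.UTF_8));
    }
}
```

The final line prints the original "blahblahblah", confirming the round trip works regardless of how large the compressed or decompressed data turns out to be.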
By allocating 100 bytes to the byte array, the JVM guarantees a buffer large enough to hold 100 JVM-defined bytes (i.e. 8 bits each) is available to the caller. Any attempt to access the array beyond that range results in an exception, e.g. an ArrayIndexOutOfBoundsException if you directly access array[100].
In code written like your demo, the caller assumes the data length never exceeds 100.
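To make the boundary concrete, a minimal sketch: index 99 is the last valid slot of a 100-byte array, and index 100 already throws:

```java
public class BoundsDemo {
    public static void main(String[] args) {
        byte[] output = new byte[100];   // valid indices are 0..99
        output[99] = 1;                  // fine: last valid slot
        try {
            output[100] = 1;             // first out-of-range index
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("out of bounds at index 100");
        }
    }
}
```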
I want to split a file into multiple chunks (in this case, trying lengths of 300) and Base64-encode it, since loading the entire file into memory gives a negative array size exception when encoding. I tried the following code:
int offset = 0;
bis = new BufferedInputStream(new FileInputStream(f));
while(offset + 300 <= f.length()){
byte[] temp = new byte[300];
bis.skip(offset);
bis.read(temp, 0, 300);
offset += 300;
System.out.println(Base64.encode(temp));
}
if(offset < f.length()){
byte[] temp = new byte[(int) f.length() - offset];
bis.skip(offset);
bis.read(temp, 0, temp.length);
System.out.println(Base64.encode(temp));
}
At first it appears to work; however, at some point it switches to printing just "AAAAAAAAA", filling the entire console, and the new file is corrupted when decoded. What could be causing this error?
skip() "Skips over and discards n bytes of data from the input stream", and read() returns "the number of bytes read".
So you read some bytes, skip some bytes, read some more, skip again, and so on, eventually reaching EOF, at which point read() returns -1. But you ignore that and use the content of temp, which contains all 0's that are then encoded to all A's.
Your code should be:
try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
int len;
byte[] temp = new byte[300];
while ((len = in.read(temp)) > 0)
System.out.println(Base64.encode(temp, 0, len));
}
This code reuses the single buffer allocated before the loop, so it will also cause much less garbage collection than your code.
If Base64.encode doesn't have a 3 parameter version, do this:
try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
int len;
byte[] temp = new byte[300];
while ((len = in.read(temp)) > 0) {
byte[] data;
if (len == temp.length)
data = temp;
else {
data = new byte[len];
System.arraycopy(temp, 0, data, 0, len);
}
System.out.println(Base64.encode(data));
}
}
Be sure to use a buffer size that is a multiple of 3 for encoding and a multiple of 4 for decoding when working with chunks of data.
300 fulfills both, so that is already OK; this is just a note for those trying different buffer sizes.
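As an illustration of why the multiple-of-3 rule matters, the sketch below uses java.util.Base64 (an assumption; the answer's Base64.encode may come from a different library). Encoding 3-byte chunks and concatenating gives exactly the same string as encoding everything at once, because each 3-byte group maps to 4 output characters with no padding in the middle:

```java
import java.util.Base64;

public class ChunkedBase64 {
    public static void main(String[] args) {
        byte[] data = new byte[10];               // 10 bytes: not itself a multiple of 3
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        Base64.Encoder enc = Base64.getEncoder();

        String whole = enc.encodeToString(data);  // encode in one go

        // Encode in chunks of 3 bytes and concatenate the pieces
        StringBuilder chunked = new StringBuilder();
        for (int off = 0; off < data.length; off += 3) {
            int len = Math.min(3, data.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(data, off, chunk, 0, len);
            chunked.append(enc.encodeToString(chunk));
        }
        System.out.println(whole.equals(chunked.toString()));  // prints true
    }
}
```

With a chunk size that is not a multiple of 3, each chunk would end with its own padding and the concatenation would no longer decode as one stream.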
Keep in mind that reading from a stream into a buffer can, in some circumstances, result in the buffer not being fully filled even though the end of the stream has not been reached. This can happen when reading from a network stream and a timeout occurs.
You can handle that, but taking it into account would lead to much more complex code that would no longer be educational.
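If you did need every chunk except the last to be exactly 300 bytes, one way to account for short reads is to loop until the buffer is full or EOF. readFully below is a hypothetical helper, not part of the answer above:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FillBuffer {
    // Keep reading until the buffer is full or EOF; returns bytes actually read.
    static int readFully(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n < 0) break;   // EOF reached before the buffer filled
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // 700 bytes of dummy data stand in for a real file stream
        InputStream in = new ByteArrayInputStream(new byte[700]);
        byte[] temp = new byte[300];
        int len;
        while ((len = readFully(in, temp)) > 0) {
            System.out.println(len);   // prints 300, 300, 100
        }
    }
}
```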
I use Java 1.5 on an embedded Linux device and want to read a binary file containing 2 MB of int values. (Currently 4-byte big-endian, but I can decide the format.)
Using a DataInputStream via a BufferedInputStream and dis.readInt(), these 500,000 calls need 17 s to read, while reading the file into one big byte buffer needs 5 s.
How can I read that file faster into one huge int[]?
The reading process should not use more than an additional 512 kB.
The code below, using nio, is not faster than the readInt() approach from java.io.
// assume I already know that there are 500,000 ints to read:
int numInts = 500000;
// the array I want the result in
int[] result = new int[numInts];
int cnt = 0;
RandomAccessFile aFile = new RandomAccessFile("filename", "r");
FileChannel inChannel = aFile.getChannel();
ByteBuffer buf = ByteBuffer.allocate(512 * 1024);
int bytesRead = inChannel.read(buf); //read into buffer.
while (bytesRead != -1) {
buf.flip(); //make buffer ready for get()
while(buf.hasRemaining() && cnt < numInts){
// probably slow here since called 500 000 times
result[cnt] = buf.getInt();
cnt++;
}
buf.clear(); //make buffer ready for writing
bytesRead = inChannel.read(buf);
}
aFile.close();
inChannel.close();
Update: Evaluation of the answers:
On a PC, the memory map with the IntBuffer approach was the fastest in my setup.
On the embedded device, without JIT, java.io's DataInputStream.readInt() was a bit faster (17 s vs. 20 s for the memory map with IntBuffer).
Final conclusion:
A significant speed-up is easier to achieve via an algorithmic change (a smaller file for init).
I don't know if this will be any faster than what Alexander provided, but you could try mapping the file.
try (FileInputStream stream = new FileInputStream(filename)) {
FileChannel inChannel = stream.getChannel();
ByteBuffer buffer = inChannel.map(FileChannel.MapMode.READ_ONLY, 0, inChannel.size());
int[] result = new int[500000];
buffer.order( ByteOrder.BIG_ENDIAN );
IntBuffer intBuffer = buffer.asIntBuffer( );
intBuffer.get(result);
}
You can use IntBuffer from nio package -> http://docs.oracle.com/javase/6/docs/api/java/nio/IntBuffer.html
int[] intArray = new int[ 500000 ];
IntBuffer intBuffer = IntBuffer.wrap( intArray );
...
Fill in the buffer, by making calls to inChannel.read(intBuffer).
Once the buffer is full, your intArray will contain 500000 integers.
EDIT
After realizing that Channels only support ByteBuffer, here is a revised version.
// assume I already know that there are 500,000 ints to read:
int numInts = 500000;
// the array I want the result in
int[] result = new int[numInts];
// 4 bytes per int, direct buffer
ByteBuffer buf = ByteBuffer.allocateDirect( numInts * 4 );
// BIG_ENDIAN byte order
buf.order( ByteOrder.BIG_ENDIAN );
// Fill in the buffer
while ( buf.hasRemaining( ) )
{
// Per EJP's suggestion check EOF condition
if( inChannel.read( buf ) == -1 )
{
// Hit EOF
throw new EOFException( );
}
}
buf.flip( );
// Create IntBuffer view
IntBuffer intBuffer = buf.asIntBuffer( );
// result will now contain all ints read from file
intBuffer.get( result );
I ran a fairly careful experiment using serialize/deserialize, DataInputStream vs ObjectInputStream, both based on ByteArrayInputStream to avoid IO effects. For a million ints, readObject took about 20 ms, readInt about 116 ms. The serialization overhead on a million-int array was 27 bytes. This was on a 2013-ish MacBook Pro.
Having said that, object serialization is sort of evil, and you have to have written the data out with a Java program.
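The serialize/deserialize side of that experiment might look roughly like this (a sketch, not the experiment's exact code; it uses try-with-resources, so Java 7+, unlike the embedded Java 1.5 target in the question):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class IntArraySerialization {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        int[] original = new int[1000000];
        for (int i = 0; i < original.length; i++) original[i] = i;

        // Write the whole array as one object
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(original);
        }

        // Read it back with a single readObject call
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            int[] copy = (int[]) ois.readObject();
            System.out.println(copy.length);   // prints 1000000
            System.out.println(copy[42]);      // prints 42
        }
    }
}
```

One readObject call replaces 500,000 readInt calls, which is where the speed difference in the experiment comes from.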
Hi, I'm trying to implement the SourceStream interface in my application and override the method read(byte[], off, len), reading the bytes from a server. I want to convert that byte stream into a String. I used new String(byte[]), but it asks for the initial byte offset (off) and the length of the bytes (len) as parameters. Why does it ask for those, when I expected just String(byte[])? Can anyone help me? Thanks.
If you just have a byte[] then you can create a new String via the String(byte[],int,int) constructor provided by the API.
In your case you would do
byte[] myBytes = "Hello, World!".getBytes();
String myString = new String(myBytes, 0, myBytes.length);
System.out.println(myString);
EDIT:
Try something like this:
int readLength = (len > bufSize ? bufSize : len);
for (int i = 0; i < readLength; i++) {
b[off + i] = buffers[PBuf][PByte];
}
String metaSt = new String(b, 0, readLength);
Just provide 0 as the initial offset and yourArray.length as the length and you're done. Quite why a method that takes just a byte array isn't provided is anyone's guess; probably just to avoid 101 variations of the method.