How can I read/write shorts with a buffer?
I'm trying to implement a BufferedReader and Writer for short values. Each time, a short[] will be passed in for writing and a short[] will be read back out.
But the Java API doesn't offer such an interface, only byte[]-based ones.
What's the best way to implement this feature?
Well, for your BufferedInputStream (not reader), you could try reading 2 bytes at the same time:
public synchronized int read(short[] s, int off, int len) throws IOException {
    byte[] b = new byte[len * 2];
    int read = read(b, 0, len * 2);            // bulk-read the underlying bytes
    for (int i = 0; i < read; i += 2) {
        int b1 = b[i] & 0xFF;                  // mask to avoid sign extension
        int b2 = b[i + 1] & 0xFF;
        s[off + i / 2] = (short) ((b1 << 8) | b2);
    }
    return read / 2;
}
For your BufferedOutputStream (not writer), you could try the reverse operation for writing 2 bytes at the same time.
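For example, a minimal sketch of what that write method could look like in a BufferedOutputStream subclass (the method name and the big-endian byte order are assumptions, chosen to match the read example above):

public synchronized void write(short[] s, int off, int len) throws IOException {
    byte[] b = new byte[len * 2];
    for (int i = 0; i < len; i++) {
        b[i * 2]     = (byte) (s[off + i] >> 8);   // high byte first (big-endian)
        b[i * 2 + 1] = (byte) (s[off + i]);        // low byte
    }
    write(b, 0, b.length);                         // delegate to the inherited byte-based write
}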
You could read/write the bytes and convert each pair into a short using a ByteBuffer of length 2:
ByteBuffer.put() to put the bytes in (or putShort() when going the other way).
ByteBuffer.getShort() to convert them back into shorts.
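A quick sketch of that round trip (hi and lo stand in for two bytes just read from the stream):

byte hi = 0x12, lo = 0x34;      // assumed input bytes
ByteBuffer bb = ByteBuffer.allocate(2);
bb.put(hi).put(lo);
bb.flip();                      // switch from writing to reading
short value = bb.getShort();    // 0x1234; big-endian by default

bb.clear();
bb.putShort(value);             // going the other way: one short back out as two bytes
byte[] twoBytes = bb.array();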
You could implement the Reader interface, and then extend the Writer class to implement a writer that accepts short[].
Just wrap a DataOutputStream around a BufferedOutputStream, and implement a method writeShortArray(short[]) that calls writeShort() iteratively over the array argument. Similarly for input.
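A minimal sketch of that helper (writeShortArray is an assumed name, not a JDK method):

public static void writeShortArray(DataOutputStream out, short[] values) throws IOException {
    for (short v : values) {
        out.writeShort(v);   // each short is written as two big-endian bytes
    }
}

For reading, a matching loop over DataInputStream.readShort() does the reverse.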
I have an array byte[] arr obtained like this:
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] arr = out.toByteArray();
How can I measure the data size in arr (if it was written to disk or transferred via network)?
Are the approaches below correct? They assume that sizeof(byte) = 1 B.
int byteCount = out.size();
int byteMsgCount = arr.length;
Yes, by definition the size of a variable of type byte is one byte. So the length of your array is indeed array.length bytes.
out.size() will give you the same value, i.e. the number of bytes that you wrote into the output stream.
[Edit] From cricket_007's comment: if you look at the implementations of size and toByteArray:
public synchronized byte toByteArray()[] {
    return Arrays.copyOf(buf, count);
}

public synchronized int size() {
    return count;
}
... so toByteArray basically copies the current output buffer, up to count bytes, while size just returns the count. So using size is the better choice if all you need is the length.
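A quick illustration of the difference:

ByteArrayOutputStream out = new ByteArrayOutputStream();
out.write(1);
out.write(2);
out.write(3);
int viaSize  = out.size();                 // 3, no copy made
int viaArray = out.toByteArray().length;   // also 3, but copies the internal buffer first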
I need to serialize an array of doubles to base64 in Java. I have the following method from C#:
public static string DoubleArrayToBase64( double[] dValues ) {
    byte[] bytes = new byte[dValues.Length * sizeof( double )];
    Buffer.BlockCopy( dValues, 0, bytes, 0, bytes.Length );
    return Convert.ToBase64String( bytes );
}
How do I do that in Java? I tried
Byte[] bytes = new Byte[abundaceArray.length * Double.SIZE];
System.arraycopy(abundaceArray, 0, bytes, 0, bytes.length);
abundanceValues = Base64.encodeBase64String(bytes);
however this leads to an IndexOutOfBoundsException.
How can I achieve this in Java?
EDIT:
Buffer.BlockCopy copies at the byte level; its last parameter is the number of bytes. System.arraycopy's last parameter is the number of elements to copy. So yes, it should be abundaceArray.length, but then an ArrayStoreException is thrown.
EDIT2:
The base64 string must be the same as the one created with the C# code!
You get an ArrayStoreException from System.arraycopy when the source and destination arrays are not of the same primitive type, so double to byte will not work. Here is a workaround I patched up that seems to work. I do not know of any method in the Java core that does automatic conversion from a primitive array to a block of bytes:
public class CUSTOM {
    public static void main(String[] args) {
        double[] arr = new double[]{1.1, 1.3};
        byte[] barr = toByteArray(arr);
        for (byte b : barr) {
            System.out.println(b);
        }
    }

    public static byte[] toByteArray(double[] from) {
        byte[] output = new byte[from.length * Double.SIZE / 8]; // Double.SIZE is in bits, so divide by 8 for bytes
        int step = Double.SIZE / 8;
        int index = 0;
        for (double d : from) {
            long bits = Double.doubleToLongBits(d);              // first transform to a primitive that allows bit shifting
            for (int i = 0; i < step; i++) {
                byte b = (byte) ((bits >>> (i * 8)) & 0xFF);     // extract each byte, least significant first
                output[i + (index * 8)] = b;
            }
            index++;
        }
        return output;
    }
}
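If you only need the same byte layout as the C# Buffer.BlockCopy call, a ByteBuffer with an explicit little-endian order is another way to do the conversion (a sketch; it assumes the C# side runs on a little-endian platform, which is the raw layout BlockCopy copies there):

double[] abundaceArray = {1.1, 1.3};   // example input
ByteBuffer bb = ByteBuffer.allocate(abundaceArray.length * 8).order(ByteOrder.LITTLE_ENDIAN);
bb.asDoubleBuffer().put(abundaceArray);                          // raw IEEE 754 bits of each double
String abundanceValues = Base64.encodeBase64String(bb.array());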
Double.SIZE is 64, which is the number of bits, not bytes. I suggest initializing the array like this:
Byte[] bytes = new Byte[abundaceArray.length * 8];
Not sure what this C# function does, but I suspect you should replace this line
System.arraycopy(abundaceArray, 0, bytes, 0, bytes.length);
with this
System.arraycopy(abundaceArray, 0, bytes, 0, abundaceArray.length);
I'm guessing you're using the apache commons Base64 class. That only has methods accepting an array of bytes (the primitive type), not Bytes (object wrapper around primitive type).
It's not clear what type your 'abundaceArray' is - whether it's doubles or Doubles.
Either way, you can't use System.arraycopy to copy between arrays of different primitive types.
I think your best bet is to serialise your array object to a byte array, then base64 encode that.
eg:
ByteArrayOutputStream b = new ByteArrayOutputStream(); // to store output from serialization in a byte array
ObjectOutputStream o = new ObjectOutputStream(b); // to do the serialization
o.writeObject(abundaceArray); // arrays of primitive types are serializable
String abundanceValues = Base64.encodeBase64String(b.toByteArray());
There is of course an ObjectInputStream for going in the other direction at the other end.
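A sketch of that other direction, assuming the same Apache Commons Base64 class used above:

byte[] data = Base64.decodeBase64(abundanceValues);
ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data));
double[] restored = (double[]) in.readObject();   // declares IOException and ClassNotFoundException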
I have connected by TCP to a socket which is constantly sending a large amount of data, which I need to read in. What I have so far is a byte buffer that is reading byte by byte in a while loop. But the test case I am using right now is about 3 MB, which takes a while to read when reading in byte by byte.
Here is my code for this explanation:
ByteBuffer buff = ByteBuffer.allocate(3200000);
while (true)
{
    int b = in.read();
    if (b == -1 || buff.remaining() == 0)
    {
        break;
    }
    buff.put((byte) b);
}
I know that byte buffers are not thread safe. Could this be made faster by reading multiple bytes at a time and then storing them in the buffer? What would be a way for me to speed this process up?
Use a bulk read instead of a single byte read.
byte[] buf = new byte[3200000];
int pos = 0;
while (pos < buf.length) {
    int n = in.read(buf, pos, buf.length - pos);
    if (n < 0)
        break;
    pos += n;
}
ByteBuffer buff = ByteBuffer.wrap(buf, 0, pos);
Instead of getting an InputStream from the socket, and filling a byte array to be wrapped, you can get the SocketChannel and read() directly to the ByteBuffer.
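A minimal sketch of that approach (note socket.getChannel() is only non-null when the Socket was created from a SocketChannel, e.g. via SocketChannel.open(...)):

SocketChannel channel = socket.getChannel();
ByteBuffer buff = ByteBuffer.allocate(3200000);
while (buff.hasRemaining() && channel.read(buff) != -1) {
    // each read() transfers bytes straight into the ByteBuffer, no intermediate byte[]
}
buff.flip();   // prepare the buffer for reading what was received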
There are several ways.
Use Channels.newChannel() to get a channel from the input stream and use ReadableByteChannel.read(buffer) (see the sketch after this list).
Get the byte[] array from the buffer with buffer.array() and read directly into that with in.read(array). Make sure the BB really does have an array of course. If it's a direct byte buffer it won't, but in that case you shouldn't be doing all this at all, you should be using a SocketChannel, otherwise there is zero benefit.
Read into your own largeish byte array and then use a bulk put into the ByteBuffer, taking care to use the length returned by the read() method.
Don't do it. Make up your mind as to whether you want InputStreams or ByteBuffers and don't mix your programming metaphors.
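A minimal sketch of the Channels.newChannel() option above, assuming in is the socket's InputStream:

ReadableByteChannel ch = Channels.newChannel(in);
ByteBuffer buff = ByteBuffer.allocate(3200000);
while (buff.hasRemaining() && ch.read(buff) != -1) {
    // the channel reads straight into the ByteBuffer
}
buff.flip();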
I'm writing a simple client/server network application that sends and receives fixed size messages through a TCP socket.
So far, I've been using the getInputStream() and getOutputStream() methods of the Socket class to get the streams and then call the read(byte[] b, int off, int len) method of the InputStream class to read 60 bytes each time (which is the size of a message).
Later on, I read the Javadoc for that method:
public int read(byte[] b,
int off,
int len)
throws IOException
Reads up to len bytes of data from the input stream into an array of
bytes. An attempt is made to read as many as len bytes, but a smaller
number may be read. The number of bytes actually read is returned as
an integer.
I was wondering if there's any Java "out-of-the-box" solution to block until len bytes have been read, waiting forever if necessary.
I can obviously write a simple loop, but I feel like I'm reinventing the wheel. Can you suggest a clean and Java-aware solution?
Use DataInputStream.readFully. Its Javadoc directs the reader to the DataInput Javadoc, which states:
Reads some bytes from an input stream and stores them into the buffer array b. The number of bytes read is equal to the length of b.
InputStream in = ...
DataInputStream dis = new DataInputStream( in );
byte[] array = ...
dis.readFully( array );
The simple loop is the way to go. Given the very small number of bytes you're exchanging, I guess it will need just one iteration to read everything, but if you want to make it correct, you have to loop.
A simple for one-liner will do the trick:
int toread = 60;
byte[] buff = new byte[toread];
for(int index=0;index<toread;index+=in.read(buff,index,toread-index));
But most of the time, the only reason fewer bytes would be read is that the stream has ended or the bytes haven't all been flushed on the other side.
I think the correct version of ratchet freak's answer is this:
int read;
for (int index = 0; index < toRead && (read = inputStream.read(bytes, index, toRead - index)) > 0; index += read);
It stops reading if read() returns -1.
I am coding some sort of packet which has different fields with different lengths in bytes.
So field1 is 2 bytes long, field2 is 3 bytes long, and field3 is 6 bytes long; when I add up these fields, I should get 11 bytes in total.
But I have no idea how to declare something that is this many bytes long.
Use an array:
byte[] byteArray = new byte[11];
How's about:
byte[] arr = new byte[11];
You could use a class to represent your packet:
public class Packet
{
    public byte[] Field1, Field2, Field3;

    public Packet(byte[] packetBytes)
    {
        ByteBuffer packet = ByteBuffer.wrap(packetBytes);
        Field1 = new byte[2];
        Field2 = new byte[3];
        Field3 = new byte[6];
        packet.get(Field1);   // reads the next 2 bytes
        packet.get(Field2);   // reads the next 3 bytes
        packet.get(Field3);   // reads the next 6 bytes
    }
}
ByteBuffer is good for byte-manipulation.
I have found that java.nio.ByteBuffer is typically better for this sort of thing. It has nice methods for interpreting the bytes in the buffer. The docs are here.
import java.nio.ByteBuffer;
ByteBuffer buffer = ByteBuffer.allocate(11);
Check out the docs and look at the nice methods such as getInt() and getChar().
Java has a limited collection of primitive types, which all have a fixed size. You can see a list of them here. That means you can't decide how many bytes your variable will consist of.
Of course, as others have already mentioned, you can always create a new byte[11]. Note that Java's byte is signed, however. It goes from -128 to 127, not from 0 to 255.
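For example, if you need the 0-255 value back, mask the byte:

byte b = (byte) 0xFF;      // stored as -1, because byte is signed
int unsigned = b & 0xFF;   // 255: masking recovers the unsigned value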
I recommend the utility classes in Javolution for dealing with binary protocol streams such as this. They've come in handy for me several times when dealing with low-level binary streams.
You should probably design your code to separate the message you want to manipulate in java from the wire level format you need to read/write.
E.g. if you have a ScreenResolution concept, you could represent it in Java with a ScreenResolution class:
public class ScreenResolution {
    public int height;
    public int width;
}
This class is easy to work with in Java. Transforming this to a packet that can be transmitted over a network/saved to a file, etc. according to some file format or protocol is another concern.
Say the height and width are to be laid out in 3 bytes each, with some ID and length for the "wire format"; you make something like:
public byte[] marshalScreenResolution(ScreenResolution obj) {
    byte[] buf = new byte[9];
    // length of this packet, 2 bytes
    buf[0] = 0;
    buf[1] = 9;
    buf[2] = SCREENRESOLUTION_OPCODE;
    // marshal the height/width, 3 least significant bytes each
    buf[3] = (byte) ((obj.height & 0xff0000) >> 16);
    buf[4] = (byte) ((obj.height & 0x00ff00) >> 8);
    buf[5] = (byte) (obj.height & 0x0000ff);
    buf[6] = (byte) ((obj.width & 0xff0000) >> 16);
    buf[7] = (byte) ((obj.width & 0x00ff00) >> 8);
    buf[8] = (byte) (obj.width & 0x0000ff);
    return buf;
}
And you make a demarshalScreenResolution function for going from a packet to a ScreenResolution object. The point is that you decouple the representation in Java from the external representation, and you assemble the fields in the external representation using bytes plus some basic bit fiddling.
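A sketch of what that counterpart could look like, following the same layout as the marshalling code above (demarshalScreenResolution is the assumed name):

public ScreenResolution demarshalScreenResolution(byte[] buf) {
    ScreenResolution obj = new ScreenResolution();
    // skip buf[0..1] (length) and buf[2] (opcode), then reassemble the 3-byte fields;
    // mask with 0xff so the signed bytes don't sign-extend
    obj.height = ((buf[3] & 0xff) << 16) | ((buf[4] & 0xff) << 8) | (buf[5] & 0xff);
    obj.width  = ((buf[6] & 0xff) << 16) | ((buf[7] & 0xff) << 8) | (buf[8] & 0xff);
    return obj;
}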