To convert an int into a byte array, I'm using the following code:
int a = 128;
byte[] b = convertIntValueToByteArray(a);
private static byte[] convertIntValueToByteArray(int intValue){
    BigInteger bigInteger = BigInteger.valueOf(intValue);
    byte[] origByteArray = bigInteger.toByteArray();
    byte[] noSignByteArray = new byte[bigInteger.bitLength()/8];
    if (bigInteger.bitLength() % 8 != 0) {
        noSignByteArray = origByteArray;
    } else {
        System.arraycopy(origByteArray, 1, noSignByteArray, 0, noSignByteArray.length);
    }
    return noSignByteArray;
}
There are two things which I'm attempting to do.
1) I need to know the number of bytes (rounded up to the closest byte) of the original integer. However, I don't need the additional byte that toByteArray() adds to hold the sign bit; that is the reason for the helper method. So in this example, without the helper method, converting 128 to a byte array gives a length of 2 octets because of the sign bit, but I'm only expecting one octet.
2) I need the positive representation of the number. In this example, if I attempt to print the first element in array b, I get -128. However, the numbers I will be using will be positive only, so what I actually want is 128. I'm limited to using a byte array. Is there a way to accomplish this?
Updated Post
Thank you for the responses. I haven't found the exact answer I was looking for so I'll attempt to give more details. Ultimately, I want to write values of different types over a data output stream. In this post, I'd like to clarify what happens when ints are written to a data output stream. I've come across two scenarios.
1)
DataOutputStream os = new DataOutputStream(this.socket.getOutputStream());
byte[] b = BigInteger.valueOf(128).toByteArray();
os.write(b);
2)
DataOutputStream os = new DataOutputStream(this.socket.getOutputStream());
os.write(128);
In the first scenario, when the bytes are read from a data input stream, the first element in the byte array is a 0 representing the MSB and the second element contains the number -128. However, since the MSB is 0, we can determine that it is intended to be a positive number. In the second scenario, there is no MSB, and the only element present in the byte array read from the input stream is -128. I was expecting the write() method of the data output stream to convert the int into a byte array in the same manner as the toByteArray() method does on a BigInteger object. However, this doesn't seem to be the case, as the MSB is not present. So my question is: in the second scenario, how are we supposed to know that 128 is meant to be a positive number and not a negative one if there is no MSB?
As you probably already know
In an octet, the pattern 10000000 can be interpreted as either 128 or -128, depending on the, um, outside interpretation
Java's byte type interprets octets as values in -128...127 only.
If you are building an application in which the entire world consists of nonnegative integers only, then you could simply do all of your work under the assumption that the byte value -128 will mean 128 and -127 will mean 129 and ... and -1 will mean 255. This is certainly doable but it takes work.
Dealing with the notion of an "unsigned byte" like this is normally done by expanding the byte into a short or int with the higher order bits all set to zero and then performing arithmetic or displaying your values. You will need to decide whether such an approach is more to your liking than just representing 128 as two octets in your array.
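For example, here is a minimal sketch of that widening approach applied to the second scenario above (in is assumed to be a DataInputStream on the receiving side):
// One byte was written with os.write(128); read it back as a nonnegative value.
int value = in.readByte() & 0xFF; // readByte() alone yields -128; masking gives 128
// Equivalently, DataInputStream.readUnsignedByte() does the masking for you:
// int value = in.readUnsignedByte(); // returns 0..255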
I think the following code might be sufficient.
In Java, an int is a two's-complement binary number. Negating -1, for example, works like this:
-1 = 111...111
ones' complement = 000...000
plus 1 = 000...001 = 1
So I do not quite follow the concern about the sign bit; in any case, you could apply Math.abs(n) first.
A byte ranges from -128 to 127, but the interpretation is a matter of masking, as below.
public static void main(String[] args) {
    int n = 128;
    byte[] bytes = intToFlexBytes(n);
    for (byte b : bytes)
        System.out.println("byte " + (((int) b) & 0xFF));
}

public static byte[] intToFlexBytes(int n) {
    // Convert int to byte[4], via a ByteBuffer:
    byte[] bytes = new byte[4];
    ByteBuffer bb = ByteBuffer.allocateDirect(4);
    bb.asIntBuffer().put(n);
    bb.position(0);
    bb.get(bytes);

    // Count the leading zero bytes:
    int i = 0;
    while (i < 4 && bytes[i] == 0)
        ++i;

    // Shorten the bytes array if needed:
    if (i != 0) {
        byte[] shortenedBytes = new byte[4 - i];
        for (int j = i; j < 4; ++j) {
            shortenedBytes[j - i] = bytes[j]; // System.arraycopy not needed for so few bytes.
        }
        bytes = shortenedBytes;
    }
    return bytes;
}
To answer your first question—how many bytes are required to represent a nonnegative integer using an unsigned representation—consider the following functions I wrote in Common Lisp.
(defconstant +bits-per-byte+ 8)

(defun bit-length (n)
  (check-type n (integer 0) "a nonnegative integer")
  (if (zerop n)
      1
      (1+ (floor (log n 2)))))

(defun bytes-for-bits (n)
  (check-type n (integer 1) "a positive integer")
  (values (ceiling n +bits-per-byte+)))
These highlight the mathematical underpinnings of the problem: the logarithm tells you how many powers of two (that is, how many bits) it takes to dominate a given nonnegative integer, turned into a step function with floor; the second function then tells you how many bytes it takes to hold that many bits, again as a step function, this time adjusted with ceiling.
Note that the number zero is intolerable as input to a logarithm function, so we avoid it explicitly. You may observe that the bit-length function could also be written with a slight transformation of the core expression:
(defun bit-length-alt (n)
  (check-type n (integer 0) "a nonnegative integer")
  (values (ceiling (log (1+ n) 2))))
Unfortunately, as the logarithm of one is always zero, regardless of the base, this version says that the integer zero can be represented by zero bits, which isn't the answer we want.
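For reference, a rough Java sketch of the same calculation (assuming a nonnegative int input; the method name is mine):
// Number of bytes needed for an unsigned (no sign byte) representation of n, where n >= 0.
static int unsignedByteLength(int n) {
    int bits = (n == 0) ? 1 : 32 - Integer.numberOfLeadingZeros(n); // bit length: floor(log2 n) + 1
    return (bits + 7) / 8; // round up to whole bytes
}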
For your second goal, you can use the functions I've defined above to allocate the required number of bytes, and incrementally set the bits you need, ignoring sign. It's hard to tell if you're having trouble getting the proper bits set in the byte vector, or whether your problem is in interpreting the bits in a way that avoids treating the high bit as a sign bit (that is, two's complement representation). Please elaborate on what kind of push you need to get you moving again.
Related
The essence of the task is this: I encode the bytes of a file, where 1 byte of the source file becomes 4 bytes of the encrypted one. For example, the encoded value is 3125890409; in byte representation, this is [186, 81, 77, 105]. For decryption, I must treat this array as a single number. How can I first convert these 4 numbers to binary, then to decimal, and assign the result to a BigInteger? I thought to do it like this:
for (int i = 0; i < fileData2.length; i += 4) {
    BigInteger message = BigInteger.valueOf(fileData2[i]);
    BigInteger message2 = BigInteger.valueOf(fileData2[i + 1]);
    BigInteger message3 = BigInteger.valueOf(fileData2[i + 2]);
    BigInteger message4 = BigInteger.valueOf(fileData2[i + 3]);
}
And then translate each into binary, but that looks too complicated, and what if I need to handle not 4 bytes but 8 bytes or more? How can it be implemented faster?
Do not bother with BigInteger or BigDecimal. Instead, think of your original byte as one byte (8 bits), and the encoded/encrypted values as 4 bytes (32 bits). If you just shove them into a byte[] you can choose an arbitrary size, be it 4, 8, or 13. Lots of flexibility there.
This approach also makes it easier to read and write the data, as you might simply serialize the bytes into a stream and read them back from a stream, since it feels quite natural to read and write the low indexes of the array first.
Once done with that, all you have to focus on is your function to turn 8 bits into 32 bits and vice versa.
4 bytes fit in an int, though here the result is a negative int; use a long if you need the unsigned value. The byte order appears to be big endian, most significant byte first: the number is odd, and so is the last byte.
byte[] fileData = {(byte) 186, 81, 77, 105};
// Either of these reads the 4 bytes as one (signed) int:
int n = new BigInteger(fileData).intValue();
int n2 = ByteBuffer.wrap(fileData).order(ByteOrder.BIG_ENDIAN).getInt();
long nn = n & 0xFF_FF_FF_FFL; // the unsigned value, 3125890409
BIG_ENDIAN is the default, so setting the byte order is not actually needed here.
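If you need to decode the whole array in 4-byte chunks, a sketch along these lines might do (assuming fileData2 is a byte[] whose length is a multiple of 4):
ByteBuffer buf = ByteBuffer.wrap(fileData2); // big-endian by default
while (buf.remaining() >= 4) {
    long value = buf.getInt() & 0xFF_FF_FF_FFL; // next 4 bytes as an unsigned value
    // ... decrypt/process value here ...
}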
A byte is the smallest numeric datatype Java offers, but yesterday I came into contact with byte streams for the first time. At the beginning of every packet a marker byte is sent which gives further instructions on how to handle the packet. Every bit of the byte has a specific meaning, so I need to untangle the byte into its 8 bits.
You could probably convert the byte to a boolean array or create a switch for every case, but that certainly can't be best practice.
How is this possible in Java, and why are there no bit datatypes in Java?
Because no bit data type exists on the physical computer. The smallest unit you can allocate on most modern computers is a byte, also known as an octet: 8 bits. When you display a single bit you are really just pulling that bit out of a byte with arithmetic and putting it into a new byte, which still occupies 8 bits of space. If you want to store bit data inside a byte you can, but it will be stored as at least a single byte no matter what programming language you use.
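That "pulling a bit out with arithmetic" looks roughly like this (a small sketch; bit 0 is taken as the least significant bit, and the method name is mine):
// Returns true if bit i (0 = least significant) of b is set.
static boolean bitIsSet(byte b, int i) {
    return ((b >> i) & 1) != 0;
}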
You could load the byte into a BitSet. This abstraction hides the gory details of manipulating single bits.
import java.util.BitSet;

public class Bits {
    public static void main(String[] args) {
        byte[] b = new byte[]{10};
        BitSet bitset = BitSet.valueOf(b);
        System.out.println("Length of bitset = " + bitset.length());
        for (int i = 0; i < bitset.length(); ++i) {
            System.out.println("bit " + i + ": " + bitset.get(i));
        }
    }
}
$ java Bits
Length of bitset = 4
bit 0: false
bit 1: true
bit 2: false
bit 3: true
You can ask for any bit, but the length tells you that all the bits past length() - 1 are set to 0 (false):
System.out.println("bit 75: " + bitset.get(75));
bit 75: false
Have a look at java.util.BitSet.
You might use it to interpret the byte read and can use the get method to check whether a specific bit is set like this:
byte b = (byte) stream.read(); // read() returns an int, so a cast is needed
final BitSet bitSet = BitSet.valueOf(new byte[]{b});
if (bitSet.get(2)) {
    state.activateComponentA();
} else {
    state.deactivateComponentA();
}
state.setFeatureBTo(bitSet.get(1));
On the other hand, you can create your own bitmask easily and convert it to a byte array (or just byte) afterwards:
final BitSet output = BitSet.valueOf(ByteBuffer.allocate(1));
output.set(3, state.isComponentXActivated());
if (state.isY) {
    output.set(4);
}
// note: toByteArray() returns an empty array if no bit is set at all
final byte w = output.toByteArray()[0];
How is this possible in Java, and why are there no bit datatypes in Java?
There are no bit data types in most languages, and most CPU instruction sets have few (if any) instructions dedicated to addressing single bits. You can think of the lack of these as a trade-off between (language or CPU) complexity and need.
Manipulating a single bit can be thought of as a special case of manipulating multiple bits, and languages as well as CPUs are equipped for the latter.
Very common operations like testing, setting, clearing, and inverting, as well as exclusive or, are all supported on the integer primitive types (byte, short/char, int, long), operating on all bits of the type at once. By choosing the parameters appropriately you can select which bits to operate on.
If you think about it, a byte array is a bit array where the bits are grouped in packages of 8. Addressing a single bit in the array is relatively simple using logical operators (AND &, OR |, XOR ^ and NOT ~).
For example, testing if bit N is set in a byte can be done using a logical AND with a mask where only the bit to be tested is set:
public boolean testBit(byte b, int n) {
    int mask = 1 << n; // equivalent of 2 to the nth power
    return (b & mask) != 0;
}
Extending this to a byte array is no magic either: each byte consists of 8 bits, so the byte index is simply the bit number divided by 8, and the bit number inside that byte is the remainder (modulo 8):
public boolean testBit(byte[] array, int n) {
    int index = n >>> 3; // divide by 8
    int mask = 1 << (n & 7); // n modulo 8
    return (array[index] & mask) != 0;
}
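Setting or clearing a bit works the same way, using OR with the mask or AND with the inverted mask (a sketch following the same indexing convention as testBit above):
public void setBit(byte[] array, int n) {
    array[n >>> 3] |= 1 << (n & 7); // OR the mask in to set the bit
}
public void clearBit(byte[] array, int n) {
    array[n >>> 3] &= ~(1 << (n & 7)); // AND with the inverted mask to clear the bit
}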
Here is a sample; I hope it is useful for you!
DatagramSocket socket = new DatagramSocket(6160, InetAddress.getByName("0.0.0.0"));
socket.setBroadcast(true);
while (true) {
    byte[] recvBuf = new byte[26];
    DatagramPacket packet = new DatagramPacket(recvBuf, recvBuf.length);
    socket.receive(packet);
    String bitArray = toBitArray(recvBuf);
    System.out.println(Integer.parseInt(bitArray.substring(0, 8), 2)); // convert first byte binary to decimal
    System.out.println(Integer.parseInt(bitArray.substring(8, 16), 2)); // convert second byte binary to decimal
}

public static String toBitArray(byte[] byteArray) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < byteArray.length; i++) {
        sb.append(String.format("%8s", Integer.toBinaryString(byteArray[i] & 0xFF)).replace(' ', '0'));
    }
    return sb.toString();
}
If I set two different bits in a BitSet:
BitSet x= new BitSet(8);
x.set(0);//.........Case1
x.set(7);//.........Case2
In which case am I setting the most significant bit?
A bit set is not a huge number. It's a set (technically, a vector/list/infinite array) of bits. There is not even a method of BitSet converting it to a number.
Concerning the internal representation - that is implementation dependent. While an implementation could choose to store bit 0 as the least significant bit of the first integer in its internal array, that is not set in stone. I think the Sun implementation does this (except it uses an array of longs, not ints).
There is, however, a natural bijection between bit sets and integers. A bit set is indexed from 0 upwards, and any non-negative integer can be represented as a bit set uniquely in a natural way: as a binary number with the least significant bit stored as bit 0. Under this bijection, bit 7 is more significant than bit 0, and each subsequent bit is more significant than the one before it.
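Under that bijection, turning a (small enough) BitSet back into a number could look like this sketch, which assumes the value fits in a long:
static long toLong(BitSet bits) {
    long value = 0;
    for (int i = bits.nextSetBit(0); i >= 0 && i < 64; i = bits.nextSetBit(i + 1)) {
        value |= 1L << i; // bit i contributes 2^i
    }
    return value;
}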
The LSB is index 0.
Example:
Let's create the character 'a' (binary 0110 0001).
Please note: adding the bits left to right translates into running down from index 7 to 0.
BitSet bitSet = new BitSet(8);
bitSet.set(7, false);
bitSet.set(6, true);
bitSet.set(5, true);
bitSet.set(4, false);
bitSet.set(3, false);
bitSet.set(2, false);
bitSet.set(1, false);
bitSet.set(0, true);
// let's convert it to a byte[]
byte[] array = bitSet.toByteArray();
// and let's convert that byte[] to text now (StandardCharsets is in java.nio.charset, JDK 7+)
String someText = new String(array, StandardCharsets.US_ASCII);
// this will print an 'a'
System.out.println(someText);
Which is the same as (JDK7+):
System.out.println((char)0b01100001);
The most significant bit is purely subjective for a BitSet; by setting both ends, you could say one of them is likely to be the most significant, but you couldn't say which one it was. ;)
If you want to set the most (and least) significant bit of a byte you can do
byte b = (byte) ((1 << 7) | (1 << 0));
or
byte b = 0;
b |= 1 << 0;
b |= 1 << 7;
What's a nice, readable way of getting the byte representation (i.e. a byte[]) of an int, but only using 3 bytes (instead of 4)? I'm using Hadoop/Hbase and their Bytes utility class has a toBytes function but that will always use 4 bytes.
Ideally, I'd also like a nice, readable way of encoding to as few bytes as possible, i.e. if the number fits in one byte then only use one.
Please note that I'm storing this in a byte[], so I know the length of the array and thus variable length encoding is not necessary. This is about finding an elegant way to do the cast.
A general solution for this is impossible.
If it were possible, you could apply the function iteratively to obtain unlimited compression of data.
Your domain might have some constraints on the integers that allow them to be compressed to 24-bits. If there are such constraints, please explain them in the question.
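For example, if your domain guarantees values between 0 and 2^24 - 1, a big-endian 3-byte cast could be as simple as this sketch (the method names are mine):
static byte[] toThreeBytes(int value) { // assumes 0 <= value < (1 << 24)
    return new byte[] {
        (byte) (value >>> 16), // most significant byte first
        (byte) (value >>> 8),
        (byte) value
    };
}
static int fromThreeBytes(byte[] b) {
    return ((b[0] & 0xFF) << 16) | ((b[1] & 0xFF) << 8) | (b[2] & 0xFF);
}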
A common variable size encoding is to use 7 bits of each byte for data, and the high bit as a flag to indicate when the current byte is the last.
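A sketch of that convention (7 data bits per byte, high bit set on the final byte, as described above; other variants flip the meaning of the flag):
// Encodes a nonnegative int; the high bit marks the last byte of the encoding.
static byte[] encodeVarInt(int value) {
    java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
    while (value >= 0x80) {
        out.write(value & 0x7F); // 7 data bits, high bit clear: more bytes follow
        value >>>= 7;
    }
    out.write(value | 0x80); // final 7 data bits, high bit set: last byte
    return out.toByteArray();
}
With this, values from 0 to 127 take a single byte.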
You can predict the number of bytes needed to encode an int with a utility method on Integer:
int n = 4 - Integer.numberOfLeadingZeros(x) / 8;
byte[] enc = new byte[n];
while (n-- > 0)
    enc[n] = (byte) ((x >>> (n * 8)) & 0xFF);
Note that this will encode 0 as an empty array, and other values in little-endian format. These aspects are easily modified with a few more operations.
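For instance, a variant that always emits at least one byte and uses big-endian order might look like this sketch:
static byte[] toMinimalBytes(int x) {
    int n = Math.max(1, 4 - Integer.numberOfLeadingZeros(x) / 8); // at least one byte, even for 0
    byte[] enc = new byte[n];
    for (int i = 0; i < n; i++) {
        enc[i] = (byte) (x >>> ((n - 1 - i) * 8)); // most significant byte first
    }
    return enc;
}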
If you need to represent the whole 2^32 existing 4-byte integers, you need to choose between:
fixed-size representation, using 4 bytes always; or
variable-size representation, using at least 5 bytes for some numbers.
Take a look at how UTF-8 encodes Unicode characters; you might get some insights. (You use a short prefix to describe how many bytes must be read for that character, then you read that many bytes and interpret them.)
Try using ByteBuffer. You can even set little endian mode if required:
int exampleInt = 0x11FFFFFF;
ByteBuffer buf = ByteBuffer.allocate(Integer.SIZE / Byte.SIZE);
final byte[] threeByteBuffer = new byte[3];
buf.putInt(exampleInt);
buf.position(1);
buf.get(threeByteBuffer);
Or the shortest signed, Big Endian:
BigInteger bi = BigInteger.valueOf(exampleInt);
final byte[] shortestSigned = bi.toByteArray();
Convert your int to a 4-byte array and iterate over it; while the high-order bytes are zero, drop them from the array.
Something like:
byte[] bytes = toBytes(myInt); // assumes toBytes() puts the low-order byte first (little-endian)
int neededBytes = 4;
for (; neededBytes > 1; neededBytes--) {
    if (bytes[neededBytes - 1] != 0) {
        break;
    }
}
// then just copy the first neededBytes into the result:
byte[] result = new byte[neededBytes];
System.arraycopy(bytes, 0, result, 0, neededBytes);
You can start with something like this:
static byte[] convert(int i) { // warning: untested
    if (i == 0)
        return new byte[0];
    if (i > 0 && i < 256)
        return new byte[]{(byte)i};
    if (i > 0 && i < 256 * 256)
        return new byte[]{(byte)i, (byte)(i >> 8)};
    if (i > 0 && i < 256 * 256 * 256)
        return new byte[]{(byte)i, (byte)(i >> 8), (byte)(i >> 16)};
    return new byte[]{(byte)i, (byte)(i >> 8), (byte)(i >> 16), (byte)(i >> 24)};
}
You'll need to decide if you want to be little-endian or big-endian. Note that negative numbers are encoded in 4 bytes.
If I understand right that you really, desperately want to save space, even at the expense of arcane bit shuffling: any array type is an unnecessary luxury, because you cannot use less than one whole byte for the length (an addressing space of 256) while you know that at most 4 will be needed. So I would reserve 4 bits for the length and a sign flag, and cram the rest aligned to that number of bytes. You might even save one more byte if your MSB is less than 128. I see the sign flag as useful for representing negative numbers in fewer than 4 bytes too; better to have the bit there every time (even for positive numbers) than the overhead of 4 bytes for representing -1.
Anyway, this is all speculation until you gather some statistics on your data set: how many integers are actually compressible, and whether the compression overhead is worth the effort.
I have a byte[] that I've read from a file, and I want to get an int from two bytes in it. Here's an example:
byte[] bytes = new byte[] {(byte)0x00, (byte)0x2F, (byte)0x01, (byte)0x10, (byte)0x6F};
int value = bytes.getInt(2,4); //This method doesn't exist
This should make value equal to 0x0110, or 272 in decimal. But obviously, byte[].getInt() doesn't exist. How can I accomplish this task?
The above array is just an example. Actual values are unknown to me.
You should just opt for the simple:
int val = ((bytes[2] & 0xff) << 8) | (bytes[3] & 0xff);
You could even write your own helper function getBytesAsWord (byte[] bytes, int start) to give you the functionality if you didn't want the calculations peppering your code but I think that would probably be overkill.
Try:
public static int getInt(byte[] arr, int off) {
    return arr[off] << 8 & 0xFF00 | arr[off + 1] & 0xFF;
} // end of getInt
Your question didn't indicate what the two args (2,4) meant. 2 and 4 don't make sense in your example as indices in the array to find 0x01 and 0x10; I guessed you wanted to take two consecutive elements, a common thing to do, so I used off and off+1 in my method.
You can't extend the byte[] class in Java, so you can't have a method bytes.getInt; instead I made a static method that takes the byte[] as the first arg.
The 'trick' to the method is that your bytes are 8-bit signed integers, and values of 0x80 and above are negative and would be sign extended (i.e. 0xFFFFFF80 when used as an int). That is why the '& 0xFF' masking is needed. The '<< 8' shifts the more significant byte 8 bits left.
The '|' combines the two values, just as '+' would. The order of the operators is important because << has the highest precedence, followed by &, followed by |, so no parentheses are needed.
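For example, with the array from your question:
byte[] bytes = {(byte) 0x00, (byte) 0x2F, (byte) 0x01, (byte) 0x10, (byte) 0x6F};
int value = getInt(bytes, 2); // 0x0110 == 272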
Here's a nice simple reliable way.
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4);
// by choosing big endian, high order bytes must be put
// to the buffer before low order bytes
byteBuffer.order(ByteOrder.BIG_ENDIAN);
// since ints are 4 bytes (32 bit), you need to put all 4, so put 0
// for the high order bytes
byteBuffer.put((byte)0x00);
byteBuffer.put((byte)0x00);
byteBuffer.put((byte)0x01);
byteBuffer.put((byte)0x10);
byteBuffer.flip();
int result = byteBuffer.getInt();
Alternatively, if you know both bytes hold non-negative values (high bit clear), you could use:
int val = (bytes[2] << 8) + bytes[3]; // without the & 0xFF masking, a byte with its high bit set would sign-extend and corrupt the result
You can use ByteBuffer. It has the getInt method you are searching for, and many other useful methods.
The BaseEncoding.base16() used below comes from Guava 14.0.1:
long value = new BigInteger(com.google.common.io.BaseEncoding.base16().encode(bytesParam), 16).longValue();