I have a function like so:
static int byteArrayToInt(byte[] bytes) {
    return bytes[0] << 24 | (bytes[1] & 0xFF) << 16 | (bytes[2] & 0xFF) << 8 | (bytes[3] & 0xFF);
}
This should convert a 4-byte array to an int.
The byte array in hexBinary is: E0C38881
The expected output is: 3770910849
But I am getting: -524056447
What do I need to do to fix this?
3770910849 is higher than Integer.MAX_VALUE. If you require a positive value, use long instead of int.
For example:
static long byteArrayToInt(byte[] bytes) {
    // assemble the int exactly as before, then mask to keep the low 32 bits as an unsigned value
    return ((bytes[0] << 24) | (bytes[1] & 0xFF) << 16 | (bytes[2] & 0xFF) << 8 | (bytes[3] & 0xFF)) & 0xffffffffL;
}
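If you would rather let the JDK do the masking: on Java 8+, Integer.toUnsignedLong widens an int to a long without sign extension, and ByteBuffer reads big-endian by default. A minimal sketch (byteArrayToUnsignedLong is just an illustrative name):
import java.nio.ByteBuffer;

static long byteArrayToUnsignedLong(byte[] bytes) {
    // getInt() assembles the first 4 bytes big-endian;
    // toUnsignedLong widens the result without sign extension
    return Integer.toUnsignedLong(ByteBuffer.wrap(bytes).getInt());
}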
This is what I used to get it working:
static long getLong(byte[] buf) {
    return ((buf[0] & 0xFFL) << 24) |
           ((buf[1] & 0xFFL) << 16) |
           ((buf[2] & 0xFFL) << 8)  |
            (buf[3] & 0xFFL);
}
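For the bytes from the question, a quick check:
byte[] bytes = { (byte) 0xE0, (byte) 0xC3, (byte) 0x88, (byte) 0x81 };
System.out.println(getLong(bytes)); // prints 3770910849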
I'm using Kryo to deserialize a class originally serialized in Spark. Kryo writes all of its primitives in BigEndian format, but when I try to deserialize the values on another machine, the value is being returned as if it were LittleEndian.
Underlying method in Kryo:
public int readInt () throws KryoException {
    require(4); // Does a basic positionality check that passes in this case
    byte[] buffer = this.buffer;
    int p = this.position;
    this.position = p + 4;
    return buffer[p] & 0xFF //
         | (buffer[p + 1] & 0xFF) << 8 //
         | (buffer[p + 2] & 0xFF) << 16 //
         | (buffer[p + 3] & 0xFF) << 24;
}
This returns the value 0x70000000. But when my program (in Scala) uses Kryo's readByte method:
public byte readByte () throws KryoException {
    if (position == limit) require(1);
    return buffer[position++];
}
and reads the bytes individually, like this:
val a = input.readByte()
val b = input.readByte()
val c = input.readByte()
val d = input.readByte()
val x = (a & 0xFF) << 24 | (b & 0xFF) << 16 | (c & 0xFF) << 8 | d & 0xFF
Then I get 0x70 for x. I don't understand what's happening here. Is it some kind of conversion issue between Scala and Java, or something to do with Kryo and the underlying byte array?
The code you wrote:
val a = input.readByte()
val b = input.readByte()
val c = input.readByte()
val d = input.readByte()
val x = (a & 0xFF) << 24 | (b & 0xFF) << 16 | (c & 0xFF) << 8 | d & 0xFF
assembles the bytes in the wrong order. If you inspect the readInt() method closely, you'll see it treats the first byte as the least significant, while your version treats it as the most significant.
val x = (a & 0xFF) | (b & 0xFF) << 8 | (c & 0xFF) << 16 | (d & 0xFF) << 24
would be the correct way to write this.
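To see the two byte orders side by side, here is a minimal Java sketch; the buffer contents are assumed from the 0x70000000 / 0x70 symptoms described in the question:
byte[] buf = { 0x00, 0x00, 0x00, 0x70 }; // assumed stream contents

// Kryo's readInt treats buf[0] as the LEAST significant byte:
int le = buf[0] & 0xFF | (buf[1] & 0xFF) << 8 | (buf[2] & 0xFF) << 16 | (buf[3] & 0xFF) << 24;
// le == 0x70000000

// The manual composition treats buf[0] as the MOST significant byte:
int be = (buf[0] & 0xFF) << 24 | (buf[1] & 0xFF) << 16 | (buf[2] & 0xFF) << 8 | buf[3] & 0xFF;
// be == 0x00000070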
I am trying to convert the following array to a long number.
My expected result: 153008 (I am not sure if it is decimal or hex)
My actual result: 176
This is what I did; what am I doing wrong?
byte[] bytesArray = { -80, 85, 2, 0, 0, 0, 0, 0 };
long Num = (bytesArray[7] << 56 |
            bytesArray[6] & 0xFF << 48 |
            bytesArray[5] & 0xFF << 40 |
            bytesArray[4] & 0xFF << 32 |
            bytesArray[3] & 0xFF << 24 |
            bytesArray[2] & 0xFF << 16 |
            bytesArray[1] & 0xFF << 8 |
            bytesArray[0] & 0xFF << 0);
Add brackets, and mask with the long literal 0xFFL so the shifts happen in 64-bit arithmetic (an int shift distance is taken modulo 32, so even with brackets an int shifted by 48 would really be shifted by 16):
long num = ((bytesArray[7] & 0xFFL) << 56 |
            (bytesArray[6] & 0xFFL) << 48 |
            (bytesArray[5] & 0xFFL) << 40 |
            (bytesArray[4] & 0xFFL) << 32 |
            (bytesArray[3] & 0xFFL) << 24 |
            (bytesArray[2] & 0xFFL) << 16 |
            (bytesArray[1] & 0xFFL) << 8 |
            (bytesArray[0] & 0xFFL));
Without the brackets, << binds tighter than &, so the shift is applied to 0xFF first, and bytesArray[x] & [shifted mask] evaluates to 0 for every term here except bytesArray[0] & 0xFF (the << 0 term), which is why you got 176.
Result: 153008 (in decimal), hence success!
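To see the precedence pitfall in isolation:
int b = 85;                  // bytesArray[1]
int wrong = b & 0xFF << 8;   // parsed as b & (0xFF << 8) == 85 & 0xFF00 == 0
int right = (b & 0xFF) << 8; // == 85 << 8 == 21760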
I have a hexBinary of 4 bytes as follows:
FFFFFFC4
Converting it should return something big, but the following function just gives -60:
public static int byteArrayToInt(byte[] b)
{
    return b[3] & 0xFF |
           (b[2] & 0xFF) << 8 |
           (b[1] & 0xFF) << 16 |
           (b[0] & 0xFF) << 24;
}
Why doesn't it work? Am I doing something wrong?
The primitive type int is 32 bits wide and its most significant bit is the sign bit. The value FFFFFFC4 has the MSB set to 1, so as an int it represents a negative number.
You can get "something big" by using long instead of int:
public static long byteArrayToInt(byte[] b)
{
    return (((long) b[3]) & 0xFF) |
           (((long) b[2]) & 0xFF) << 8 |
           (((long) b[1]) & 0xFF) << 16 |
           (((long) b[0]) & 0xFF) << 24;
}
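Alternatively, on Java 8+ you can keep the int version from the question and widen it unsigned afterwards:
int signed = byteArrayToInt(new byte[] { (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xC4 }); // -60
long unsigned = Integer.toUnsignedLong(signed); // 4294967236 == 0xFFFFFFC4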
public static int liEndVal (Byte[] mem) {
    return (mem[0] & 0xFF)
         | ((mem[1] & 0xFF) << 8)
         | ((mem[2] & 0xFF) << 16)
         | ((mem[3] & 0xFF) << 24);
}
How can I modify this method so that when my input is, for example, 45 A2 BD 8A, the little-endian integer output is not negative? I don't understand why it keeps returning the two's complement integer.
When (mem[3] & 0xFF) > 0x7F, the returned int will be negative, since the max value of int is 0x7FFFFFFF. If you want a positive return value, return a long.
public static long liEndVal (Byte[] mem) {
    return (mem[0] & 0xFF)
         | ((mem[1] & 0xFF) << 8)
         | ((mem[2] & 0xFF) << 16)
         | (((long) mem[3] & 0xFF) << 24);
}
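For the example input, a quick check:
Byte[] mem = { (byte) 0x45, (byte) 0xA2, (byte) 0xBD, (byte) 0x8A };
long v = liEndVal(mem); // 0x8ABDA245 == 2327683653, positive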
Because in that representation, the (signed) integer is negative. It looks like you need an unsigned int.
I think the real answer here is that you shouldn't mind the negative result: treat the 32 bits as unsigned and regard the sign of the output as unimportant. You cannot eliminate the possibility of negative output from an int, but I think you're wrong that it matters.
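If you do need to display or compare the value as unsigned, Java 8's unsigned helpers on Integer cover the common cases; a small sketch:
int x = 0x8ABDA245;                     // negative as a signed int
String s = Integer.toUnsignedString(x); // "2327683653"
long l = Integer.toUnsignedLong(x);     // 2327683653L
int c = Integer.compareUnsigned(x, 1);  // positive: x compares as a large unsigned value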
I have a byte array of size 64. I am receiving 64 bytes of data from UsbConnection.bulkTransfer(). I want to check whether I received a "Sync" packet or not. "Sync" is a long constant with value 4006390527L. Here's my code.
byte[] buffer = new byte[64];
bytesReceived += usbConnection.bulkTransfer(usbEndpointIn, buffer, 64, 2000);
String l = Base64.encodeToString(buffer, 0);
long ll = Long.parseLong(l);
if (C.SYNC_PAD_TO_HOST == ll) {
    Log.d(TAG, "SyncReceived"); // This is the Sync
    gotSync = true;
    System.arraycopy(buffer, 0, rcvbuf, 0, buffer.length);
}
I am getting very weird results; the if condition never becomes true. What's wrong here?
There are a few issues here. First, Base64.encodeToString produces Base64 text, not a decimal string, so Long.parseLong cannot recover the numeric value of the bytes. Second, a USB sync for full speed is 32 bits. An int is capable of containing the data, but not as an unsigned integer; the only reason to store it in a long is so that values from 0x80000000 to 0xFFFFFFFF come out as positive numbers. Only the least significant 32 bits of the long are used.
To calculate the first little-endian unsigned 32-bit number in the stream and store it as a long, use:
long ll = (buffer[0] & 0xFF)
        | ((buffer[1] & 0xFF) << 8)
        | ((buffer[2] & 0xFF) << 16)
        | ((buffer[3] & 0xFFL) << 24);
Here's a breakdown of what's happening:
(Note: your sync constant 4006390527 is 0xEECCAAFF in hex; the walkthrough below uses 0x17E1444C (decimal 400639052) as its example value, but the byte-assembly steps are identical.) USB transmits this value little-endian, which means the least significant byte is sent first. Over the wire, the bytes of 0x17E1444C arrive in this order:
4C 44 E1 17
To break down the steps:
long ll = buffer[0] & 0xFF;
// ll == 0x4C & 0x000000FF
// == (long) 0x0000004C
// == 0x000000000000004C
ll |= (buffer[1] & 0xFF) << 8;
// ll == 0x000000000000004C | ((0x44 & 0x000000FF) << 8)
// == 0x000000000000004C | (0x00000044 << 8)
// == 0x000000000000004C | 0x00004400
// == 0x000000000000004C | (long) 0x00004400
// == 0x000000000000004C | 0x0000000000004400
// == 0x000000000000444C
ll |= (buffer[2] & 0xFF) << 16;
// ll == 0x000000000000444C | ((0xE1 & 0x000000FF) << 16)
// == 0x000000000000444C | (0x000000E1 << 16)
// == 0x000000000000444C | 0x00E10000
// == 0x000000000000444C | (long) 0x00E10000
// == 0x000000000000444C | 0x0000000000E10000
// == 0x0000000000E1444C
That last step illustrates why we use & 0xFF. Here's what happens without the bitwise AND:
ll |= buffer[2] << 16;
// ll == 0x000000000000444C | (((int) 0xE1) << 16)
// == 0x000000000000444C | (0xFFFFFFE1 << 16)
// == 0x000000000000444C | 0xFFE10000
// == 0x000000000000444C | (long) 0xFFE10000
// == 0x000000000000444C | 0xFFFFFFFFFFE10000
// == 0xFFFFFFFFFFE1444C
This is because E1 exceeds the maximum positive byte value (0x7F), so as a byte it is negative. When the byte is promoted to int, the sign is extended into the upper bits. Masking with & 0xFF keeps only the low 8 bits and discards the extended sign.
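In isolation, the promotion looks like this:
byte b = (byte) 0xE1; // b == -31
int signExtended = b; // 0xFFFFFFE1
int masked = b & 0xFF; // 0x000000E1 == 225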
Now back to the process. The last byte:
ll |= (buffer[3] & 0xFFL) << 24;
// ll == 0x0000000000E1444C | ((0x17 & 0x00000000000000FF) << 24)
// == 0x0000000000E1444C | (0x0000000000000017 << 24)
// == 0x0000000000E1444C | 0x0000000017000000
// == 0x0000000017E1444C
You'll notice the last bitwise AND performed above uses a long version of 0xFF. This is because a left shift of 24 bits (or more) can produce a negative int whenever the byte being shifted exceeds the maximum positive byte (0x7F). Imagine that instead of 17 the last byte were A7. Here's what happens when using & 0xFF instead of & 0xFFL:
ll |= (buffer[3] & 0xFF) << 24;
// ll == 0x0000000000E1444C | ((0xA7 & 0x000000FF) << 24)
// == 0x0000000000E1444C | (0x000000A7 << 24)
// == 0x0000000000E1444C | 0xA7000000
// == 0x0000000000E1444C | (long) 0xA7000000
// == 0x0000000000E1444C | 0xFFFFFFFFA7000000
// == 0xFFFFFFFFA7E1444C
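Putting it together for the question: read the first four bytes little-endian into a long and compare against the constant. A sketch, assuming the sync value arrives in the first four bytes of buffer:
long ll = (buffer[0] & 0xFF)
        | ((buffer[1] & 0xFF) << 8)
        | ((buffer[2] & 0xFF) << 16)
        | ((buffer[3] & 0xFFL) << 24);
if (ll == C.SYNC_PAD_TO_HOST) { // 4006390527L == 0xEECCAAFFL
    Log.d(TAG, "SyncReceived");
    gotSync = true;
}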