converting bytes array into long [duplicate] - java

This question already has answers here:
How to convert a byte array to its numeric value (Java)?
(9 answers)
Closed 5 years ago.
I am trying to convert the following array to a long number.
My expected result: 153008 (I am not sure if it's decimal or hex).
My actual result (what I am getting): 176.
This is what I did; what am I doing wrong?
byte bytesArray[] = { -80, 85, 2, 0, 0, 0, 0, 0 };
long Num = (bytesArray[7] << 56 |
            bytesArray[6] & 0xFF << 48 |
            bytesArray[5] & 0xFF << 40 |
            bytesArray[4] & 0xFF << 32 |
            bytesArray[3] & 0xFF << 24 |
            bytesArray[2] & 0xFF << 16 |
            bytesArray[1] & 0xFF << 8 |
            bytesArray[0] & 0xFF << 0);

Add brackets, and widen the operands to long before shifting (for an int, shift distances are taken mod 32, so << 48 and << 56 silently become << 16 and << 24):
long num = ((long) bytesArray[7] << 56 |
            (bytesArray[6] & 0xFFL) << 48 |
            (bytesArray[5] & 0xFFL) << 40 |
            (bytesArray[4] & 0xFFL) << 32 |
            (bytesArray[3] & 0xFFL) << 24 |
            (bytesArray[2] & 0xFFL) << 16 |
            (bytesArray[1] & 0xFFL) << 8 |
            (bytesArray[0] & 0xFFL));
Without the brackets, << binds tighter than &, so 0xFF is shifted first and bytesArray[x] is masked against the wrong bits. With this particular array every term except the last one evaluates to 0, which is why you saw 176.
Result: 153008, hence success!
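A minimal runnable sketch of the fix (the helper name toLong is mine): mask each byte with the long literal 0xFFL so the whole value is assembled in 64-bit arithmetic, and cross-check against java.nio.ByteBuffer, which implements the same conversion.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class BytesToLong {
    // Little-endian byte[8] -> long: b[0] is the least significant byte.
    static long toLong(byte[] b) {
        long result = 0;
        for (int i = 7; i >= 0; i--) {
            // 0xFFL promotes the byte to long and strips sign extension
            result = (result << 8) | (b[i] & 0xFFL);
        }
        return result;
    }

    public static void main(String[] args) {
        byte[] bytesArray = { -80, 85, 2, 0, 0, 0, 0, 0 };
        System.out.println(toLong(bytesArray)); // 153008

        // The standard library does the same thing:
        long viaBuffer = ByteBuffer.wrap(bytesArray)
                .order(ByteOrder.LITTLE_ENDIAN).getLong();
        System.out.println(viaBuffer); // 153008
    }
}
```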

Related

None of the 3 bytes to integer examples work

Background
I am taking 8, 16, 24 or 32 bit audio data and converting it to integers. BigInteger instances cannot be recycled, and allocating one per sample wastes a lot of memory, so I created this class to reduce the memory consumption. ByteBuffer seems to do the job well, except when the input is 3 bytes long.
I have never done any bit or byte operations, so I am completely lost here.
Issue
None of the 3-bytes-to-int examples that I found on Stack Overflow give the wanted result. Check the bytes3ToInt method.
Question
Is there something obvious that I am doing completely wrong?
Is the return new BigInteger(byte[] data).intValue(); really the only solution to this?
Code
import java.math.BigInteger;
import java.nio.ByteBuffer;

class BytesToInt {
    // HELP
    private static int bytes3ToInt(byte[] data) {
        // none below seem to work, even if I swap first and last bytes
        // these examples are taken from stackoverflow
        //return (data[2] & 0xFF) | ((data[1] & 0xFF) << 8) | ((data[0] & 0x0F) << 16);
        //return ((data[2] & 0xF) << 16) | ((data[1] & 0xFF) << 8) | (data[0] & 0xFF);
        //return ((data[2] << 28) >>> 12) | (data[1] << 8) | data[0];
        //return (data[0] & 255) << 16 | (data[1] & 255) << 8 | (data[2] & 255);
        return (data[2] & 255) << 16 | (data[1] & 255) << 8 | (data[0] & 255);
        // Only thing that works, but wastes memory
        //return new BigInteger(data).intValue();
    }

    public static void main(String[] args) {
        // Test with -666 example number
        byte[] negativeByteArray3 = new byte[] { (byte) 0xff, (byte) 0xfd, (byte) 0x66 };
        testWithData(negativeByteArray3);
    }

    private static void testWithData(byte[] data) {
        // Compare our converter to BigInteger,
        // which we know gives the wanted result
        System.out.println("Converter = " + bytes3ToInt(data));
        System.out.println("BigInteger = " + new BigInteger(data).intValue());
    }
}
Output
Converter = 6749695
BigInteger = -666
full code here http://ideone.com/qu9Ulw
First of all, your indices are backwards: in this (big-endian) layout the most significant byte is data[0], not data[2].
Secondly, the sign isn't being extended, so even though it would work for positive values, negative values come out wrong.
If you don't mask the highest byte (of the 24 bits), it will sign-extend properly, filling the top byte (of the 32-bit int) with 0x00 for positive values or 0xFF for negative values.
return (data[0] << 16) | (data[1] & 255) << 8 | (data[2] & 255);
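Putting the corrected line into a runnable sketch (class name is illustrative) to confirm it matches BigInteger on the -666 example:

```java
class Bytes3ToInt {
    // Big-endian 3 bytes -> int. data[0] is left unmasked on purpose:
    // its sign bit extends through the top byte of the 32-bit result.
    static int bytes3ToInt(byte[] data) {
        return (data[0] << 16) | ((data[1] & 0xFF) << 8) | (data[2] & 0xFF);
    }

    public static void main(String[] args) {
        byte[] neg = { (byte) 0xFF, (byte) 0xFD, (byte) 0x66 };
        System.out.println(bytes3ToInt(neg)); // -666, same as BigInteger
    }
}
```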

Java byte array to int function giving negative number

I have a function like so:
static int byteArrayToInt(byte[] bytes) {
    return bytes[0] << 24 | (bytes[1] & 0xFF) << 16 | (bytes[2] & 0xFF) << 8 | (bytes[3] & 0xFF);
}
This should convert a byte array of 4 bytes to an int.
The byte array in hexBinary is: E0C38881
The expected output is: 3770910849
But I am getting: -524056447
What do I need to do to fix this?
3770910849 is higher than Integer.MAX_VALUE. If you require a positive value, use long instead of int.
For example :
static long byteArrayToInt(byte[] bytes) {
    return (long) ((bytes[0] << 24) | (bytes[1] & 0xFF) << 16 | (bytes[2] & 0xFF) << 8 | (bytes[3] & 0xFF)) & 0xffffffffL;
}
This is what I used to get it working:
static long getLong(byte[] buf) {
    long l = ((buf[0] & 0xFFL) << 24) |
             ((buf[1] & 0xFFL) << 16) |
             ((buf[2] & 0xFFL) << 8) |
             ((buf[3] & 0xFFL) << 0);
    return l;
}
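Both answers can be condensed into one helper. A sketch (names are mine) that masks each byte with 0xFFL so every term is a long, reproducing the expected 3770910849:

```java
class UnsignedIntExample {
    // Big-endian 4 bytes -> unsigned 32-bit value held in a long.
    static long toUnsignedInt(byte[] b) {
        return ((b[0] & 0xFFL) << 24)
             | ((b[1] & 0xFFL) << 16)
             | ((b[2] & 0xFFL) << 8)
             |  (b[3] & 0xFFL);
    }

    public static void main(String[] args) {
        byte[] bytes = { (byte) 0xE0, (byte) 0xC3, (byte) 0x88, (byte) 0x81 };
        System.out.println(toUnsignedInt(bytes)); // 3770910849
    }
}
```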

Java - converting byte[] to int not giving result

I have a hexBinary of 4 bytes as follows:
FFFFFFC4
It should return something big but the following function just gives -60:
public static int byteArrayToInt(byte[] b)
{
    return b[3] & 0xFF |
           (b[2] & 0xFF) << 8 |
           (b[1] & 0xFF) << 16 |
           (b[0] & 0xFF) << 24;
}
Why doesn't it work? Am I doing something wrong?
The primitive type int is 32 bits long and its most significant bit is the sign bit. The value FFFFFFC4 has the MSB set to 1, which represents a negative number.
You can get "something big" by using long instead of int:
public static long byteArrayToInt(byte[] b)
{
    return (((long) b[3]) & 0xFF) |
           (((long) b[2]) & 0xFF) << 8 |
           (((long) b[1]) & 0xFF) << 16 |
           (((long) b[0]) & 0xFF) << 24;
}
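If you'd rather not hand-roll the shifts, the standard library can do this. A sketch (assuming Java 8+ for Integer.toUnsignedLong): ByteBuffer reads the 4 bytes big-endian as a signed int, and Integer.toUnsignedLong reinterprets that int as an unsigned value.

```java
import java.nio.ByteBuffer;

class UnsignedViaByteBuffer {
    static long toUnsigned(byte[] b) {
        // getInt() reads 4 bytes big-endian (ByteBuffer's default order);
        // toUnsignedLong zero-extends the signed int into a long.
        return Integer.toUnsignedLong(ByteBuffer.wrap(b).getInt());
    }

    public static void main(String[] args) {
        byte[] b = { (byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xC4 };
        System.out.println((int) toUnsigned(b)); // -60 (the signed view)
        System.out.println(toUnsigned(b));       // 4294967236 (unsigned)
    }
}
```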

How can I modify this little endian method so that it won't return a negative integer?

public static int liEndVal(Byte[] mem) {
    return (mem[0] & 0xFF)
         | ((mem[1] & 0xFF) << 8)
         | ((mem[2] & 0xFF) << 16)
         | ((mem[3] & 0xFF) << 24);
}
How can I modify this method so that when my input is, for example, 45 A2 BD 8A, the little-endian integer output will not be negative? I don't understand why it keeps returning the two's-complement integer.
When mem[3] > 0x7F, the returned int will be negative, since the max value of int is 0x7FFFFFFF. If you want a positive returned value, return a long.
public static long liEndVal(Byte[] mem) {
    return (mem[0] & 0xFF)
         | ((mem[1] & 0xFF) << 8)
         | ((mem[2] & 0xFF) << 16)
         | (((long) mem[3] & 0xFF) << 24);
}
Because in that representation, the (signed) integer is negative. Looks like you need an unsigned int.
I think the answer here is probably actually that you shouldn't mind that the answer is negative: just treat it as unsigned, and the signedness of the output as unimportant. You cannot eliminate the possibility of negative output, but I think you're wrong that it matters.
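A sketch combining both points (names are mine, and I've used a plain byte[] rather than the boxed Byte[] from the question): keep the little-endian method but return a long, and cross-check against ByteBuffer with LITTLE_ENDIAN order.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class LittleEndianUnsigned {
    // Little-endian 4 bytes -> unsigned 32-bit value in a long.
    // The (long) cast on mem[3] makes the top term 64-bit wide.
    static long liEndVal(byte[] mem) {
        return (mem[0] & 0xFF)
             | ((mem[1] & 0xFF) << 8)
             | ((mem[2] & 0xFF) << 16)
             | (((long) mem[3] & 0xFF) << 24);
    }

    public static void main(String[] args) {
        byte[] mem = { 0x45, (byte) 0xA2, (byte) 0xBD, (byte) 0x8A };
        System.out.println(liEndVal(mem)); // positive value 0x8ABDA245

        // Cross-check with the standard library (Java 8+):
        int i = ByteBuffer.wrap(mem).order(ByteOrder.LITTLE_ENDIAN).getInt();
        System.out.println(Integer.toUnsignedLong(i)); // same value
    }
}
```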

How to compare contents of 64 byte array with long, in java?

I have a byte array of size 64. I am receiving 64 bytes of data from usbConnection.bulkTransfer(). I want to check whether I received a "Sync" packet or not. "Sync" is a long constant with value 4006390527L. Here's my code:
byte[] buffer = new byte[64];
bytesReceived += usbConnection.bulkTransfer(usbEndpointIn, buffer, 64, 2000);
String l = Base64.encodeToString(buffer, 0);
long ll = Long.parseLong(l);
if (C.SYNC_PAD_TO_HOST == ll) {
    Log.d(TAG, "SyncReceived"); // This is the Sync
    gotSync = true;
    System.arraycopy(buffer, 0, rcvbuf, 0, buffer.length);
}
I am getting very weird results; the if condition never becomes true. What's wrong here?
There are a few issues here. A USB sync for full speed is 32 bits. An int is capable of containing the data, but not as an unsigned integer. The only reason your code stores it as a long is to represent values of 0x80000000 to 0xFFFFFFFF as positive numbers. However, only the least significant 32 bits of the long are used.
To calculate the first little-endian unsigned 32-bit number in the stream and store it as a long, use:
long ll = (buffer[0] & 0xFF)
| ((buffer[1] & 0xFF) << 8)
| ((buffer[2] & 0xFF) << 16)
| ((buffer[3] & 0xFFL) << 24);
Here's a breakdown of what's happening:
Your Sync packet in hex is 0x17E1444C. USB transmits this value using little-endian, which means the least significant byte is sent first. Over the wire, the bytes come in this order:
4C 44 E1 17
To break-down the steps:
long ll = buffer[0] & 0xFF;
// ll == 0x4C & 0x000000FF
// == (long) 0x0000004C
// == 0x000000000000004C
ll |= (buffer[1] & 0xFF) << 8;
// ll == 0x000000000000004C | ((0x44 & 0x000000FF) << 8)
// == 0x000000000000004C | (0x00000044 << 8)
// == 0x000000000000004C | 0x00004400
// == 0x000000000000004C | (long) 0x00004400
// == 0x000000000000004C | 0x0000000000004400
// == 0x000000000000444C
ll |= (buffer[2] & 0xFF) << 16;
// ll == 0x000000000000444C | ((0xE1 & 0x000000FF) << 16)
// == 0x000000000000444C | (0x000000E1 << 16)
// == 0x000000000000444C | 0x00E10000
// == 0x000000000000444C | (long) 0x00E10000
// == 0x000000000000444C | 0x0000000000E10000
// == 0x0000000000E1444C
That last step illustrates why we use & 0xFF. Here's what happens without the bitwise AND:
ll |= buffer[2] << 16;
// ll == 0x000000000000444C | ((int) 0xE1) << 16)
// == 0x000000000000444C | (0xFFFFFFE1 << 16)
// == 0x000000000000444C | 0xFFE10000
// == 0x000000000000444C | (long) 0xFFE10000
// == 0x000000000000444C | 0xFFFFFFFFFFE10000
// == 0xFFFFFFFFFFE1444C
This is because 0xE1 exceeds the maximum positive byte value (0x7F), so the byte holds a negative number. When it is promoted to int, the sign is preserved by sign extension. Masking with & 0xFF clears those sign-extension bits, leaving only the original 8 bits.
Now back to the process. The last byte:
ll |= (buffer[3] & 0xFFL) << 24;
// ll == 0x0000000000E1444C | ((0x17 & 0x00000000000000FF) << 24)
// == 0x0000000000E1444C | (0x0000000000000017 << 24)
// == 0x0000000000E1444C | 0x0000000017000000
// == 0x0000000017E1444C
You'll notice the last bitwise AND performed above uses a long version of 0xFF. This is because a left shift of 24 bits (or higher) can produce a negative int when the byte being shifted exceeds the maximum positive byte value (0x7F). Imagine instead of 17 that the last byte is A7. Here's what happens when using & 0xFF instead of & 0xFFL:
ll |= (buffer[3] & 0xFF) << 24;
// ll == 0x0000000000E1444C | ((0xA7 & 0x000000FF) << 24)
// == 0x0000000000E1444C | (0x000000A7 << 24)
// == 0x0000000000E1444C | 0xA7000000
// == 0x0000000000E1444C | (long) 0xA7000000
// == 0x0000000000E1444C | 0xFFFFFFFFA7000000
// == 0xFFFFFFFFA7E1444C
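Pulling the walkthrough together, a minimal runnable sketch (the helper name readUInt32LE is mine) that reads the first little-endian unsigned 32-bit value from a buffer:

```java
class SyncCheck {
    // First 4 bytes of buffer, little-endian, as an unsigned 32-bit value.
    static long readUInt32LE(byte[] buffer) {
        return (buffer[0] & 0xFF)
             | ((buffer[1] & 0xFF) << 8)
             | ((buffer[2] & 0xFF) << 16)
             | ((buffer[3] & 0xFFL) << 24); // 0xFFL keeps the top term long
    }

    public static void main(String[] args) {
        // Wire order from the walkthrough: 4C 44 E1 17
        byte[] buffer = { 0x4C, 0x44, (byte) 0xE1, 0x17 };
        System.out.println(Long.toHexString(readUInt32LE(buffer))); // 17e1444c
    }
}
```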