I am trying to do left and right shifting of ASCII values, but my method (shown below) gives the correct value for 0; where it should show 1, it gives me output like this:
the values of asciiValue in getLeastbit function 98 shift 0
temp value0
the values of asciiValue in getLeastbit function 97 shift -2147483648
temp value1
What is the problem? I am not able to resolve it.
int getleastbit(int asciiValue) {
    int temp;
    temp = asciiValue << 31;
    //System.out.println("temp value for checking"+temp);
    System.out.println("the values of asciiValue in getLeastbit function "+asciiValue+" shift "+temp);
    temp = temp >>> 31;
    System.out.println("temp value"+temp);
    return temp;
}
The output is correct. -2147483648 is 1000 0000 0000 0000 0000 0000 0000 0000 in 32-bit binary (the format of a Java int). You end up with the LSB (least significant bit) in the position of the MSB (most significant bit).
You do a 31-bit left shift. As you know, each left shift operation basically doubles the number. E.g. 1 << 1 = 2, 2 << 1 = 4, etc. You can write a small program to test why it gives a negative value for 97:
int a = 97;
for (int i = 0; i < 31; i++) {
    a = a << 1;
    System.out.println(a);
}
You will see (some of) the following values: 194, 388, 776, ..., 1627389952, -1040187392, -2080374784, 134217728, ..., -2147483648. Since your number is 97, we know that 31 shift operations will generate a number greater than Integer.MAX_VALUE, so overflow will occur. In this case the shift behaves as expected: the most significant bit is discarded and a new 0 is added as the least significant bit. Since you do 31 shifts and your number was odd, you end up with a 1 followed by 31 zeros, which is a negative integer value. So if you want to know whether the last bit of the original number is 0 or 1: if you get this negative value, the last bit is 1; otherwise it is 0.
After asciiValue << 31, for 98 (whose least significant bit is 0) we are left with:
asciiValue << 31
0000 0000 0000 0000 0000 0000 0000 0000
For 97 (whose least significant bit is 1), we are left with -2147483648, which is:
asciiValue << 31
1000 0000 0000 0000 0000 0000 0000 0000
The unsigned right shift (temp >>> 31) then moves that bit back to the lowest position and fills in zeros from the left, so the final result is 0 for 98 and 1 for 97.
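To see both halves of that round trip in a runnable form, here is a minimal sketch (the class name is just for illustration, and the & 1 lines show an alternative way to read the lowest bit, not part of the original method):

public class LsbDemo {
    public static void main(String[] args) {
        int odd = 97;   // least significant bit is 1
        int even = 98;  // least significant bit is 0

        // The left shift pushes the LSB into the sign-bit position.
        System.out.println(odd << 31);            // -2147483648
        System.out.println(even << 31);           // 0

        // The unsigned right shift brings it back down, filling with zeros.
        System.out.println((odd << 31) >>> 31);   // 1
        System.out.println((even << 31) >>> 31);  // 0

        // For comparison, masking with & 1 reads the same bit directly.
        System.out.println(odd & 1);              // 1
        System.out.println(even & 1);             // 0
    }
}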
On a side note, when you shift an int with the << or >> operator and the shift distance is greater than or equal to 32, you take the shift distance mod 32; in other words, all but the low-order 5 bits of the shift distance are masked off.
For example, (i >> 32) == i for every integer i. You might expect it to shift the entire number off to the right, returning 0 for positive inputs and -1 for negative inputs, but it doesn't; it simply returns i, because (i >> (32 & 0x1f)) == (i >> 0) == i.
public static void main(String[] args) {
    int i = 40;
    System.out.println(i >> 31);
    System.out.println(i >> 32);
    System.out.println(i << 31);
    System.out.println(i << 32);
}
Output:
0
40
0
40
I have a binary file with unsigned shorts. I need to read the unsigned value from these bytes into a primitive short in Java. The byte order is little-endian. I am trying it this way:
byte[] bytes = new byte[frameLength];
for (int i = 0; i < fh.nFrames; i++) {
    raf.readFully(bytes);
    for (int j = 2; j < frameLength/2; j++) {
        short s;
        s = (short) (bytes[2*j + 1] << 8 | bytes[2 * j]);
        System.out.println(s);
        System.out.println(Integer.toBinaryString(s));
    }
}
Example:
case 1 (ok):
unsigned short : 8237
in hex(little endian): 2D 20
in binary(big endian): 0010 0000 0010 1101
and System.out.println(Integer.toBinaryString(s)) gives us
10000000101101. It's correct.
case 2 (not ok):
unsigned short: 384
in hex(little endian): 80 01
in binary(big endian): 0000 0001 1000 0000
and System.out.println(Integer.toBinaryString(s)) gives us
11111111111111111111111110000000.
System.out.println(s) gives us -128. That's not correct.
How can I get the value 384 from this?
Does someone have an idea why it isn't working?
Java converts everything to int before doing integer computations, and as bytes are signed, you get sign extension. You must force the bytes to stay in the [0, 255] range by bitwise AND-ing them with 0xFF:
s = (short) ((bytes[2*j + 1] & 0xFF) << 8 | (bytes[2 * j] & 0xFF));
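Here is a minimal sketch of why the mask matters, using the two bytes from case 2 (0x80 0x01) hard-coded instead of read from a file (the class and variable names are just for illustration):

public class UnsignedShortDemo {
    public static void main(String[] args) {
        byte lo = (byte) 0x80;  // low byte of 384 (little-endian)
        byte hi = (byte) 0x01;  // high byte of 384

        // Without masking: the byte 0x80 sign-extends to 0xFFFFFF80,
        // and the OR wipes out the high byte.
        short wrong = (short) (hi << 8 | lo);
        System.out.println(wrong);  // -128

        // With masking: each byte stays in the 0..255 range first.
        short right = (short) ((hi & 0xFF) << 8 | (lo & 0xFF));
        System.out.println(right);  // 384
    }
}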
I'm confused by bitwise operators. Whenever I do 99 (0110 0011) << 2, the answer is 396. My understanding of left shifts is that they add two 0s on the right side, so my answer should be 140 (1000 1100) and not 396. Why is the answer 396 for 99 << 2 in Java?
You are only showing 8 bits, but an int is 32 bits.
byte 8 bits
short 16 bits
int 32 bits
long 64 bits
Integer calculations in Java are coerced to int or long, so even if your 99 value was a byte, the result of ((byte)99) << 2 is still an int.
0110 0011 = 99 (byte)
0000 0000 0000 0000 0000 0001 1000 1100 = 396 (int)
Now, you can always cast it back to a byte, which will discard all high-order bits:
(byte)(99 << 2) = (byte)0b10001100 = (byte)0x8C = -116
Or you can discard the high-order bits while keeping it an int:
(99 << 2) & 0xFF = 0b10001100 = 0x0000008C = 140
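For anyone who wants to run the comparison, here is a small sketch (the class name is just for illustration):

public class ShiftWidthDemo {
    public static void main(String[] args) {
        int shifted = 99 << 2;

        System.out.println(shifted);                          // 396
        System.out.println(Integer.toBinaryString(shifted));  // 110001100

        // Cast to byte: keeps only the low 8 bits, reinterpreted as signed.
        System.out.println((byte) shifted);                   // -116

        // Mask with 0xFF: keeps the low 8 bits but stays a non-negative int.
        System.out.println(shifted & 0xFF);                   // 140
    }
}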
Because a Java int is a signed 32-bit quantity (not 8 bits), and a bitwise left shift by 2 (<< 2) is the same as multiplying by 4. You can see this like so:
int i = 99;
System.out.printf(" %s (%d)%n", Integer.toBinaryString(i), i);
i <<= 2;
System.out.printf("%s (%d)%n", Integer.toBinaryString(i), i);
Output is
1100011 (99)
110001100 (396)
Learning Java, I was trying to test the upper limit of a while loop that keeps incrementing an int. Please see the program below:
public class Test {
    public static int a() {
        int a = 10;
        while (a > 9)
            ++a;
        return a;
    }

    public static void main(String[] argc) {
        Test t = new Test();
        int k = t.a();
        System.out.println("k = " + (1 * k));
    }
}
I am aware that the 32-bit range is from -2,147,483,648 to 2,147,483,647, so on that basis I was expecting the output to be 2,147,483,647, but instead I am getting:
k = -2147483648
I even tried
System.out.println("k = "+(1 * k/2));
but the output is still:
k = -1073741824
Question:
Why is the result negative when it should be positive?
You are incrementing your a int by 1 until it reaches 1 + Integer.MAX_VALUE, which wraps its value around to -2147483648 == Integer.MIN_VALUE.
Here's your loop commented:
// "infinite" loop as a is assigned value 10
while(a > 9)
// when a reaches Integer.MAX_VALUE, it is still incremented by 1
++a;
// loop condition now false, as value for a has shifted to -2147483648
return a;
What is happening is called integer overflow.
Maximum 32-bit integer value in binary is:
0111 1111 1111 1111 1111 1111 1111 1111
When you add 1 to this number you get:
1000 0000 0000 0000 0000 0000 0000 0000
This is the two's complement representation of -2,147,483,648. Since any negative number is less than 9, the while loop exits.
You increment the value until it reaches the positive limit, where every bit except the sign bit is 1.
0x7FFFFFFF = 01111111 11111111 11111111 11111111
This is the binary representation of 2147483647, which is Integer.MAX_VALUE. When you increment it by one once again, it becomes
0x80000000 = 10000000 00000000 00000000 00000000
which is equal to Integer.MIN_VALUE, -2147483648.
Now,
2147483647 is greater than 9 so your loop continues. One more increment and oops, suddenly it is -2147483648 which is smaller than 9. This is the point where your loop condition fails.
If we look at the Oracle docs on int values, we can find out that:
The operators that work on the int primitive value do not indicate overflow or underflow
The results are specified by the language and independent of the JVM version, as follows:
Integer.MAX_VALUE + 1 is the same as Integer.MIN_VALUE
Integer.MIN_VALUE - 1 is the same as Integer.MAX_VALUE
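A short sketch of that wrap-around, plus Math.addExact, which throws on overflow instead of wrapping silently (available since Java 8); the class name is just for illustration:

public class OverflowDemo {
    public static void main(String[] args) {
        // Silent wrap-around: MAX_VALUE + 1 becomes MIN_VALUE, and vice versa.
        System.out.println(Integer.MAX_VALUE + 1 == Integer.MIN_VALUE);  // true
        System.out.println(Integer.MIN_VALUE - 1 == Integer.MAX_VALUE);  // true

        // If you want overflow to be an error rather than a wrap,
        // Math.addExact throws ArithmeticException instead.
        try {
            Math.addExact(Integer.MAX_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}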
The following code converts an int to a byte array.
I know the int i is right-shifted by 24, 16, and 8 bits and ANDed with 0xFF, but what I can't understand is why these numbers were used.
private static byte[] intToBytes(int i)
// split integer i into 4 byte array
{
    // map the parts of the integer to a byte array
    byte[] integerBs = new byte[4];
    integerBs[0] = (byte) ((i >>> 24) & 0xFF);
    integerBs[1] = (byte) ((i >>> 16) & 0xFF);
    integerBs[2] = (byte) ((i >>> 8) & 0xFF);
    integerBs[3] = (byte) (i & 0xFF);
    // for (int j = 0; j < integerBs.length; j++)
    //     System.out.println(" integerBs[ " + j + "]: " + integerBs[j]);
    return integerBs;
} // end of intToBytes()
OK, let's pretend you have a 32-bit binary number:
00001111 00000111 00000011 00000001
One byte is equivalent to 8 bits and therefore the number above is comprised of 4 bytes.
To separate these bytes out, we need to perform a series of shift and mask operations.
For instance, to get the first byte (00001111), we do the following:
00001111 00000111 00000011 00000001 (original)
00000000 00000000 00000000 00001111 (shifted 24 bits to the right)
Now we do not want those 3 bytes of zeros in front, so we use an 8-bit mask (0xFF) and perform an AND operation between our resulting 32-bit number and the mask.
For example:
00000000 00000000 00000000 00001111
&& 11111111
-----------------------------------
00001111 (the first byte)
Now you can imagine how to get the second byte (only shift 16 bits to the right). The whole purpose is to get the 8 bits you want into the lowest 8 positions and use the mask to get rid of the garbage in front.
A 32-bit integer consists of four bytes:
byte 0 starts at bit 0;
byte 1 starts at bit 8;
byte 2 starts at bit 16;
byte 3 starts at bit 24.
I hope this explains where 8, 16 and 24 come from (they are multiples of eight, which is the width of a byte in bits).
Finally, it is worth noting that
integerBs[3] = (byte) (i & 0xFF);
is the same as
integerBs[3] = (byte) ((i >>> 0) & 0xFF);
Zero is the "missing" shift distance after 24, 16, and 8.
As an int consists of four bytes, you can "reach" every byte in the int by shifting a multiple of 8 bits = 1 byte.
To get the first byte, you shift the int by 24 bits = 3 bytes, the second byte by shifting it 16 bits = 2 bytes, and so on...
The masking with & 0xFF keeps only the lowest byte and discards any higher bits, so you take only the byte you want.
To visualize it
31                             0
|                              |
11111111111111111111111111111111
Right shift by 24 equals
31                             0
|                              |
00000000000000000000000011111111
masking it using & 0xFF gives you the 8 bits from 0 to 7.
Some 32-bit integer:
1111 1001 1010 1001 1010 1001 1010 1001
Shifted to the right 24 bits:
1111 1001
ANDed with 0xFF:
1111 1001
1111 1111
1111 1001
...which is just the 4th (most significant) byte.
The integer:
1111 1001 1010 1001 1010 1001 1010 1001
Shifted to the right 16 bits:
1111 1001 1010 1001
ANDed with 0xFF:
1111 1001 1010 1001
0000 0000 1111 1111
0000 0000 1010 1001
...which is just the 3rd byte.
Etc...
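To tie the explanations back to the code, here is a small, self-contained demo that runs the intToBytes method from the question against the 00001111 00000111 00000011 00000001 pattern used above (the class name is just for illustration):

public class IntToBytesDemo {
    public static void main(String[] args) {
        int value = 0x0F070301;  // 00001111 00000111 00000011 00000001
        byte[] parts = intToBytes(value);

        // Prints 15, 7, 3, 1: one byte per line, most significant first.
        for (byte b : parts) {
            System.out.println(b);
        }
    }

    // Same method as in the question, repeated here so the demo is self-contained.
    private static byte[] intToBytes(int i) {
        byte[] integerBs = new byte[4];
        integerBs[0] = (byte) ((i >>> 24) & 0xFF);
        integerBs[1] = (byte) ((i >>> 16) & 0xFF);
        integerBs[2] = (byte) ((i >>> 8) & 0xFF);
        integerBs[3] = (byte) (i & 0xFF);
        return integerBs;
    }
}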
I understand that 2 * i == (i ^ (i - 1)) + 1 in Java will let me find whether a number is a power of two. But can someone explain why this works?
2*i == (i ^ (i-1)) + 1
Basically, if i were a power of 2, it would have a single 1 in its bit pattern. If you subtract 1 from that, all the bits below that 1 bit become 1, and the power-of-two bit itself becomes 0. Then you XOR the two values, which produces an all-1 bit pattern. You add 1 to that, and you get the next power of 2, which is 2 * i.
Remember XOR truth table:
1 ^ 1 = 0
1 ^ 0 = 1
0 ^ 1 = 1
0 ^ 0 = 0
Example:
Let's say i is 256, which is this bit pattern.
100000000 = 2^8 = 256
100000000 - 1 = 011111111 = 2^7 + 2^6 + ... + 2^0 = 255
100000000 ^ 011111111 = 111111111 = 2^8 + 2^7 + ... + 2^0 = 511
111111111 + 1 = 1000000000 = 2^9 = 512 = 2*i
Here's an example when you are not presented with a power of 2
i = 100 = 2^6 + 2^5 + 2^2
0110 0100
0110 0100 - 1 = 99 = 2^6 + 2^5 + 2^1 + 2^0 = 0110 0011
0110 0100 ^ 0110 0011 = 0000 0111 = 2^2 + 2^1 + 2^0 = 7
0000 0111 + 1 = 0000 1000 = 2^3 = 8 != (2*i)
Simplified Version
Also, there's a modified version of this check to determine if some positive, unsigned integer is a power of 2.
(i & (i-1)) == 0
Basically, it's the same rationale: if i is a power of 2, it has a single 1 bit in its bit representation. If you subtract 1 from it, that 1 bit becomes 0 and all the lower bits become 1. The AND then produces an all-0 bit pattern.
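A runnable sketch of both checks side by side (the method and class names are just for illustration, and both checks assume a positive i):

public class PowerOfTwoDemo {
    // The check from the question: 2*i equals (i ^ (i-1)) + 1 only for powers of two.
    static boolean viaXor(int i) {
        return 2 * i == (i ^ (i - 1)) + 1;
    }

    // The simplified check: a power of two ANDed with its predecessor is zero.
    static boolean viaAnd(int i) {
        return (i & (i - 1)) == 0;
    }

    public static void main(String[] args) {
        for (int i : new int[] {1, 2, 3, 100, 256, 511, 512}) {
            System.out.println(i + ": " + viaXor(i) + " " + viaAnd(i));
        }
    }
}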
The important bit is the i^(i-1) (I'm assuming this is a small typo in the question). Suppose i is a power of 2. Then its binary expansion is a 1 followed by many zeroes. i-1 is a number where that leading 1 is replaced by a zero and all the zeroes are replaced by ones. So the result of the XOR is a string of 1's that's the same number of bits as i.
On the other hand, if i isn't a power of 2, subtracting 1 from it only flips the bits up to and including the lowest set bit; the XOR picks out exactly those flipped bits, so the result still has zeros in the higher positions. When you add 1, the carry therefore can't ripple all the way up, and the result falls short of 2*i.