While learning Java, I was trying to test the upper limit of a while loop that keeps incrementing an int. Please see the program below:
public class Test {
    public static int a() {
        int a = 10;
        while (a > 9)
            ++a;
        return a;
    }

    public static void main(String[] argc) {
        Test t = new Test();
        int k = t.a();
        System.out.println("k = " + (1 * k));
    }
}
I am aware that the 32-bit range is from -2,147,483,648 to 2,147,483,647, so on that basis I was expecting the output to be 2,147,483,647, but instead I am getting:
k = -2147483648
I even tried
System.out.println("k = "+(1 * k/2));
but the output is still:
k = -1073741824
Question:
Why is the result negative when I expected it to be positive?
You are incrementing your a int by 1 until it reaches Integer.MAX_VALUE + 1, which wraps its value around to -2147483648 == Integer.MIN_VALUE.
Here's your loop commented:
// "infinite" loop as a is assigned value 10
while(a > 9)
// when a reaches Integer.MAX_VALUE, it is still incremented by 1
++a;
// loop condition now false, as value for a has shifted to -2147483648
return a;
What is happening is called integer overflow.
Maximum 32-bit integer value in binary is:
0111 1111 1111 1111 1111 1111 1111 1111
When you add 1 to this number you get:
1000 0000 0000 0000 0000 0000 0000 0000
This is the two's complement representation of -2,147,483,648. Since any negative number is less than 9, the while loop exits.
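You can verify these patterns directly with Integer.toBinaryString (which omits leading zeros); a quick sketch:
int max = Integer.MAX_VALUE;
System.out.println(Integer.toBinaryString(max));     // 1111111111111111111111111111111 (31 ones)
System.out.println(Integer.toBinaryString(max + 1)); // 10000000000000000000000000000000 (sign bit only)
System.out.println(max + 1);                         // -2147483648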
You increment the value until it reaches the positive limit; one more increment clears every bit except the sign bit, which becomes 1.
0x7FFFFFFF = 01111111 11111111 11111111 11111111
This is binary representation of 2147483647, which is INT_MAX. When you increment it by one once again, it becomes
0x80000000 = 10000000 00000000 00000000 00000000
which is equal to INT_MIN, -2147483648.
Now, 2147483647 is greater than 9, so your loop continues. One more increment and, oops, it is suddenly -2147483648, which is smaller than 9. This is the point where your loop condition fails.
If we look at the Oracle docs on int values, we can find that:
The operators that work on the int primitive value do not indicate overflow or underflow
The results are specified by the language, independent of JVM version, as follows:
Integer.MAX_VALUE + 1 is the same as Integer.MIN_VALUE
Integer.MIN_VALUE - 1 is the same as Integer.MAX_VALUE
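If you want the overflow reported rather than silently wrapped, the exact-arithmetic helpers added in Java 8 throw instead; a minimal sketch:
int max = Integer.MAX_VALUE;
System.out.println(max + 1); // -2147483648: plain + wraps around silently
try {
    Math.addExact(max, 1);   // addExact throws on overflow instead of wrapping
} catch (ArithmeticException e) {
    System.out.println("overflow detected: " + e.getMessage());
}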
Related
I'm confused by bitwise operators. Whenever I do 99 (0110 0011) << 2, the answer is 396. My understanding of left shifts is that they add two 0s on the right side, so my answer would be 140 (1000 1100), not 396. Why is the answer 396 for 99 << 2 in Java?
You are only showing 8 bits, but an int is 32 bits.
byte 8 bits
short 16 bits
int 32 bits
long 64 bits
Integer calculations in Java are promoted to int or long, so even if your 99 value were a byte, the result of ((byte) 99) << 2 is still an int.
0110 0011 = 99 (byte)
0000 0000 0000 0000 0000 0001 1000 1100 = 396 (int)
Now, you can always cast it back to a byte, which will discard all high-order bits:
(byte)(99 << 2) = (byte)0b10001100 = (byte)0x8C = -116
Or you can discard the high-order bits while keeping it an int:
(99 << 2) & 0xFF = 0b10001100 = 0x0000008C = 140
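If you want to check those two options at the keyboard, here is a minimal sketch:
int shifted = 99 << 2;              // 396: the shift is performed in 32-bit int width
System.out.println((byte) shifted); // -116: the cast discards the high-order bits, leaving the sign bit set
System.out.println(shifted & 0xFF); // 140: masks to the low 8 bits, result stays an int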
Because a Java int is a signed 32-bit quantity (not 8 bits), and a bitwise left shift by 2 (<< 2) is the same as multiplying by 4. You can see this as follows:
int i = 99;
System.out.printf(" %s (%d)%n", Integer.toBinaryString(i), i);
i <<= 2;
System.out.printf("%s (%d)%n", Integer.toBinaryString(i), i);
Output is
1100011 (99)
110001100 (396)
I am trying to do left and right shifting of the ASCII values, but my method (shown below) gives the correct values for 0; where it should show 1, it gives me output like this:
the values of asciiValue in getLeastbit function 98 shift 0
temp value0
the values of asciiValue in getLeastbit function 97 shift -2147483648
temp value1
What is the problem? I am not able to resolve it.
int getleastbit(int asciiValue) {
    int temp;
    temp = asciiValue << 31;
    //System.out.println("temp value for checking"+temp);
    System.out.println("the values of asciiValue in getLeastbit function "+asciiValue+" shift "+temp);
    temp = temp >>> 31;
    System.out.println("temp value"+temp);
    return temp;
}
The output is correct. -2147483648 is 1000 0000 0000 0000 0000 0000 0000 0000 in 32-bit binary (the format of Java's int). You end up with the LSB (least significant bit) of the input in the position of the MSB (most significant bit).
You do a 31-bit left shift. As you know, each left shift basically doubles the number, e.g. 1 << 1 = 2, 2 << 1 = 4, etc. You can write a small program to test why 97 gives a negative value:
int a = 97;
for (int i = 0; i < 31; i++) {
    a = a << 1;
    System.out.println(a);
}
You will see (some of) the following values: 194, 388, 776, ..., 1627389952, -1040187392, -2080374784, 134217728, ..., -2147483648. Since your number is 97, we know that 31 shift operations will generate a number greater than Integer.MAX_VALUE, so overflow will occur. The shift behaves as expected: the most significant bit is discarded and a new 0 is added as the least significant bit. Since you do 31 shifts and your number was odd, you end up with a 1 followed by 31 zeros, which is a negative integer value. So if you want to know whether the last bit of the original number is 0 or 1: if you get this negative value, it was a 1; otherwise it was a 0.
After shifting left by 31 bits, only the original least significant bit remains, now in the sign-bit position. For 98, whose LSB is 0:
asciiValue << 31
0000 0000 0000 0000 0000 0000 0000 0000
For 97, whose LSB is 1, the result is:
1000 0000 0000 0000 0000 0000 0000 0000
which, printed as a signed int, is -2147483648. The unsigned right shift temp >>> 31 then moves that bit back down to position 0, yielding 0 or 1.
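As a side point, the shift round trip is not actually needed to isolate the lowest bit; a plain mask does the same job. A minimal sketch (the method name is my own):
static int leastBit(int asciiValue) {
    // AND with 1 keeps only the least significant bit
    return asciiValue & 1; // 0 for even input (e.g. 98), 1 for odd input (e.g. 97)
}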
On a side note: when you shift an int with the << or >> operator and the shift distance is greater than or equal to 32, the shift distance is taken mod 32. In other words, all but the low-order 5 bits of the shift distance are masked off.
For example, (i >> 32) == i for every int i. You might expect it to shift the entire number off to the right, returning 0 for positive inputs and -1 for negative inputs, but it doesn't; it simply returns i, because (i >> (32 & 0x1f)) == (i >> 0) == i.
public static void main(String[] args) {
    int i = 40;
    System.out.println(i >> 31);
    System.out.println(i >> 32);
    System.out.println(i << 31);
    System.out.println(i << 32);
}
Output:
0
40
0
40
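The same masking rule means you can reduce any shift distance mod 32 yourself first; a short sketch:
System.out.println(1 << 33);          // 2, because 33 & 0x1F == 1
System.out.println(1 << (33 & 0x1F)); // 2, the same shift
System.out.println(-8 >> 32);         // -8, the shift distance reduces to 0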
What is the fastest way to calculate all possible length-r combinations of n possible elements without resorting to brute force techniques or anything that requires STL?
While working on an Apriori algorithm for my final project in my data structures class, I developed an interesting solution that uses bit-shifting and recursion, which I will share in an answer below for anyone who is interested. However, is this the fastest way of achieving this (without using any common libraries)?
I ask more out of curiosity than anything else, as the algorithm I currently have works just fine for my purposes.
Here is the algorithm that I developed to solve this problem. It currently just outputs each combination as a series of ones and zeros, but it can easily be adapted to create data sets based on an array of possible elements.
void r_nCr(const unsigned int &startNum, const unsigned int &bitVal, const unsigned int &testNum) // Should be called with arguments (2^r)-1, 2^(r-1), 2^(n-1)
{
    unsigned int n = (startNum - bitVal) << 1;
    n += bitVal ? 1 : 0;
    for (unsigned int i = log2(testNum) + 1; i > 0; i--) // Prints combination as a series of 1s and 0s
        cout << (n >> (i - 1) & 1);
    cout << endl;
    if (!(n & testNum) && n != startNum)
        r_nCr(n, bitVal, testNum);
    if (bitVal && bitVal < testNum)
        r_nCr(startNum, bitVal >> 1, testNum);
}
How it works:
This function treats each combination of elements as a sequence of ones and zeros, which can then be expressed with respect to a set of possible elements (though this particular example does not do so).
For example, the results of 3C2 (all combinations of length-2 from a set of 3 possible elements) can be expressed as 011, 110, and 101. If the set of all possible elements is {A, B, C}, then the results can be expressed with respect to this set as {B, C}, {A, B}, and {A, C}.
For this explanation, I will be calculating 5C3 (all length-3 combinations composed of 5 possible elements).
This function accepts 3 arguments, all of which are unsigned integers:
The first parameter is the smallest possible integer whose binary representation has a number of 1s equal to the length of the combinations we're creating. This is our starting value for generating combinations. For 5C3, this would be 00111b, or 7 in decimal.
The second parameter is the value of the highest bit that is set to 1 in the starting number. This is the first bit that will be subtracted when creating the combinations. For 5C3, this is the third bit from the right, which has a value of 4.
The third parameter is the value of the nth bit from the right, where n is the number of possible elements that we are combining. This number will be bitwise-anded with the combinations we create to check whether the left-most bit of the combination is a 1 or a 0. For 5C3, we will use the 5th bit from the right, which is 10000b, or 16 in decimal.
Here are the actual steps that the function performs:
Calculate startNum - bitVal, bit-shift one space to the left, and add 1 if bitVal is not 0.
For the first iteration, the result should be the same as startNum. This is so that we can print out the first combination (which is equal to startNum) within the function so we don't have to do it manually ahead of time. The math for this operation occurs as follows:
00111 - 00100 = 00011
00011 << 1 = 00110
00110 + 1 = 00111
The result of the previous calculation is a new combination. Do something with this data.
We are going to print the result to the console. This is done using a for-loop whose variable starts out equal to the number of bits we are working with (calculated by taking log2 of testNum and adding 1; log2(16) + 1 = 4 + 1 = 5) and counts down to 0. On each iteration, we bit-shift right by i-1 and print the right-most bit by AND-ing the result with 1. Here is the math:
i=5:
00111 >> 4 = 00000
00000 & 00001 = 0
i=4:
00111 >> 3 = 00000
00000 & 00001 = 0
i=3:
00111 >> 2 = 00001
00001 & 00001 = 1
i=2:
00111 >> 1 = 00011
00011 & 00001 = 1
i=1:
00111 >> 0 = 00111
00111 & 00001 = 1
output: 00111
If the left-most bit of n (the result of the calculation in step 1) is 0 and n is not equal to startNum, we recurse with n as the new startNum.
Obviously this will be skipped on the first iteration, as we have already shown that n is equal to startNum. This becomes important in subsequent iterations, which we will see later.
If bitVal is greater than 0 and less than testNum, recurse with the current iteration's original startNum as the first argument. The second argument is bitVal shifted right by 1 (the same as integer division by 2).
We now recurse with the new bitVal set to the value of the next bit to the right of the current bitVal. This next bit is what will be subtracted in the next iteration.
Continue to recurse until bitVal becomes equal to zero.
Because bitVal is bit-shifted right by one in the second recursive call, we will eventually reach a point where bitVal equals 0. This algorithm expands as a tree, and when bitVal equals zero and the left-most bit is 1, we return one layer up from our current position. Eventually, this cascades all the way back to the root.
In this example, the tree has 3 subtrees and 6 leaf nodes. I will now step through the first subtree, which consists of 1 root node and 3 leaf nodes.
We will start at the last line of the first iteration, which is
if (bitVal && bitVal < testNum)
    r_nCr(startNum, bitVal >> 1, testNum);
So we now enter the second iteration with startNum=00111(7), bitVal = 00010(2), and testNum = 10000(16) (this number never changes).
Second Iteration
Step 1:
n = 00111 - 00010 = 00101 // Subtract bitVal
n = 00101 << 1 = 01010 // Shift left
n = 01010 + 1 = 01011 // bitVal is not 0, so add 1
Step 2: Print result.
Step 3: The left-most bit is 0 and n is not equal to startNum, so we recurse with n as the new startNum. We now enter the third iteration with startNum=01011(11), bitVal = 00010(2), and testNum = 10000(16).
Third Iteration
Step 1:
n = 01011 - 00010 = 01001 // Subtract bitVal
n = 01001 << 1 = 10010 // Shift left
n = 10010 + 1 = 10011 // bitVal is not 0, so add 1
Step 2: Print result.
Step 3: The left-most bit is 1, so do not recurse.
Step 4: bitVal is not 0, so recurse with bitVal shifted right by 1. We now enter the fourth iteration with startNum=01011(11), bitVal = 00001(1), and testNum = 10000(16).
Fourth Iteration
Step 1:
n = 01011 - 00001 = 01010 // Subtract bitVal
n = 01010 << 1 = 10100 // Shift left
n = 10100 + 1 = 10101 // bitVal is not 0, so add 1
Step 2: Print result.
Step 3: The left-most bit is 1, so do not recurse.
Step 4: bitVal is not 0, so recurse with bitVal shifted right by 1. We now enter the fifth iteration with startNum=01011(11), bitVal = 00000(0), and testNum = 10000(16).
Fifth Iteration
Step 1:
n = 01011 - 00000 = 01011 // Subtract bitVal
n = 01011 << 1 = 10110 // Shift left
n = 10110 + 0 = 10110 // bitVal is 0, so add 0
// Because bitVal = 0, nothing is subtracted or added; this step becomes just a straight bit-shift left by 1.
Step 2: Print result.
Step 3: The left-most bit is 1, so do not recurse.
Step 4: bitVal is 0, so do not recurse.
Return to Second Iteration
Step 4: bitVal is not 0, so recurse with bitVal shifted right by 1.
This will continue on until bitVal = 0 for the first level of the tree and we return to the first iteration, at which point we will return from the function entirely.
[Diagram: the function's tree-like expansion]
[Diagram: the function's full thread of execution]
Here is an alternate version using bitwise-or in place of addition and bitwise-xor in place of subtraction:
void r_nCr(const unsigned int &startNum, const unsigned int &bitVal, const unsigned int &testNum) // Should be called with arguments (2^r)-1, 2^(r-1), 2^(n-1)
{
    unsigned int n = (startNum ^ bitVal) << 1;
    n |= (bitVal != 0);
    for (unsigned int i = log2(testNum) + 1; i > 0; i--) // Prints combination as a series of 1s and 0s
        cout << (n >> (i - 1) & 1);
    cout << endl;
    if (!(n & testNum) && n != startNum)
        r_nCr(n, bitVal, testNum);
    if (bitVal && bitVal < testNum)
        r_nCr(startNum, bitVal >> 1, testNum);
}
What about this?
#include <stdio.h>

#define SETSIZE 3
#define NELEMS 7

#define BYTETOBINARYPATTERN "%d%d%d%d%d%d%d%d"
#define BYTETOBINARY(byte) \
    (byte & 0x80 ? 1 : 0), \
    (byte & 0x40 ? 1 : 0), \
    (byte & 0x20 ? 1 : 0), \
    (byte & 0x10 ? 1 : 0), \
    (byte & 0x08 ? 1 : 0), \
    (byte & 0x04 ? 1 : 0), \
    (byte & 0x02 ? 1 : 0), \
    (byte & 0x01 ? 1 : 0)

int main()
{
    unsigned long long x = (1 << SETSIZE) - 1; // smallest pattern with SETSIZE one-bits
    unsigned long long N = (1 << NELEMS) - 1;
    while (x < N)
    {
        printf("x: " BYTETOBINARYPATTERN "\n", BYTETOBINARY(x));
        unsigned long long a = x & -x;   // lowest set bit of x
        unsigned long long y = x + a;    // ripple the lowest block of 1s upward
        x = ((y & -y) / a >> 1) + y - 1; // refill the low-order 1s
    }
    return 0;
}
It should print 7C3.
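For readers who want the same bit trick (often called Gosper's hack) in Java, here is a minimal sketch; the class name and the r/n values are illustrative assumptions:
public class Combinations {
    public static void main(String[] args) {
        final int r = 3, n = 7;     // length-r combinations of n elements, i.e. 7C3
        long x = (1L << r) - 1;     // smallest pattern with r one-bits: 0000111
        final long limit = 1L << n; // stop once a 1 moves past bit n-1
        while (x < limit) {
            System.out.println(Long.toBinaryString(x));
            long a = x & -x;                 // lowest set bit of x
            long y = x + a;                  // ripple the lowest block of 1s upward
            x = ((y & -y) / a >> 1) + y - 1; // refill the low-order 1s
        }
    }
}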
Can someone explain why the following program prints 7 as its output?
public class Test {
    public static void main(String[] args) {
        int i = 1;
        int j = 2;
        int k = 5;
        System.out.println(i | j | k);
    }
}
I would like to know how the OR operation works on a Java int.
That is the bitwise-OR operator in Java. Showing only the last 8 bits for simplicity:
1 = 00000001
2 = 00000010
5 = 00000101
============
7 = 00000111 // 1 where the corresponding bit is set in any of the above numbers
These values have the bit values:
1 -> 0001
2 -> 0010
5 -> 0101
when you bitwise-OR them together, you get:
0111
which is 7.
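You can make the OR visible with Integer.toBinaryString; a quick sketch:
int i = 1, j = 2, k = 5;
System.out.println(Integer.toBinaryString(i));         // 1
System.out.println(Integer.toBinaryString(j));         // 10
System.out.println(Integer.toBinaryString(k));         // 101
System.out.println(Integer.toBinaryString(i | j | k)); // 111, which is 7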
I understand that 2 * i == (i ^ (i - 1)) + 1 in Java will let me find whether a number is a power of two. But can someone explain why this works?
2*i == (i ^ (i-1)) + 1
Basically, if i is a power of 2, it has a single 1 in its bit pattern. If you subtract 1 from it, all the bits below that 1 become 1, and the power-of-two bit itself becomes 0. XORing the two then produces an all-1s bit pattern. Add 1 to that, and you get the next power of 2.
Remember XOR truth table:
1 ^ 1 = 0
1 ^ 0 = 1
0 ^ 1 = 1
0 ^ 0 = 0
Example:
Let's say i is 256, which is this bit pattern.
100000000 = 2^8 = 256
100000000 - 1 = 011111111 = 2^7 + 2^6 + ... + 2^0 = 255
100000000 ^ 011111111 = 111111111 = 2^8 + 2^7 + ... + 2^0 = 511
111111111 + 1 = 1000000000 = 2^9 = 512 = 2*i
Here's an example where the number is not a power of 2:
i = 100 = 2^6 + 2^5 + 2^2
0110 0100
0110 0100 - 1 = 99 = 2^6 + 2^5 + 2^1 + 2^0 = 0110 0011
0110 0100 ^ 0110 0011 = 0000 0111 = 2^2 + 2^1 + 2^0 = 7
0000 0111 + 1 = 0000 1000 = 2^3 = 8 != (2*i)
Simplified Version
Also, there's a modified version of this check to determine whether a positive integer is a power of 2.
(i & (i-1)) == 0
Basically, the same rationale:
If i is a power of 2, it has a single 1 bit in its binary representation. Subtracting 1 turns that 1 bit into 0 and all the lower bits into 1, so ANDing the two produces an all-0 bit pattern.
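Both forms are easy to verify at the keyboard; a minimal sketch (the helper name is my own):
static boolean isPowerOfTwo(int i) {
    // a power of two has exactly one 1 bit, so i & (i - 1) clears it to 0
    return i > 0 && (i & (i - 1)) == 0;
}

public static void main(String[] args) {
    System.out.println(2 * 256 == (256 ^ 255) + 1); // true:  256 is a power of 2
    System.out.println(2 * 100 == (100 ^ 99) + 1);  // false: 100 is not
    System.out.println(isPowerOfTwo(256));          // true
    System.out.println(isPowerOfTwo(100));          // false
}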
The important bit is the i^(i-1) (I'm assuming this is a small typo in the question). Suppose i is a power of 2. Then its binary expansion is a 1 followed by many zeroes. i-1 is a number where that leading 1 is replaced by a zero and all the zeroes are replaced by ones. So the result of the XOR is a string of 1's that's the same number of bits as i.
On the other hand, if i isn't a power of 2, subtracting 1 from it won't flip all of those bits - the xor then identifies which bits didn't carry from one place to the next when you subtracted 1. There'll be a zero in the result of the xor, so when you add the 1, it won't carry into the next bit position.