I found this code online, but I am unable to understand the logic behind it:
public static int add(int a, int b) {
    if (b == 0) return a;
    int sum = a ^ b; // add without carrying
    System.out.println("sum is : " + sum);
    int carry = (a & b) << 1; // carry, but don't add
    return add(sum, carry); // recurse
}
Let's look at an example (using 8 bits for simplicity)
a = 10010110
b = 00111101
a^b is the xor, which gives 1 for places where there is a 1 in one number and 0 in the other. In our example:
a^b = 10101011
Since 0 + 0 = 0, 0 + 1 = 1 and 1 + 0 = 1, the only columns left to deal with are the ones that have a 1 in both of the numbers. In our example, a^b is short by whatever the answer to
00010100
+ 00010100
is. In binary, 1 + 1 = 10, so the answer to the above sum is
00101000
or (a & b) << 1. Therefore the sum of a^b and (a & b) << 1 is the same as a + b.
So, assuming the process is guaranteed to terminate, the answer will be correct. And the process will terminate because each time we call add recursively the second parameter has at least one more 0 at the end, due to the bit shift <<. Therefore, we are guaranteed to eventually end up with the second argument consisting entirely of 0s, so that the line if (b == 0) return a; can end the process and give us an answer.
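Here is a minimal iterative sketch of the same idea (my own rewrite, not the posted code), which makes the termination argument concrete: the second operand gains at least one more trailing zero on every pass, so the loop must finish.
public static int addIterative(int a, int b) {
    while (b != 0) {
        int sum = a ^ b;           // add each column without carrying
        int carry = (a & b) << 1;  // the carry bits, moved one column left
        a = sum;
        b = carry;                 // b gains at least one more trailing zero each pass
    }
    return a;
}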
Consider, as an example, 5+7:
5 = 101 (Base 2)
7 = 111 (Base 2)
Now consider adding the two (base 2) digits:
0+0 = 0 = 0 carry 0
1+0 = 1 = 1 carry 0
0+1 = 1 = 1 carry 0
1+1 = 10 = 0 carry 1
The sum (without carrying) of A+B is A^B and the carry is A&B; and when you carry a number it is shifted one digit to the left (hence (A&B)<<1).
So:
5 = 101 (Base 2)
7 = 111 (Base 2)
5^7 = 010 (sum without carrying)
5&7 = 101, so the carry shifted left is 1010
Then we can recurse to add the carry:
A = 010
B = 1010
A^B = 1000 (sum without carrying)
A&B = 0010, so the carry shifted left is 0100
Then we can recurse again as we still have more to carry:
A' = 1000
B' = 100 (without the leading zeros)
A'^B' = 1100 (sum without carrying)
A'&B' = 0000 (the carry shifted left)
Now there is nothing to carry - so we can stop and the answer is 1100 (base 2) = 12 (base 10).
The algorithm is just implementing longhand binary addition: the XOR adds each column without carrying, and the bit-shifted AND produces the carry. It recurses until there is nothing more to carry, which will always happen because the shift appends another zero to the carry on every step, so with each recursion at least one more low-order bit of the second argument is guaranteed to be zero.
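For completeness, here is what the posted method from the question actually prints for this 5 + 7 case, via a small hypothetical main (assuming the add method is in the same class):
public static void main(String[] args) {
    System.out.println(add(5, 7));
    // Console output:
    // sum is : 2     (5 ^ 7 = 2, carry (5 & 7) << 1 = 10)
    // sum is : 8     (2 ^ 10 = 8, carry 4)
    // sum is : 12    (8 ^ 4 = 12, carry 0)
    // 12
}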
We are converting the integers to bits and using bitwise operators:
XOR, i.e. ^ : 0 ^ 0 and 1 ^ 1 give 0; the other cases give 1.
AND, i.e. & : 1 & 1 gives 1; the other cases give 0.
<< or left shift, i.e. shift left and append a 0 bit: 0010 becomes 0100.
e.g.
add(2, 3)
2 = 0010
3 = 0011
XOR both to get the initial sum: 0001
carry: a & b = 0010
left shift by 1 bit: 0100, i.e. 4
add(1, 4)
XOR both: 0001 ^ 0100 gives 0101, i.e. 5
carry = (0001 & 0100) << 1 = 0000
Since the carry is 0, the recursion stops and returns the sum, 5.
This is the table for addition:
+ | 0   1
--+------
0 | 0   1
1 | 1  10
       ▲
If you ignore the carry bit ▲ you'll see that it's the same as the XOR table:
^ | 0  1
--+-----
0 | 0  1
1 | 1  0
So if you combine two numbers with bitwise XOR you get bit-by-bit addition without carry.
Now, what is the carry? It's a bit that's only there when both inputs are 1.
You can get that with AND:
& | 0  1
--+-----
0 | 0  0
1 | 0  1
But it needs to be added to the sum after being shifted one position to the left, because it's "carried" over, hence the (a & b) << 1
So you can compute the addition without carry and the carry itself. How do you add them together without using addition? Simple! By recursing on this very definition of addition!
See pbabcdefp's answer for why the recursion always terminates.
As part of a serial data protocol decoder, I must decode data that has sync bits inserted (bits that are not part of the data and are always '1'). I need to remove the sync bits and assemble the data by shifting the remaining bits left. Each 32-bit word has a different pattern of sync bits. I know what the patterns are, but I cannot come up with a generalized way of removing the sync bits.
For example, I might have a bit pattern like this (just showing 12 bits for example):
0 1 1 1 1 0 0 1 1 0 1 1
I know that some of those bits are sync bits, specifically those that are '1' in this mask:
0 0 1 1 0 0 0 0 1 0 0 1
The resulting data should be those data bits with a '0' in the corresponding mask, shifted to remove the sync bits, padded right with zeros. The mask above could be understood as "take first 2 bits, skip next 2 bits, take next 4 bits, skip next bit, take 2 bits, skip 1 bit".
E.g I should end up with:
0 1 1 0 0 1 0 1 0 0 0 0
Trying to do this in Java but I don't see any bit mask/shift operations that would make this work.
Best method I could come up with (does not left align the results, but that is OK for my purposes):
private static final int MSB_ONLY = 0x80000000;
private static int squeezeBits(int data, int mask) {
    int v = 0;
    for (int i = 0; i < 32; i++) {
        if ((mask & MSB_ONLY) != MSB_ONLY) {
            // There is a 0 in the mask, so we want this data bit
            v = (v << 1) | ((data & MSB_ONLY) >>> 31);
        } else {
            // Throw the bit away
        }
        mask = mask << 1;
        data = data << 1;
    }
    return v;
}
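If you did want the result left-aligned, a small variant (my own sketch, not tested against your protocol; it reuses the MSB_ONLY constant above) can count how many bits were kept and shift the result up at the end:
private static int squeezeBitsLeftAligned(int data, int mask) {
    int v = 0;
    int kept = 0;                              // number of data bits copied so far
    for (int i = 0; i < 32; i++) {
        if ((mask & MSB_ONLY) == 0) {          // 0 in the mask: keep this data bit
            v = (v << 1) | ((data & MSB_ONLY) >>> 31);
            kept++;
        }
        mask <<= 1;
        data <<= 1;
    }
    return kept == 0 ? 0 : v << (32 - kept);   // pad on the right with zeros
}
With your 12-bit example placed in the top bits of an int, the kept bits 0 1 1 0 0 1 0 1 end up at the top of the returned word, padded right with zeros.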
I have been given this sample code for some exercises, and it shows how to find whether an integer is odd or even.
int x = 4;
if ((x & 1) == 0)
{
    System.out.println("even");
}
else
{
    System.out.println("odd");
}
But I don't understand why you do x & 1. What's the purpose of that?
In the binary representation of a number, any number whose least significant bit is 0 is even. It also helps to know what the & operator does.
For example, 5 = 0101 (binary) and 1 = 0001 (binary). In this case, it compares 0101 with 0001.
You compare them bitwise: the first bit is 0 & 0 = 0, the second bit is 1 & 0 = 0, the third bit is 0 & 0 = 0, and the last bit is 1 & 1 = 1.
So 5 & 1 = 0001, which is 1 in decimal, and 1 == 0 evaluates to false for x = 5.
For even numbers the least significant bit is 0, so any even number & 1 will always evaluate to 0.
That is because & performs a bitwise AND operation:
if ( (x & 1) == 0 )
Your code is effectively saying: print "odd" if the last binary digit of x is 1.
And that works because all odd numbers have 1 as their last binary digit.
Consider this:
1 is 0001 in binary.
2 is 0010 in binary.
When you AND them (0001 & 0010), only the positions where both numbers have a 1 remain 1, which means:
0001 & 0010 gives you 0000 (0) // 1 & 2 = 0
Look at this pattern:
0001 & 0001 = 1 //1 & 1 = 1 (is odd)
0010 & 0001 = 0 //2 & 1 = 0 (is even)
0011 & 0001 = 1 //3 & 1 = 1 (is odd)
0100 & 0001 = 0 //4 & 1 = 0 (is even)
0101 & 0001 = 1 //5 & 1 = 1 (is odd)
0110 & 0001 = 0 //6 & 1 = 0 (is even)
It's a bitwise AND operation between the binary representations of the two numbers. Odd numbers always have their least significant bit set; even numbers do not.
So (x & 1) == 0 is true for even numbers, but not for odd ones.
http://www.tutorialspoint.com/java/java_bitwise_operators_examples.htm
It evaluates the variable's binary value
Let's say x = 6 (110 in binary) and y = 7 (111)
Since we know that 1&0=0 and 1&1=1 (or true&false=false and true&true=true)
x & 1 == 0 // evaluates to true if x is even because
110
&001
----
000
y & 1 == 0 // evaluates to false because
111
&001
----
001
The LSB (least significant bit) of a binary number holds the parity information:
any odd number has LSB == 1 and any even number has LSB == 0.
So when you do a bitwise AND against 1 you are effectively multiplying bit by bit by 1;
the purpose of this is to clear all the other bits while leaving the LSB exactly as it is (that is why you AND with 1).
Whether a binary number is odd or even can be identified just by looking at its least significant bit: whether it is set or not (1 or 0). If the least significant bit is 1 the number is odd; otherwise it is even.
Just check (number % 2 != 0): if true, it's an odd number, else it's even.
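One extra point not covered above: in Java the & test also handles negative numbers correctly, whereas a naive x % 2 == 1 check does not, because the remainder of a negative odd number is -1:
int x = -3;
System.out.println(x % 2);                          // -1, so (x % 2 == 1) is false
System.out.println(x & 1);                          // 1, because ...11111101 has its low bit set
System.out.println((x & 1) == 0 ? "even" : "odd");  // odd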
I know that ^ is the xor operator in Java. But I couldn't understand it in the following context.
int step = 0;
...
step ^=1;
Source: Google Code Jam 2014 (Participant's answer)
File Link : here
It falls under the compound assignment operator category, like
+= -= *= /= %= &= ^= |= <<= >>= >>>=
which means
^= is the bitwise exclusive OR and assignment operator, so
step ^= 1; is the same as step = step ^ 1;
^ stands for the XOR operator.
a ^= b is equivalent to a = a ^ b
step ^= 1 means step = step xor 1, similar to how step += 1 evaluates to step = step + 1.
So ^= is the shorthand XOR-assignment operator.
The XOR truth table is:
operand1   operand2   output
    0          0         0
    0          1         1
    1          0         1
    1          1         0
so if my step is 1, then 1 xor 1 would be 0.
From the tutorial linked below:
^
Binary XOR Operator copies the bit if it is set in one operand but not both. Assume integer variable A holds 60 and variable B holds 13; then (A ^ B) will give 49, which is 0011 0001.
In your case it is
step = step ^ 1
and as a result you get step = 1 (since step started at 0).
http://www.tutorialspoint.com/java/java_basic_operators.htm
As others have pointed out, step ^=1 flips the least significant bit of step. This makes even numbers get 1 bigger, and odd numbers get 1 smaller.
Examples:
0 --> 1
1 --> 0
7 --> 6
6 --> 7
-3 --> -4
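A typical use of this flip (and plausibly what that Code Jam solution was doing, though that is a guess without the file) is toggling between two states, e.g. alternating turns between player 0 and player 1:
int step = 0;
for (int turn = 0; turn < 4; turn++) {
    System.out.println("player " + step);  // prints 0, 1, 0, 1
    step ^= 1;                             // flip the lowest bit: 0 <-> 1
}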
A simple, very efficient boolean true/false check would be nice. Should I use recursion, or is there some better way to determine whether an integer is a power of 2?
From here:
Determining if an integer is a power of 2
unsigned int v; // we want to see if v is a power of 2
bool f; // the result goes here
f = (v & (v - 1)) == 0;
Note that 0 is incorrectly considered a power of 2 here. To remedy
this, use:
f = v && !(v & (v - 1));
Why does this work? An integer power of two only ever has a single bit set. Subtracting 1 has the effect of changing that bit to a zero and all the bits below it to one. AND'ing that with the original number will always result in all zeros.
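In Java there is no unsigned int, so a rough equivalent of the corrected check might look like this sketch, which treats only strictly positive values as powers of two:
static boolean isPowerOfTwo(int v) {
    // a power of two has exactly one bit set; v - 1 flips that bit and sets
    // all the bits below it, so the AND is zero only for powers of two
    return v > 0 && (v & (v - 1)) == 0;
}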
An integer power of two is written in binary as a 1 followed by zero or more zeros, i.e.
value (binary)   value (decimal)   value - 1 (binary)
10               2                 1
100              4                 11
1000             8                 111
10000            16                1111
As Mitch said,
(value & (value - 1)) == 0
when value is a power of 2, and for no other number apart from 0 (1 also satisfies it, but 1 is normally regarded as 2 raised to the power of zero, so that is what you want).
For Mitch's solution, here are some numbers > 0 that are not powers of 2:
value     value - 1   value & (value - 1)
1000001   1000000     1000000
1000010   1000001     1000000
1000100   1000011     1000000
1001000   1000111     1000000
1000011   1000010     1000010
1000101   1000100     1000100
1000111   1000110     1000110
The result is never zero.
Subtracting 1 from a number flips the bits up to and including the lowest 1; for powers of two there is only a single '1', so value & (value - 1) == 0, while for other numbers the second and subsequent 1s are left unaffected.
Zero will need to be excluded.
Another possible solution (probably slightly slower) is
(A & -A) == A
Powers of 2:
A -A
00001 & 11111 = 00001
00010 & 11110 = 00010
00100 & 11100 = 00100
Some other numbers:
A -A
00011 & 11101 = 00001
00101 & 11011 = 00001
Again you need to exclude 0 as well
To solve this problem, I did the following:
Write the number in binary; you will see that a power of 2 has only a single one in it.
Fiddle with the various operators at the bit level and see what works.
Doing this, I found the following also work:
(A & -A) == A
(~A | (~A + 1)) == -1 /* bitwise NOT of (a & (a - 1)) == 0 */
Not sure whether you mean efficient in terms of computation speed, or in terms of lines of code. But you could try value == Integer.highestOneBit(value). Don't forget to exclude zero if you need to.
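As a quick illustration of that form (my own demo; note that 0 and Integer.MIN_VALUE also satisfy the raw equality, so a strict test should require value > 0):
for (int v : new int[] {0, 1, 2, 3, 4, 6, 8, Integer.MIN_VALUE}) {
    boolean raw = (v == Integer.highestOneBit(v));
    boolean strict = v > 0 && raw;
    System.out.println(v + ": raw=" + raw + " strict=" + strict);
}
// 0 and Integer.MIN_VALUE pass the raw equality but fail the strict test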
In Java, when you do
a % b
and a is negative, it will return a negative result instead of wrapping around to b like it should. What's the best way to fix this? The only way I can think of is
a < 0 ? b + a : a % b
It behaves as it should: a % b == a - (a / b) * b; i.e. it's the remainder.
You can do (a % b + b) % b
This expression works because the result of (a % b) always has magnitude less than b, whether a is positive or negative. Adding b takes care of negative values of a: when a is negative, (a % b) lies in the range (-b, 0], so (a % b + b) lies in (0, b]. The final modulo is there in case a was positive to begin with: for positive a, (a % b + b) lies in [b, 2*b), so the final % b brings it back below b; for negative a it only matters in the boundary case (a % b + b) == b, which it maps to 0 as desired.
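As a small sketch, this is usually wrapped in a helper method (the name mod here is just illustrative; it assumes b > 0):
static int mod(int a, int b) {
    return (a % b + b) % b;  // always in the range [0, b) when b > 0
}
// mod(-2, 3) == 1, mod(2, 3) == 2, mod(-3, 3) == 0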
As of Java 8, you can use Math.floorMod(int x, int y) and Math.floorMod(long x, long y). Both of these methods return the same results as Peter's answer.
Math.floorMod( 2, 3) = 2
Math.floorMod(-2, 3) = 1
Math.floorMod( 2, -3) = -1
Math.floorMod(-2, -3) = -2
For those not using (or not able to use) Java 8 yet, Guava came to the rescue with IntMath.mod(), available since Guava 11.0.
IntMath.mod( 2, 3) = 2
IntMath.mod(-2, 3) = 1
One caveat: unlike Java 8's Math.floorMod(), the divisor (the second parameter) cannot be negative.
In number theory, the result is always non-negative. I would guess that this is not always the case in computer languages because not all programmers are mathematicians. My two cents: I would consider it a design defect of the language, but you can't change it now.
For example:
=MOD(-4, 180) = 176
=MOD(176, 180) = 176
because 180 * (-1) + 176 = -4, just as 180 * 0 + 176 = 176.
Using the clock example here, http://mathworld.wolfram.com/Congruence.html
you would not say duration_of_time mod cycle_length is -45 minutes, you would say 15 minutes, even though both answers satisfy the base equation.
Java 8 has Math.floorMod, but it is very slow (its implementation has multiple divisions, multiplications, and a conditional). It's possible that the JVM has an intrinsic, optimized stub for it, however, which would speed it up significantly.
The fastest way to do this without floorMod is like some other answers here, but with no conditional branches and only one slow % op.
Assuming n is positive, and x may be anything:
int remainder = (x % n); // may be negative if x is negative
//if remainder is negative, adds n, otherwise adds 0
return ((remainder >> 31) & n) + remainder;
The results when n = 3:
x | result
----------
-4| 2
-3| 0
-2| 1
-1| 2
0| 0
1| 1
2| 2
3| 0
4| 1
If you only need a uniform distribution between 0 and n - 1 rather than the exact mod operator, and your x values do not cluster near 0, the following will be even faster: there is more instruction-level parallelism, and the slow % computation runs in parallel with the other parts since they do not depend on its result.
return ((x >> 31) & (n - 1)) + (x % n)
The results for the above with n = 3:
x | result
----------
-5| 0
-4| 1
-3| 2
-2| 0
-1| 1
0| 0
1| 1
2| 2
3| 0
4| 1
5| 2
If the input is random over the full range of an int, the distributions of the two solutions will be the same. If the input clusters near zero, there will be too few results at n - 1 with the latter solution.
Here is an alternative:
a < 0 ? b-1 - (-a-1) % b : a % b
This might or might not be faster than that other formula [(a % b + b) % b]. Unlike the other formula, it contains a branch, but uses one less modulo operation. Probably a win if the computer can predict a < 0 correctly.
(Edit: Fixed the formula.)
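A quick sanity check of that formula against Math.floorMod (my own snippet; assumes Java 8+ and a positive b):
int b = 3;
for (int a = -7; a <= 7; a++) {
    int viaBranch = a < 0 ? b - 1 - (-a - 1) % b : a % b;
    if (viaBranch != Math.floorMod(a, b)) {
        System.out.println("mismatch at a = " + a);
    }
}
System.out.println("done");  // expect no mismatches for b > 0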