How do I use bitwise operators to accomplish this? - java

int a = 0b1011011011;
int b = 0b1000110110;
int c = 0b0101010101;
int d = (a & b) ^ c; //Wrong
Intended value of d is 0b1010011110
I need to write d so that when the bit of c is 1, the corresponding bit in the result is the corresponding bit in b, but when the bit of c is 0, the corresponding bit in the result is the corresponding bit in a.
I've been stuck on this for awhile now and I can't seem to come up with something in one line.

I had this earlier but didn't see your edit.
int d = (c & b) ^ (~c & a);
q = c & b yields b's bit wherever c is 1, and 0 wherever c is 0.
p = ~c & a yields a's bit wherever c is 0, and 0 wherever c is 1.
q ^ p combines them: since q and p never both have a 1 in the same position, XOR behaves exactly like OR here, preserving the selected bits of a and b.

I get the feeling that this is for homework, but I'll answer anyway, since it's hard to explain this without giving away the answer.
Consider two simpler questions. Forget about the multiple bits and pretend a, b, c, and d are only one bit (since this is a bitwise operation, the logic will not change):
When c is 1, d = b. When c is 0, d = 0.
When c and b are both 1, d ends up being 1. If b or c is 0, d is 0.
This means that d = b & c.
When c is 0, d = a. When c is 1, d = 0.
This is very similar to case #1, except c is flipped and a is replaced with b.
Therefore, we can replace b with a, and c with ~c to get this solution.
This means that d = a & ~c.
Now for your original question: if we take those two simpler examples, we can see that it is impossible for both of them to be 1. So if we want both rules to apply, we can just put an | between them, giving us:
d = (b & c) | (a & ~c).
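Putting the pieces together, a quick runnable check of that expression against the values from the question:

```java
public class BitSelect {
    public static void main(String[] args) {
        int a = 0b1011011011;
        int b = 0b1000110110;
        int c = 0b0101010101;
        // each bit of d comes from b where c is 1, and from a where c is 0
        int d = (b & c) | (a & ~c);
        System.out.println(d == 0b1010011110); // prints true
    }
}
```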

I need to write d so that when the bit of c is 1, the corresponding bit in the result is the corresponding bit in b,
when c == 1 d = b
but when the bit of c is 0, the corresponding bit in the result is the corresponding bit in a.
when c == 0 d = a
This sounds like a job for bit masking!
I know you give these as your test data:
int a = 0b1011011011;
int b = 0b1000110110;
int c = 0b0101010101;
int d = 0b1010011110;
But this test data is just as good a test and easier to read. All I've done is rearrange the bit columns so that c doesn't change so often:
int a = 0b11011_01101;
int b = 0b10101_00110;
int c = 0b00000_11111;
int d = 0b11011_00110;
Since Java 7 we can also use underscores in numeric literals to make them a little easier on the eyes.
It should now be easy to see that c controls whether d is copied from a or from b. Easy-to-read test data is important too.
And now, some bit masks
assertTrue((b & c) == 0b00000_00110);
assertTrue((a & ~c) == 0b11011_00000);
Or them together and you get:
int d = 0b11011_00110;
assertTrue(d == ((b & c) | (a & ~c)));
Try that with either data set. Should still work.
You could xor them together as well. Doesn't matter because the c mask negation already excludes the potential for 1's on both sides. I chose or simply out of a sense of tradition.


Karatsuba algorithm splitting number in 3 strings

I'm trying to code the Karatsuba algorithm with some changes. Instead of splitting each number into 2 strings, I want to split it into 3.
For example:
Common Karatsuba -> the first number goes into A and B. The second number goes into C and D.
A B
x C D
resulting...
ac ad+bc bd
In order to implement it through recursive calls, most implementations do something like:
e = karatsuba(a,c)
f = karatsuba(b,d)
g = karatsuba(a+b,c+d)
h = g-f-e //this gives 'ad+bc': since (a+b)(c+d) = ac+ad+bc+bd, removing 'e' and 'f' leaves just ad+bc
and finally...
return e*10^n + h*10^(n/2) + f;
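For concreteness, the common recursion above can be sketched in Java with BigInteger. This is a sketch of the standard two-way version, not the three-way version being asked about; it uses a binary split (shifts play the role of the powers of ten), and the 32-bit base-case threshold is an arbitrary choice:

```java
import java.math.BigInteger;

public class Karatsuba {
    static BigInteger karatsuba(BigInteger x, BigInteger y) {
        int n = Math.max(x.bitLength(), y.bitLength());
        if (n <= 32) return x.multiply(y);          // small enough: multiply directly
        n /= 2;
        BigInteger a = x.shiftRight(n);             // high part of x
        BigInteger b = x.subtract(a.shiftLeft(n));  // low part of x
        BigInteger c = y.shiftRight(n);             // high part of y
        BigInteger d = y.subtract(c.shiftLeft(n));  // low part of y
        BigInteger e = karatsuba(a, c);
        BigInteger f = karatsuba(b, d);
        BigInteger g = karatsuba(a.add(b), c.add(d));
        BigInteger h = g.subtract(e).subtract(f);   // ad + bc
        // e*2^(2n) + h*2^n + f, the binary analogue of e*10^n + h*10^(n/2) + f
        return e.shiftLeft(2 * n).add(h.shiftLeft(n)).add(f);
    }

    public static void main(String[] args) {
        BigInteger x = new BigInteger("123456789123456789");
        BigInteger y = new BigInteger("987654321987654321");
        System.out.println(karatsuba(x, y).equals(x.multiply(y))); // prints true
    }
}
```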
Karatsuba I want to implement -> the first number goes into A, B and C. The second number goes into D, E and F.
A B C
x D E F
resulting...
ad ae+bd af+be+cd bf+ce cf
However, I have no idea how I can implement such a thing through recursive calls, because it seems much more complicated than the common way.
Please help.

Algorithm for adding two numbers to reach a value

I have a homework assignment that asks me to check, for any three numbers a, b, c such that 0 <= a,b,c <= 10^16, whether I can reach c by adding a and b to each other. The trick is, with every addition their values change, so if we add a to b, we would then have the numbers a and a+b instead of a and b. Because of this, I realized it's not a simple linear equation.
In order for this to be possible, the target number c, must be able to be represented in the form:
c = xa + yb
Through some testing, I figured out that x and y can't be equal, nor can both of them be even, in order for me to be able to reach the number c. I'm keeping this in mind, along with some special cases where a, b, or c equals zero.
Any ideas?
EDIT:
It's not Euclid's Algorithm, and it's not a plain Diophantine equation; maybe I misled you with the statement that c = xa + yb. Even though reachable values satisfy this equation, it's not enough for the assignment at hand.
Take a=2, b=3, c=10 for example. In order to reach c, you would need to add a to b or b to a in the first step, and then in the second step you'd get either : a = 2, b = 5 or a = 5, b = 3, and if you keep doing this, you will never reach c. Euclid's algorithm will provide the output yes, but it's clear that you can't reach 10, by adding 2 and 3 to one another.
Note: To restate the problem, as I understand it: Suppose you're given nonnegative integers a, b, and c. Is it possible, by performing a sequence of zero or more operations a = a + b or b = b + a, to reach a point where a + b == c?
OK, after looking into this further, I think you can make a small change to the statement you made in your question:
In order for this to be possible, the target number c, must be able to
be represented in the form:
c = xa + yb
where GCD(x,y) = 1.
(Also, x and y need to be nonnegative; I'm not sure if they may be 0 or not.)
Your original observations, that x may not equal y (unless they're both 1) and that x and y cannot both be even, are implied by the new condition GCD(x,y) = 1; so those observations were correct, but not strong enough.
If you use this in your program instead of the test you already have, it may make the tests pass. (I'm not guaranteeing anything.) For a faster algorithm, you can use Extended Euclid's Algorithm as suggested in the comments (and Henry's answer) to find one x0 and y0; but if GCD(x0,y0) ≠ 1, you'd have to try other possibilities x = x0 + nb, y = y0 - na, for some n (which may be negative).
I don't have a rigorous proof. Suppose we constructed the set S of all pairs (x,y) such that (1,1) is in S, and if (x,y) is in S then (x,x+y) and (x+y,y) are in S. It's obvious that (1,n) and (n,1) are in S for all n > 1. Then we can try to figure out, for some m and n > 1, how could the pair (m,n) get into S? If m < n, this is possible only if (m, n-m) was already in S. If m > n, it's possible only if (m-n, n) was already in S. Either way, when you keep subtracting the smaller number from the larger, what you get is essentially Euclid's algorithm, which means you'll hit a point where your pair is (g,g) where g = GCD(m,n); and that pair is in S only if g = 1. It appears to me that the possible values for x and y in the above equation for the target number c are exactly those which are in S. Still, this is partly based on intuition; more work would be needed to make it rigorous.
If we forget for a moment that x and y should be positive, the equation c = xa + yb has either no or infinitely many solutions. When c is not a multiple of gcd(a,b) there is no solution.
Otherwise, calling gcd(a,b) = t, use the extended Euclidean algorithm to find d and e such that t = da + eb. One solution is then x = dc/t, y = ec/t, i.e. c = (dc/t) a + (ec/t) b.
It is clear that 0 = b/t a - a/t b so more solutions can be found by adding a multiple f of that to the equation:
c = (dc + fb)/t a + (ec - af)/t b
When we now reintroduce the restriction that x and y must be positive or zero, the question becomes to find values of f that make x = (dc + fb)/t and y = (ec - af)/t both positive or zero.
If dc < 0 try the smallest f that makes dc + fb >= 0 and see if ec - af is also >=0.
Otherwise try the largest f (a negative number) that makes ec - af >= 0 and check if dc + fb >= 0.
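Here is a sketch of that machinery in Java, using the answer's names t, d, e (illustrative only; overflow is ignored, so it is not safe near 10^16 without switching to BigInteger):

```java
public class ExtEuclid {
    // extended Euclid: returns {t, d, e} with d*a + e*b == t == gcd(a, b)
    static long[] extGcd(long a, long b) {
        if (b == 0) return new long[] { a, 1, 0 };
        long[] r = extGcd(b, a % b);
        // gcd(a, b) == gcd(b, a % b); back-substitute the coefficients
        return new long[] { r[0], r[2], r[1] - (a / b) * r[2] };
    }

    // one (possibly negative) solution {x, y} of c == x*a + y*b,
    // or null when c is not a multiple of gcd(a, b)
    static long[] particular(long a, long b, long c) {
        long[] g = extGcd(a, b);
        long t = g[0];
        if (c % t != 0) return null;
        return new long[] { g[1] * (c / t), g[2] * (c / t) };
    }

    public static void main(String[] args) {
        long[] s = particular(2, 3, 10);
        System.out.println(s[0] + " " + s[1]); // prints -10 10
        // shifting by f = 4: x = -10 + 4*(b/t) = 2, y = 10 - 4*(a/t) = 2, and 2*2 + 2*3 == 10
    }
}
```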
public class Main
{
    private static boolean result(long a, long b, long c)
    {
        // note: assumes a, b and a + b are all non-zero
        long M = c % (a + b);
        return (M % b == 0) || (M % a == 0);
    }
}
Idea: c = xa + yb. Because either x or y is bigger, we can write this equation in one of two forms:
c = x(a+b) + (y-x)b,
c = y(a+b) + (x-y)a
depending on which is bigger, so by reducing c by a+b each time, c eventually becomes:
c = (y-x)b or c = (x-y)a, so c % b or c % a will evaluate to 0.
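For small inputs the process can also be simulated directly, which is handy for sanity-checking any closed-form test. This brute-force sketch takes exponential time, so it is only for testing, not for inputs near 10^16:

```java
public class Reach {
    // can a sequence of a += b / b += a steps reach a state where a + b == c?
    static boolean reachable(long a, long b, long c) {
        long s = a + b;
        if (s == c) return true;
        if (s >= c) return false;                    // the sum never decreases
        boolean viaA = b > 0 && reachable(s, b, c);  // a += b (skip if it changes nothing)
        boolean viaB = a > 0 && reachable(a, s, c);  // b += a
        return viaA || viaB;
    }

    public static void main(String[] args) {
        System.out.println(reachable(2, 3, 10)); // prints false, matching the example
        System.out.println(reachable(2, 3, 13)); // prints true: (2,3) -> (5,3) -> (5,8)
    }
}
```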

What is the fastest method for specific bit operations between two bytes?

I have two Java byte variables, let's say
a = 00010011
b = 01101101 (in binary form)
Suppose that I have a third byte
c = 11001000
where its bits will work as an indicator to select between two operations (XOR/XNOR).
e.g. if c[i] = 1 then I select to XOR a[i]^b[i] and if c[i] = 0 I select to XNOR these values.
In this example the resulted byte will be
d = 01001001
What is the fastest method in Java to achieve such a result?
How about
d = a ^ b ^ ~c;
or
d = ~(a ^ b ^ c);
or
d = ~a ^ b ^ c;
XORing with a mask flips the bits where the mask is 1 and leaves unchanged the bits where it is 0. If you apply ~ to the mask first, you get the opposite: bits flip where the mask was 0 and stay unchanged where it was 1.
Don't know whether it is the fastest, which I assume is a silly question, as it's a bitwise operation only, but this will work:
(a XOR b) XNOR c
which is same as:
~(a ^ b ^ c)
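A runnable version of the first suggestion, using the question's bit patterns. The & 0xFF is needed because Java promotes the byte operands to int, so ~c sets all the high bits:

```java
public class XorXnor {
    public static void main(String[] args) {
        int a = 0b00010011;
        int b = 0b01101101;
        int c = 0b11001000; // a 1 bit selects XOR, a 0 bit selects XNOR
        // ~c flips the XOR result exactly where c is 0, turning XOR into XNOR there
        int d = (a ^ b ^ ~c) & 0xFF;
        System.out.println(Integer.toBinaryString(d)); // prints 1001001
    }
}
```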

Java inline int swap. Why does this only work in Java

I was asked to write a swap without using temp variables or xor, and I came up with this.
In Java this works, but in C/C++ it does not.
I was under the impression that this would always work, since the value of 'a' on the left side of the '|' would be stored in a register first, so the later assignment to 'a' would not affect the value assigned to 'b'.
int a = 5;
int b = -13;
b = a | (0 & (a = b));
You are modifying a variable and reading its value without an intervening sequence point.
b = a + 0 * (a = b); // reads a's value and modifies a, with no sequence point in between
This is undefined behavior. You have no right to any expectations on what the code will do.
A C/C++ compiler may optimize the expression 0 * (a = b) to simply 0, which turns your code fragment into:
int a = 5;
int b = -13;
b = a;
In C/C++, the order in which the side effects of an expression take place is unspecified, and modifying a variable that is also read elsewhere in the same expression is undefined behavior. In Java, operands are evaluated strictly left to right, and the effect of an embedded assignment is fully specified.
e.g. This is undefined behavior in C, but is well-defined in Java, where it leaves a unchanged:
a = a++;
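For completeness, the original swap really does work in Java, because the left operand of | is evaluated and its value fixed before the embedded assignment runs:

```java
public class Swap {
    public static void main(String[] args) {
        int a = 5;
        int b = -13;
        // a is read first (5); then (a = b) sets a to -13 and yields -13;
        // 0 & -13 is 0, so b = 5 | 0 = 5
        b = a | (0 & (a = b));
        System.out.println(a + " " + b); // prints -13 5
    }
}
```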

How does the Euclidean Algorithm work?

I just found this algorithm to compute the greatest common divisor in my lecture notes:
public static int gcd( int a, int b ) {
    while (b != 0) {
        final int r = a % b;
        a = b;
        b = r;
    }
    return a;
}
So r is the remainder when a is divided by b (the mod). Then b is assigned to a, the remainder is assigned to b, and once b reaches 0, a is returned. I can't for the life of me see how this works!
And then, apparently this algorithm doesn't work for all cases, and this one must then be used:
public static int gcd( int a, int b ) {
    final int gcd;
    if (b != 0) {
        final int q = a / b;
        final int r = a % b; // a == r + q * b AND r == a - q * b.
        gcd = gcd( b, r );
    } else {
        gcd = a;
    }
    return gcd;
}
I don't understand the reasoning behind this. I generally get recursion and am good at Java but this is eluding me. Help please?
The Wikipedia article contains an explanation, but it's not easy to find it immediately (also, procedure + proof don't always answer the question "why it works").
Basically it comes down to the fact that for two integers a, b (assuming a >= b), it is always possible to write a = bq + r where 0 <= r < b.
If d=gcd(a,b) then we can write a=ds and b=dt. So we have ds = qdt + r. Since the left hand side is divisible by d, the right hand side must also be divisible by d. And since qdt is divisible by d, the conclusion is that r must also be divisible by d.
To summarise: we have a = bq + r where r < b and a, b and r are all divisible by gcd(a,b).
Since a >= b > r, we have two cases:
If r = 0 then a = bq, and so b divides both b and a. Hence gcd(a,b)=b.
Otherwise (r > 0), we can reduce the problem of finding gcd(a,b) to the problem of finding gcd(b,r) which is exactly the same number (as a, b and r are all divisible by d).
Why is this a reduction? Because r < b. So we are dealing with numbers that are definitely smaller. This means that we only have to apply this reduction a finite number of times before we reach r = 0.
Now, r = a % b which hopefully explains the code you have.
They're equivalent. First thing to notice is that q in the second program is not used at all. The other difference is just iteration vs. recursion.
As to why it works, the Wikipedia page linked above is good. The first illustration in particular is effective to convey intuitively the "why", and the animation below then illustrates the "how".
given that 'q' is never used, I don't see a difference between your plain iterative function and the recursive one... both do
gcd(first number, second number)
as long as (second number > 0) {
    int remainder = first % second;
    gcd = gcd(second as first, remainder as second);
}
Barring trying to apply this to non-integers, under which circumstances does this algorithm fail?
(also see http://en.wikipedia.org/wiki/Euclidean_algorithm for lots of detailed info)
Here is an interesting blog post: Tominology.
A lot of the intuition behind the Euclidean Algorithm is discussed there. It is implemented in JavaScript, but I believe that if one wants, it is not difficult to convert the code to Java.
Here is a very useful explanation that I found.
For those too lazy to open it, this is what it says :
Consider the example when you had to find the GCD of (3084, 1424). Let's assume that d is the GCD. Which means d | 3084 and d | 1424 (using the symbol '|' to say 'divides').
It follows that d | (3084 - 1424). Now we'll try to reduce these numbers which are divisible by d (in this case 3084 and 1424) as much as possible, so that we reach 0 as one of the numbers. Remember that GCD(a, 0) is a.
Since d | (3084 - 1424), it follows that d | ( 3084 - 2(1424) )
which means d | 236.
Hint : (3084 - 2*1424 = 236)
Now forget about the initial numbers, we just need to solve for d, and we know that d is the greatest number that divides 236, 1424 and 3084. So we use the smaller two numbers to proceed because it'll converge the problem towards 0.
d | 1424 and d | 236 implies that d | (1424 - 236).
So, d | ( 1424 - 6(236) ) => d | 8.
Now we know that d is the greatest number that divides 8, 236, 1424 and 3084. Taking the smaller two again, we have
d | 236 and d | 8, which implies d | (236 - 8).
So, d | ( 236 - 29(8) ) => d | 4.
Again the list of numbers divisible by d increases and converges (the numbers are getting smaller, closer to 0). As it stands now, d is the greatest number that divides 4, 8, 236, 1424, 3084.
Taking same steps,
d | 8 and d | 4 implies d | (8-4).
So, d | ( 8 - 2(4) ) => d | 0.
The list of numbers divisible by d is now 0, 4, 8, 236, 1424, 3084.
GCD of (a, 0) is always a. So, as soon as you have 0 as one of the two numbers, the other number is the gcd of original two and all those which came in between.
This is exactly what your code is doing. You can recognize the terminal condition as GCD (a, 0) = a.
The other step is to find the remainder of the two numbers, and choose that and the smaller of the previous two as the new numbers.
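As a sanity check, the iterative gcd from the question reproduces the walkthrough's result for these numbers:

```java
public class GcdCheck {
    public static int gcd(int a, int b) {
        while (b != 0) {
            int r = a % b;  // remainder chain here: 1424, 236, 8, 4, 0
            a = b;
            b = r;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcd(3084, 1424)); // prints 4
    }
}
```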
