int a = 0b1011011011;
int b = 0b1000110110;
int c = 0b0101010101;
int d = (a & b) ^ c; //Wrong
The intended value of d is 0b1010011110.
I need to write d so that when the bit of c is 1, the corresponding bit in the result is the corresponding bit in b, but when the bit of c is 0, the corresponding bit in the result is the corresponding bit in a.
I've been stuck on this for awhile now and I can't seem to come up with something in one line.
I had this earlier but didn't see your edit.
int d = (c & b)^(~c & a) ;
q = c & b yields the bit of b wherever c is 1, and 0 wherever c is 0.
p = ~c & a yields the bit of a wherever c is 0, and 0 wherever c is 1.
q ^ p combines them: since q and p are never 1 in the same position, the XOR simply preserves the selected bits of a and b.
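As a quick sanity check, this sketch (my own test harness, not part of the answer) applies the expression to the question's values:

```java
public class BitMux {
    // Where a bit of c is 1, take the corresponding bit of b; where it is 0, take it from a.
    static int mux(int a, int b, int c) {
        return (c & b) ^ (~c & a);
    }

    public static void main(String[] args) {
        int a = 0b1011011011;
        int b = 0b1000110110;
        int c = 0b0101010101;
        System.out.println(Integer.toBinaryString(mux(a, b, c))); // prints 1010011110
    }
}
```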
I get the feeling that this is for homework, but I'll answer anyway, since it's hard to explain this without giving away the answer.
Consider two simpler questions. Forget about the multiple bits and pretend a, b, c, and d are only one bit (since this is a bitwise operation, the logic will not change):
When c is 1, d = b. When c is 0, d = 0.
When c and b are both 1, d ends up being 1. If b or c is 0, d is 0.
This means that d = b & c.
When c is 0, d = a. When c is 1, d = 0.
This is very similar to case #1, except c is flipped and b is replaced with a.
Therefore, we can replace b with a, and c with ~c to get this solution.
This means that d = a & ~c.
Now for your original question: if we take those two simpler examples, we can see that it is impossible for both of them to be 1. So if we want both rules to apply, we can just put an | between them, giving us:
d = (b & c) | (a & ~c).
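Since the argument is bitwise, the derivation can be checked exhaustively on single bits; a small sketch (the names are mine) verifying that (b & c) | (a & ~c) picks b when c is 1 and a when c is 0:

```java
public class MuxTruthTable {
    // Verify the selection identity for all 8 single-bit combinations of a, b, c.
    static boolean holdsForAllBits() {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                for (int c = 0; c <= 1; c++) {
                    int d = (b & c) | (a & ~c);
                    int expected = (c == 1) ? b : a;
                    if (d != expected) return false;
                }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holdsForAllBits()); // prints true
    }
}
```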
I need to write d so that when the bit of c is 1, the corresponding bit in the result is the corresponding bit in b,
when c == 1 d = b
but when the bit of c is 0, the corresponding bit in the result is the corresponding bit in a.
when c == 0 d = a
This sounds like a job for bit masking!
I know you give these as your test data:
int a = 0b1011011011;
int b = 0b1000110110;
int c = 0b0101010101;
int d = 0b1010011110;
But this test data is just as good a test and easier to read. All I've done is rearrange the bit columns so that c doesn't change so often:
int a = 0b11011_01101;
int b = 0b10101_00110;
int c = 0b00000_11111;
int d = 0b11011_00110;
Since Java 7 we can also use underscores in numeric literals to make them a little easier on the eyes.
It should now be easy to see that c controls where d is copied from, a or b. Easy to read data is important too.
And now, some bit masks
assertTrue((b & c) == 0b00000_00110);
assertTrue((a & ~c) == 0b11011_00000);
Or them together and you get:
int d = 0b11011_00110;
assertTrue(d == ((b & c) | (a & ~c)));
Try that with either data set. Should still work.
You could xor them together as well; it doesn't matter, because negating the c mask already excludes the possibility of 1's on both sides. I chose or simply out of a sense of tradition.
I'm currently working with Algorithms Fourth Edition by Robert Sedgewick and Kevin Wayne and I'm stuck on one of the exercises. I know that there's someone who's posted the solutions to many of the exercises from this book on GitHub, but I want to do them on my own so that I can understand and learn from them.
I've been scanning the book for how to calculate the maximum/minimum number of probes required to build a hash table (exercise 3.4.12), but I can't find any methods/functions/formulas that show how to move forward when dealing with such a problem.
The exercise:
Suppose that the keys A through G, with the hash values given below, are inserted in some order into an initially empty table of size 7 (= m) using a linear-probing table (when there's a collision, we just check the next entry in the table by incrementing the index), with no resizing for this problem.
key = "A B C D E F G"
hash = "2 0 0 4 4 4 2"
Which of the following could not possibly result from inserting these keys?
a) E F G A C B D
b) C E B G F D A
c) B D F A C E G
d) C G B A D E F
e) F G B D A C E
f) G E C A D B F
I found this website http://orion.lcg.ufrj.br/Dr.Dobbs/books/book2/algo02e5.htm , but I couldn't understand much of it. When I tried to solve this I got stumped: at first I thought I had misunderstood the question, or that I simply couldn't find the formula for solving the exercise.
How do I calculate the maximum and minimum number of probes that could be required to build a table of size m?
I don't think there's a specific formula for this question.
You need to look at each order, and decide whether that order is possible to achieve.
Let's look at (a) together: E F G A C B D.
If we replace the keys with their hash values, we'll get 4 4 2 2 0 0 4. Now the important part - note that only G is located in "right" bucket, all other values have been shifted due to the linear probing. Is that possible?
Say we've inserted G first and it was written to index 2. What now? If we insert a key with hash 0 or 4, it will be placed in cell 0 or 4 without probing, which is not what we want. The only option is to insert A now, which also has a hash value of 2. Where will it go? Index 3 of course due to the linear probing. So far, so good, our array looks like this: X X G A X X X
What now? We're left only with keys whose hash is 0 or 4, and once we insert one such key, it will be placed in cell 0 or 4. But in the pattern we're being asked about, none of the values with hashes 0 and 4 are in the "right" cell. Therefore, this pattern is not possible.
Now let's look at (b). The order is C E B G F D A which translates to 0 4 0 2 4 4 2. How can we create this pattern?
Let's insert C: C X X X X X X
What now? Let's insert F: C X X X F X X. Now let's insert D. Due to linear probing, we'll end up with C X X X F D X. What now? We're stuck. It doesn't matter what we insert now, we won't be able to reach C E B G F D A. Is there anything we can change about our previous inserts to achieve a different result? No. Therefore, (b) is also impossible.
It seems like none of the choices are possible.
a) E F G A C B D (4 4 2 2 0 0 4) .
possible insertion order : G,A (the rest is not possible)
b) C E B G F D A (0 4 0 2 4 4 2) .
possible insertion order : C, F, D (the rest is not possible)
c) B D F A C E G (0 4 4 2 0 4 2) .
possible insertion order : B (the rest is not possible)
d) C G B A D E F (0 2 0 2 4 4 4) .
possible insertion order : C, D, E, F (the rest is not possible)
e) F G B D A C E (4 2 0 4 2 0 4) .
possible insertion order : no possible insertion sequence
f) G E C A D B F (2 4 0 2 4 0 4) .
possible insertion order : D (the rest is not possible)
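The claims above can be verified by brute force; here's a sketch (all names are mine, not from the exercise) that simulates linear probing for every one of the 7! = 5040 insertion orders and reports whether each layout is reachable:

```java
import java.util.HashMap;
import java.util.Map;

public class ProbeCheck {
    static final Map<Character, Integer> HASH = new HashMap<>();
    static {
        String keys = "ABCDEFG";
        int[] hashes = {2, 0, 0, 4, 4, 4, 2};
        for (int i = 0; i < keys.length(); i++) HASH.put(keys.charAt(i), hashes[i]);
    }

    // Insert the keys in the given order into a size-7 table with linear probing.
    static String insertAll(String order) {
        char[] table = new char[7];
        for (char k : order.toCharArray()) {
            int i = HASH.get(k);
            while (table[i] != 0) i = (i + 1) % 7; // probe linearly, wrapping around
            table[i] = k;
        }
        return new String(table);
    }

    // Try every permutation of the keys; succeed if any produces the target layout.
    static boolean reachable(String target, String prefix, String rest) {
        if (rest.isEmpty()) return insertAll(prefix).equals(target);
        for (int i = 0; i < rest.length(); i++)
            if (reachable(target, prefix + rest.charAt(i),
                          rest.substring(0, i) + rest.substring(i + 1)))
                return true;
        return false;
    }

    public static void main(String[] args) {
        String[] layouts = {"EFGACBD", "CEBGFDA", "BDFACEG", "CGBADEF", "FGBDACE", "GECADBF"};
        for (String layout : layouts)
            System.out.println(layout + " reachable: " + reachable(layout, "", "ABCDEFG"));
    }
}
```

This works because in linear probing with no deletions a key never moves after insertion, so simulating each candidate order fully determines the final layout.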
I have a homework assignment that asks me to check, for any three numbers a, b, c such that 0 <= a, b, c <= 10^16, whether I can reach c by adding a and b to each other. The trick is that with every addition their values change: if we add a to b, we would then have the numbers a and a+b instead of a and b. Because of this, I realized it's not a simple linear equation.
In order for this to be possible, the target number c, must be able to be represented in the form:
c = xa + yb
Through some testing, I figured out that the values of x and y can't be equal, nor can both of them be even, in order for me to be able to reach the number c. I'm keeping this in mind, along with some special cases involving a, b, or c being equal to zero.
Any ideas?
EDIT:
It's not Euclid's Algorithm, and it's not a Diophantine equation. Maybe I misled you with the statement that c = xa + yb; even though solutions must satisfy this equation, it's not enough for the assignment at hand.
Take a=2, b=3, c=10 for example. In order to reach c, you would need to add a to b or b to a in the first step, and then in the second step you'd get either : a = 2, b = 5 or a = 5, b = 3, and if you keep doing this, you will never reach c. Euclid's algorithm will provide the output yes, but it's clear that you can't reach 10, by adding 2 and 3 to one another.
Note: To restate the problem, as I understand it: Suppose you're given nonnegative integers a, b, and c. Is it possible, by performing a sequence of zero or more operations a = a + b or b = b + a, to reach a point where a + b == c?
OK, after looking into this further, I think you can make a small change to the statement you made in your question:
In order for this to be possible, the target number c, must be able to
be represented in the form:
c = xa + yb
where GCD(x,y) = 1.
(Also, x and y need to be nonnegative; I'm not sure if they may be 0 or not.)
Your original observations, that x may not equal y (unless they're both 1) and that x and y cannot both be even, are implied by the new condition GCD(x,y) = 1; so those observations were correct, but not strong enough.
If you use this in your program instead of the test you already have, it may make the tests pass. (I'm not guaranteeing anything.) For a faster algorithm, you can use Extended Euclid's Algorithm as suggested in the comments (and Henry's answer) to find one x0 and y0; but if GCD(x0,y0) ≠ 1, you'd have to try other possibilities x = x0 + nb, y = y0 - na, for some n (which may be negative).
I don't have a rigorous proof. Suppose we constructed the set S of all pairs (x,y) such that (1,1) is in S, and if (x,y) is in S then (x,x+y) and (x+y,y) are in S. It's obvious that (1,n) and (n,1) are in S for all n > 1. Then we can try to figure out, for some m and n > 1, how could the pair (m,n) get into S? If m < n, this is possible only if (m, n-m) was already in S. If m > n, it's possible only if (m-n, n) was already in S. Either way, when you keep subtracting the smaller number from the larger, what you get is essentially Euclid's algorithm, which means you'll hit a point where your pair is (g,g) where g = GCD(m,n); and that pair is in S only if g = 1. It appears to me that the possible values for x and y in the above equation for the target number c are exactly those which are in S. Still, this is partly based on intuition; more work would be needed to make it rigorous.
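The set S described above can be generated directly for small bounds; this sketch (my own code, not from the answer) builds S from (1,1) and checks that, within the bound, it contains exactly the coprime pairs:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class CoeffPairs {
    static int gcd(int x, int y) { return y == 0 ? x : gcd(y, x % y); }

    // Build S = closure of {(1,1)} under (x,y) -> (x, x+y) and (x+y, y), bounded by limit.
    static Set<Long> buildS(int limit) {
        Set<Long> seen = new HashSet<>();
        Deque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{1, 1});
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            int x = p[0], y = p[1];
            if (x > limit || y > limit) continue;
            if (!seen.add((long) x * 100000 + y)) continue; // pack the pair into one key
            queue.add(new int[]{x, x + y});
            queue.add(new int[]{x + y, y});
        }
        return seen;
    }

    public static void main(String[] args) {
        int limit = 50;
        Set<Long> s = buildS(limit);
        // Count the coprime pairs in the same range for comparison.
        int coprime = 0;
        for (int x = 1; x <= limit; x++)
            for (int y = 1; y <= limit; y++)
                if (gcd(x, y) == 1) coprime++;
        System.out.println(s.size() == coprime); // prints true
    }
}
```

The bound is safe because the subtractive-Euclid path from any coprime pair back to (1,1) only ever passes through pairs no larger than the original.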
If we forget for a moment that x and y should be positive, the equation c = xa + yb has either no or infinitely many solutions. When c is not a multiple of gcd(a,b) there is no solution.
Otherwise, writing gcd(a,b) = t, use the extended Euclidean algorithm to find d and e such that t = da + eb. One solution is then given by c = (dc/t)a + (ec/t)b.
It is clear that 0 = (b/t)a - (a/t)b, so more solutions can be found by adding a multiple f of that to the equation:
c = ((dc + fb)/t)a + ((ec - af)/t)b
When we now reintroduce the restriction that x and y must be positive or zero, the question becomes to find values of f that make x = (dc + fb)/t and y = (ec - af)/t both positive or zero.
If dc < 0 try the smallest f that makes dc + fb >= 0 and see if ec - af is also >=0.
Otherwise try the largest f (a negative number) that makes ec - af >= 0 and check if dc + fb >= 0.
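A sketch of the first part of this approach (the extended Euclidean step is standard; the names and the example values are mine):

```java
public class ExtendedGcd {
    // Returns {t, d, e} with t = gcd(a, b) and t == d*a + e*b.
    static long[] extGcd(long a, long b) {
        if (b == 0) return new long[]{a, 1, 0};
        long[] r = extGcd(b, a % b);
        // r[0] = gcd, and r[0] == r[1]*b + r[2]*(a % b); rearrange in terms of a and b.
        return new long[]{r[0], r[2], r[1] - (a / b) * r[2]};
    }

    public static void main(String[] args) {
        long a = 6, b = 10, c = 8;
        long[] g = extGcd(a, b);
        long t = g[0], d = g[1], e = g[2]; // t = da + eb
        if (c % t != 0) {
            System.out.println("no solution: c is not a multiple of gcd(a,b)");
        } else {
            long x = d * (c / t), y = e * (c / t); // one (possibly negative) solution
            System.out.println("c = " + x + "*a + " + y + "*b");
        }
    }
}
```

From here, the search over f described above shifts (x, y) by multiples of (b/t, -a/t) to look for a nonnegative pair.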
public class Main
{
    // Reduce c modulo (a + b) and check whether the remainder is a multiple of a or of b.
    // (Assumes a and b are both nonzero; otherwise the % operations would divide by zero.)
    private static boolean result(long a, long b, long c)
    {
        long M = c % (a + b);
        return (M % b == 0) || (M % a == 0);
    }
}
Idea: c = xa + yb. Because either x or y is bigger, we can write that equation in one of two forms:
c = x(a + b) + (y - x)b,
c = y(a + b) + (x - y)a,
depending on which is bigger. So, by reducing c by (a + b) each time, c eventually becomes
c = (y - x)b or c = (x - y)a, so either c % b or c % a will evaluate to 0.
I am trying to generate every possible unique combination from an array, but it's not as straightforward as generating all combinations. E.g. I have an array {a,b,c,d,e,f}; my result should be like this:
ab, cd, ef
abc, def
ac, bd, ef
abcf, ed
....etc
Basically, in every result set all elements of the array should be included. Also, 'ab' is the same as 'ba', and 'abcd' is the same as 'dcba' or 'cbda'; the position does not matter, and no repetition is allowed ('aaa' or 'aa' is not valid). I would be grateful if someone could provide a solution to this problem.
I suggest that you first build all possible unique collections of set sizes (the integer partitions of the array length), and then insert all possible values in all possible orders. For example, with 5 possible values, you have these set sizes:
1 1 1 1 1
1 1 1 2
1 2 2
1 1 3
1 4
2 3
5
Now, put the actual values in to the sets. For the first set of set sizes, we get:
a, b, c, d, e
That isn't very interesting because all the sets are the same size, so skip to the third set of set sizes. Here, we fill the sets and then shift them, giving us:
a, bc, de
b, cd, ea
c, de, ab
d, ea, bc
e, ab, cd
This isn't a full solution, but I've split the problem in two and I think you can take it from there.
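For the grouping half of the problem, the set partitions can be enumerated directly rather than via size lists; a sketch (my own code, not from the answer) that places each element into every existing group or a new one:

```java
import java.util.ArrayList;
import java.util.List;

public class SetPartitions {
    // Recursively place elems[i] into each existing group, or start a new group.
    static void partitions(String[] elems, int i, List<List<String>> groups, List<String> out) {
        if (i == elems.length) {
            List<String> parts = new ArrayList<>();
            for (List<String> g : groups) parts.add(String.join("", g));
            out.add(String.join(", ", parts));
            return;
        }
        for (List<String> g : groups) {
            g.add(elems[i]);
            partitions(elems, i + 1, groups, out);
            g.remove(g.size() - 1);        // backtrack
        }
        List<String> fresh = new ArrayList<>();
        fresh.add(elems[i]);
        groups.add(fresh);
        partitions(elems, i + 1, groups, out);
        groups.remove(groups.size() - 1);  // backtrack
    }

    static List<String> allPartitions(String[] elems) {
        List<String> out = new ArrayList<>();
        partitions(elems, 0, new ArrayList<>(), out);
        return out;
    }

    public static void main(String[] args) {
        List<String> out = allPartitions(new String[]{"a", "b", "c", "d"});
        out.forEach(System.out::println);
        System.out.println(out.size() + " partitions"); // Bell(4) = 15 partitions
    }
}
```

Because an element only ever joins groups created by earlier elements, each unordered grouping is produced exactly once, so duplicates like {ba} vs {ab} never appear.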
String[] newArray = new String[N];
int length = "befdac".length();
for (int i = 0; i < length; i++) // this is the first letter
{
    for (int j = 0; j < length; j++) // this is the second letter
    {
        if (i == j) continue;
        // ................ same with the other letters; for a third index k:
        // if (i == k || j == k) continue;
        // start counting in the innermost block and build the string from the indices:
        // newArray[n++] = "" + (char) (i + 65) + (char) (j + 65) + ... + (char) (lastIndex + 65);
        // 65 = 'A', 66 = 'B', ...
    }
}
newArray will hold your strings. I haven't tried this, and it is not optimized; it's best if you work out the details yourself.
I just found this algorithm to compute the greatest common divisor in my lecture notes:
public static int gcd( int a, int b ) {
while (b != 0) {
final int r = a % b;
a = b;
b = r;
}
return a;
}
So r is the remainder when dividing b into a (a mod b). Then b is assigned to a, the remainder is assigned to b, and eventually a is returned. I can't for the life of me see how this works!
And then, apparently this algorithm doesn't work for all cases, and this one must then be used:
public static int gcd( int a, int b ) {
final int gcd;
if (b != 0) {
final int q = a / b;
final int r = a % b; // a == r + q * b AND r == a - q * b.
gcd = gcd( b, r );
} else {
gcd = a;
}
return gcd;
}
I don't understand the reasoning behind this. I generally get recursion and am good at Java but this is eluding me. Help please?
The Wikipedia article contains an explanation, but it's not easy to find it immediately (and a procedure plus a proof doesn't always answer the question "why does it work").
Basically it comes down to the fact that for two integers a, b (assuming a >= b), it is always possible to write a = bq + r where r < b.
If d=gcd(a,b) then we can write a=ds and b=dt. So we have ds = qdt + r. Since the left hand side is divisible by d, the right hand side must also be divisible by d. And since qdt is divisible by d, the conclusion is that r must also be divisible by d.
To summarise: we have a = bq + r where r < b and a, b and r are all divisible by gcd(a,b).
Since a >= b > r, we have two cases:
If r = 0 then a = bq, and so b divides both b and a. Hence gcd(a,b)=b.
Otherwise (r > 0), we can reduce the problem of finding gcd(a,b) to the problem of finding gcd(b,r) which is exactly the same number (as a, b and r are all divisible by d).
Why is this a reduction? Because r < b. So we are dealing with numbers that are definitely smaller. This means that we only have to apply this reduction a finite number of times before we reach r = 0.
Now, r = a % b which hopefully explains the code you have.
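To make the reduction concrete, here's a sketch (my own, not from the lecture notes) that prints each (a, b) pair as the iterative version runs:

```java
public class GcdTrace {
    // The iterative Euclid's algorithm, printing every reduction step.
    static int gcd(int a, int b) {
        while (b != 0) {
            System.out.println("gcd(" + a + ", " + b + ")  ->  gcd(" + b + ", " + (a % b) + ")");
            int r = a % b;
            a = b;
            b = r;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println("result: " + gcd(252, 105)); // result: 21
    }
}
```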
They're equivalent. First thing to notice is that q in the second program is not used at all. The other difference is just iteration vs. recursion.
As to why it works, the Wikipedia page linked above is good. The first illustration in particular is effective to convey intuitively the "why", and the animation below then illustrates the "how".
given that 'q' is never used, I don't see a difference between your plain iterative function and the recursive function... both do

gcd(first number, second number)
{
    as long as (second number > 0) {
        int remainder = first % second;
        gcd = gcd(second as first, remainder as second);
    }
}
Barring trying to apply this to non-integers, under which circumstances does this algorithm fail?
(also see http://en.wikipedia.org/wiki/Euclidean_algorithm for lots of detailed info)
Here is an interesting blog post: Tominology, where a lot of the intuition behind the Euclidean Algorithm is discussed. It is implemented in JavaScript, but if one wants, it should not be difficult to convert the code to Java.
Here is a very useful explanation that I found.
For those too lazy to open it, this is what it says :
Consider the example when you had to find the GCD of (3084,1424). Lets assume that d is the GCD. Which means d | 3084 and d | 1424 (using the symbol '|' to say 'divides').
It follows that d | (3084 - 1424). Now we'll try to reduce these numbers, which are divisible by d (in this case 3084 and 1424), as much as possible, so that we reach 0 as one of the numbers. Remember that GCD(a, 0) is a.
Since d | (3084 - 1424), it follows that d | ( 3084 - 2(1424) )
which means d | 236.
Hint : (3084 - 2*1424 = 236)
Now forget about the initial numbers, we just need to solve for d, and we know that d is the greatest number that divides 236, 1424 and 3084. So we use the smaller two numbers to proceed because it'll converge the problem towards 0.
d | 1424 and d | 236 implies that d | (1424 - 236).
So, d | ( 1424 - 6(236) ) => d | 8.
Now we know that d is the greatest number that divides 8, 236, 1424 and 3084. Taking the smaller two again, we have
d | 236 and d | 8, which implies d | (236 - 8).
So, d | ( 236 - 29(8) ) => d | 4.
Again the list of numbers divisible by d increases and converges (the numbers are getting smaller, closer to 0). As it stands now, d is the greatest number that divides 4, 8, 236, 1424, 3084.
Taking same steps,
d | 8 and d | 4 implies d | (8-4).
So, d | ( 8 - 2(4) ) => d | 0.
The list of numbers divisible by d is now 0, 4, 8, 236, 1424, 3084.
GCD of (a, 0) is always a. So, as soon as you have 0 as one of the two numbers, the other number is the gcd of original two and all those which came in between.
This is exactly what your code is doing. You can recognize the terminal condition as GCD (a, 0) = a.
The other step is to find the remainder of the two numbers, and choose that and the smaller of the previous two as the new numbers.
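The walkthrough above is exactly the remainder sequence the code in the question produces; this sketch (my own) prints each reduction for (3084, 1424):

```java
public class GcdSteps {
    // Run Euclid's algorithm, printing each "a - q(b) = r" reduction step.
    static int gcdVerbose(int a, int b) {
        while (b != 0) {
            int r = a % b;
            System.out.println(a + " - " + (a / b) + "(" + b + ") = " + r);
            a = b;
            b = r;
        }
        return a;
    }

    public static void main(String[] args) {
        // Prints 3084 - 2(1424) = 236, 1424 - 6(236) = 8, 236 - 29(8) = 4, 8 - 2(4) = 0
        System.out.println("gcd = " + gcdVerbose(3084, 1424)); // gcd = 4
    }
}
```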