Algorithm for adding two numbers to reach a value - java

I have a homework assignment that asks me to check, for any three numbers a, b, c such that 0 <= a, b, c <= 10^16, whether I can reach c by adding a and b to each other. The trick is that with every addition their values change: if we add a to b, we then have the numbers a and a+b instead of a and b. Because of this, I realized it's not a simple linear equation.
In order for this to be possible, the target number c, must be able to be represented in the form:
c = xa + yb
Through some testing, I figured out that in order for me to be able to reach the number c, the values of x and y can't be equal, nor can both of them be even. I'm keeping this in mind, along with some special cases involving a, b, or c being equal to zero.
Any ideas?
EDIT:
It's not Euclid's algorithm, and it's not a Diophantine equation; maybe I misled you with the statement that c = xa + yb. Even though the numbers should satisfy this equation, it's not enough for the assignment at hand.
Take a=2, b=3, c=10 for example. In order to reach c, you would need to add a to b or b to a in the first step, which gives either a = 2, b = 5 or a = 5, b = 3; if you keep doing this, you will never reach c. Euclid's algorithm will answer yes, but it's clear that you can't reach 10 by adding 2 and 3 to one another.

Note: To restate the problem, as I understand it: Suppose you're given nonnegative integers a, b, and c. Is it possible, by performing a sequence of zero or more operations a = a + b or b = b + a, to reach a point where a + b == c?
OK, after looking into this further, I think you can make a small change to the statement you made in your question:
In order for this to be possible, the target number c, must be able to
be represented in the form:
c = xa + yb
where GCD(x,y) = 1.
(Also, x and y need to be nonnegative; I'm not sure if they may be 0 or not.)
Your original observations, that x may not equal y (unless they're both 1) and that x and y cannot both be even, are implied by the new condition GCD(x,y) = 1; so those observations were correct, but not strong enough.
If you use this in your program instead of the test you already have, it may make the tests pass. (I'm not guaranteeing anything.) For a faster algorithm, you can use Extended Euclid's Algorithm as suggested in the comments (and Henry's answer) to find one x0 and y0; but if GCD(x0,y0) ≠ 1, you'd have to try other possibilities x = x0 + nb, y = y0 - na, for some n (which may be negative).
I don't have a rigorous proof. Suppose we constructed the set S of all pairs (x,y) such that (1,1) is in S, and if (x,y) is in S then (x,x+y) and (x+y,y) are in S. It's obvious that (1,n) and (n,1) are in S for all n > 1. Then we can try to figure out, for some m and n > 1, how could the pair (m,n) get into S? If m < n, this is possible only if (m, n-m) was already in S. If m > n, it's possible only if (m-n, n) was already in S. Either way, when you keep subtracting the smaller number from the larger, what you get is essentially Euclid's algorithm, which means you'll hit a point where your pair is (g,g) where g = GCD(m,n); and that pair is in S only if g = 1. It appears to me that the possible values for x and y in the above equation for the target number c are exactly those which are in S. Still, this is partly based on intuition; more work would be needed to make it rigorous.
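To illustrate the condition (my sketch, not part of the original reasoning): a brute-force check that tries every pair x, y >= 1 with xa + yb = c and tests GCD(x,y) = 1. It assumes a, b > 0 (otherwise the loop never terminates or divides by zero), ignores the zero special cases, and is only practical for small inputs, nowhere near the 10^16 bound:

public class ReachabilityCheck {

    static long gcd(long m, long n) { return n == 0 ? m : gcd(n, m % n); }

    // Brute force: is c = x*a + y*b for some x, y >= 1 with gcd(x, y) = 1?
    static boolean reachable(long a, long b, long c) {
        for (long x = 1; x * a <= c; x++) {
            long rest = c - x * a;          // what y*b would have to be
            if (rest >= b && rest % b == 0 && gcd(x, rest / b) == 1)
                return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(reachable(2, 3, 10)); // false, matching the question's example
        System.out.println(reachable(2, 3, 5));  // true: 5 = 1*2 + 1*3
    }
}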

If we forget for a moment that x and y should be positive, the equation c = xa + yb has either no solutions or infinitely many. When c is not a multiple of gcd(a,b) there is no solution.
Otherwise, writing gcd(a,b) = t, use the extended Euclidean algorithm to find d and e such that t = da + eb. One solution is then x = dc/t, y = ec/t, i.e. c = (dc/t) a + (ec/t) b.
It is clear that 0 = (b/t) a - (a/t) b, so more solutions can be found by adding a multiple f of that to the equation:
c = ((dc + fb)/t) a + ((ec - af)/t) b
When we now reintroduce the restriction that x and y must be positive or zero, the question becomes to find values of f that make x = (dc + fb)/t and y = (ec - af)/t both positive or zero.
If dc < 0, try the smallest f that makes dc + fb >= 0 and check whether ec - af is also >= 0.
Otherwise, try the largest f (a negative number) that makes ec - af >= 0 and check whether dc + fb >= 0.
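A rough sketch of this search (my illustration; it assumes a, b > 0 and that the products fit in a long, which near the 10^16 bound they will not, so BigInteger would be needed there). Note it only decides whether nonnegative x and y exist; it does not check the GCD(x,y) = 1 condition discussed in the other answer:

public class ExtendedEuclid {

    // Returns {t, d, e} with t = gcd(a, b) and t = d*a + e*b.
    static long[] egcd(long a, long b) {
        if (b == 0) return new long[]{a, 1, 0};
        long[] r = egcd(b, a % b);
        return new long[]{r[0], r[2], r[1] - (a / b) * r[2]};
    }

    static long ceilDiv(long p, long q) { return -Math.floorDiv(-p, q); } // assumes q > 0

    // Is c = x*a + y*b solvable with x, y >= 0?
    static boolean solvable(long a, long b, long c) {
        long[] g = egcd(a, b);
        long t = g[0], d = g[1], e = g[2];
        if (c % t != 0) return false;            // c must be a multiple of gcd(a, b)
        // x(f) = (d*c + f*b)/t grows with f; y(f) = (e*c - f*a)/t shrinks with f.
        long fMin = ceilDiv(-d * c, b);          // smallest f making x >= 0
        long fMax = Math.floorDiv(e * c, a);     // largest f making y >= 0
        return fMin <= fMax;
    }

    public static void main(String[] args) {
        System.out.println(solvable(2, 3, 5));   // true: 5 = 1*2 + 1*3
        System.out.println(solvable(4, 6, 9));   // false: 9 is not a multiple of gcd(4,6) = 2
    }
}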

public class Main
{
    // Reduce c modulo (a + b); per the idea below, the remainder should
    // then be a multiple of a or of b if c is reachable.
    private static boolean result(long a, long b, long c)
    {
        long m = c % (a + b);
        return (m % b == 0) || (m % a == 0);
    }
}
Idea: c = xa + yb. Because either x or y is bigger, we can write the latter equation in one of two forms:
c = x(a+b) + (y-x)b,
c = y(a+b) + (x-y)a
depending on which is bigger. So by reducing c by (a+b) each time, c eventually becomes
c = (y-x)b or c = (x-y)a, so c % b or c % a will evaluate to 0.

Related

Karatsuba algorithm splitting number in 3 strings

I'm trying to code the Karatsuba algorithm with some changes. Instead of splitting each number into 2 strings, I want to split it into 3.
For example:
Common Karatsuba -> the first number goes into A and B. The second number goes into C and D.
A B
x C D
resulting...
ac ad+bc bd
In order to implement it through recursive calls, most implementations do something like:
e = karatsuba(a,c)
f = karatsuba(b,d)
g = karatsuba(a+b,c+d)
h = g-f-e //that's to get the 'ad+bc', since (a+b)(c+d) = ac+ad+bc+bd, then removing 'f' and 'e', we get just ad+bc
and finally...
return e*10^n + h*10^(n/2) + f;
The Karatsuba I want to implement -> the first number goes into A, B and C. The second number goes into D, E and F.
A B C
x D E F
resulting...
ad ae+bd af+be+cd bf+ce cf
However, I have no idea how to implement such a thing through recursive calls, since it seems much more complicated than the common way.
Please help.
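For what it's worth, here is one way to see the recursive structure (my sketch, not from the original thread, and not a tested production implementation). With p1 = ad, p2 = be, p3 = cf, the remaining groups follow from three more products:
(a+b)(d+e) - p1 - p2 = ae+bd
(b+c)(e+f) - p2 - p3 = bf+ce
(a+c)(d+f) - p1 - p3 + p2 = af+be+cd
so six recursive multiplications replace the nine naive ones. A rough BigInteger sketch, assuming nonnegative inputs split into base-10 blocks; the split helper is hypothetical:

import java.math.BigInteger;

public class Karatsuba3 {

    static BigInteger karatsuba3(BigInteger x, BigInteger y) {
        int n = Math.max(x.toString().length(), y.toString().length());
        if (n <= 3) return x.multiply(y);                  // small numbers: multiply directly
        int third = (n + 2) / 3;                           // decimal digits per block
        BigInteger base = BigInteger.TEN.pow(third);
        BigInteger[] xs = split(x, base), ys = split(y, base);
        BigInteger a = xs[0], b = xs[1], c = xs[2];        // x = a*base^2 + b*base + c
        BigInteger d = ys[0], e = ys[1], f = ys[2];        // y = d*base^2 + e*base + f
        BigInteger p1 = karatsuba3(a, d);                  // ad
        BigInteger p2 = karatsuba3(b, e);                  // be
        BigInteger p3 = karatsuba3(c, f);                  // cf
        BigInteger q1 = karatsuba3(a.add(b), d.add(e));    // (a+b)(d+e)
        BigInteger q2 = karatsuba3(b.add(c), e.add(f));    // (b+c)(e+f)
        BigInteger q3 = karatsuba3(a.add(c), d.add(f));    // (a+c)(d+f)
        BigInteger t1 = q1.subtract(p1).subtract(p2);             // ae+bd
        BigInteger t2 = q3.subtract(p1).subtract(p3).add(p2);     // af+be+cd
        BigInteger t3 = q2.subtract(p2).subtract(p3);             // bf+ce
        return p1.multiply(base.pow(4)).add(t1.multiply(base.pow(3)))
                 .add(t2.multiply(base.pow(2))).add(t3.multiply(base)).add(p3);
    }

    // Hypothetical helper: splits v into three blocks of 'third' digits each,
    // most significant block first.
    static BigInteger[] split(BigInteger v, BigInteger base) {
        BigInteger[] lowRest = v.divideAndRemainder(base);
        BigInteger[] highMid = lowRest[0].divideAndRemainder(base);
        return new BigInteger[]{ highMid[0], highMid[1], lowRest[1] };
    }
}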

JAVA - return array elements between two points

I've been trying to write code that goes through all array elements that lie between two specified points, but I am stuck.
Let's suppose it's an array like that:
int[][] new_array = new int[100][100];
How do I get all elements that are in a straight line between, let's say,
new_array[17][2];
and
new_array[5][90];
This is what I want to achieve:
Let's imagine that your array is the first quadrant of a Cartesian coordinate system, with the first column lying on the Y axis and the last row lying on the X axis.
Having that assumption you could find a function that describes a straight line between any of two points in your array.
You need to solve the function:
y = ax + b
It's a standard linear function. You have two points, solving that you'll find your equation (values of a and b).
When you know equation you need to evaluate points in the array for each x value. Doing that you'll find all y values that are below/on/above the line.
Following @Marcin Pietraszek's answer, the function can be obtained this way:
Given the two points (a,b) and (c,d) the straight line that passes through both points is given by
a + K * (x - a) = c AND b + K * (y - b) = d
where K is a scalar number.
And this resolves to:
y = ( (d - b) * x - (d - b) * a + (c - a) * b ) / (c - a)
So any point (x, y) that meets this condition will be on the straight line.
You will need go through the matrix, checking one by one to see which points meet the condition.
If you want only the points inside the segment, then additionally you need to check the boundaries.
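A sketch of that scan (my illustration; it uses the cross-product form of the condition, (d - b)(x - a) == (c - a)(y - b), so everything stays in integer arithmetic with no division):

import java.util.ArrayList;
import java.util.List;

public class LinePoints {

    // Collects every cell (x, y) of a rows-by-cols grid lying exactly on
    // the segment between (a, b) and (c, d).
    public static List<int[]> between(int a, int b, int c, int d, int rows, int cols) {
        List<int[]> hits = new ArrayList<>();
        for (int x = 0; x < rows; x++) {
            for (int y = 0; y < cols; y++) {
                boolean onLine = (long) (d - b) * (x - a) == (long) (c - a) * (y - b);
                boolean inBox = x >= Math.min(a, c) && x <= Math.max(a, c)
                             && y >= Math.min(b, d) && y <= Math.max(b, d);
                if (onLine && inBox) hits.add(new int[]{x, y});
            }
        }
        return hits;
    }
}

Note that this returns only cells that sit exactly on the line; between (17, 2) and (5, 90) only a handful of lattice points do. If you want a connected path of cells approximating the segment, a line-rasterization algorithm such as Bresenham's is the usual tool.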

Find a sum equal or greater than given target using only numbers from set

Example 1:
Shop selling beer, available packages are 6 and 10 units per package. Customer inputs 26 and algorithm replies 26, because 26 = 10 + 10 + 6.
Example 2:
Selling spices, available packages are 0.6, 1.5 and 3. Target value = 5. The algorithm returns 5.1, because it is the nearest number greater than the target achievable with packages (3, 1.5, 0.6).
I need a Java method that will suggest that number.
A similar algorithm is described under the bin packing problem, but it doesn't suit me.
I tried it, and when it returned a number smaller than the target I ran it again with an increased target. But that is not efficient when the number of packages is huge.
I need almost the same algorithm, but with the equal or greater nearest number.
Similar question: Find if a number is a possible sum of two or more numbers in a given set - python.
First let's reduce this problem to integers rather than real numbers, otherwise we won't get a fast optimal algorithm out of this. For example, let's multiply all numbers by 100 and then just round it to the next integer. So say we have item sizes x1, ..., xn and target size Y. We want to minimize the value
k1 x1 + ... + kn xn - Y
under the conditions
(1) ki is a non-negative integer for all n ≥ i ≥ 1
(2) k1 x1 + ... + kn xn - Y ≥ 0
One simple algorithm for this would be to ask a series of questions like
Can we achieve k1 x1 + ... + kn xn = Y + 0?
Can we achieve k1 x1 + ... + kn xn = Y + 1?
Can we achieve k1 x1 + ... + kn xn = Y + z?
etc. with increasing z
until we get the answer "Yes". All of these problems are instances of the Knapsack problem with the weights set equal to the values of the items. The good news is that we can solve all those at once, if we can establish an upper bound for z. It's easy to show that there is a solution with z ≤ Y, unless all the xi are larger than Y, in which case the solution is just to pick the smallest xi.
So let's use the pseudopolynomial dynamic programming approach to solve Knapsack: Let f(i,j) be 1 iff we can reach total item size j with the first i items (x1, ..., xi). We have the recurrence
f(0,0) = 1
f(0,j) = 0 for all j > 0
f(i,j) = f(i - 1, j) or f(i - 1, j - x_i) or f(i - 1, j - 2 * x_i) ...
We can solve this DP array in O(n * Y) time and O(Y) space. The result will be the first j ≥ Y with f(n, j) = 1.
There are a few technical details that are left as an exercise to the reader:
How to implement this in Java
How to reconstruct the solution if needed. This can be done in O(n) time using the DP array (but then we need O(n * Y) space to remember the whole thing).
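Picking up the first exercise above: a sketch of this DP in Java (my illustration; it assumes the sizes have already been scaled to positive integers as described, and that at least one size is given). It searches up to target + min, which is always enough because repeating the smallest item alone reaches some total in [target, target + min):

public class SmallestSumAtLeast {

    public static int solve(int[] sizes, int target) {
        int min = Integer.MAX_VALUE;
        for (int s : sizes) min = Math.min(min, s);
        int bound = target + min;                     // a reachable total >= target exists below this
        boolean[] reachable = new boolean[bound + 1]; // reachable[j]: can we form total j?
        reachable[0] = true;
        for (int j = 1; j <= bound; j++)
            for (int s : sizes)
                if (j >= s && reachable[j - s]) { reachable[j] = true; break; }
        for (int j = target; j <= bound; j++)
            if (reachable[j]) return j;
        return -1;                                    // unreachable (e.g. empty input)
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[]{6, 10}, 26));         // 26 = 10 + 10 + 6
        System.out.println(solve(new int[]{60, 150, 300}, 500)); // 510, i.e. 5.1 scaled by 100
    }
}

Reconstructing which packages make up the answer is not shown; as mentioned, it can be done by remembering the whole DP table.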
You want to solve the integer programming problem min(c·t) s.t. c·t >= T, c >= 0, where T is your target weight, c is a non-negative integer vector specifying how much of each package to purchase, and t is the vector of package weights. You can either solve this with dynamic programming as pointed out in another answer, or, if your weights and target weight are too large, you can use general integer programming solvers, which have been highly optimized over the years to give good performance in practice.

How does the Euclidean Algorithm work?

I just found this algorithm to compute the greatest common divisor in my lecture notes:
public static int gcd( int a, int b ) {
    while (b != 0) {
        final int r = a % b;
        a = b;
        b = r;
    }
    return a;
}
So r is the remainder when dividing a by b (the mod). Then b is assigned to a, the remainder is assigned to b, and a is returned. I can't for the life of me see how this works!
And then, apparently this algorithm doesn't work for all cases, and this one must then be used:
public static int gcd( int a, int b ) {
    final int gcd;
    if (b != 0) {
        final int q = a / b;
        final int r = a % b; // a == r + q * b AND r == a - q * b.
        gcd = gcd( b, r );
    } else {
        gcd = a;
    }
    return gcd;
}
I don't understand the reasoning behind this. I generally get recursion and am good at Java but this is eluding me. Help please?
The Wikipedia article contains an explanation, but it's not easy to find it immediately (also, procedure + proof don't always answer the question "why it works").
Basically it comes down to the fact that for two integers a, b (assuming a >= b), it is always possible to write a = bq + r where 0 <= r < b.
If d=gcd(a,b) then we can write a=ds and b=dt. So we have ds = qdt + r. Since the left hand side is divisible by d, the right hand side must also be divisible by d. And since qdt is divisible by d, the conclusion is that r must also be divisible by d.
To summarise: we have a = bq + r where r < b and a, b and r are all divisible by gcd(a,b).
Since a >= b > r, we have two cases:
If r = 0 then a = bq, and so b divides both b and a. Hence gcd(a,b)=b.
Otherwise (r > 0), we can reduce the problem of finding gcd(a,b) to the problem of finding gcd(b,r) which is exactly the same number (as a, b and r are all divisible by d).
Why is this a reduction? Because r < b. So we are dealing with numbers that are definitely smaller. This means that we only have to apply this reduction a finite number of times before we reach r = 0.
Now, r = a % b which hopefully explains the code you have.
They're equivalent. First thing to notice is that q in the second program is not used at all. The other difference is just iteration vs. recursion.
As to why it works, the Wikipedia page linked above is good. The first illustration in particular is effective to convey intuitively the "why", and the animation below then illustrates the "how".
Given that 'q' is never used, I don't see a difference between your plain iterative function and the recursive function... both do
gcd(first number, second number)
    as long as (second number > 0) {
        int remainder = first % second;
        gcd = gcd(second as first, remainder as second);
    }
Barring trying to apply this to non-integers, under which circumstances does this algorithm fail?
(also see http://en.wikipedia.org/wiki/Euclidean_algorithm for lots of detailed info)
Here is an interesting blog post: Tominology.
It discusses a lot of the intuition behind the Euclidean algorithm; the code is implemented in JavaScript, but I believe it would not be difficult to convert it to Java.
Here is a very useful explanation that I found.
For those too lazy to open it, this is what it says :
Consider the example when you had to find the GCD of (3084, 1424). Let's assume that d is the GCD. This means d | 3084 and d | 1424 (using the symbol '|' for 'divides').
It follows that d | (3084 - 1424). Now we'll try to reduce these numbers which are divisible by d (in this case 3084 and 1424) as much as possible, so that we reach 0 as one of the numbers. Remember that GCD(a, 0) is a.
Since d | (3084 - 1424), it follows that d | ( 3084 - 2(1424) )
which means d | 236.
Hint : (3084 - 2*1424 = 236)
Now forget about the initial numbers; we just need to find d, and we know that d is the greatest number that divides 236, 1424 and 3084. We use the smaller two numbers to proceed, because that converges the problem towards 0.
d | 1424 and d | 236 implies that d | (1424 - 236).
So, d | ( 1424 - 6(236) ) => d | 8.
Now we know that d is the greatest number that divides 8, 236, 1424 and 3084. Taking the smaller two again, we have
d | 236 and d | 8, which implies d | (236 - 8).
So, d | ( 236 - 29(8) ) => d | 4.
Again the list of numbers divisible by d increases and converges (the numbers are getting smaller, closer to 0). As it stands now, d is the greatest number that divides 4, 8, 236, 1424, 3084.
Taking same steps,
d | 8 and d | 4 implies d | (8-4).
So, d | ( 8 - 2(4) ) => d | 0.
The list of numbers divisible by d is now 0, 4, 8, 236, 1424, 3084.
GCD of (a, 0) is always a. So, as soon as you have 0 as one of the two numbers, the other number is the gcd of the original two and all those which came in between.
This is exactly what your code is doing. You can recognize the terminal condition as GCD (a, 0) = a.
The other step is to find the remainder of the two numbers, and choose that and the smaller of the previous two as the new numbers.
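As a quick check (my addition), running the iterative version from the question on the same numbers reproduces exactly these steps:

public static void main(String[] args) {
    int a = 3084, b = 1424;
    while (b != 0) {
        final int r = a % b; // successive remainders: 236, 8, 4, 0
        System.out.println("gcd(" + a + ", " + b + ") = gcd(" + b + ", " + r + ")");
        a = b;
        b = r;
    }
    System.out.println("gcd = " + a); // prints: gcd = 4
}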

Bitwise Multiply and Add in Java

I have the methods that do both the multiplication and addition, but I'm just not able to get my head around them. Both of them are from external websites and not my own:
public static void bitwiseMultiply(int n1, int n2) {
    int a = n1, b = n2, result = 0;
    while (b != 0) // Iterate the loop till b == 0
    {
        if ((b & 01) != 0) // Bitwise ANDing of the value of b with 01
        {
            result = result + a; // Update the result with the new value of a.
        }
        a <<= 1; // Left shifting the value contained in 'a' by 1.
        b >>= 1; // Right shifting the value contained in 'b' by 1.
    }
    System.out.println(result);
}

public static void bitwiseAdd(int n1, int n2) {
    int x = n1, y = n2;
    int xor, and, temp;
    and = x & y;
    xor = x ^ y;
    while (and != 0) {
        and <<= 1;
        temp = xor ^ and;
        and &= xor;
        xor = temp;
    }
    System.out.println(xor);
}
I tried doing a step-by-step debug, but it really didn't make much sense to me, though it works.
What I'm possibly looking for is to try and understand how this works (the mathematical basis perhaps?).
Edit: This is not homework, I'm just trying to learn bitwise operations in Java.
Let's begin by looking at the multiplication code. The idea is actually pretty clever. Suppose that you have n1 and n2 written in binary. Then you can think of n2 as a sum of powers of two: n2 = c30*2^30 + c29*2^29 + ... + c1*2^1 + c0*2^0, where each ci is either 0 or 1. Then you can think of the product n1 n2 as
n1 n2 =
n1 (c30*2^30 + c29*2^29 + ... + c1*2^1 + c0*2^0) =
n1 c30*2^30 + n1 c29*2^29 + ... + n1 c1*2^1 + n1 c0*2^0
This is a bit dense, but the idea is that the product of the two numbers is the sum of the first number multiplied by each power of two making up the second number, weighted by the binary digits of the second number.
The question now is whether we can compute the terms of this sum without doing any actual multiplications. In order to do so, we're going to need to be able to read the binary digits of n2. Fortunately, we can do this using shifts. In particular, suppose we start off with n2 and then just look at the last bit. That's c0. If we then shift the value down one position, the last bit is c1, etc. More generally, after shifting the value of n2 down by i positions, the lowest bit will be ci. To read the very last bit, we can just bitwise AND the value with the number 1. This has a binary representation that's zero everywhere except the last digit. Since 0 AND n = 0 for any n, this clears all the topmost bits. Moreover, since 0 AND 1 = 0 and 1 AND 1 = 1, this operation preserves the last bit of the number.
Okay - we now know that we can read the values of ci; so what? Well, the good news is that we also can compute the values of the series n1*2^i in a similar fashion. In particular, consider the sequence of values n1 << 0, n1 << 1, etc. Any time you do a left bit-shift, it's equivalent to multiplying by a power of two. This means that we now have all the components we need to compute the above sum. Here's your original source code, commented with what's going on:
public static void bitwiseMultiply(int n1, int n2) {
    /* This value will hold n1 * 2^i for varying values of i. It will
     * start off holding n1 * 2^0 = n1, and after each iteration will
     * be updated to hold the next term in the sequence.
     */
    int a = n1;

    /* This value will be used to read the individual bits out of n2.
     * We'll use the shifting trick to read the bits and will maintain
     * the invariant that after i iterations, b is equal to n2 >> i.
     */
    int b = n2;

    /* This value will hold the sum of the terms so far. */
    int result = 0;

    /* Continuously loop over more and more bits of n2 until we've
     * consumed the last of them. Since after i iterations of the
     * loop b = n2 >> i, this only reaches zero once we've used up
     * all the bits of the original value of n2.
     */
    while (b != 0)
    {
        /* Using the bitwise AND trick, determine whether the ith
         * bit of b is a zero or one. If it's a zero, then the
         * current term in our sum is zero and we don't do anything.
         * Otherwise, then we should add n1 * 2^i.
         */
        if ((b & 1) != 0)
        {
            /* Recall that a = n1 * 2^i at this point, so we're adding
             * in the next term in the sum.
             */
            result = result + a;
        }

        /* To maintain that a = n1 * 2^i after i iterations, scale it
         * by a factor of two by left shifting one position.
         */
        a <<= 1;

        /* To maintain that b = n2 >> i after i iterations, shift it
         * one spot over.
         */
        b >>>= 1;
    }

    System.out.println(result);
}
Hope this helps!
It looks like your problem is not Java, but just calculating with binary numbers. Start off simple:
(all numbers binary:)
0 + 0 = 0 # 0 xor 0 = 0
0 + 1 = 1 # 0 xor 1 = 1
1 + 0 = 1 # 1 xor 0 = 1
1 + 1 = 10 # 1 xor 1 = 0 ( read 1 + 1 = 10 as 1 + 1 = 0 and 1 carry)
Ok... You see that you can add two one-digit numbers using the xor operation. With an and you can find out whether you have a "carry" bit, which is very similar to adding numbers with pen & paper. (Up to this point you have something called a half adder.) When you add the next two bits, you also need to add the carry bit to those two digits. Taking this into account you can build a full adder. You can read about the concepts of half adders and full adders on Wikipedia:
http://en.wikipedia.org/wiki/Adder_(electronics)
And many more places on the web.
I hope that gives you a start.
With multiplication it is very similar, by the way. Just remember how you multiplied with pen & paper in elementary school. That's what is happening here, just with binary numbers instead of decimal numbers.
EXPLANATION OF THE bitwiseAdd METHOD:
I know this question was asked a while back, but since no complete answer has been given regarding how the bitwiseAdd method works, here is one.
The key to understanding the logic encapsulated in bitwiseAdd is found in the relationship between addition operations and xor and and bitwise operations. That relationship is defined by the following equation (see appendix 1 for a numeric example of this equation):
x + y = 2 * (x&y)+(x^y) (1.1)
Or more simply:
x + y = 2 * and + xor (1.2)
with
and = x & y
xor = x ^ y
You might have noticed something familiar in this equation: the and and xor variables are the same as those defined at the beginning of bitwiseAdd. There is also a multiplication by two, which in bitwiseAdd is done at the beginning of the while loop. But I will come back to that later.
Let me also make a quick side note about the '&' bitwise operator before we proceed further. This operator basically "captures" the intersection of the bit sequences against which it is applied. For example, 9 & 13 = 1001 & 1101 = 1001 (= 9). You can see from this result that only those bits common to both bit sequences are copied to the result. It follows that when two bit sequences have no common bit, the result of applying '&' on them yields 0. This has an important consequence for the addition-bitwise relationship, which shall become clear soon.
Now the problem we have is that equation 1.2 uses the '+' operator whereas bitwiseAdd doesn't (it only uses '^', '&' and '<<'). So how do we make the '+' in equation 1.2 somehow disappear? Answer: by 'forcing' the and expression to return 0. And the way we do that is by using recursion.
To demonstrate this I am going to recurse equation 1.2 one time (this step might be a bit challenging at first but if needed there's a detailed step by step result in appendix 2):
x + y = 2*(2*and & xor) + (2*and ^ xor) (1.3)
Or more simply:
x + y = 2 * and[1] + xor[1] (1.4)
with
and[1] = 2*and & xor,
xor[1] = 2*and ^ xor,
[1] meaning 'recursed one time'
There's a couple of interesting things to note here. First, you may have noticed how the concept of recursion sounds close to that of a loop, like the one found in bitwiseAdd in fact. This connection becomes even more obvious when you consider what and[1] and xor[1] are: they are the same expressions as the and and xor expressions defined INSIDE the while loop in bitwiseAdd. We also note that a pattern emerges: equation 1.4 looks exactly like equation 1.2!
As a result of this, doing further recursions is a breeze, if one keeps the recursive notation. Here we recurse equation 1.2 two more times:
x + y = 2 * and[2] + xor[2]
x + y = 2 * and[3] + xor[3]
This should now highlight the role of the 'temp' variable found in bitwiseAdd: temp allows to pass from one recursion level to the next.
We also notice the multiplication by two in all those equations. As mentioned earlier, this multiplication is done at the beginning of the while loop in bitwiseAdd using the and <<= 1 statement. This multiplication has a consequence on the next recursion stage, since the bits in and[i] are different from those in and[i-1] of the previous stage (and if you recall the little side note I made earlier about the '&' operator, you probably see where this is going now).
The general form of equation 1.4 now becomes:
x + y = 2 * and[n] + xor[n] (1.5)
with n the number of recursions
FINALLY:
So when does this recursion business end exactly?
Answer: it ends when the intersection between the two bit sequences in the and[n] expression of equation 1.5 returns 0. The equivalent of this in bitwiseAdd happens when the while loop condition becomes false. At this point equation 1.5 becomes:
x + y = xor[n] (1.6)
And that explains why in bitwiseAdd we only return xor at the end!
And we are done! A pretty clever piece of code this bitwiseAdd I must say :)
I hope this helped
APPENDIX:
1) A numeric example of equation 1.1
equation 1.1 says:
x + y = 2(x&y)+(x^y) (1.1)
To verify this statement one can take a simple example, say adding 9 and 13 together. The steps are shown below (the bitwise representations are in parenthesis):
We have
x = 9 (1001)
y = 13 (1101)
And
x + y = 9 + 13 = 22
x & y = 9 & 13 = 9 (1001 & 1101 = 1001)
x ^ y = 9^13 = 4 (1001 ^ 1101 = 0100)
Plugging that back into equation 1.1 we find:
9 + 13 = 2 * 9 + 4 = 22 et voila!
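The same example can also be traced through bitwiseAdd's loop directly (my annotation of the code above):

// bitwiseAdd(9, 13):
// before the loop: and = 9 & 13 = 9 (1001), xor = 9 ^ 13 = 4 (0100)
// iteration 1:     and <<= 1      -> 18 (10010)  the carries, shifted into place
//                  temp = 4 ^ 18  -> 22 (10110)
//                  and  = 18 & 4  -> 0            no carries remain
//                  xor  = temp    -> 22
// the loop exits (and == 0) and xor = 22 is printed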
2) Demonstrating the first recursion step
The first recursion equation in the presentation (equation 1.3) says that
if
x + y = 2 * and + xor (equation 1.2)
then
x + y = 2*(2*and & xor) + (2*and ^ xor) (equation 1.3)
To get to this result, we simply took the 2*and + xor part of equation 1.2 above and applied the addition/bitwise relationship given by equation 1.1 to it. This is demonstrated as follows:
if
x + y = 2(x&y) + (x^y) (equation 1.1)
then
[2(x&y)] + (x^y) = 2 ([2(x&y)] & (x^y)) + ([2(x&y)] ^ (x^y))
(the left side is equation 1.1's x + y; the right side is the result of applying the addition/bitwise relationship)
Simplifying this with the definitions of the and and xor variables of equation 1.2 gives equation 1.3's result:
[2(x&y)] + (x^y) = 2*(2*and & xor) + (2*and ^ xor)
with
and = x&y
xor = x^y
And using that same simplification gives equation 1.4's result:
2*(2*and & xor) + (2*and ^ xor) = 2*and[1] + xor[1]
with
and[1] = 2*and & xor
xor[1] = 2*and ^ xor
[1] meaning 'recursed one time'
Here is another approach for multiplication:
/**
 * Multiplication of binary numbers without using the '*' operator,
 * using bitwise shifting/ANDing.
 *
 * @param n1
 * @param n2
 */
public static void multiply(int n1, int n2) {
    int temp, i = 0, result = 0;
    while (n2 != 0) {
        if ((n2 & 1) == 1) {
            temp = n1;
            // result += (temp>>=(1/i)); // To do it only using right shift
            result += (temp <<= i); // Left shift (temp * 2^i)
        }
        n2 >>= 1; // Right shift n2 by 1.
        i++;
    }
    System.out.println(result);
}
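For instance (a hypothetical call), multiply(9, 13) accumulates 9<<0 + 9<<2 + 9<<3 = 9 + 36 + 72 = 117, since 13 = 1101 in binary:

public static void main(String[] args) {
    multiply(9, 13); // prints 117
}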
