When are hash functions orthogonal to each other? - java

When are hash functions orthogonal to each other?
And can you provide an example in Java of two hash functions that are orthogonal to each other?

From a paper found via a Google search:
(Orthogonal Hash Functions) Two hash functions h1 and h2 are orthogonal,
if for all states s, s' ∈ S with h1(s) = h1(s') and h2(s) = h2(s') we have
s = s'.
S. Edelkamp, Perfect Hashing for State Space Exploration on the GPU.
In English: if two inputs produce the same output under the first hash function and also produce the same output under the second, then those inputs must have been the same value.
Example:
Let h and g be hash functions.
Let b be a currently unknown value.
h(0) = h(b) = 5
g(0) = g(b) = 4
if h and g are orthogonal, b MUST equal 0.
Put another way: if h and g are orthogonal, then no two distinct inputs may collide under both functions at once. Two distinct inputs may collide under h alone, or under g alone, but never under both.
Pseudocode:
// Assume no wraparound will ever occur due to overflow.
HashFunc h = x -> x + 1;
HashFunc g = y -> y + 2;
h(0) = 1 // No other input value results in --> 1
g(0) = 2 // No other input value results in --> 2
// Both functions are injective (no two distinct inputs ever collide under either one),
// so they are trivially orthogonal hash functions.
// Now for some non-orthogonal hash functions:
// Let the domain be integers only.
HashFunc j = x -> ceil(abs(x) / 2.0);
HashFunc k = x -> ceil(sqrt(x));
j(0) = 0 // Unique result
k(0) = 0 // Unique result
j(1) = j(2) = 1
k(1) = 1 != k(2) = 2
// 1 and 2 collide under j but not under k, so that pair alone proves nothing.
// But take 3 and 4: j(3) = j(4) = 2, and k(3) = k(4) = 2.
// Two distinct inputs collide under BOTH functions,
// so these cannot be orthogonal hash functions.

from http://www.aaai.org/ocs/index.php/ICAPS/ICAPS10/paper/download/1439/1529
Obtained with Google: define "orthogonal hash" (second hit).
Translating:
If you have a "perfect hashing function", then h(s) == h(s') iff s == s'.
If you have "any two hashing functions" for which there are values s, s' that have both
h1(s) == h1(s') and h2(s) == h2(s')
then these functions are called orthogonal if the above holds only when s == s'.
It's actually quite a tricky concept. If h1 and h2 were both perfect hashing functions, then they would automatically have to be orthogonal according to the above definition (if I understand correctly). But you can have imperfect functions that fit the above definition.
Example: in the state space [0, 9], two functions
h1(int x) { return x % 5; }
h2(int x) { return x % 7; }
Would be orthogonal:
x  h1  h2
0   0   0
1   1   1
2   2   2
3   3   3
4   4   4
5   0   5
6   1   6
7   2   0
8   3   1
9   4   2
In the above, h1(s) = h1(s') for pairs of values s, s' that are either 0 apart or 5 apart.
For h2, the distance is either 0 or 7.
The only pairs for which both conditions hold are those where the distance is 0, i.e. only when s == s'. Thus these are orthogonal (although imperfect) hashing functions.
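To make the Java half of the question concrete, here is a small brute-force check (my own sketch, not from the paper) that the pair above is orthogonal on the state space [0, 9]:

public class OrthogonalCheck {
    static int h1(int x) { return x % 5; }
    static int h2(int x) { return x % 7; }

    public static void main(String[] args) {
        boolean orthogonal = true;
        // Orthogonality: no two distinct states may collide under both functions.
        for (int s = 0; s <= 9; s++)
            for (int t = 0; t <= 9; t++)
                if (s != t && h1(s) == h1(t) && h2(s) == h2(t))
                    orthogonal = false;
        System.out.println(orthogonal); // prints true
    }
}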
And that, I think, answers both parts of your question.

Related

Collision strength of Java's Arrays.hashCode

How strong is the hashing mechanism used in the Arrays.hashCode methods against collisions? What is the probability of two different arrays (of, say, double) having the exact same hash value calculated with these methods?
Arrays.hashCode(double[]) is specified to return the same value that a List of Double objects representing the same numeric values would return.
List.hashCode, in turn, is specified with a fairly simple algorithm:
int hashCode = 1;
for (E e : list)
    hashCode = 31*hashCode + (e==null ? 0 : e.hashCode());
In general, multiplication by a prime number is good practice for general-purpose hash functions, but the result is far from a cryptographically strong hash function.
This means that while collisions are unlikely in the general (effectively random) case, they can usually be constructed quite easily if you can influence (or select) the hashCode of the items in the List.
As a constructed example consider these two statements:
System.out.println(Arrays.hashCode(new double[] {4.753E-321d}));
System.out.println(Arrays.hashCode(new double[] {4.9E-324d, 4.9E-324d}));
Both of these will output 993, despite being clearly different arrays.
This is the implementation of Arrays.hashCode that you use
public static int hashCode(int a[]) {
    if (a == null)
        return 0;

    int result = 1;
    for (int element : a)
        result = 31 * result + element;

    return result;
}
If your values happen to be smaller than 31, they are treated like the digits of distinct numbers in base 31, so each array results in a different number (if we ignore overflow for now). Let's call those "pure hashes".
Now of course 31^11 is way larger than the number of ints in Java, so we will get tons of overflows. But since the powers of 31 and the maximum integer are "very different", you don't get an almost random distribution, but a very regular, uniform one.
Let's consider a smaller example. Assume you have only two elements in your array, each in the range 0 to 4. I'll try to create a "hashCode" between 0 and 37 by taking the pure hash modulo 38. The result is that I get streaks of 5 integers with small gaps in between, and not a single collision.
val hashes = for {
  i <- 0 to 4
  j <- 0 to 4
} yield (i * 31 + j) % 38

println(hashes.size)       // prints 25
println(hashes.toSet.size) // prints 25
To verify whether this is what happens to your numbers, you might create a graph as follows: for each hash, take the first 16 bits for x and the second 16 bits for y, and color that dot black. I bet you will see an extremely regular pattern.
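Here is a rough sketch of that visualization (my own illustration; the two-element input arrays and the 256 x 256 downscaling are arbitrary choices): hash a batch of arrays, split each hash into its two 16-bit halves, and plot them.

import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Arrays;
import javax.imageio.ImageIO;

public class HashScatter {
    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(256, 256, BufferedImage.TYPE_INT_RGB);
        Graphics g = img.getGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, 256, 256);   // white background
        g.dispose();

        for (int i = 0; i < 256; i++) {
            for (int j = 0; j < 256; j++) {
                int hash = Arrays.hashCode(new double[] { i, j });
                int x = (hash >>> 24) & 0xFF; // top 8 of the high 16 bits
                int y = (hash >>> 8) & 0xFF;  // top 8 of the low 16 bits
                img.setRGB(x, y, 0x000000);   // color that dot black
            }
        }
        ImageIO.write(img, "png", new File("hash-scatter.png"));
    }
}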

Algorithm for adding two numbers to reach a value

I have a homework assignment that asks me to check, for any three numbers a, b, c such that 0 <= a,b,c <= 10^16, whether I can reach c by adding a and b to each other. The trick is that with every addition their values change, so if we add a to b, we would then have the numbers a and a+b instead of a and b. Because of this, I realized it's not a simple linear equation.
In order for this to be possible, the target number c, must be able to be represented in the form:
c = xa + yb
Through some testing, I figured out that the values of x and y can't be equal, nor can both of them be even, for me to be able to reach the number c. I'm keeping this in mind, along with some special cases where a, b, or c equals zero.
Any ideas?
EDIT:
It's not Euclid's Algorithm, and it's not a diophantine equation; maybe I have misled you with the statement that c = xa + yb. Even though x and y must satisfy this equation, it's not enough for the assignment at hand.
Take a=2, b=3, c=10, for example. In order to reach c, you would need to add a to b or b to a in the first step; in the second step you'd get either a = 2, b = 5 or a = 5, b = 3, and if you keep doing this, you will never reach c. Euclid's algorithm will output yes, but it's clear that you can't reach 10 by adding 2 and 3 to one another.
Note: To restate the problem, as I understand it: Suppose you're given nonnegative integers a, b, and c. Is it possible, by performing a sequence of zero or more operations a = a + b or b = b + a, to reach a point where a + b == c?
OK, after looking into this further, I think you can make a small change to the statement you made in your question:
In order for this to be possible, the target number c, must be able to
be represented in the form:
c = xa + yb
where GCD(x,y) = 1.
(Also, x and y need to be nonnegative; I'm not sure if they may be 0 or not.)
Your original observations, that x may not equal y (unless they're both 1) and that x and y cannot both be even, are implied by the new condition GCD(x,y) = 1; so those observations were correct, but not strong enough.
If you use this in your program instead of the test you already have, it may make the tests pass. (I'm not guaranteeing anything.) For a faster algorithm, you can use Extended Euclid's Algorithm as suggested in the comments (and Henry's answer) to find one x0 and y0; but if GCD(x0,y0) ≠ 1, you'd have to try other possibilities x = x0 + nb, y = y0 - na, for some n (which may be negative).
I don't have a rigorous proof. Suppose we constructed the set S of all pairs (x,y) such that (1,1) is in S, and if (x,y) is in S then (x,x+y) and (x+y,y) are in S. It's obvious that (1,n) and (n,1) are in S for all n > 1.
Then we can try to figure out, for some m and n > 1, how could the pair (m,n) get into S? If m < n, this is possible only if (m, n-m) was already in S. If m > n, it's possible only if (m-n, n) was already in S. Either way, when you keep subtracting the smaller number from the larger, what you get is essentially Euclid's algorithm, which means you'll hit a point where your pair is (g,g) where g = GCD(m,n); and that pair is in S only if g = 1.
It appears to me that the possible values for x and y in the above equation for the target number c are exactly those which are in S. Still, this is partly based on intuition; more work would be needed to make it rigorous.
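A quick brute-force check of that intuition (my own sketch; the pair encoding and the bound of 50 are arbitrary): build S from (1,1) with the two rules and compare it against the set of pairs with GCD 1.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class CoprimePairs {
    static final int LIMIT = 50;

    public static void main(String[] args) {
        // Generate S = closure of (1,1) under (x,y) -> (x, x+y) and (x+y, y).
        Set<Long> s = new HashSet<>();
        Deque<long[]> stack = new ArrayDeque<>();
        stack.push(new long[] {1, 1});
        while (!stack.isEmpty()) {
            long[] p = stack.pop();
            long x = p[0], y = p[1];
            if (x > LIMIT || y > LIMIT || !s.add(x * 1000 + y)) continue;
            stack.push(new long[] {x, x + y});
            stack.push(new long[] {x + y, y});
        }
        // Compare against gcd(x, y) == 1 for every pair in range.
        boolean match = true;
        for (int x = 1; x <= LIMIT; x++)
            for (int y = 1; y <= LIMIT; y++)
                if (s.contains((long) x * 1000 + y) != (gcd(x, y) == 1))
                    match = false;
        System.out.println(match); // prints true
    }

    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }
}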
If we forget for a moment that x and y should be positive, the equation c = xa + yb has either no or infinitely many solutions. When c is not a multiple of gcd(a,b) there is no solution.
Otherwise, calling gcd(a,b) = t, use the extended Euclidean algorithm to find d and e such that t = da + eb. One solution is then given by c = (dc/t)a + (ec/t)b.
It is clear that 0 = (b/t)a - (a/t)b, so more solutions can be found by adding a multiple f of that to the equation:
c = ((dc + fb)/t)a + ((ec - fa)/t)b
When we now reintroduce the restriction that x and y must be positive or zero, the question becomes to find values of f that make x = (dc + fb)/t and y = (ec - af)/t both positive or zero.
If dc < 0 try the smallest f that makes dc + fb >= 0 and see if ec - af is also >=0.
Otherwise try the largest f (a negative number) that makes ec - af >= 0 and check if dc + fb >= 0.
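As a sketch only (my own illustration of the approach above, assuming a, b > 0 and ignoring overflow; the names and the linear scan over f are mine, and the scan can be slow when a and b are small), here is how the pieces fit together in Java, with the gcd(x,y) == 1 condition from the earlier answer bolted on:

public class ReachTarget {
    // Extended Euclid: returns {g, d, e} with d*a + e*b == g == gcd(a, b).
    static long[] extGcd(long a, long b) {
        if (b == 0) return new long[] {a, 1, 0};
        long[] r = extGcd(b, a % b);
        return new long[] {r[0], r[2], r[1] - (a / b) * r[2]};
    }

    // Can c be written as x*a + y*b with x, y >= 0 and gcd(x, y) == 1?
    static boolean reachable(long a, long b, long c) {
        long[] g = extGcd(a, b);
        long t = g[0];
        if (t == 0 || c % t != 0) return false;
        long x0 = g[1] * (c / t);          // one particular solution
        long y0 = g[2] * (c / t);
        long stepX = b / t, stepY = a / t;
        // Shift the family x = x0 + f*stepX, y = y0 - f*stepY into the
        // region x >= 0, y >= 0, then scan it for a coprime pair.
        long fMin = ceilDiv(-x0, stepX);       // smallest f with x >= 0
        long fMax = Math.floorDiv(y0, stepY);  // largest f with y >= 0
        for (long f = fMin; f <= fMax; f++) {
            long x = x0 + f * stepX, y = y0 - f * stepY;
            if (gcd(x, y) == 1) return true;
        }
        return false;
    }

    static long ceilDiv(long p, long q) { return Math.floorDiv(p + q - 1, q); }
    static long gcd(long p, long q) { return q == 0 ? p : gcd(q, p % q); }

    public static void main(String[] args) {
        System.out.println(reachable(2, 3, 7));  // true:  (2,3) -> (2,5), 2+5 = 7
        System.out.println(reachable(2, 3, 10)); // false: the OP's example
    }
}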
public class Main {
    // Repeatedly subtracting (a + b) from c leaves (y - x)*b or (x - y)*a,
    // so check whether the remainder is divisible by a or by b.
    private static boolean result(long a, long b, long c) {
        long m = c % (a + b);
        return (m % b == 0) || (m % a == 0);
    }
}
Idea: c = xa + yb. Because either x or y is bigger, we can write this equation in one of two forms:
c = x(a+b) + (y-x)b,
c = y(a+b) + (x-y)a,
depending on which is bigger. So by reducing c by a+b each time, c eventually becomes
c = (y-x)b or c = (x-y)a, so c % b or c % a will evaluate to 0.

Implementation of java.util.Random.nextInt

This function is from java.util.Random. It returns a pseudorandom int uniformly distributed between 0 (inclusive) and the given n (exclusive). Unfortunately, I did not get it.
public int nextInt(int n) {
    if (n <= 0)
        throw new IllegalArgumentException("n must be positive");

    if ((n & -n) == n)  // i.e., n is a power of 2
        return (int)((n * (long)next(31)) >> 31);

    int bits, val;
    do {
        bits = next(31);
        val = bits % n;
    } while (bits - val + (n-1) < 0);
    return val;
}
My questions are:
Why does it treat the case where n is a power of two specially? Is it just for performance?
Why does it reject numbers for which bits - val + (n-1) < 0?
It does this in order to ensure a uniform distribution of values between 0 (inclusive) and n (exclusive). You might be tempted to do something like:
int x = rand.nextInt() % n;
but this will alter the distribution of values, unless n is a divisor of 2^31, i.e. a power of 2. This is because the modulo operator would produce equivalence classes whose sizes are not the same.
For instance, let's suppose that nextInt() generates an integer between 0 and 6 inclusive and you want to draw 0,1 or 2. Easy, right?
int x = rand.nextInt() % 3;
No. Let's see why:
0 % 3 = 0
1 % 3 = 1
2 % 3 = 2
3 % 3 = 0
4 % 3 = 1
5 % 3 = 2
6 % 3 = 0
So you have 3 values that map onto 0 and only 2 values that map onto each of 1 and 2. You have a bias now, as 0 is more likely to be returned than 1 or 2.
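You can see this bias empirically with a small simulation (my own sketch; rnd.nextInt(7) stands in for the hypothetical 0-to-6 generator):

import java.util.Arrays;
import java.util.Random;

public class ModuloBias {
    public static void main(String[] args) {
        Random rnd = new Random();
        int[] counts = new int[3];
        // Simulate a generator that yields 0..6 uniformly, then reduce mod 3.
        for (int i = 0; i < 7_000_000; i++) {
            counts[rnd.nextInt(7) % 3]++;
        }
        // 0 is hit by {0, 3, 6}, while 1 and 2 are hit by only two values each,
        // so counts[0] lands near 3,000,000 and the other two near 2,000,000.
        System.out.println(Arrays.toString(counts));
    }
}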
As always, the javadoc documents this behaviour:
The hedge "approximately" is used in the foregoing description only
because the next method is only approximately an unbiased source of
independently chosen bits. If it were a perfect source of randomly
chosen bits, then the algorithm shown would choose int values from the
stated range with perfect uniformity.
The algorithm is slightly tricky. It rejects values that would result
in an uneven distribution (due to the fact that 2^31 is not divisible
by n). The probability of a value being rejected depends on n. The
worst case is n=2^30+1, for which the probability of a reject is 1/2,
and the expected number of iterations before the loop terminates is 2.
The algorithm treats the case where n is a power of two specially: it
returns the correct number of high-order bits from the underlying
pseudo-random number generator. In the absence of special treatment,
the correct number of low-order bits would be returned. Linear
congruential pseudo-random number generators such as the one
implemented by this class are known to have short periods in the
sequence of values of their low-order bits. Thus, this special case
greatly increases the length of the sequence of values returned by
successive calls to this method if n is a small power of two.
The emphasis is mine.
next generates random bits.
When n is a power of 2, a random integer in that range can be generated just by generating random bits (I assume that always generating 31 bits and throwing some away is for reproducibility). This code path is simpler, and I guess it's a more commonly used case, so it's worth making a special "fast path" for it.
When n isn't a power of 2, it throws away numbers at the "top" of the range so that the random number is evenly distributed. E.g. imagine we had n=3, and imagine we were using 3 bits rather than 31 bits. So bits is a randomly generated number between 0 and 7. How can you generate a fair random number there? Answer: if bits is 6 or 7, we throw it away and generate a new one.
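Here is that 3-bit example as runnable Java (my own sketch; rnd.nextInt(8) simulates next(3)):

import java.util.Arrays;
import java.util.Random;

public class RejectionDemo {
    public static void main(String[] args) {
        Random rnd = new Random();
        int n = 3;
        int[] counts = new int[n];
        for (int i = 0; i < 6_000_000; i++) {
            int bits;
            do {
                bits = rnd.nextInt(8);  // "next(3)": a random 3-bit value, 0..7
            } while (bits >= 6);        // reject 6 and 7: they'd over-weight 0 and 1
            counts[bits % n]++;
        }
        // All three counts land near 2,000,000: the distribution is fair.
        System.out.println(Arrays.toString(counts));
    }
}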

Need help in understanding Rolling Hash computation in constant time for Rabin-Karp Implementation

I've been trying to implement the Rabin-Karp algorithm in Java. I have a hard time computing the rolling hash value in constant time. I've found one implementation at http://algs4.cs.princeton.edu/53substring/RabinKarp.java.html. Still, I could not get how these two lines work.
txtHash = (txtHash + Q - RM*txt.charAt(i-M) % Q) % Q;
txtHash = (txtHash*R + txt.charAt(i)) % Q;
I looked at a couple of articles on modular arithmetic, but none could penetrate my thick skull. Please give me some pointers to understand this.
First you need to understand how the hash is computed.
Let's take the simple case of base-10 strings. How would you guarantee that the hash code of a string is unique? Base 10 is what we use to represent numbers, and we don't have collisions!
"523" = 5*10^2 + 2*10^1 + 3*10^0 = 523
Using the above hash function, you are guaranteed distinct hashes for every string.
Given the hash of "523", if you want to calculate the hash of "238", i.e. by dropping the leftmost digit 5 and bringing in a new digit 8 from the right, you would have to do the following:
1) remove the effect of the 5 from the hash:
hash = hash - 5*10^2 (523 - 500 = 23)
2) adjust the hash of the remaining chars by shifting by one digit:
hash = hash * 10 (23 * 10 = 230)
3) add the hash of the new character:
hash = hash + 8 (230 + 8 = 238, which as we expected is the base-10 hash of "238")
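Those three steps, written out as runnable Java (my own sketch; base 10, no modulus yet):

public class Base10Roll {
    public static void main(String[] args) {
        int hash = 523;            // hash of "523" in base 10
        hash = hash - 5 * 100;     // 1) remove the leftmost digit: 23
        hash = hash * 10;          // 2) shift the remaining digits: 230
        hash = hash + 8;           // 3) bring in the new digit:     238
        System.out.println(hash);  // prints 238, the base-10 hash of "238"
    }
}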
Now let's extend this to all ASCII characters. This takes us to the base-256 world. Therefore the hash of the same string "523" now is
5*256^2 + 2*256^1 + 3*256^0 = 327680 + 512 + 3 = 328195.
You can imagine that as the string length increases, you will exceed the range of integer/long in most programming languages relatively quickly.
How can we solve this? The way this is routinely solved is by working modulo a large prime number. The drawback of this method is that we will now get false positives as well, which is a small price to pay if it takes the runtime of your algorithm from quadratic to linear!
The complicated equation you quoted is nothing but the steps 1-3 above done with modulus math.
The two modulus properties used above are:
a) (a*b) % p = ((a % p) * (b % p)) % p
b) a % p = (a + p) % p
Let's go back to steps 1-3 mentioned above:
1) (expanded using property a) hash = hash - ((5 % p) * (10^2 % p)) % p
vs. what you quoted:
txtHash = (txtHash + Q - RM*txt.charAt(i-M) % Q) % Q;
Here is how the two are related:
RM = 10^2 % p (that is, R^(M-1) % Q: the weight of the outgoing character)
txt.charAt(i-M) % Q = 5 % p
The additional + Q you see just ensures that the hash is not negative. See property b above.
2 & 3) hash = hash*10 + 8, vs. txtHash = (txtHash*R + txt.charAt(i)) % Q;
These are the same, except that the final hash result is also taken mod Q.
Looking at properties a and b more closely should help you figure it out!
This is the "rolling" aspect of the hash. It's eliminating the contribution of the oldest character (txt.charAt(i-M)), and incorporating the contribution of the newest character (txt.charAt(i)).
The hash function is defined as:
hash[i] = ( SUM_{j=0..M-1} input[i-j] * R^j ) % Q
(where I'm using ^ to denote "to the power of".)
But this can be written as an efficient recursive implementation as:
hash[i] = (hash[i-1]*R - input[i-M]*(R^M) + input[i]) % Q
Your reference code is doing this, but it's using various techniques to ensure that the result is always computed correctly (and efficiently).
So, for instance, the + Q in the first expression has no mathematical effect, but it ensures that the result of the sum is always positive (if it goes negative, % Q doesn't have the desired effect). It's also breaking the calculation into stages, presumably to prevent numerical overflow.
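To tie it together, here is a minimal rolling-hash sketch in Java (my own illustration of the technique, not the algs4 code; the small prime Q = 997, the window length M = 3, and the test string are arbitrary choices):

public class RollingHash {
    static final int R = 256;   // alphabet size (radix)
    static final long Q = 997;  // a prime modulus (small, for illustration)

    public static void main(String[] args) {
        String txt = "abracadabra";
        int M = 3;              // window length

        // Precompute RM = R^(M-1) % Q, the weight of the outgoing character.
        long RM = 1;
        for (int k = 1; k < M; k++)
            RM = (RM * R) % Q;

        // Hash of the first window, by Horner's method.
        long hash = 0;
        for (int i = 0; i < M; i++)
            hash = (hash * R + txt.charAt(i)) % Q;
        System.out.println("hash[0.." + (M - 1) + "] = " + hash);

        // Roll the window one character at a time, constant time per step.
        for (int i = M; i < txt.length(); i++) {
            hash = (hash + Q - RM * txt.charAt(i - M) % Q) % Q; // drop old char
            hash = (hash * R + txt.charAt(i)) % Q;              // add new char
            System.out.println("hash[" + (i - M + 1) + ".." + i + "] = " + hash);
        }
    }
}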

Bitwise Multiply and Add in Java

I have the methods that do both the multiplication and addition, but I'm just not able to get my head around them. Both of them are from external websites and not my own:
public static void bitwiseMultiply(int n1, int n2) {
    int a = n1, b = n2, result = 0;
    while (b != 0) {             // iterate until b == 0
        if ((b & 1) != 0) {      // AND b with 1 to test its lowest bit
            result = result + a; // update the result with the current value of a
        }
        a <<= 1;                 // left-shift the value contained in 'a' by 1
        b >>= 1;                 // right-shift the value contained in 'b' by 1
    }
    System.out.println(result);
}

public static void bitwiseAdd(int n1, int n2) {
    int x = n1, y = n2;
    int xor, and, temp;
    and = x & y;
    xor = x ^ y;
    while (and != 0) {
        and <<= 1;
        temp = xor ^ and;
        and &= xor;
        xor = temp;
    }
    System.out.println(xor);
}
I tried doing a step-by-step debug, but it really didn't make much sense to me, though it works.
What I'm possibly looking for is to try and understand how this works (the mathematical basis perhaps?).
Edit: This is not homework, I'm just trying to learn bitwise operations in Java.
Let's begin by looking at the multiplication code. The idea is actually pretty clever. Suppose that you have n1 and n2 written in binary. Then you can think of n2 as a sum of powers of two: n2 = c30*2^30 + c29*2^29 + ... + c1*2^1 + c0*2^0, where each ci is either 0 or 1. Then you can think of the product n1*n2 as
n1*n2 =
n1*(c30*2^30 + c29*2^29 + ... + c1*2^1 + c0*2^0) =
n1*c30*2^30 + n1*c29*2^29 + ... + n1*c1*2^1 + n1*c0*2^0
This is a bit dense, but the idea is that the product of the two numbers is given by the first number multiplied by the powers of two making up the second number, times the values of the binary digits of the second number.
The question now is whether we can compute the terms of this sum without doing any actual multiplications. In order to do so, we're going to need to be able to read the binary digits of n2. Fortunately, we can do this using shifts. In particular, suppose we start off with n2 and then just look at the last bit. That's c0. If we then shift the value down one position, the last bit is c1, etc. More generally, after shifting the value of n2 down by i positions, the lowest bit will be ci. To read the very last bit, we can just bitwise AND the value with the number 1. This has a binary representation that's zero everywhere except the last digit. Since 0 AND n = 0 for any n, this clears all the topmost bits. Moreover, since 0 AND 1 = 0 and 1 AND 1 = 1, this operation preserves the last bit of the number.
Okay - we now know that we can read the values of ci; so what? Well, the good news is that we can also compute the values of the series n1 * 2^i in a similar fashion. In particular, consider the sequence of values n1 << 0, n1 << 1, etc. Any time you do a left bit-shift, it's equivalent to multiplying by a power of two. This means that we now have all the components we need to compute the above sum. Here's your original source code, commented with what's going on:
public static void bitwiseMultiply(int n1, int n2) {
    /* This value will hold n1 * 2^i for varying values of i. It will
     * start off holding n1 * 2^0 = n1, and after each iteration will
     * be updated to hold the next term in the sequence.
     */
    int a = n1;

    /* This value will be used to read the individual bits out of n2.
     * We'll use the shifting trick to read the bits and will maintain
     * the invariant that after i iterations, b is equal to n2 >> i.
     */
    int b = n2;

    /* This value will hold the sum of the terms so far. */
    int result = 0;

    /* Continuously loop over more and more bits of n2 until we've
     * consumed the last of them. Since after i iterations of the
     * loop b = n2 >> i, this only reaches zero once we've used up
     * all the bits of the original value of n2.
     */
    while (b != 0) {
        /* Using the bitwise AND trick, determine whether the ith
         * bit of b is a zero or one. If it's a zero, then the
         * current term in our sum is zero and we don't do anything.
         * Otherwise, we should add n1 * 2^i.
         */
        if ((b & 1) != 0) {
            /* Recall that a = n1 * 2^i at this point, so we're adding
             * in the next term in the sum.
             */
            result = result + a;
        }

        /* To maintain that a = n1 * 2^i after i iterations, scale it
         * by a factor of two by left shifting one position.
         */
        a <<= 1;

        /* To maintain that b = n2 >> i after i iterations, shift it
         * one spot over.
         */
        b >>>= 1;
    }

    System.out.println(result);
}
Hope this helps!
It looks like your problem is not Java, but just calculating with binary numbers. Start off simple:
(all numbers binary:)
0 + 0 = 0 # 0 xor 0 = 0
0 + 1 = 1 # 0 xor 1 = 1
1 + 0 = 1 # 1 xor 0 = 1
1 + 1 = 10 # 1 xor 1 = 0 ( read 1 + 1 = 10 as 1 + 1 = 0 and 1 carry)
Ok... You see that you can add two one-digit numbers using the xor operation. With an and you can now find out whether you have a "carry" bit, which is very similar to adding numbers with pen & paper. (Up to this point you have something called a Half-Adder.) When you add the next two bits, you also need to add the carry bit to those two digits. Taking this into account, you can get a Full-Adder. You can read about the concepts of Half-Adders and Full-Adders on Wikipedia:
http://en.wikipedia.org/wiki/Adder_(electronics)
And many more places on the web.
I hope that gives you a start.
With multiplication it is very similar, by the way. Just remember how you multiplied with pen & paper in elementary school. That's what is happening here, just with binary numbers instead of decimal numbers.
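For the curious, here is a one-bit Full-Adder written directly in Java (my own sketch): sum and carry-out from two input bits plus a carry-in.

public class FullAdder {
    // One-bit full adder: returns {sum, carryOut} for bits a, b and carry-in c.
    static int[] fullAdd(int a, int b, int c) {
        int sum = a ^ b ^ c;                 // xor adds the three bits mod 2
        int carry = (a & b) | (c & (a ^ b)); // carry when at least two bits are 1
        return new int[] {sum, carry};
    }

    public static void main(String[] args) {
        // Add 1 + 1 with carry-in 0: sum = 0, carry = 1, i.e. binary 10.
        int[] r = fullAdd(1, 1, 0);
        System.out.println("sum=" + r[0] + " carry=" + r[1]);
    }
}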
EXPLANATION OF THE bitwiseAdd METHOD:
I know this question was asked a while back, but since no complete answer has been given regarding how the bitwiseAdd method works, here is one.
The key to understanding the logic encapsulated in bitwiseAdd is found in the relationship between addition operations and xor and and bitwise operations. That relationship is defined by the following equation (see appendix 1 for a numeric example of this equation):
x + y = 2 * (x&y)+(x^y) (1.1)
Or more simply:
x + y = 2 * and + xor (1.2)
with
and = x & y
xor = x ^ y
You might have noticed something familiar in this equation: the and and xor variables are the same as those defined at the beginning of bitwiseAdd. There is also a multiplication by two, which in bitwiseAdd is done at the beginning of the while loop. But I will come back to that later.
Let me also make a quick side note about the '&' bitwise operator before we proceed further. This operator basically "captures" the intersection of the bit sequences against which it is applied. For example, 9 & 13 = 1001 & 1101 = 1001 (= 9). You can see from this result that only those bits common to both bit sequences are copied to the result. It follows that when two bit sequences have no common bit, the result of applying '&' to them yields 0. This has an important consequence for the addition-bitwise relationship, which shall become clear soon.
Now the problem we have is that equation 1.2 uses the '+' operator whereas bitwiseAdd doesn't (it only uses '^', '&' and '<<'). So how do we make the '+' in equation 1.2 somehow disappear? Answer: by 'forcing' the and expression to return 0. And the way we do that is by using recursion.
To demonstrate this I am going to recurse equation 1.2 one time (this step might be a bit challenging at first but if needed there's a detailed step by step result in appendix 2):
x + y = 2*(2*and & xor) + (2*and ^ xor) (1.3)
Or more simply:
x + y = 2 * and[1] + xor[1] (1.4)
with
and[1] = 2*and & xor,
xor[1] = 2*and ^ xor,
[1] meaning 'recursed one time'
There are a couple of interesting things to note here. First, you may have noticed how the concept of recursion sounds close to that of a loop, like the one found in bitwiseAdd in fact. This connection becomes even more obvious when you consider what and[1] and xor[1] are: they are the same expressions as the and and xor expressions defined INSIDE the while loop of bitwiseAdd. We also note that a pattern emerges: equation 1.4 looks exactly like equation 1.2!
As a result of this, doing further recursions is a breeze, if one keeps the recursive notation. Here we recurse equation 1.2 two more times:
x + y = 2 * and[2] + xor[2]
x + y = 2 * and[3] + xor[3]
This should now highlight the role of the temp variable found in bitwiseAdd: temp allows us to pass from one recursion level to the next.
We also notice the multiplication by two in all those equations. As mentioned earlier, this multiplication is done at the beginning of the while loop in bitwiseAdd, using the and <<= 1 statement. This multiplication has a consequence on the next recursion stage, since the bits in and[i+1] are different from those in the and[i] of the previous stage (and if you recall the little side note I made earlier about the '&' operator, you probably see where this is going now).
The general form of equation 1.4 now becomes:
x + y = 2 * and[n] + xor[n] (1.5)
with n the number of recursions.
FINALLY:
So when does this recursion business end exactly?
Answer: it ends when the intersection between the two bit sequences in the and[n] expression of equation 1.5 returns 0. The equivalent of this in bitwiseAdd happens when the while loop condition becomes false. At this point equation 1.5 becomes:
x + y = xor[n] (1.6)
And that explains why in bitwiseAdd we only return xor at the end!
And we are done! A pretty clever piece of code this bitwiseAdd I must say :)
I hope this helped
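To watch those recursion levels go by, here is a small trace (my own sketch) that prints and[i] and xor[i] at each iteration for 9 + 13:

public class BitwiseAddTrace {
    public static void main(String[] args) {
        int x = 9, y = 13;
        int and = x & y;
        int xor = x ^ y;
        int i = 0;
        System.out.println("i=" + i + "  and=" + and + "  xor=" + xor);
        while (and != 0) {
            and <<= 1;             // the multiplication by two in eq. 1.3
            int temp = xor ^ and;  // xor[i+1] = 2*and[i] ^ xor[i]
            and &= xor;            // and[i+1] = 2*and[i] & xor[i]
            xor = temp;
            i++;
            System.out.println("i=" + i + "  and=" + and + "  xor=" + xor);
        }
        System.out.println(x + " + " + y + " = " + xor); // prints 22
    }
}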
APPENDIX:
1) A numeric example of equation 1.1
equation 1.1 says:
x + y = 2(x&y)+(x^y) (1.1)
To verify this statement one can take a simple example, say adding 9 and 13 together. The steps are shown below (the bitwise representations are in parentheses):
We have
x = 9 (1001)
y = 13 (1101)
And
x + y = 9 + 13 = 22
x & y = 9 & 13 = 9 (1001 & 1101 = 1001)
x ^ y = 9^13 = 4 (1001 ^ 1101 = 0100)
Plugging that back into equation 1.1 we find:
9 + 13 = 2 * 9 + 4 = 22, et voilà!
2) Demonstrating the first recursion step
The first recursion equation in the presentation (equation 1.3) says that
if
x + y = 2 * and + xor (equation 1.2)
then
x + y = 2*(2*and & xor) + (2*and ^ xor) (equation 1.3)
To get to this result, we simply took the 2*and + xor part of equation 1.2 above and applied the addition/bitwise relationship given by equation 1.1 to it. This is demonstrated as follows:
if
x + y = 2(x&y) + (x^y) (equation 1.1)
then
[2(x&y)] + (x^y) = 2([2(x&y)] & (x^y)) + ([2(x&y)] ^ (x^y))
(the left side of equation 1.1, after applying the addition/bitwise relationship)
Simplifying this with the definitions of the and and xor variables of equation 1.2 gives equation 1.3's result:
[2(x&y)] + (x^y) = 2*(2*and & xor) + (2*and ^ xor)
with
and = x&y
xor = x^y
And using that same simplification gives equation 1.4's result:
2*(2*and & xor) + (2*and ^ xor) = 2*and[1] + xor[1]
with
and[1] = 2*and & xor
xor[1] = 2*and ^ xor
[1] meaning 'recursed one time'
Here is another approach for multiplication:
/**
 * Multiplication of binary numbers without using the '*' operator,
 * using bitwise shifting and ANDing.
 *
 * @param n1
 * @param n2
 */
public static void multiply(int n1, int n2) {
    int temp, i = 0, result = 0;
    while (n2 != 0) {
        if ((n2 & 1) == 1) {
            temp = n1;
            result += (temp <<= i); // left shift: temp * 2^i
        }
        n2 >>= 1; // right shift n2 by 1
        i++;
    }
    System.out.println(result);
}
