Bit circular shift just shows a bunch of zeros - java

I wanted to play around with bitwise operators and specifically wanted to make a circular bitwise shift. So say that I have the number 101. Shifting it left 1 step should result in 011. Now when I try this example in Java, it just shows a bunch of zeros, like this:
// Circular right shift
private static void testCircular() {
    int x = 37;
    System.out.println(x + " Is " + Integer.toBinaryString(x));
    x = (x >>> 8) | (x << (Integer.SIZE - 8));
    System.out.println(x + " Is " + Integer.toBinaryString(x));
}
This gives me the following result:
37 Is 100101
620756992 Is 100101000000000000000000000000
As you can see, it merely added trailing zeros and didn't rotate anything. I also tried the state = Integer.rotateRight(state, 8); method, and it does the same thing. What am I missing here?

I think it works as expected; what you are missing is the full representation of the number in bits - when you print it, the leading zeros are skipped. An int is stored in 32 bits, so the full representation looks like this:
int x = 37;
00000000000000000000000000100101
x = (x >>> 8) | (x << (Integer.SIZE - 8));
00100101000000000000000000000000
EDIT
Here is a method to get a full string representation of an Integer:
public static String toBinaryStringWithLeadingZeros(int x) {
    String bits = Integer.toBinaryString(x);
    StringBuilder buf = new StringBuilder(32);
    char[] arr = new char[32 - bits.length()]; // pad out to a full 32 bits (also handles x == 0)
    java.util.Arrays.fill(arr, '0');
    buf.append(arr);
    buf.append(bits);
    return buf.toString();
}
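For example, using the helper above with the question's own values to print the full 32-bit patterns:
int x = 37;
System.out.println(toBinaryStringWithLeadingZeros(x));
// 00000000000000000000000000100101
x = (x >>> 8) | (x << (Integer.SIZE - 8));
System.out.println(toBinaryStringWithLeadingZeros(x));
// 00100101000000000000000000000000
System.out.println(x == Integer.rotateRight(37, 8)); // true - same rotation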

Your code and Integer.rotateRight(state, 8) give the same result and work as expected. An int in Java is 4 bytes, so 100101 is actually:
00000000000000000000000000100101
When you rotate it right by 8 bits you get:
00100101000000000000000000000000
Integer.toBinaryString(x) discards leading zeros, so what you see is 100101 in the first case and 100101000000000000000000000000 in the second (the first two zeros are discarded).

Related

Use of << and >>> in a hash function

I have a project in C where I need to create a suitable hash function for void pointers which could contain alphanumeric chars, ints or just plain ol' chars.
I need to use a polynomial hash function, where instead of multiplying by a constant, I should use a cyclic shift of partial sums by a fixed number of bits.
On this page here, there's the Java code (I assume this is Java because of the use of Strings):
static int hashCode(String s) {
    int h = 0;
    for (int i = 0; i < s.length(); i++) {
        h = (h << 5) | (h >>> 27); // 5-bit cyclic shift of the running sum
        h += (int) s.charAt(i);    // add in next character
    }
    return h;
}
What exactly is this line, below, doing?
h = (h << 5) | (h >>> 27); // 5-bit cyclic shift of the running sum
Yes, the comment says 5-bit cyclic shift, but how do the <<, | and >>> operators work in this regard? I've never seen or used any of them before.
As it says, it's a 5-bit cyclic left shift. This means that all the bits are shifted left, with the bit "shifted off" added to the right side, five times.
The code replaces the value of h with the value of two bit patterns ORed together. The first bit pattern is the original value shifted left 5 bits. The second value is the original value shifted right 27 bits.
The left shift of 5 bits puts all the bits but the leftmost five in their final position. The leftmost 5 bits get "shifted out" by that shift and replaced with zeroes as the rightmost bits of the output. The right shift of 27 bits puts the leftmost five bits in their final position as the rightmost bits, shifting in zeroes for the leftmost 27 bits. ORing them together produces the desired output.
The >>> is Java's unsigned (logical) right shift. In C or C++, plain >> does the same thing on unsigned types.
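To see it concretely, here is a small check (my own example value) showing that the expression matches Java's built-in rotate:
int h = 0xDEADBEEF;
int manual  = (h << 5) | (h >>> 27);    // the 5-bit cyclic left shift from the hash function
int builtin = Integer.rotateLeft(h, 5); // the standard-library equivalent
System.out.println(manual == builtin);  // true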

Get n Least Significant Bits from an Int

This seems fairly straightforward, but I can't find an answer. If I have an int X, what is the best way to get N least significant bits from this int in Java?
This should work for all non-negative N < 32:
x & ((1 << N) - 1)
It's worth elaborating on how this works for N == 31: 1 << 31 is Integer.MIN_VALUE, and when you subtract 1 from that, Java silently wraps around to Integer.MAX_VALUE, which has exactly the low 31 bits set - just the mask you need.
For N == 32, this unfortunately doesn't work because (thanks, @zstring!) the << operator only shifts by the right side mod 32. Instead, if you want to avoid testing for that case specially, you could use:
x & ((int)(1L << N) - 1)
By shifting a long, you get the full 32-bit shift, which, after casting back to an int, gets you 0. Subtracting 1 gives you -1 and x & -1 is just x for any int value x (and x is the value of the lower 32 bits of x).
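As a quick illustration with my own example value:
int x = 0b1011_0110; // 182
System.out.println(Integer.toBinaryString(x & ((1 << 3) - 1)));         // 110      (lowest 3 bits)
System.out.println(Integer.toBinaryString(x & ((int) (1L << 32) - 1))); // 10110110 (the N == 32 case keeps everything)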
Ted's approach is likely to be faster, but here is another approach:
x << -N >>> -N
This shifts all the bits up and then back down to chop off the top bits.
int i = -1;
System.out.println(Integer.toBinaryString(i));
i = i << -5 >>> -5;
System.out.println(Integer.toBinaryString(i));
prints
11111111111111111111111111111111
11111
You can also use a mask. With the & bitwise operator you can remove whichever bits you want (say, the highest x bits):
int mask = 0x7FFFFFFF;  // example mask that removes the most significant bit
                        // (0x7 = 0111b and 0xF = 1111b)
int result = numberToProcess & mask;  // apply the mask with the & bitwise operator
The disadvantage of this is that you will need to make a mask for each bit count, so perhaps this is better seen as another general approach.
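If you want to build that kind of mask for an arbitrary count rather than hard-coding it, a small helper along these lines should work (my own method name, using the long-shift trick from the earlier answer so that drop = 0 is handled too):
// Keep everything except the highest `drop` bits of a 32-bit int.
static int dropHighestBits(int value, int drop) {
    int mask = (int) ((1L << (32 - drop)) - 1); // drop = 1 gives 0x7FFFFFFF, drop = 0 gives 0xFFFFFFFF
    return value & mask;
}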

Porting '&' operator from Java to Javascript: Overflow issues

Consider the following Java statement:
System.out.println(3232235776l & 0xFFFFFFFE);
The output is: 3232235776
When I re-write the statement in JavaScript:
console.log(3232235776 & 0xFFFFFFFE);
The output is: -1062731520
Q. Is there a way to work around this overflow in JavaScript and get the right output?
For the sake of simplicity, I did not post the function I was converting from Java. Here it is. Please assume ipToLong and longToIp are working black boxes in both Java and JavaScript (i.e. they do the right IP to long int conversion and vice versa correctly, in both Java and JS, linted and unit tested).
Taken from here: https://stackoverflow.com/a/5032908/504674
Now, can someone help me convert the below Java line to JavaScript correctly?
Specifically: long maskedBase = start & mask;.
Full function to be converted:
public static List<String> range2cidrlist(String startIp, String endIp) {
    int[] CIDR2MASK = new int[] { 0x00000000, 0x80000000,
            0xC0000000, 0xE0000000, 0xF0000000, 0xF8000000, 0xFC000000,
            0xFE000000, 0xFF000000, 0xFF800000, 0xFFC00000, 0xFFE00000,
            0xFFF00000, 0xFFF80000, 0xFFFC0000, 0xFFFE0000, 0xFFFF0000,
            0xFFFF8000, 0xFFFFC000, 0xFFFFE000, 0xFFFFF000, 0xFFFFF800,
            0xFFFFFC00, 0xFFFFFE00, 0xFFFFFF00, 0xFFFFFF80, 0xFFFFFFC0,
            0xFFFFFFE0, 0xFFFFFFF0, 0xFFFFFFF8, 0xFFFFFFFC, 0xFFFFFFFE,
            0xFFFFFFFF
    };
    long start = ipToLong(startIp);
    long end = ipToLong(endIp);
    ArrayList<String> pairs = new ArrayList<String>();
    while (end >= start) {
        byte maxsize = 32;
        while (maxsize > 0) {
            long mask = CIDR2MASK[maxsize - 1];
            long maskedBase = start & mask;
            if (maskedBase != start) {
                break;
            }
            maxsize--;
        }
        double x = Math.log(end - start + 1) / Math.log(2);
        byte maxdiff = (byte) (32 - Math.floor(x));
        if (maxsize < maxdiff) {
            maxsize = maxdiff;
        }
        String ip = longToIp(start);
        pairs.add(ip + "/" + maxsize);
        start += Math.pow(2, (32 - maxsize));
    }
    return pairs;
}
Instead of using & to remove the bit you want, you could subtract it.
long n = 3232235776L;
System.out.println(n - (n & 1)); // instead of 1 you can use ~0xFFFFFFFE
This shouldn't suffer from an overflow in your case.
Bitwise operators treat their operands as a sequence of 32 bits (zeros and ones)
says the Mozilla documentation.
You start out with a floating-point value; it is converted to a 32-bit value, but because it's too big, it overflows.
I suggest you try the following instead:
var number = 3232235776;
if (number % 2 == 1) {
    number = number - 1;
}
Of course, you could write this more succinctly, but also more cryptically:
var number = 3232235776;
number = number - (number % 2);
That should be semantically equivalent for both positive and negative numbers.
Sign extension
In Java, 0xFFFFFFFE is a 32-bit integer representing -2. When you AND it with a long, it gets sign-extended to the 64-bit value 0xFFFF_FFFF_FFFF_FFFE, so all this effectively does is clear the last bit, i.e. round down (down, not towards zero).
I'm not sure if that's what you wanted. If it is intended, it's probably not something I would like in my codebase.
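A small Java snippet illustrating that sign extension, using the value from the question:
long n = 3232235776L;
long mask = 0xFFFFFFFE;                     // an int literal equal to -2, sign-extended when widened to long
System.out.println(Long.toHexString(mask)); // fffffffffffffffe
System.out.println(n & mask);               // 3232235776 (n is even, so its low bit was already 0)
System.out.println((n + 1) & mask);         // 3232235776 (an odd value gets rounded back down)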
No sign extension
Here is the equivalent JavaScript code, if you intended this to happen without sign extension:
var number = 3232235776;
if (number % 2 == 1) {
    number = number - 1;
}
number = number % 0x100000000; // That's 8 zeroes, i.e. keep the last 4 bytes

How to find the closest value of 2^N to a given input?

I somehow have to keep my program running until the output of the exponent function exceeds the input value, and then compare that to the previous output of the exponent function. How would I do something like that, even if in just pseudocode?
Find the logarithm to base 2 of the given number => x := log(2, input)
Round the value acquired in step 1 both down and up => y := floor(x), z := floor(x) + 1
Find 2^y and 2^z, compare them both with the input and choose the one that suits better
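In Java, those three steps might look roughly like this (a minimal sketch with my own method name; assumes input > 0 and ignores floating-point rounding issues near very large powers of two):
public static int closestPowerOfTwo(int input) {
    double x = Math.log(input) / Math.log(2);     // step 1: log base 2
    int lower = (int) Math.pow(2, Math.floor(x)); // step 2: 2^floor(x)
    int upper = lower * 2;                        //         2^(floor(x) + 1)
    return (input - lower <= upper - input) ? lower : upper; // step 3: pick the closer one
}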
Depending on which language you're using, you can do this easily using bitwise operations. You want either the value with a single 1 bit set greater than the highest one bit set in the input value, or the value with the highest one bit set in the input value.
If you set all of the bits below the highest set bit to 1 and then add one, you end up with the next greater power of two. You can right shift this to get the next lower power of two and choose the closer of the two.
unsigned closest_power_of_two(unsigned value)
{
    unsigned above = (value - 1); // handle case where input is a power of two
    above |= above >> 1;          // set all of the bits below the highest bit
    above |= above >> 2;
    above |= above >> 4;
    above |= above >> 8;
    above |= above >> 16;
    ++above;                      // add one, carrying all the way through
                                  // leaving only one bit set.
    unsigned below = above >> 1;  // find the next lower power of two.
    return (above - value) < (value - below) ? above : below;
}
See Bit Twiddling Hacks for other similar tricks.
Apart from the looping there's also one solution that may be faster depending on how the compiler maps the nlz instruction:
public int nextPowerOfTwo(int val) {
    return 1 << (32 - Integer.numberOfLeadingZeros(val - 1));
}
No explicit looping and certainly more efficient than the solutions using Math.pow. Hard to say more without looking at what code the compiler generates for numberOfLeadingZeros.
With that we can then easily get the lower power of 2 and then compare which one is nearer - that last part has to be done for each solution, it seems to me.
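Putting that last part together, a sketch of the comparison built on numberOfLeadingZeros (my own method name; assumes val >= 1, with ties going to the lower power):
public static int nearestPowerOfTwo(int val) {
    int above = 1 << (32 - Integer.numberOfLeadingZeros(val - 1)); // next power of two >= val
    int below = above >>> 1;                                       // the power of two just beneath it
    return (above - val) < (val - below) ? above : below;
}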
set x to 1.
while x < target, set x = 2 * x
then just return x or x / 2, whichever is closer to the target.
public static int neareastPower2(int in) {
    if (in <= 1) {
        return 1;
    }
    int result = 2;
    while (in > 3) {
        in = in >> 1;
        result = result << 1;
    }
    if (in == 3) {
        return result << 1;
    } else {
        return result;
    }
}
I will use 5 as input for an easy example instead of 50.
Convert the input to bits/bytes, in this case 101
Since you are looking for powers of two, your answer will be of the form 10000...00 (a one followed by a certain number of zeros). You take the input value (3 bits) and calculate the integer values of 100 (3 bits) and 1000 (4 bits). The integer 100 will be smaller than the input, the integer 1000 will be larger.
You calculate the difference between the input and the two possible values and use the smaller one. In this case 100 = 4 (difference of 1) while 1000 = 8 (difference of 3), so the answer you're after is 4.
public static int neareastPower2(int in) {
    return (int) Math.pow(2, Math.round(Math.log(in) / Math.log(2)));
}
Here's the pseudo code for a function that takes the input number and returns your answer.
int findit(int x) {
    int a = int(log(x)/log(2));
    if (x >= 2^a + 2^(a-1))
        return 2^(a+1)
    else
        return 2^a
}
Here's a bitwise solution; it will return the lesser of 2^N and 2^(N+1) in case of a tie. This should be very fast compared to invoking the log() function.
let mask = (~0 >>> 1) + 1              // only the top bit set
while (mask > value)
    mask = mask >>> 1                  // mask is now the largest power of two <= value
return (value - mask <= mask >>> 1) ? mask : mask << 1

Bitwise Multiply and Add in Java

I have the methods that do both the multiplication and addition, but I'm just not able to get my head around them. Both of them are from external websites and not my own:
public static void bitwiseMultiply(int n1, int n2) {
    int a = n1, b = n2, result = 0;
    while (b != 0) // Iterate the loop till b == 0
    {
        if ((b & 01) != 0) // Logical ANDing of the value of b with 01
        {
            result = result + a; // Update the result with the new value of a.
        }
        a <<= 1; // Left shifting the value contained in 'a' by 1.
        b >>= 1; // Right shifting the value contained in 'b' by 1.
    }
    System.out.println(result);
}
public static void bitwiseAdd(int n1, int n2) {
    int x = n1, y = n2;
    int xor, and, temp;
    and = x & y;
    xor = x ^ y;
    while (and != 0) {
        and <<= 1;
        temp = xor ^ and;
        and &= xor;
        xor = temp;
    }
    System.out.println(xor);
}
I tried doing a step-by-step debug, but it really didn't make much sense to me, though it works.
What I'm possibly looking for is to try and understand how this works (the mathematical basis perhaps?).
Edit: This is not homework, I'm just trying to learn bitwise operations in Java.
Let's begin by looking at the multiplication code. The idea is actually pretty clever. Suppose that you have n1 and n2 written in binary. Then you can think of n2 as a sum of powers of two: n2 = c30 * 2^30 + c29 * 2^29 + ... + c1 * 2^1 + c0 * 2^0, where each ci is either 0 or 1. Then you can think of the product n1 * n2 as
n1 * n2 =
n1 * (c30 * 2^30 + c29 * 2^29 + ... + c1 * 2^1 + c0 * 2^0) =
n1 * c30 * 2^30 + n1 * c29 * 2^29 + ... + n1 * c1 * 2^1 + n1 * c0 * 2^0
This is a bit dense, but the idea is that the product of the two numbers is given by the first number multiplied by the powers of two making up the second number, times the value of the binary digits of the second number.
The question now is whether we can compute the terms of this sum without doing any actual multiplications. In order to do so, we're going to need to be able to read the binary digits of n2. Fortunately, we can do this using shifts. In particular, suppose we start off with n2 and then just look at the last bit. That's c0. If we then shift the value down one position, then the last bit is c1, etc. More generally, after shifting the value of n2 down by i positions, the lowest bit will be ci. To read the very last bit, we can just bitwise AND the value with the number 1. This has a binary representation that's zero everywhere except the last digit. Since 0 AND n = 0 for any n, this clears all the topmost bits. Moreover, since 0 AND 1 = 0 and 1 AND 1 = 1, this operation preserves the last bit of the number.
Okay - we now know that we can read the values of ci; so what? Well, the good news is that we can also compute the values of the series n1 * 2^i in a similar fashion. In particular, consider the sequence of values n1 << 0, n1 << 1, etc. Any time you do a left bit-shift, it's equivalent to multiplying by a power of two. This means that we now have all the components we need to compute the above sum. Here's your original source code, commented with what's going on:
public static void bitwiseMultiply(int n1, int n2) {
    /* This value will hold n1 * 2^i for varying values of i. It will
     * start off holding n1 * 2^0 = n1, and after each iteration will
     * be updated to hold the next term in the sequence.
     */
    int a = n1;

    /* This value will be used to read the individual bits out of n2.
     * We'll use the shifting trick to read the bits and will maintain
     * the invariant that after i iterations, b is equal to n2 >> i.
     */
    int b = n2;

    /* This value will hold the sum of the terms so far. */
    int result = 0;

    /* Continuously loop over more and more bits of n2 until we've
     * consumed the last of them. Since after i iterations of the
     * loop b = n2 >> i, this only reaches zero once we've used up
     * all the bits of the original value of n2.
     */
    while (b != 0)
    {
        /* Using the bitwise AND trick, determine whether the ith
         * bit of b is a zero or one. If it's a zero, then the
         * current term in our sum is zero and we don't do anything.
         * Otherwise, then we should add n1 * 2^i.
         */
        if ((b & 1) != 0)
        {
            /* Recall that a = n1 * 2^i at this point, so we're adding
             * in the next term in the sum.
             */
            result = result + a;
        }

        /* To maintain that a = n1 * 2^i after i iterations, scale it
         * by a factor of two by left shifting one position.
         */
        a <<= 1;

        /* To maintain that b = n2 >> i after i iterations, shift it
         * one spot over.
         */
        b >>>= 1;
    }

    System.out.println(result);
}
Hope this helps!
It looks like your problem is not Java, but just calculating with binary numbers. Start off simple:
(all numbers binary:)
0 + 0 = 0 # 0 xor 0 = 0
0 + 1 = 1 # 0 xor 1 = 1
1 + 0 = 1 # 1 xor 0 = 1
1 + 1 = 10 # 1 xor 1 = 0 ( read 1 + 1 = 10 as 1 + 1 = 0 and 1 carry)
Ok... You see that you can add two one digit numbers using the xor operation. With an and you can now find out whether you have a "carry" bit, which is very similar to adding numbers with pen&paper. (Up to this point you have something called a Half-Adder). When you add the next two bits, then you also need to add the carry bit to those two digits. Taking this into account you can get a Full-Adder. You can read about the concepts of Half-Adders and Full-Adders on Wikipedia:
http://en.wikipedia.org/wiki/Adder_(electronics)
And many more places on the web.
I hope that gives you a start.
With multiplication it is very similar, by the way. Just remember how you did multiplication with pen & paper in elementary school. That's what is happening here, just with binary numbers instead of decimal numbers.
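Here is a tiny illustration of that half-adder idea in Java (my own example values): XOR produces the sum bits, AND shifted left produces the carry, and repeating the two until the carry dies out is exactly what bitwiseAdd does.
int x = 0b0110, y = 0b0011;               // 6 + 3
int sum = x ^ y;                          // 0101 - sum bits without carries
int carry = (x & y) << 1;                 // 0100 - carries, moved one position left
int sum2 = sum ^ carry;                   // 0001 - second round: fold the carry back in
int carry2 = (sum & carry) << 1;          // 1000
System.out.println(sum2 ^ carry2);        // 9  - third round, carry is now gone
System.out.println((sum2 & carry2) << 1); // 0  - done, and 6 + 3 is indeed 9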
EXPLANATION OF THE bitwiseAdd METHOD:
I know this question was asked a while back but since no complete answer has been given regarding how the bitwiseAdd method works here is one.
The key to understanding the logic encapsulated in bitwiseAdd is found in the relationship between addition operations and xor and and bitwise operations. That relationship is defined by the following equation (see appendix 1 for a numeric example of this equation):
x + y = 2 * (x&y)+(x^y) (1.1)
Or more simply:
x + y = 2 * and + xor (1.2)
with
and = x & y
xor = x ^ y
You might have noticed something familiar in this equation: the and and xor variables are the same as those defined at the beginning of bitwiseAdd. There is also a multiplication by two, which in bitwiseAdd is done at the beginning of the while loop. But I will come back to that later.
Let me also make a quick side note about the '&' bitwise operator before we proceed further. This operator basically "captures" the intersection of the bit sequences against which it is applied. For example, 9 & 13 = 1001 & 1101 = 1001 (= 9). You can see from this result that only those bits common to both bit sequences are copied to the result. It follows from this that when two bit sequences have no common bit, the result of applying '&' to them yields 0. This has an important consequence for the addition-bitwise relationship, which shall become clear soon.
Now the problem we have is that equation 1.2 uses the '+' operator whereas bitwiseAdd doesn't (it only uses '^', '&' and '<<'). So how do we make the '+' in equation 1.2 somehow disappear? Answer: by 'forcing' the and expression to return 0. And the way we do that is by using recursion.
To demonstrate this I am going to recurse equation 1.2 one time (this step might be a bit challenging at first but if needed there's a detailed step by step result in appendix 2):
x + y = 2*(2*and & xor) + (2*and ^ xor) (1.3)
Or more simply:
x + y = 2 * and[1] + xor[1] (1.4)
with
and[1] = 2*and & xor,
xor[1] = 2*and ^ xor,
[1] meaning 'recursed one time'
There are a couple of interesting things to note here. First, notice how the concept of recursion sounds close to that of a loop - like the one found in bitwiseAdd, in fact. This connection becomes even more obvious when you consider what and[1] and xor[1] are: they are the same expressions as the and and xor expressions defined INSIDE the while loop in bitwiseAdd. We also note that a pattern emerges: equation 1.4 looks exactly like equation 1.2!
As a result of this, doing further recursions is a breeze, if one keeps the recursive notation. Here we recurse equation 1.2 two more times:
x + y = 2 * and[2] + xor[2]
x + y = 2 * and[3] + xor[3]
This should now highlight the role of the 'temp' variable found in bitwiseAdd: temp allows to pass from one recursion level to the next.
We also notice the multiplication by two in all those equations. As mentioned earlier, this multiplication is done at the beginning of the while loop in bitwiseAdd using the and <<= 1 statement. This multiplication has a consequence on the next recursion stage since the bits in and[i] are different from those in the and of the previous stage (and if you recall the little side note I made earlier about the '&' operator, you probably see where this is going now).
The general form of equation 1.4 now becomes:
x + y = 2 * and[n] + xor[n] (1.5)
with [n] meaning 'recursed n times'
FINALLY:
So when does this recursion business end exactly?
Answer: it ends when the intersection between the two bit sequences in the and[n] expression of equation 1.5 returns 0. The equivalent of this in bitwiseAdd happens when the while loop condition becomes false. At this point equation 1.5 becomes:
x + y = xor[n] (1.6)
And that explains why in bitwiseAdd we only return xor at the end!
And we are done! A pretty clever piece of code this bitwiseAdd I must say :)
I hope this helped
APPENDIX:
1) A numeric example of equation 1.1
equation 1.1 says:
x + y = 2(x&y)+(x^y) (1.1)
To verify this statement one can take a simple example, say adding 9 and 13 together. The steps are shown below (the bitwise representations are in parenthesis):
We have
x = 9 (1001)
y = 13 (1101)
And
x + y = 9 + 13 = 22
x & y = 9 & 13 = 9 (1001 & 1101 = 1001)
x ^ y = 9^13 = 4 (1001 ^ 1101 = 0100)
plugging that back into equation 1.1 we find:
9 + 13 = 2 * 9 + 4 = 22 et voila!
2) Demonstrating the first recursion step
The first recursion equation in the presentation (equation 1.3) says that
if
x + y = 2 * and + xor (equation 1.2)
then
x + y = 2*(2*and & xor) + (2*and ^ xor) (equation 1.3)
To get to this result, we simply took the 2*and + xor part of equation 1.2 above and applied the addition/bitwise operator relationship given by equation 1.1 to it. This is demonstrated as follows:
if
x + y = 2(x&y) + (x^y) (equation 1.1)
then
[2(x&y)] + (x^y) = 2 ([2(x&y)] & (x^y)) + ([2(x&y)] ^ (x^y))
(left side of equation 1.1) (after applying the addition/bitwise operator relationship)
Simplifying this with the definitions of the and and xor variables of equation 1.2 gives equation 1.3's result:
[2(x&y)] + (x^y) = 2*(2*and & xor) + (2*and ^ xor)
with
and = x&y
xor = x^y
And using that same simplification gives equation 1.4's result:
2*(2*and & xor) + (2*and ^ xor) = 2*and[1] + xor[1]
with
and[1] = 2*and & xor
xor[1] = 2*and ^ xor
[1] meaning 'recursed one time'
Here is another approach for Multiplication
/**
 * Multiplication of binary numbers without using '*' operator
 * uses bitwise Shifting/Anding
 *
 * @param n1
 * @param n2
 */
public static void multiply(int n1, int n2) {
    int temp, i = 0, result = 0;
    while (n2 != 0) {
        if ((n2 & 1) == 1) {
            temp = n1;
            // result += (temp >>= (1/i)); // To do it only using Right shift
            result += (temp <<= i); // Left shift (temp * 2^i)
        }
        n2 >>= 1; // Right shift n2 by 1.
        i++;
    }
    System.out.println(result);
}
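A quick sanity check, assuming the three methods above sit in the same class (expected output in the comments):
public static void main(String[] args) {
    bitwiseMultiply(9, 13); // prints 117
    bitwiseAdd(9, 13);      // prints 22
    multiply(9, 13);        // prints 117
}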
