I saw this efficiently written code on Leetcode.com:
public static boolean isPowerOfTwo(int n) {
return n>0 && ((n&(n-1))==0);
}
This works fine, but I am not able to figure out what the single '&' in the code is doing.
Can someone take the effort to explain how the code works?
And, by the same logic, what would be the code to determine if an integer is a power of 3?
The single & is a bitwise 'and' operator (as opposed to &&, which acts on booleans).
So when you use & on two integers, the result is the bitwise 'and' of their binary representations.
This code works because any power of 2 will be a 1 followed by some number of 0s in binary (e.g., 4 is 100, 8 is 1000, etc). Any power of 2, less one, will just be all 1s (e.g., 3 is 11, 7 is 111).
So, if you take a power of 2 and bitwise-AND it with itself minus 1, you should just get 0. However, anything other than a power of 2 would give a non-zero answer.
Example:
1000 = 8
0111 = 7 (8-1), and '&'ing these gives
0000 = 0
However, if you had something like 6 (which isn't a power of 2):
110 = 6
101 = 5 (6-1), and '&'ing these gives
100 = 4 (which isn't equal to 0, so the code would return false).
I hope that makes it clear!
The & in Java is a bitwise AND operator. It takes two integers and ANDs them bit by bit, producing an int where each bit is set to '1' if and only if that bit was '1' in both operands. The code uses the fact that any power of two in binary is a '1' followed by some number of '0's, which means that subtracting one flips ALL the bits of the number. For any non-power of two, there is at least one nonzero digit after the first, so the first digit remains the same. Since a bit ANDed with its flipped counterpart is always '0', ANDing the original number with itself minus one produces 0 if and only if every bit flipped, i.e. if and only if that number is a power of two. Because this is a trick with binary numbers specifically, it wouldn't work for finding powers of other bases.
To understand how this function works you need to understand how binary numbers are represented. If you don't, I suggest reading a tutorial such as Learn Binary (the easy way).
So say we have a number, 8, and we want to find out if it's a power of two. Let's convert it to binary first: 1000. Now let's look at 8-1 = 7's binary form: 0111. The & operator is for binary AND. When we apply the AND operator to 8 and 7 we get:
1000
0111
&----
=0000
Every integer which is a power of 2 is a 1 followed by a non-negative number of zeroes. When you subtract 1 from such a number you will always get a 0 followed by a sequence of 1s. Since applying the AND operation to those two numbers will always give you 0, you can verify whether a number is a power of 2 this way. If the number is not a power of 2, subtracting 1 from it won't invert all of its digits, and the AND test will produce a nonzero result (fail).
It's a bitwise operator:
Take 2 to the power 3, which equals 8:
2³ = 2×2×2 = 8
Now, to check whether 8 is a power of 2, it works like this:
n&(n-1) --> 8 AND (8-1) --> 1000 AND 0111 = 0000
which satisfies the condition (n&(n-1))==0.
The single "&" performs a bitwise AND operation, meaning that in the result of A & B with A and B being integers only those bits will be set to 1 where both A and B have a 1.
For example, let's look at the number 16:
16 & (16 - 1) =
00010000 &
00001111 =
00000000
This works for powers of two because any power of two minus one will have all lower bits set. In other words, n bits can express 2^n different values including zero, so (2^n)-1 is the highest value that can be expressed in n bits when they're all set.
I hope this helps.
Powers of three are a bit more problematic as our computers don't use ternary numbers. Basically a power of three is any ternary number that has only one nonzero digit, and that digit is a '1', just like in any other number system.
Off the top of my head, I can't come up with anything more elegant than repeatedly dividing by 3 until you reach 1 as the division result (in which case you have a power of three) or hit a nonzero remainder (which means it's not a power of three).
Maybe this can help as well: http://www.tutorialspoint.com/computer_logical_organization/number_system_conversion.htm
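A minimal sketch of that repeated-division approach in Java (assuming a positive int input; the method name mirrors the question's isPowerOfTwo):

public static boolean isPowerOfThree(int n) {
    if (n < 1) return false;
    while (n % 3 == 0) {  // strip factors of 3
        n /= 3;
    }
    return n == 1;        // only pure powers of 3 reduce all the way to 1
}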
What is "consecutive in gray code" supposed to mean? I mean 10 and 11 are consecutive in decimal system but what is "consecutive in gray code" meaning? I only know gray code is a binary numeral system where two successive values differ in only one bit.
Here is a solution online but I cannot understand this
private static int graycode(byte term1, byte term2) {
byte x = (byte)(term1^term2); // why use XOR?
int count = 0;
while(x!=0)
{
x = (byte)(x &(x-1)); // why use bitwise operator?
count++; // what is count?
}
return count == 1 ? 1 : 0;
}
I spent an hour trying to understand it, but I still don't have a clue.
Two numbers are considered consecutive in gray code if they differ by only one bit in their binary representation e.g. 111 and 101 differ by only the 2nd bit. The function you have checks if two input bytes have only one bit that makes them different. So 111 and 101 would return 1 from the function whereas 111 and 100 would return 0.
XOR is used to find differences between both numbers; XOR yields 1 when bits are different and 0 otherwise e.g. 1111 XOR 1011 would give 0100. So with XOR, each bit difference is highlighted by a 1 in that position. If both numbers are consecutive gray codes then there should be only one 1 in the XOR's result. More than one 1 would indicate multiple differences thus failing the criterion. The XOR result is stored in variable x.
The next task is then to count the number of 1's -- hence the variable count. If you try other gray code pairs (of greater bit length), you will notice the XOR value obtained will always be in this format (neglecting leading zeros): 10, 100, 1000, etc. Basically, 1 followed by zeros or, in other words, always a power of 2.
If these sample XOR results were decremented by 1, you would get: 01, 011, 0111, etc. If these new values were ANDed with the original XOR results, 0 would be the result every time. This is the logic implemented in your solution: for a consecutive gray code pair, the while loop would run only once (and increment count) after which it would terminate because x had become 0. So count = 1 at the end. For a non-consecutive pair, the loop would run more than once (try it) and count would be greater than 1 at the end.
The function uses this as a basis to return 1 if count == 1 and 0 otherwise.
A bit obscure but it gets the job done.
It means the two numbers differ in exactly one bit.
So the solution begins with xor'ing the two numbers. The xor operation results in a 1 where the bits of the operands differ, else zero.
So you need to count the number of bits in the XOR result and compare with 1. That's what your downloaded example does. This method of counting 1's in a binary number is rather well known and due to Brian Kernighan. The statement x = (byte)(x & (x-1)) is bit magic that resets the lowest-order 1 bit to zero. There are lots of other such tricks.
Alternatively, you could search a table of the 8 possible bytes with exactly one bit set:
byte one_bit_bytes[] = { 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, (byte) 0x80 };
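A hypothetical sketch of how that lookup could be used (the method name and loop are illustrative, not from the original answer):

static boolean adjacentGray(byte a, byte b) {
    byte one_bit_bytes[] = { 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, (byte) 0x80 };
    byte diff = (byte) (a ^ b);           // 1s mark where a and b differ
    for (byte pattern : one_bit_bytes) {
        if (diff == pattern) return true; // exactly one differing bit
    }
    return false;
}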
It is a very non-intuitive way to count how many bits in a binary number are equal to '1'.
It requires a knowledge of binary arithmetic. Start with what happens when you subtract 1 from a decimal number written as a '1' followed by one or more zeroes: you get a sequence of 9's whose length is equal to the number of zeroes:
1000000 - 1 = 999999
A similar thing happens with binary numbers. If you subtract 1 from a positive binary number, all the lowest '0' digits are replaced by '1', and the '1' just before these zeroes is replaced by zero. This follows from the way borrowing is done in binary. Example:
0101_0000_0001_0000 - 1 = 0101_0000_0000_1111
aaaa aaaa aaab cccc -> aaaa aaaa aaab cccc
Notation: Underscores to improve legibility. All the digits that appear above the letter a are unchanged. The digit '1' that appears above the letter b is changed to a '0'. And the digits '0' that appear above the letter c are changed to '1'.
The next step consists of doing a bitwise AND operation between the two numbers X and (X-1). With the arithmetic property described above, each iteration removes exactly one '1' digit from the number (starting from the right, i.e. the least significant bit).
By counting the number of iterations, we can know how many '1' bits were initially present in number X. The iteration stops when the variable X equals zero.
Other people have already answered the question about gray code. My answer only explains how the "bit counting" works (after XOR'ing the two values).
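For reference, here is the bit-counting loop on its own as a Java method (a minimal sketch; the method name is illustrative):

static int countOnes(int x) {
    int count = 0;
    while (x != 0) {
        x &= x - 1;  // clears the lowest-order set bit
        count++;
    }
    return count;
}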
Here is a naive test for a particular Gray code monotonic ordering (the binary reflected Gray code):
// convert Gray code binary number to base 2 binary number
int Base2(byte Gray){ Gray^=Gray>>4; Gray^=Gray>>2; return Gray^=Gray>>1; }
// test if Gray codes are consecutive using "normal" base 2 numbers
boolean GraysAdjacent(byte x, byte y){ return 1 == abs(Base2(x)-Base2(y)); }
see especially this answer (best):
How to find if two numbers are consecutive numbers in gray code sequence
coded in C as:
int GraysTouch(byte x, byte y){ return !( (x^y ^ 1) && ( x^y ^ (y&-y)<<1 ) ); }
// test x marks the spots! (where they touch!)
for(int i=31; i>-1; --i )
for(int j=31; j>-1; --j )
Serial.print((String)(GraysTouch( i^i>>1, j^j>>1 )?"x":".") +
(GraysTouch( j^j>>1, i^i>>1 )?"X":".") + (j?"":"\n"));
How this works will be explained for the code above, and not for the OP's code, because the OP's code is highly suspect (see the Caveats commentary below).
A property of XOR, aka the ^ operator, is that bits that match are 0 and bits that are different are 1.
1^0 == 0^1 == 1
1^1 == 0^0 == 0
Also, for a bit b, 0 XOR b works as the identity function, i.e. simply b, and 1 XOR b works as the complement (no compliments please) function, i.e. ~b.
id(x) == x == x^0
opposite(x) == ~x == x^11111111 Why eight 1's? Are eight enough?
When comparing two bit strings with XOR, bits that differ XOR to 1; otherwise the bits match and the XOR is 0:
0101 0001111001100111000
XOR 0011 XOR 0001111001100000111
------ ---------------------
0110 0000000000000111111
This explains the x^y part of the code above.
----------------------------------------------------------------------
An aside:
n^n>>1 does a quick conversion from base 2 binary to the Gray code binary numbers used here.
Also note how potent it is that a^b^b == a for any b: XOR'ing by b is its own inverse!
An in-place swap is then a=a^b; b=a^b; a=a^b;.
Unrolled: c=a^b; d=c^b; e=c^d; i.e. d=a^b^b=a and e=a^b^a=b.
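A quick Java check of the in-place swap (a throwaway snippet; the values are arbitrary):

int a = 5, b = 9;
a = a ^ b;
b = a ^ b;  // b is now 5
a = a ^ b;  // a is now 9
System.out.println(a + " " + b);  // prints: 9 5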
----------------------------------------------------------------------
Now, by definition, for two Gray coded numbers to be adjacent or consecutive there must be one and only one bit that can change and be different.
Examples:
Johnson
 code
 000    000    000    000
 001    001    001    100
 011    101    011    110
 111    111    010    010
 110    011    110    011
 100    010    111    111
        110    101    101
        100    100    001
               ^^^
         this Gray coding
         is the one used here
Examine it carefully.
Case 1
When the lowest order bit of consecutive numbers, x and y, for any of the Gray codes, are different, the rest must be the same! This is the definition of a Gray code. This means x^y must look like 0000...0001.
Remember complement, the ~ function, aka 1^b? To test the last bit, x^y is XOR'd with 1.
This explains the x^y ^ 1.
-------------------------------------------
Case 2
The location of the different bit in the consecutive Gray code numbers x and y is not the lowest order bit. Look carefully at these Gray code consecutive numbers.
001 010 101 lower order bits all match
011 110 111
| | | <-- | mark location of lowest 1
010 100 010 <-- XOR's
Interestingly, in this Gray code, when the lowest order bits match in x and y, so too does the location of the lowest order 1.
Even more interesting is that, for consecutive numbers, the bits are always different (for this Gray code) in the next higher order bit position!
So, x^y looks like ???...?1000...0, where 1000...0 must have at least one 0, i.e. is at least 10 (Why?), and ???...? are the mystery bits that, for consecutive Gray code numbers, must be 000...0. (Why? i.e. to be consecutive, x^y must look like ... )
The observation is that
x^y looks like ???...?100...0 if and only if
x and y look like ???...?:10...0
| <-- remember? the 1 location !!
The | location can be found by either x&-x or y&-y. (Why? Why must the - be done using a 2's complement machine?)
However, the : location must be checked to see that it is 1 (Why?) and the ???...? are 000...0. (Why?)
So,
x^y looks like ???...?100...0 and
(y&-y)<<1 looks like 000...0100...0
and this explains the x^y ^ ((y&-y)<<1) test.
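The same test transliterated into Java, as a hedged sketch (using explicit equality checks instead of C's ! and && idiom):

static boolean graysTouch(int x, int y) {
    int d = x ^ y;
    // consecutive iff exactly the last bit differs (case 1),
    // or the differing bit sits one left of y's lowest 1 (case 2)
    return d == 1 || d == ((y & -y) << 1);
}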
-------------------------------------------------------------------
Why this works: ... is a consequence of the properties of the particular Gray code used here. An examination and explanation of why this Gray code has these properties is too complicated to give here.
----------------------------------------------------------------------
Commentary on the inadequacies of previous answers due to OP code issues.
Caveat 1: Just to be explicit, the algorithm in the OP's question:
private static int graycode(byte term1, byte term2) {
byte x = (byte)(term1^term2); // why use XOR?
int count = 0;
while(x!=0)
{
x = (byte)(x &(x-1)); // why use bitwise operator?
count++; // what is count?
}
return count == 1 ? 1 : 0;
}
has an interesting interpretation of consecutive Gray codes. It does report correctly when any two binary sequences differ in a single bit position.
If, by consecutive codes it is meant that the Gray codes are used to enumerate a monotonic ordering, there is a problem.
Specifically, the code will return true for all these pairs:
000, 001 or 000, 010 or 000, 100
so an ordering might be 001, 000, 010 but then where can 100 go?
The algorithm reports (correctly) that the "consecutiveness" of 100 with either of 001 or 010 is false.
Thus 100 must immediately precede or follow 000 in an enumeration but cannot immediately precede or follow 001 or 010. DOH!!!
Caveat 2: Note x = (byte)(x & (x-1)) resets the lowest order 1 bit of x to zero.
refs:
Gray code increment function
Deriving nth Gray code from the (n-1)th Gray Code
https://electronics.stackexchange.com/questions/26677/3bit-gray-counter-using-d-flip-flops-and-logic-gates
How do I find next bit to change in a Gray code in constant time?
How to find if two numbers are consecutive numbers in gray code sequence
I understand that using a short in Java we can store a minimum value of -32,768 and a maximum value of 32,767 (inclusive).
And using an int we can store a minimum value of -2^31 and a maximum value of 2^31-1
Question: Suppose I have an int[] numbers, and the numbers I store are positive and at most 10 million.
Is it possible to somehow store these numbers without having to use 4 bytes for each? I am wondering if, for a specific "small" range, there might be some "hack/trick" that would let me use less memory than numbers.length*4.
You could attempt to use a smaller number of bits by using masking or bit-operations to represent each number, and then perform a sign-extension later on if you wish to get the full number of bits. This kind of operation is done on a system-architecture level in nearly all computer systems today.
It may help you to research 2's Complement, which seems to be what you are going for... And possibly Sign Extension for good measure.
Typically, in high-level languages an int is represented by the basic size of the processor register, e.g. 8, 16, 32, or 64 bits.
If you use a 2's-Complement method, you could easily account for the full spectrum of positive and negative numbers if needed. This is also very easy on the hardware, because you only have to invert all the bits and then add 1, which may prove to give you a big performance increase over other possible methods.
How 2's Complement Works:
You get -N by inverting all bits and then adding 1; that is, take the 1's complement of N and then add 1 to it.
For example, with 8-bit words:
9 = 00001001
-9 = 11110111 (11110110 + 1)
This is easy and efficient in hardware (invert, then +1).
An n-bit word can be used to represent numbers from -2^(n-1) to +(2^(n-1) - 1).
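A quick Java check of the invert-and-add-one rule (throwaway snippet; Java ints are 32-bit):

int n = 9;
System.out.println(Integer.toBinaryString(-n));  // 11111111111111111111111111110111
System.out.println((~n + 1) == -n);              // true: negation is inversion plus one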
UPDATE: Bit-operations to represent larger numbers.
If you are trying to get a larger number, say 1,000,000 as in your comment, then you can use a bitwise left-shift operation, which multiplies your current number by the appropriate power of 2.
9 (base 10): 00000000000000000000000000001001 (base 2)
--------------------------------
9 << 2 (base 10): 00000000000000000000000000100100 (base 2) = 36 (base 10)
You could also try:
(Zero-fill right shift)
This operator shifts the first operand the specified number of bits to the right. Excess bits shifted off to the right are discarded. Zero bits are shifted in from the left. The sign bit becomes 0, so the result is always non-negative.
For non-negative numbers, zero-fill right shift and sign-propagating right shift yield the same result. For example, 9 >>> 2 yields 2, the same as 9 >> 2:
9 (base 10): 00000000000000000000000000001001 (base 2)
--------------------------------
9 >>> 2 (base 10): 00000000000000000000000000000010 (base 2) = 2 (base 10)
However, this is not the case for negative numbers. For example, -9 >>> 2 yields 1073741821, which is different than -9 >> 2 (which yields -3):
-9 (base 10): 11111111111111111111111111110111 (base 2)
--------------------------------
-9 >>> 2 (base 10): 00111111111111111111111111111101 (base 2) = 1073741821 (base 10)
As others have stated in the comments, you could actually hamper your overall performance in the long-run if you are attempting to manipulate data that is not specifically word/double/etc-aligned. This is because your hardware will have to work a bit harder to try and piece together what you truly need.
Just another thought. One parameter is the range of numbers you have. But also other properties can help save storage. For example, when you know that each number will be divisible by some multiple of 8, you need not store the lower 3 bits, since you know they are 0 all the time. (This is how the JVM stores "compressed" references.)
Or, to take another possible scenario: When you store prime numbers, then all of them (except 2) will be odd. So no need to store the lowest bit, as it is always 1. Of course you need to handle 2 separately. A similar trick is used in floating point representations: since the first bit of the mantissa of a nonzero number is always 1, it is not stored at all, thus increasing precision by 1 bit.
One solution is to use bit manipulation and use a number of bits of your choosing to store a single number. Say you select to use 5 bits: you can then pack six such numbers into 4 bytes (32 bits). You need to pack and unpack the bits into an integer when operations need to be done.
You need to decide if you want to deal with negative numbers in which case you need to store a sign bit.
To make it easier to use, you need to create a class that will conceal the nitty-gritty details via get and store operations.
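As a concrete illustration, here is a minimal sketch of such a class, assuming non-negative values and a fixed width of 24 bits (enough for values up to about 16.7 million, so it covers the 10 million case). All names are illustrative:

class PackedIntArray {
    private static final int BITS = 24;                // bits per stored value
    private static final long MASK = (1L << BITS) - 1;
    private final long[] words;

    PackedIntArray(int length) {
        words = new long[(int) (((long) length * BITS + 63) / 64)];
    }

    void set(int i, int value) {
        long bit = (long) i * BITS;
        int w = (int) (bit >>> 6);                     // which 64-bit word
        int off = (int) (bit & 63);                    // offset inside it
        words[w] = (words[w] & ~(MASK << off)) | ((long) value << off);
        int spill = off + BITS - 64;                   // bits overflowing into the next word
        if (spill > 0) {
            words[w + 1] = (words[w + 1] & ~((1L << spill) - 1))
                         | ((long) value >>> (BITS - spill));
        }
    }

    int get(int i) {
        long bit = (long) i * BITS;
        int w = (int) (bit >>> 6);
        int off = (int) (bit & 63);
        long v = words[w] >>> off;
        if (off + BITS > 64) {
            v |= words[w + 1] << (64 - off);           // pull in the spilled bits
        }
        return (int) (v & MASK);
    }
}

Usage would be along the lines of new PackedIntArray(1000), then set(0, 9999999) and get(0); you trade some CPU work per access for a 25% memory saving over an int[].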
In light of the questions about performance, as is often the case, we are trading space for performance or vise versa. Depending on the situation various optimization techniques can be used to minimize the number of CPU cycles.
That said, is there a need for such optimization in the first place? If so, is it at the memory level or storage level? Could we use a generic mechanism such as compression to take care of this instead of using special techniques?
This code segment:
(x >>> 3) & ((1 << 5) - 1)
apparently results in a 5-bit integer with bits 3 - 7 of x.
How would you go about understanding this?
Let's look at ((1 << 5) - 1) first.
1 << 5 is equal to 100000 in binary.
When we subtract 1, we're left with 11111, a binary number of five 1s.
Now, it's important to understand that a & 0b11111 is an operation that keeps only the 5 least significant bits of a. Recall that the & of two bits is 1 if and only if both of the bits are 1. Any bits in a above the 5th bit, therefore, will become 0, since bit & 0 == 0. Moreover, all of the bits from bit 1 to bit 5 will retain their original value, since bit & 1 == bit (0 & 1 == 0 and 1 & 1 == 1).
Now, because we shift the bits of x in x >>> 3 down by 3, losing the three least significant bits of x, we are applying the process above to bits 4 to 8 (starting at index 1). Hence, the result of the operation retains only those bits (if we say the first bit is bit 0, then that would indeed be bit 3 to bit 7, as you've stated).
Let's take an example: 1234. In binary, that's 10011010010. So, we start with the shift by 3:
10011010010 >>> 3 = 10011010
Essentially we just trim off the last 3 bits. Now we can perform the & operation:
10011010
& 00011111
--------
00011010
So, our final result is 11010. As you can see, the result is as expected:
bits  |  1  0  0  1  1  0  1  0  0  1  0
index | 10  9  8  7  6  5  4  3  2  1  0
                  ^-----------^
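You can confirm the worked example in Java directly (throwaway snippet):

int x = 1234;                                        // 10011010010 in binary
int result = (x >>> 3) & ((1 << 5) - 1);
System.out.println(Integer.toBinaryString(result));  // prints 11010 (decimal 26)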
(x >>> 3)
Shifts x right 3 bits logically, i.e. not sign-extending at the left. The lower-order 3 bits are lost. (This is equivalent to an unsigned division by 8.)
1 << 5
Shifts 1 left 5 bits, i.e. multiplies it by 32, yielding 0b00000000000000000000000000100000.
-1
Subtracts one from that, giving 31, or 0b00000000000000000000000000011111.
&
ANDs these together, yielding only the lower-order 5 bits of the result of x >>> 3, in other words bits 3..7 of the original x.
"How would you go about understanding this?".
I assume that you are actually asking how you should go about understanding it. (As distinct from someone just explaining it to you ...)
The way to understand it is to "hand execute" it.
Get a piece of paper and a pencil.
Based on your understanding of how Java operator precedence works, figure out the order in which the operations will be performed.
Based on your understanding of each operator, write the input patterns of bits on the piece of paper and "hand execute" each operation ... in the correct order.
If you do this a few times with a few values of x, you should get to understand why this expression gives you a 5 bit number.
If you repeat this exercise for a few other examples, you should get to the point where you don't need to go through the tedious process of working it out with a pencil and paper.
I see that @arshajii has essentially done this for you for this example. But I think you will get a deeper understanding if you do / repeat the work for yourself.
One thing to remember about integer and bitwise operations in Java is that the operations are always performed using 32 or 64 bit operations ... even if the operands are 8 or 16 bit. Another thing to remember (though it is not relevant here) is that the right hand operand of a shift operator is chopped to 5 or 6 bits, depending on whether this is a 32 or 64 bit operation.
If the shifted number is positive >>> and >> work the same.
If the shifted number is negative >>> fills the most significant bits with 1s whereas >> operation shifts filling the MSBs with 0.
Is my understanding correct?
If negative numbers were stored with just the MSB set to 1, and not the 2's complement way that Java uses, then the operators would behave entirely differently, correct?
The way negative numbers are represented is called 2's complement. To demonstrate how this works, take -12 as an example. 12, in binary, is 00001100 (assume integers are 8 bits though in reality they are much bigger). Take the 2's complement by simply inverting every bit, and you get 11110011. Then, simply add 1 to get 11110100. Notice that if you apply the same steps again, you get positive 12 back.
The >>> shifts in zero no matter what, so 12 >>> 1 should give you 00000110, which is 6, and (-12) >>> 1 should give you 01111010, which is 122. If you actually try this in Java, you'll get a much bigger number since Java ints are actually much bigger than 8 bits.
The >> shifts in a bit identical to the highest bit, so that positive numbers stay positive and negative numbers stay negative. 12 >> 1 is 00000110 (still 6) and (-12) >> 1 would be 11111010 which is negative 6.
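You can see the difference with real 32-bit Java ints (throwaway snippet):

System.out.println(12 >>> 1);   // 6
System.out.println(12 >> 1);    // 6, same for positive numbers
System.out.println(-12 >>> 1);  // 2147483642, a zero was shifted into the sign bit
System.out.println(-12 >> 1);   // -6, the sign bit was copied, so it stays negative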
Definition of the >>> operator in the Java Language Specification:
The value of n>>>s is n right-shifted s bit positions with zero-extension. If n is positive, then the result is the same as that of n>>s; if n is negative, the result is equal to that of the expression (n>>s)+(2<<~s) if the type of the left-hand operand is int, and to the result of the expression (n>>s)+(2L<<~s) if the type of the left-hand operand is long.
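A quick sanity check of that identity for int (throwaway snippet; the addition wraps around on overflow, as the specification intends):

int n = -12, s = 2;
System.out.println(n >>> s);               // 1073741821
System.out.println((n >> s) + (2 << ~s));  // 1073741821, the same value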
Just the opposite: >>> fills with zeros, while >> fills with ones if the high-order bit is 1.
I am trying to create a simple function that utilizes modular arithmetic. This is essentially a number line that wraps around. Specifically I want to use a Mod 8 number line in Java.
What I want is to compare two numbers between 0 and 7. I want to subtract these numbers to get a difference score. However, instead of 0-7=-7, I want it to equal 1. The idea being that after you reach 7, the number line wraps around back to 0 (therefore 0 and 7 are only one space across.)
Are there any packages that fit this criterion?
how about ((0-7)+8) % 8 ? This should fix up your case.
Note: % is the modulo (remainder) operator.
It appears you want to reverse what negative numbers do with modulos. Keep in mind that the modulus is the remainder after integer division. Normally you would have a range that looks like this:
-7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7
You want it to look like this for the same series of values:
1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
If you want to solve for the general case where you can have any negative number (such that it will work for -15, -20, -27 as well as -7) then you have to adjust it after the modulus, like this:
int m = x % 8;
m = (m < 0) ? m + 8 : m;
Essentially this leaves the positive case alone, and will adjust the negative case so the numbers roll over as you want them to.
An alternative way to do this with straight math is to take the modulus twice:
int m = ((x % 8) + 8) % 8;
The first modulus gives you your expected range from -7 to 7. The addition adjusts the negative modulus so that it is positive, but of course moves the positive values above 7. The second modulus ensures that all the answers are in the range 0 to 7. This should work for any negative number as well as any positive number.
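Wrapped up as a tiny helper (a sketch; Java 8 later standardized exactly this behavior as Math.floorMod):

static int wrapMod(int x, int m) {
    return ((x % m) + m) % m;  // always in 0..m-1 for m > 0
}

So wrapMod(0 - 7, 8) returns 1, as desired.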
It sounds like you need to use the % modulo operator. Perhaps write a set of integer functions which work with modular math, e.g. a modular add would be (a + b) % 8;
The modulo operation is what you want. However, the % operator in Java, which is often called modulo, isn't the mathematical modulo. It's rather the remainder operator. The difference is subtle and often irrelevant. It's only important if you have negative parameters like in your case. I think Wikipedia can explain the exact difference.
For you're "wrap around" you need the mathematical version of modulo which sadly isn't implemented in Java for Integer. However, the BigInteger class has a mod() function which does exactly what you need:
BigInteger.valueOf(0-7).mod(BigInteger.valueOf(8)).longValue()
It's not pretty but works.
Um... there is the built-in modulus operator %, which is also present in basically every other language that's at all popular these days.