As part of a serial data protocol decoder, I must decode data that has sync bits inserted (bits that are not part of the data and are always '1'). I need to remove the sync bits and assemble the data by shifting the remaining bits left. Each 32-bit word has a different pattern of sync bits. I know what the patterns are, but I cannot come up with a generalized way of removing the sync bits.
For example, I might have a bit pattern like this (just showing 12 bits for example):
0 1 1 1 1 0 0 1 1 0 1 1
I know that some of those bits are sync bits, specifically those that are '1' in this mask:
0 0 1 1 0 0 0 0 1 0 0 1
The resulting data should be those data bits with a '0' in the corresponding mask, shifted to remove the sync bits, padded right with zeros. The mask above could be understood as "take first 2 bits, skip next 2 bits, take next 4 bits, skip next bit, take 2 bits, skip 1 bit".
E.g. I should end up with:
0 1 1 0 0 1 0 1 0 0 0 0
Trying to do this in Java but I don't see any bit mask/shift operations that would make this work.
Best method I could come up with (does not left align the results, but that is OK for my purposes):
private static final int MSB_ONLY = 0x80000000;

private static int squeezeBits(int data, int mask) {
    int v = 0;
    for (int i = 0; i < 32; i++) {
        if ((mask & MSB_ONLY) != MSB_ONLY) {
            // There is a 0 in the mask, so we want this data bit
            v = (v << 1) | ((data & MSB_ONLY) >>> 31);
        }
        else {
            // Throw bit away
        }
        mask = mask << 1;
        data = data << 1;
    }
    return v;
}
Problem formulation: given a binary code of length l, split into groups of t bits (l % t = 0), add 1 to the result for every group that contains at least one bit of value 1.
My question: how do I efficiently compute the final result?
For example, take the binary code 010 110 000 with t = 3. The final result is 2: the group 000 has no bit of value 1; the group 110 has at least one bit of value 1, so we add 1 to the result; the group 010 also has a bit of value 1, so we add 1 again. Thus, the final result is 2.
How can I solve this efficiently without scanning every group of t bits, which takes time linear in the length of the binary code?
For the population-count problem (counting how many 1s are in a binary code), there are algorithms that solve it in constant time using a limited number of mask and shift operations, such as the MIT HAKMEM count algorithm.
However, those algorithms cannot be applied directly to my problem.
Does anyone know some tricks for my problem? You can assume a maximum length for the input binary code if that makes the problem easier to solve.
From comment:
Then you can assume that the input is a binary code stored in a 64-bit integer for this problem.
Here are two different ways to do it.
The first runs in O(l/t) and works by testing whether each group of t bits is all zeros.
public static int countSets(int setSize, long bits) {
    long mask = (1L << setSize) - 1;
    int count = 0;
    for (long b = bits; b != 0; b >>>= setSize)
        if ((b & mask) != 0)
            count++;
    return count;
}
The second runs in O(t + log l) and works by collapsing each group's bits onto the group's right-most bit, then counting the bits that are set.
public static int countSets(int setSize, int bitCount, long bits) {
    long b = bits, mask = 1;
    for (int i = 1; i < setSize; i++)
        b |= bits >>> i;
    for (int i = setSize; i < bitCount; i <<= 1)
        mask |= mask << i;
    return Long.bitCount(b & mask);
}
Further explanation
The first method builds a mask for the set, e.g. with t = 3, the mask is 111.
It then shifts the value to the right, t bits at a time, e.g. with input = 010 110 000 and t = 3, you get:
mask = 111
b = 010 110 000 -> b & mask = 000 -> don't count
b = 010 110 -> b & mask = 110 -> count
b = 010 -> b & mask = 010 -> count
result: 2
The second method first merges each group's bits onto the group's right-most bit, e.g. with input = 010 110 000 and t = 3, you get:
bits = 010 110 000
bits >>> 1 = 001 011 000
bits >>> 2 = 000 101 100
b (OR'd) = 011 111 100
It then builds a mask for only checking right-most bits, and then applies mask and counts the bits set:
mask = 001 001 001
b & mask = 001 001 000
result: 2
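As a quick sanity check, both methods can be run on the example above. A minimal sketch, assuming the two overloads live in the same class:

public static void main(String[] args) {
    long bits = 0b010110000L; // the example code, t = 3, 9 bits
    System.out.println(countSets(3, bits));    // first method  -> 2
    System.out.println(countSets(3, 9, bits)); // second method -> 2
}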
I have been given this sample code for some exercises, and it shows how to find whether an integer is odd or even.
int x = 4;
if ( (x & 1) == 0 )
{
    System.out.println("even");
}
else
{
    System.out.println("odd");
}
But I don't understand why you do x & 1. What's the purpose of that?
In the binary representation of a number, any number with its least significant bit set to 0 is even. It would also be helpful to know what the & operator does.
For example, 5 = 0101 (binary) and 1 = 0001 (binary). In this case, it compares 0101 with 0001.
You compare them bitwise from the left: 0 & 0 = 0, then 1 & 0 = 0, then 0 & 0 = 0, and the last bit is 1 & 1 = 1.
So 5 & 1 = 0001, which is 1 in decimal, and 1 == 0 evaluates to false for x = 5.
For every even number, the least significant bit is 0, so any even number & 1 will always evaluate to 0.
That is because & performs a bitwise AND operation:
if ( (x & 1) == 0 )
Your code effectively says: print "odd" if the last binary digit of x is 1.
And that works because all odd numbers have 1 as their last binary digit.
Consider this:
1 is 0001 in binary.
2 is 0010 in binary.
When you AND 0001 with 0010, only the positions where both bits are 1 remain 1, which means:
0001 & 0010 gives you 0000 (0) // 1 & 2 = 0
Look at this pattern:
0001 & 0001 = 1 //1 & 1 = 1 (is odd)
0010 & 0001 = 0 //2 & 1 = 0 (is even)
0011 & 0001 = 1 //3 & 1 = 1 (is odd)
0100 & 0001 = 0 //4 & 1 = 0 (is even)
0101 & 0001 = 1 //5 & 1 = 1 (is odd)
0110 & 0001 = 0 //6 & 1 = 0 (is even)
It's a bitwise AND operation between the binary representations of the two numbers. Odd numbers always have their least significant bit set; even numbers do not.
So the test (x & 1) == 0 is true for even numbers, but not for odd ones.
http://www.tutorialspoint.com/java/java_bitwise_operators_examples.htm
It evaluates the variable's binary value
Let's say x = 6 (110 in binary) and y = 7 (111)
Since we know that 1&0=0 and 1&1=1 (or true&false=false and true&true=true)
(x & 1) == 0 // evaluates to true if x is even because
110
&001
----
000
(y & 1) == 0 // evaluates to false because
111
&001
----
001
The LSB of a binary number holds the information about parity: any odd number has LSB == 1 and any even number has LSB == 0.
So when you do a bitwise AND against 1, you are multiplying bit by bit; the purpose of this is to clear all the other bits while leaving the LSB just as it is (that is why you multiply by 1).
A binary number can easily be identified as odd or even just by looking at its least significant bit, whether it is set or not (1 or 0). If the least significant bit is 1, it's an odd number; otherwise it's even.
Just check number % 2: if the result is not 0, it's an odd number; otherwise it's an even number.
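One caveat with the % approach, not covered above (a small sketch of my own): in Java, % is a remainder, not a true modulus, so x % 2 is -1 for negative odd numbers, while x & 1 is always 0 or 1:

int x = -3;
System.out.println(x % 2);        // prints -1, so (x % 2 == 1) is false
System.out.println(x & 1);        // prints 1: correctly flags -3 as odd
System.out.println((x & 1) != 0); // true for any odd int, even negatives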
I found this code online. But, I am unable to get the logic behind the following code:
public static int add(int a, int b) {
    if (b == 0) return a;
    int sum = a ^ b; // add without carrying
    System.out.println("sum is : " + sum);
    int carry = (a & b) << 1; // carry, but don't add
    return add(sum, carry); // recurse
}
Let's look at an example (using 8 bits for simplicity)
a = 10010110
b = 00111101
a^b is the xor, which gives 1 for places where there is a 1 in one number and 0 in the other. In our example:
a^b = 10101011
Since 0 + 0 = 0, 0 + 1 = 1 and 1 + 0 = 1, the only columns left to deal with are the ones that have a 1 in both of the numbers. In our example, a^b is short by whatever the answer to
00010100
+ 00010100
is. In binary, 1 + 1 = 10, so the answer to the above sum is
00101000
or (a & b) << 1. Therefore the sum of a^b and (a & b) << 1 is the same as a + b.
So, assuming the process is guaranteed to terminate, the answer will be correct. And the process will terminate because each time we call add recursively, the second argument has at least one more 0 at the end, due to the bit shift <<. Therefore, we are guaranteed to eventually end up with the second argument consisting entirely of 0s, so that the line if (b == 0) return a; can end the process and give us an answer.
Consider, as an example, 5+7:
5 = 101 (Base 2)
7 = 111 (Base 2)
Now consider adding the two (base 2) digits:
0+0 = 0 = 0 carry 0
1+0 = 1 = 1 carry 0
0+1 = 1 = 1 carry 0
1+1 = 10 = 0 carry 1
The sum (without carrying) of A+B is A^B and the carry is A&B; and when you carry a number it is shifted one digit to the left (hence (A&B)<<1).
So:
5 = 101 (Base 2)
7 = 111 (Base 2)
5^7 = 010 (sum without carrying)
5&7 = 101, so the carry shifted left is 1010
Then we can recurse to add the carry:
A = 010
B = 1010
A^B = 1000 (sum without carrying)
A&B = 0010, so the carry shifted left is 0100
Then we can recurse again as we still have more to carry:
A' = 1000
B' = 100 (without the leading zero)
A'^B' = 1100 (sum without carrying)
A'&B' = 0000 (the carry shifted left)
Now there is nothing to carry - so we can stop and the answer is 1100 (base 2) = 12 (base 10).
The algorithm just implements longhand binary addition, using XOR to add and the bit-shifted AND to find the carry, and it recurses until there is nothing more to carry. This always happens, because the bit shift appends another zero to the carry each time, so with each recursion at least one more low-order bit stops generating a carry.
We convert the integers to bits and add using bitwise operators.
XOR, i.e. ^: 0^0 and 1^1 give 0; the other cases give 1.
AND, i.e. &: 1&1 gives 1; the other cases give 0.
<<, or left shift: shift left and append a 0 bit, so 0010 becomes 0100.
E.g.
add(2, 3)
2 = 0010
3 = 0011
XOR both to get the initial sum: 0001
carry: a & b = 0010
left shift by 1 bit: 0100, i.e. 4
add(1, 4)
XOR both: 0001 ^ 0100 gives 0101, i.e. 5
carry = (0001 & 0100) << 1 = 0000
Since the carry is 0, the recursion stops and returns the previous sum.
This is the table for addition:
+ 0 1
-- --
0 | 0 1
1 | 1 10
▲
If you ignore the carry bit ▲ you'll see that it's the same as the XOR table:
^ 0 1
-- --
0 | 0 1
1 | 1 0
So if you combine two numbers with bitwise XOR you get bit-by-bit addition without carry.
Now, what is the carry? It's a bit that's only there when both inputs are 1.
You can get that with AND:
& 0 1
-- --
0 | 0 0
1 | 0 1
But it needs to be added to the sum after being shifted one position to the left, because it's "carried" over, hence the (a & b) << 1
So you can compute the addition without carry and the carry itself. How do you add them together without using addition? Simple! By recursing on this very definition of addition!
See #pbabcdefp's answer on why the recursion always terminates.
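For comparison, the same carry-propagation idea can be written as a loop instead of a recursion. This is my own sketch, not the original code:

public static int addIterative(int a, int b) {
    while (b != 0) {
        int carry = (a & b) << 1; // columns where both operands have a 1, moved up
        a = a ^ b;                // column-wise sum, ignoring carries
        b = carry;                // feed the carry back in as the new addend
    }
    return a;
}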
I am trying to change the LSB of a numerical value, say 50, whose LSB is 0 because 50 % 2 is 0 (remainder operator), to a value of 1. That is, change the LSB from 0 to 1 in this case.
The code is below:
// Get the LSB from 50 using the modulus operator
lsb = 50 % 2;
// if the character equals 1
// and the least significant bit is 0, add 1
if (binaryValue == '1' && lsb == 0)
{
    // This clearly does not work.
    // How do I assign the altered LSB (1) to the value of 50?
    50 = lsb + 1;
}
I am having problems inside the if statement, where I am trying to assign the altered LSB, which in this case is 1, to the value of 50. This is not the full code, thus all values are different.
Thanks
The xor operation ^ can be used to flip the value of a single bit. For example
int value = 4;
value = value ^ 1;
System.out.println(value);
Will output 5 since the least significant bit was changed to one.
XOR in Java:
System.out.println(50 ^ 1);
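Note that ^ 1 flips the LSB, which works here only because the LSB is known to be 0. To force the LSB to a specific value regardless of its current state, the standard idioms are (a small sketch):

int n = 50;
int setLsb   = n | 1;  // force LSB to 1 -> 51
int clearLsb = n & ~1; // force LSB to 0 -> 50 (already even)
int flipLsb  = n ^ 1;  // toggle LSB     -> 51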
I'm in my first programming course and I'm quite stuck right now. Basically, we take 16 values from the first line of a text file, and there is a single value on the second line. We read those 16 values into an array, and we set that second-line value as our target. I had no problem with that part.
But where I'm having trouble is creating a bitmap to test every possible subset of the 16 values, to find one whose sum equals the target number.
IE, say we had these numbers:
12 15 20 4 3 10 17 12 24 21 19 33 27 11 25 32
We then correspond each value to a bitmap
0 1 1 0 0 0 0 1 1 1 0 1 0 0 1 0
Then we only accept the values predicated with "1"
15 20 12 24 21 33 25
Then we test that subset to see if it equals the "target" number.
We are only allowed to use one array in the problem, and we aren't allowed to use the math class (haven't gotten to it yet).
I understand the concept, and I know that I need to use the shift operators and the bitwise & operator, but I'm truly at a loss. I'm very frustrated, and I was just wondering if anybody could give me any tips.
To generate all possible bit patterns inside an int, and thus all possible subsets defined by that bit map, simply start your int at 1 and keep incrementing it up to the highest value an unsigned 16-bit integer can hold (all 1s). At the end of each inner loop, compare the sum to the target. If it matches, you have a solution subset: print it out. If not, try the next subset.
Can someone help to explain how to go about doing this? I understand the concept but lack the knowledge of how to implement it.
OK, so you are allowed one array. Presumably, that array holds the first set of data.
So your approach needs to not have any additional arrays.
The bit-vector is simply a mental-model construct in this case. The idea is this: if you try every possible combination (note: combination, NOT permutation), then you are going to find the closest sum to your target. So let's say you have N numbers. That means you have 2^N possible combinations.
The bit-vector approach is to number each combination with 0 to 2^N - 1, and try each one.
Assuming you have fewer than 32 numbers in the array, you essentially have an outer loop like this:
int numberOfCombinations = (1 << numbers.length) - 1;
for (int i = 1; i <= numberOfCombinations; ++i) { ... }
For each value of i, you need to go over each number in numbers, deciding to add or skip it based on shifts and bit masks of i.
So the task is to write an algorithm that, given a set A of non-negative numbers and a goal value k, determines whether there is a subset of A such that the sum of its elements is k.
I'd approach this using induction over A, keeping track of which numbers <= k are sums of a subset of the set of elements processed so far. That is:
boolean[] reachable = new boolean[k+1];
reachable[0] = true;
for (int a : A) {
    // compute the new reachable
    // hint: what's the relationship between subsets of S and S \/ {a} ?
}
return reachable[k];
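For reference, and at the risk of giving the hint away, one standard way to fill in that loop body is to scan reachable from high to low so each element is used at most once (a sketch):

static boolean subsetSum(int[] A, int k) {
    boolean[] reachable = new boolean[k + 1];
    reachable[0] = true; // the empty subset sums to 0
    for (int a : A) {
        // iterate downward so `a` is not reused within the same pass
        for (int j = k; j >= a; j--) {
            reachable[j] |= reachable[j - a];
        }
    }
    return reachable[k];
}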
A bitmap is, mathematically speaking, a function mapping a range of numbers onto {0, 1}. A boolean[] maps array indices to booleans. So one could call a boolean[] a bitmap.
One disadvantage of using a boolean[] is that you must process each array element individually. Instead, one could use the fact that a long holds 64 bits, and use bit-shifting and masking operations to process 64 "array" elements at a time. But that sort of micro-optimization is error-prone and rather involved, so it is not commonly done in code that should be reliable and maintainable.
I think you need something like this:
public boolean equalsTarget(int bitmap, int[] numbers, int target) {
    int sum = 0;  // the running sum of the selected numbers
    int mask = 1; // the bitmask that we're using to query the bitmap
    for (int i = 0; i < numbers.length; i++) { // for each number in our array
        if ((bitmap & mask) > 0) { // test if the ith bit is 1
            sum += numbers[i];     // and add the ith number to the sum if it is
        }
        mask <<= 1; // shift the mask bit left by 1
    }
    return sum == target; // if the sum equals the target, this bitmap is a match
}
The rest of your code is fairly simple, you just feed every possible value of your bitmap (1..65535) into this method and act on the result.
P.s.: Please make sure that you fully understand the solution and not just copy it, otherwise you're just cheating yourself. :)
P.p.s: Using int works in this case, as int is 32 bit wide and we only need 16. Be careful with bitwise operations though if you need all the bits, as all primitive integer types (byte, short, int, long) are signed in Java.
There are a couple steps in solving this. First you need to enumerate all the possible bit maps. As others have pointed out you can do this easily by incrementing an integer from 0 to 2^n - 1.
Once you have that and can iterate over all the possible bit maps, you just need a way to take a bit map and "apply" it to an array to generate the sum of the elements at all indexes represented by the map. The following method is an example of how to do that:
private static int bitmapSum(int[] input, int bitmap) {
    // a variable for holding the running total
    int sum = 0;
    // iterate over each element in our array,
    // adding only the values specified by the bitmap
    for (int i = 0; i < input.length; i++) {
        int mask = 1 << i;
        if ((bitmap & mask) != 0) {
            // If the index is part of the bitmap, add it to the total
            sum += input[i];
        }
    }
    return sum;
}
This function will take an integer array and a bit map (represented as an integer) and return the sum of all the elements in the array whose indexes are present in the map.
The key to this function is the ability to determine if a given index is in fact in the bit map. That is accomplished by first creating a bit mask for the desired index and then applying that mask to the bit map to test if that value is set.
Basically we want to build an integer where only one bit is set and all the others are zero. We can then bitwise AND that mask with the bit map and test if a particular position is set by comparing the result to 0.
Let's say we have an 8-bit map like the following:
map: 1 0 0 1 1 1 0 1
---------------
indexes: 7 6 5 4 3 2 1 0
To test the value for index 4 we would need a bit mask that looks like the following:
mask: 0 0 0 1 0 0 0 0
---------------
indexes: 7 6 5 4 3 2 1 0
To build the mask we simply start with 1 and shift it by N:
1: 0 0 0 0 0 0 0 1
shift by 1: 0 0 0 0 0 0 1 0
shift by 2: 0 0 0 0 0 1 0 0
shift by 3: 0 0 0 0 1 0 0 0
shift by 4: 0 0 0 1 0 0 0 0
Once we have this we can apply the mask to the map and see if the value is set:
map: 1 0 0 1 1 1 0 1
mask: 0 0 0 1 0 0 0 0
---------------
result of AND: 0 0 0 1 0 0 0 0
Since the result is != 0 we can tell that index 4 is included in the map.
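Putting it together, a driver that enumerates every non-empty bitmap and reports the ones that hit the target could look like this. A sketch only: bitmapSum is the method above, and findSubsets is a hypothetical name for the enumeration loop:

private static void findSubsets(int[] input, int target) {
    int allOnes = (1 << input.length) - 1; // 2^n - 1, the largest bitmap for n values
    for (int bitmap = 1; bitmap <= allOnes; bitmap++) {
        if (bitmapSum(input, bitmap) == target) {
            System.out.println("match: " + Integer.toBinaryString(bitmap));
        }
    }
}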