Java bits to Integer

I have an application that is sending me two shorts. The first increments 0 -> 32 -> 48 -> 16 and then back to 0, and the second increments each time the first one hits 0. The second one goes up to a maximum of 65535 and then wraps back to 0. I'm guessing these are some encoded bits that can be combined into a single number?
How can I combine these two shorts into a single number that increments by 1, given that they behave as described above?

0b0000_0000 0
0b0010_0000 32
0b0011_0000 48
0b0001_0000 16
So you can increment a counter modulo 4 (0, 1, 2, 3, 0, 1, 2, ...) and rearrange its two bits to produce that sequence. Modulo 4 means & 0b11.
int x = 0;
for (int i = 0; i < 100; ++i) {
    System.out.printf("%04x%n", x);
    x = (x + 1) & 0xFFFF;
    x |= (x & 2) << 16;
    x |= ~((x & 2) ^ (x & 1)) << 17; // Or something like that
}
I leave it to you to find the logic.
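Since the first short only ever takes the values 0, 32, 48, 16 (only bits 4 and 5 change, and only one bit flips per step, like a Gray code), one hedged sketch of the combining direction the question asks for is to map those four values back to 0..3 and use the second short as the high part. The names below (CombineDemo, combine, first, second) are purely illustrative:
public class CombineDemo {
    // Combine the two shorts into one counter that increments by 1 per step.
    // Assumes the first short only ever takes the values 0, 32, 48, 16 and the
    // second short counts completed cycles (0..65535).
    static long combine(int first, int second) {
        int low;
        switch (first) {
            case 0:  low = 0; break;
            case 32: low = 1; break;
            case 48: low = 2; break;
            case 16: low = 3; break;
            default: throw new IllegalArgumentException("unexpected first value: " + first);
        }
        return ((long) second << 2) | low; // 0 .. 262143, then wraps along with the inputs
    }

    public static void main(String[] args) {
        int[] firsts = { 0, 32, 48, 16, 0 };
        for (int i = 0; i < firsts.length; i++) {
            System.out.println(combine(firsts[i], i / 4)); // prints 0, 1, 2, 3, 4
        }
    }
}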

Related

Left circular bit shift

I have to write a function that does a left circular shift of the bits of a byte by y positions.
For example, if I pass 01011000 and 2 as y, the function has to return 01100001.
I have tried to use Integer.rotateLeft() but it seems to be useless for this.
This should work I think:
int rotate_8bits_left(int val, int y) {
    // amend y to the range [0, 7] with this:
    // y = ((y % 8) + 8) % 8;
    // or better, with this:
    y = y & 0x7;
    // do the rotation
    return ((val << y) & 0xFF) | (val >> (8 - y));
}
Let's explain the different parts:
// move val left y bits:
(val << y)
The above, however, keeps the bits that move beyond the 8th bit, so we need to truncate them:
// move val left y bits and truncate anything beyond the 8th bit:
(val << y) & 0xFF
Now we need to add the bits that fell off back at the low end. We can recover the bits that went out on the left simply by shifting to the right:
// move to the right, 8-y bits
val >> (8-y)
If we now glue together the two parts, we would get the rotation:
int new_val = ((val << y) & 0xFF) | (val >> (8-y));
Now for the first part, we want to handle a y that might not be in the range [0, 7]. We can bring y into this range, before using it, with:
y = ((y % 8) + 8) % 8;
The expression above fixes both negative and positive values of y. If y is negative, the modulo returns a negative value in the range [-7, -1], and adding 8 brings it back into the positive range. We then need the modulo again for the case where y was already non-negative, because adding 8 would have pushed it above 7; the second modulo fixes this. For example, y = -3 gives ((-3 % 8) + 8) % 8 = (-3 + 8) % 8 = 5.
But we can achieve the same adjustment of y with a simpler approach, by keeping only the 3 lowest bits, which account for the range [0, 7] (treating 8 as 0). This can be done with the following expression, which works for both negative and positive values of y:
y = y & 0x7;
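For reference, here is a minimal runnable sketch of that rotation with the example from the question (the class and method names are illustrative):
public class RotateDemo {
    static int rotate8BitsLeft(int val, int y) {
        y = y & 0x7;                                  // keep y in [0, 7]
        return ((val << y) & 0xFF) | (val >> (8 - y));
    }

    public static void main(String[] args) {
        int v = 0b01011000;
        // Prints 1100001, i.e. 01100001 without the leading zero.
        System.out.println(Integer.toBinaryString(rotate8BitsLeft(v, 2)));
    }
}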

Combine two 3 byte integers, and one 2 byte integer into one 8 byte integer

Trying to store three integers into one to use for a hash, and decode back into their original values.
The variables:
x = 3 byte integer (Can be negative)
z = 3 byte integer (Can be negative)
y = 2 byte integer (Cannot be negative)
My current code - doesn't work with negatives:
long combined = (y) | (((long) z) << 16) | ((((long) x)) << 40);
int newX = (int) (combined >> 40); // Trim off 40 bits, leaving the leading 24
int newZ = (int) ((combined << 24) >> 40); // Trim off the 24 bits to the left and the 16 bits to the right
int newY = (int) ((combined << 48) >> 48); // Trim off all bits other than the lowest 16
It doesn't work for negatives because your "3 byte integer" or "2 byte integer" is actually a regular 4-byte int. If the number is negative, all the highest bits will be set to "1"; if you binary-or the numbers together, these high 1 bits will overwrite the bits from the other numbers.
You can use bit-masking to encode the number correctly:
long combined = (y & 0xffff) | (((long) z & 0xffffff) << 16) | ((((long) x & 0xffffff)) << 40);
This will cut off the high-bits outside the 16 or 24 bit range that you're interested in.
The decoding already works fine, because the bit-shifting that you perform takes care of sign-extension.
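A minimal round-trip sketch of this masking approach, with made-up sample values (assuming x and z fit in 24 signed bits and y fits in 16 bits; the class name is illustrative):
public class PackDemo {
    public static void main(String[] args) {
        int x = -123456, z = 300000, y = 12345;

        long combined = (y & 0xffff)
                | (((long) z & 0xffffff) << 16)
                | (((long) x & 0xffffff) << 40);

        int newX = (int) (combined >> 40);         // arithmetic shift sign-extends the top 24 bits
        int newZ = (int) ((combined << 24) >> 40); // isolate the middle 24 bits, then sign-extend
        int newY = (int) ((combined << 48) >> 48); // lowest 16 bits (sign-extends, fine here since y < 32768)

        System.out.println(newX + " " + newZ + " " + newY); // -123456 300000 12345
    }
}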

How to get all possible permutations for 0 and 1 bits in JAVA

I need the output of the permutations for bits of length 3 to be (the order doesn't matter, as the initial combination of 0s and 1s is generated randomly):
[0,0,0]
[0,0,1]
[0,1,0]
[0,1,1]
[1,0,0]
[1,0,1]
[1,1,0]
[1,1,1]
I have tried, but it seems that there are duplicates and some possible permutations are not being displayed, and I'm not sure why. This is my code:
ArrayList<Item> itemsAvailable = new ArrayList<Item>();
ArrayList<Integer> bits = new ArrayList<Integer>();
ArrayList<ArrayList<Integer>> tried = new ArrayList<ArrayList<Integer>>();
itemsAvailable.add(new Item(5, 4));
itemsAvailable.add(new Item(12, 10));
itemsAvailable.add(new Item(8, 5));
System.out.println("itemsAvailable: " + itemsAvailable);
Random r = new Random();
// permutations
for (int i = 0; i < Math.pow(2, itemsAvailable.size()); i++) {
    // Generate random bits
    for (int j = 0; j < itemsAvailable.size(); j++) {
        int x = 0;
        if (r.nextBoolean())
            x = 1;
        bits.add(x);
    }
    System.out.println("Added to bits #" + (i + 1) + ": " + bits);
    bits = new ArrayList<Integer>();
}
The output that I obtained is:
Added to bits #1: [0, 0, 1]
Added to bits #2: [1, 1, 0] - duplicate
Added to bits #3: [1, 0, 1]
Added to bits #4: [0, 0, 1]
Added to bits #5: [0, 0, 0] - duplicate
Added to bits #6: [1, 1, 0] - duplicate
Added to bits #7: [1, 1, 1]
Added to bits #8: [0, 0, 0] - duplicate
Therefore how can I obtain 8 different permutations as the bits are generated randomly? Please help.
Thank you.
There's an easier way to go about this. Think of what these bits represent when read as an unsigned binary number:
[0,0,0] -> 0
[0,0,1] -> 1
[0,1,0] -> 2
...
[1,1,1] -> 7
So the easy way to get all these permutations is:
for (int i = 0; i < 8; ++i) {
    bits.add(i);
}
Where does that 8 come from? It's just 2^3, since you wanted length 3.
This technique works for up to 31 bits, since Java's int type is signed (whereas the above basically treats it as unsigned, which works at those lower numbers).
You can bump it up to 2^63 by using long instead of int, and you can get 64-length by just enumerating all longs. Beyond that, you'll need a different approach; but 2^64 longs, at 8 bytes per long, is about 1.5e11 gigabytes -- so you'll have run out of RAM way before you need a more complex algorithm.
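If you specifically need each counter value expanded back into a list like [0, 0, 1] as in the question, a minimal sketch of that step might look like this (length 3 assumed; the class name is illustrative):
import java.util.ArrayList;
import java.util.List;

public class BitPermutations {
    public static void main(String[] args) {
        int length = 3;
        for (int i = 0; i < (1 << length); i++) {   // 1 << 3 == 8 == 2^3
            List<Integer> bits = new ArrayList<>();
            for (int j = length - 1; j >= 0; j--) {
                bits.add((i >> j) & 1);             // extract bit j, most significant first
            }
            System.out.println(bits);               // [0, 0, 0] ... [1, 1, 1]
        }
    }
}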
If you realize that generating these combinations is nothing more than counting, then you can just do something like:
public static void main(String[] args) {
    for (int i = 0; i < 8; i++) {
        System.out.println(String.format("%3s", Integer.toBinaryString(i)).replace(' ', '0'));
    }
}
where
Integer.toBinaryString(i) prints the value of i in binary,
and
String.format("%3s", Integer.toBinaryString(i)).replace(' ', '0')
adds leading zeros on the left so it is easier to read.

How to (cheaply) calculate all possible length-r combinations of n possible elements

What is the fastest way to calculate all possible length-r combinations of n possible elements without resorting to brute force techniques or anything that requires STL?
While working on an Apriori algorithm for my final project in my data structures class, I developed an interesting solution that uses bit-shifting and recursion, which I will share in an answer below for anyone who is interested. However, is this the fastest way of achieving this (without using any common libraries)?
I ask more out of curiosity than anything else, as the algorithm I currently have works just fine for my purposes.
Here is the algorithm that I developed to solve this problem. It currently just outputs each combination as a series of ones and zeros, but can easily be adapted to create data sets based on an array of possible elements.
void r_nCr(const unsigned int &startNum, const unsigned int &bitVal, const unsigned int &testNum) // Should be called with arguments (2^r)-1, 2^(r-1), 2^(n-1)
{
    unsigned int n = (startNum - bitVal) << 1;
    n += bitVal ? 1 : 0;
    for (unsigned int i = log2(testNum) + 1; i > 0; i--) // Prints combination as a series of 1s and 0s
        cout << (n >> (i - 1) & 1);
    cout << endl;
    if (!(n & testNum) && n != startNum)
        r_nCr(n, bitVal, testNum);
    if (bitVal && bitVal < testNum)
        r_nCr(startNum, bitVal >> 1, testNum);
}
How it works:
This function treats each combination of elements as a sequence of ones and zeros, which can then be expressed with respect to a set of possible elements (but is not in this particular example).
For example, the results of 3C2 (all combinations of length-2 from a set of 3 possible elements) can be expressed as 011, 110, and 101. If the set of all possible elements is {A, B, C}, then the results can be expressed with respect to this set as {B, C}, {A, B}, and {A, C}.
For this explanation, I will be calculating 5C3 (all length-3 combinations composed of 5 possible elements).
This function accepts 3 arguments, all of which are unsigned integers:
The first parameter is the smallest possible integer whose binary representation has a number of 1s equal to the length of the combinations we're creating. This is our starting value for generating combinations. For 5C3, this would be 00111b, or 7 in decimal.
The second parameter is the value of the highest bit that is set to 1 in the starting number. This is the first bit that will be subtracted when creating the combinations. For 5C3, this is the third bit from the right, which has a value of 4.
The third parameter is the value of the nth bit from the right, where n is the number of possible elements that we are combining. This number will be bitwise-anded with the combinations we create to check whether the left-most bit of the combination is a 1 or a 0. For 5C3, we will use the 5th bit from the right, which is 10000b, or 16 in decimal.
Here are the actual steps that the function performs:
Calculate startNum - bitVal, bit-shift one space to the left, and add 1 if bitVal is not 0.
For the first iteration, the result should be the same as startNum. This is so that we can print out the first combination (which is equal to startNum) within the function so we don't have to do it manually ahead of time. The math for this operation occurs as follows:
00111 - 00100 = 00011
00011 << 1 = 00110
00110 + 1 = 00111
The result of the previous calculation is a new combination. Do something with this data.
We are going to be printing the result to the console. This is done using a for-loop whose variable starts out equal to the number of bits we are working with (calculated by taking log2 of the testNum and adding 1; log2(16) + 1 = 4 + 1 = 5) and ends at 0. Each iteration, we bit-shift right by i-1 and print the right-most bit by and-ing the result with 1. Here is the math:
i=5:
00111 >> 4 = 00000
00000 & 00001 = 0
i=4:
00111 >> 3 = 00000
00000 & 00001 = 0
i=3:
00111 >> 2 = 00001
00001 & 00001 = 1
i=2:
00111 >> 1 = 00011
00011 & 00001 = 1
i=1:
00111 >> 0 = 00111
00111 & 00001 = 1
output: 00111
If the left-most bit of n (the result of the calculation in step 1) is 0 and n is not equal to startNum, we recurse with n as the new startNum.
Obviously this will be skipped on the first iteration, as we have already shown that n is equal to startNum. This becomes important in subsequent iterations, which we will see later.
If bitVal is greater than 0 and less than testNum, recurse with the current iteration's original startNum as the first argument. Second argument is bitVal shifted right by 1 (same thing as integer division by 2).
We now recurse with the new bitVal set to the value of the next bit to the right of the current bitVal. This next bit is what will be subtracted in the next iteration.
Continue to recurse until bitVal becomes equal to zero.
Because bitVal is bit-shifted right by one in the second recursive call, we will eventually reach a point where bitVal equals 0. This algorithm expands as a tree, and when bitVal equals zero and the left-most bit is 1, we return one layer up from our current position. Eventually, this cascades all the way back to the root.
In this example, the tree has 3 subtrees and 6 leaf nodes. I will now step through the first subtree, which consists of 1 root node and 3 leaf nodes.
We will start at the last line of the first iteration, which is
if (bitVal && bitVal < testNum)
    r_nCr(startNum, bitVal >> 1, testNum);
So we now enter the second iteration with startNum=00111(7), bitVal = 00010(2), and testNum = 10000(16) (this number never changes).
Second Iteration
Step 1:
n = 00111 - 00010 = 00101 // Subtract bitVal
n = 00101 << 1 = 01010 // Shift left
n = 01010 + 1 = 01011 // bitVal is not 0, so add 1
Step 2: Print result.
Step 3: The left-most bit is 0 and n is not equal to startNum, so we recurse with n as the new startNum. We now enter the third iteration with startNum=01011(11), bitVal = 00010(2), and testNum = 10000(16).
Third Iteration
Step 1:
n = 01011 - 00010 = 01001 // Subtract bitVal
n = 01001 << 1 = 10010 // Shift left
n = 10010 + 1 = 10011 // bitVal is not 0, so add 1
Step 2: Print result.
Step 3: The left-most bit is 1, so do not recurse.
Step 4: bitVal is not 0, so recurse with bitVal shifted right by 1. We now enter the fourth iteration with startNum=01011(11), bitVal = 00001(1), and testNum = 10000(16).
Fourth Iteration
Step 1:
n = 01011 - 00001 = 01010 // Subtract bitVal
n = 01010 << 1 = 10100 // Shift left
n = 10100 + 1 = 10101 // bitVal is not 0, so add 1
Step 2: Print result.
Step 3: The left-most bit is 1, so do not recurse.
Step 4: bitVal is not 0, so recurse with bitVal shifted right by 1. We now enter the fifth iteration with startNum=01011(11), bitVal = 00000(0), and testNum = 10000(16).
Fifth Iteration
Step 1:
n = 01011 - 00000 = 01011 // Subtract bitVal
n = 01011 << 1 = 10110 // Shift left
n = 10110 + 0 = 10110 // bitVal is 0, so add 0
// Because bitVal = 0, nothing is subtracted or added; this step becomes just a straight bit-shift left by 1.
Step 2: Print result.
Step 3: The left-most bit is 1, so do not recurse.
Step 4: bitVal is 0, so do not recurse.
Return to Second Iteration
Step 4: bitVal is not 0, so recurse with bitVal shifted right by 1.
This will continue on until bitVal = 0 for the first level of the tree and we return to the first iteration, at which point we will return from the function entirely.
Here is a simple diagram showing the function's tree-like expansion:
And here is a more complicated diagram showing the function's thread of execution:
Here is an alternate version using bitwise-or in place of addition and bitwise-xor in place of subtraction:
void r_nCr(const unsigned int &startNum, const unsigned int &bitVal, const unsigned int &testNum) // Should be called with arguments (2^r)-1, 2^(r-1), 2^(n-1)
{
    unsigned int n = (startNum ^ bitVal) << 1;
    n |= (bitVal != 0);
    for (unsigned int i = log2(testNum) + 1; i > 0; i--) // Prints combination as a series of 1s and 0s
        cout << (n >> (i - 1) & 1);
    cout << endl;
    if (!(n & testNum) && n != startNum)
        r_nCr(n, bitVal, testNum);
    if (bitVal && bitVal < testNum)
        r_nCr(startNum, bitVal >> 1, testNum);
}
What about this?
#include <stdio.h>

#define SETSIZE 3
#define NELEMS 7

#define BYTETOBINARYPATTERN "%d%d%d%d%d%d%d%d"
#define BYTETOBINARY(byte) \
    (byte & 0x80 ? 1 : 0), \
    (byte & 0x40 ? 1 : 0), \
    (byte & 0x20 ? 1 : 0), \
    (byte & 0x10 ? 1 : 0), \
    (byte & 0x08 ? 1 : 0), \
    (byte & 0x04 ? 1 : 0), \
    (byte & 0x02 ? 1 : 0), \
    (byte & 0x01 ? 1 : 0)

int main()
{
    unsigned long long x = (1 << SETSIZE) - 1;
    unsigned long long N = (1 << NELEMS) - 1;
    while (x < N)
    {
        printf("x: " BYTETOBINARYPATTERN "\n", BYTETOBINARY(x));
        unsigned long long a = x & -x;
        unsigned long long y = x + a;
        x = ((y & -y) / a >> 1) + y - 1;
    }
}
It should print 7C3 (all 35 length-3 combinations of 7 elements).
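For completeness, here is a hedged Java sketch of the same "next combination with the same number of set bits" bit trick (often attributed to Gosper), printing 5C3 to match the walkthrough above; the class name is illustrative:
public class NextCombination {
    public static void main(String[] args) {
        final int r = 3, n = 5;                 // 5C3, as in the walkthrough above
        long x = (1L << r) - 1;                 // smallest combination: 00111
        long limit = 1L << n;
        while (x < limit) {
            System.out.println(String.format("%" + n + "s",
                    Long.toBinaryString(x)).replace(' ', '0'));
            long a = x & -x;                    // lowest set bit of x
            long y = x + a;                     // carry that bit to the left
            x = ((y & -y) / a >> 1) + y - 1;    // refill the vacated low bits
        }
    }
}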

Math.min(Math.max(x, 0), 8) what does it mean?

I was looking at the sudoku code of the "mine" sudoku Android application and I've noticed this code:
selX = Math.min(Math.max(x, 0), 8);
selY = Math.min(Math.max(y, 0), 8);
What does Math.min(Math.max(x, 0), 8) and Math.min(Math.max(y, 0), 8) mean?
Break it down step by step using the docs:
http://docs.oracle.com/javase/7/docs/api/java/lang/Math.html#max(long
max(int a, int b) Returns the greater of two int values.
min(int a, int b) Returns the smaller of two int values.
So Math.min(Math.max(x, 0), 8); breaks down to:
int maximum = Math.max(x, 0);
int result = Math.min(maximum, 8); // "final" is a reserved word in Java, so use another name
First you take the maximum value of x and 0, so if x < 0, it will be zero.
Next take the minimum of the result and 8, so the maximum value will be 8.
It is about the same as:
int selX = x;
if (selX < 0) selX = 0;
if (selX > 8) selX = 8;
or
int selX = (x < 0) ? 0 : ((x > 8) ? 8 : x);
The first one returns x, if x is between 0 and 8, 0 if x is less than 0, and 8 if x is greater than 8.
The second one works in a similar fashion. So basically you're getting a number back that is guaranteed to be between 0 and 8, inclusive.
The Java Math class describes what the min and max functions do in detail.
Sudoku means 9 x 9 squares. You can index them from 0 to 8. Math.min(Math.max(x, 0), 8) guarantees that you get a number in that range. If x > 8, then min(x, 8) makes it 8. If x < 0, then max(x, 0) makes it 0. That's all.
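A tiny runnable sketch of the clamping behaviour described above (the class name is illustrative):
public class ClampDemo {
    public static void main(String[] args) {
        int[] inputs = { -3, 0, 5, 8, 12 };
        for (int x : inputs) {
            int selX = Math.min(Math.max(x, 0), 8); // clamp x into [0, 8]
            System.out.println(x + " -> " + selX);  // -3 -> 0, 0 -> 0, 5 -> 5, 8 -> 8, 12 -> 8
        }
    }
}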
