Understanding logic behind Integer.highestOneBit() method implementation - java

Java's Integer class has a static method highestOneBit that returns a value with a single one-bit, in the position of the highest-order one-bit in the specified value, or zero if the specified value is itself equal to zero.
For example, an input of 17 returns 16: 17 in binary is 10001, so the method keeps only the leftmost set bit, which has the value 16.
And the Integer class has the following implementation:
public static int highestOneBit(int i) {
    // HD, Figure 3-1
    i |= (i >> 1);
    i |= (i >> 2);
    i |= (i >> 4);
    i |= (i >> 8);
    i |= (i >> 16);
    return i - (i >>> 1);
}
I just want to know the logic behind implementing it this way and the reason for using shift operations. Can anyone shed some light on it?

This algorithm calculates for a given i whose binary representation is:
0..01XXXXXXX...XXXX
the value
0..011111111...1111
That's what the 5 |= operators do.
Then, in the return statement, it subtracts from it that value shifted right by one bit
0..001111111...1111
to get the result
0..010000000...0000
How does it work?
The highest possible 1 bit is the 32nd (leftmost) bit. Suppose the input number has a 1 in that bit:
1XXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
You or that value with the value shifted right by 1 (i >> 1) and get
11XXXXXX XXXXXXXX XXXXXXXX XXXXXXXX
Then you or that new value with the value shifted right by 2 (i >> 2) and get
1111XXXX XXXXXXXX XXXXXXXX XXXXXXXX
Then you or that new value with the value shifted right by 4 (i >> 4) and get
11111111 XXXXXXXX XXXXXXXX XXXXXXXX
Then you or that new value with the value shifted right by 8 (i >> 8) and get
11111111 11111111 XXXXXXXX XXXXXXXX
Finally you or that new value with the value shifted right by 16 (i >> 16) and get
11111111 11111111 11111111 11111111
If the highest 1 bit is lower than the 32nd bit, these operations still turn every bit to the right of it into a 1 and leave the remaining (higher) bits 0.

The i |= statements compute a sequence of ones that is the same length as i. For example, for 101011 they compute 111111. I've explained how that works in this answer (I can't retype it right now since I am on mobile).
So basically, once you have the string of ones, subtracting it shifted right by one bit leaves only the highest-order bit:
111111 - (111111 >>> 1) = 111111 - 011111 = 100000

The first five lines (i |= (i >> x)) will set every bit below the highest 1-bit to 1. Then, the final line will subtract every 1-bit below the highest one, so that only the highest 1-bit will remain.
For simplicity, let's pretend an int was 8 bits. The code would in that case be like this:
public static int highestOneBit(int i) {
    i |= (i >> 1);
    i |= (i >> 2);
    i |= (i >> 4);
    return i - (i >>> 1);
}
Now, let's say we have the value 128 (10000000). This is what would happen:
// i == 10000000
i |= (i >> 1); // i = 10000000 | 11000000 = 11000000
i |= (i >> 2); // i = 11000000 | 11110000 = 11110000
i |= (i >> 4); // i = 11110000 | 11111111 = 11111111
return i - (i >>> 1); // 11111111 - 01111111 = 10000000
The >> is an arithmetic right shift, so it preserves the sign bit.
The last >>> is a logical right shift, which does not preserve the sign bit; it always inserts zeroes on the left side.
Now, let's try with 64 (01000000):
// i == 01000000
i |= (i >> 1); // i = 01000000 | 00100000 = 01100000
i |= (i >> 2); // i = 01100000 | 00011000 = 01111000
i |= (i >> 4); // i = 01111000 | 00000111 = 01111111
return i - (i >>> 1); // 01111111 - 00111111 = 01000000
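To tie the walkthrough back to the real 32-bit method, here is a minimal sketch (using nothing beyond the standard library) that checks a few sample inputs, including zero and a negative number:
public class HighestOneBitDemo {
    public static void main(String[] args) {
        int[] samples = {17, 64, 128, 0, -1};
        for (int value : samples) {
            // For -1 every bit is set, so the highest one-bit is the sign
            // bit and the result is Integer.MIN_VALUE (0x80000000).
            System.out.printf("highestOneBit(%d) = %d%n",
                    value, Integer.highestOneBit(value));
        }
    }
}
This prints 16, 64, 128, 0, and -2147483648 respectively.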

Related

How to convert 16-bit audio created with Android's AudioRecord to 12-bit audio through bit shifting?

I am attempting to convert 16 bit audio into 12 bit audio. However, I am quite inexperienced with such conversions and believe my approach is possibly incorrect or flawed.
The use case, as context for the code snippets below, is an Android app that the user can speak into; the audio is transmitted to an IoT device for immediate playback. The IoT device expects audio in mono 12 bit, 8k sample rate, little endian, unsigned, with the data stored in the first twelve bits (0-11) and the final four bits (12-15) set to zero. Audio data needs to be received in packets of 1000 bytes.
The audio is being created in the Android app through the use of AudioRecord. The instantiation of which is as follows:
int bufferSize = 1000;
this.audioRecord = new AudioRecord(
        MediaRecorder.AudioSource.MIC,
        8000,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        bufferSize
);
In a while loop, the AudioRecord is read from in 1000 byte packets, and the data is modified to the specifications in the use case. Not sure this part is relevant, but for completeness:
byte[] buffer = new byte[1000];
audioRecord.read(buffer, 0, buffer.length);
byte[] modifiedBytes = convert16BitTo12Bit(buffer);
Then the modifiedBytes are sent off to the device.
Here are the methods which modify the bytes. Basically, to conform to the specifications, I am shifting the bits in each 16 bit set (tossing the least significant 4) and adding zeroes to the final four spots. I do this through BitSet.
/**
 * Takes a byte array presented as 16 bit audio and converts it to 12 bit audio through bit
 * manipulation. Packets must be of 1000 bytes or no manipulation will occur and the input
 * will be immediately returned.
 */
private byte[] convert16BitTo12Bit(byte[] input) {
    if (input.length == 1000) {
        for (int i = 0; i < input.length; i += 2) {
            Log.d(TAG, "convert16BitTo12Bit: pass #" + (i / 2));
            byte[] chunk = new byte[2];
            System.arraycopy(input, i, chunk, 0, 2);
            if (!isEmptyByteArray(chunk)) {
                byte[] modifiedBytes = convertChunk(chunk);
                System.arraycopy(modifiedBytes, 0, input, i, modifiedBytes.length);
            }
        }
        return input;
    }
    Log.d(TAG, "convert16BitTo12Bit: Failed - input is not 1000 in length; it is " + input.length);
    return input;
}

/**
 * Converts 2 bytes 16 bit audio into 12 bit audio. If the input is not 2 bytes, the input
 * will be returned without manipulation.
 */
private byte[] convertChunk(byte[] chunk) {
    if (chunk.length == 2) {
        BitSet bitSet = BitSet.valueOf(chunk);
        Log.d(TAG, "convertChunk: bitSet starts as " + bitSet.toString());
        modifyBitSet(bitSet);
        Log.d(TAG, "convertChunk: bitSet ends as " + bitSet.toString());
        return bitSet.toByteArray();
    }
    Log.d(TAG, "convertChunk: Failed = chunk is not 2 in length; it is " + chunk.length);
    return chunk;
}

/**
 * Removes the first four bits and shifts the rest to leave the final four bits as 0.
 */
private void modifyBitSet(BitSet bitSet) {
    for (int i = 4; i < bitSet.length(); i++) {
        bitSet.set(i - 4, bitSet.get(i));
    }
    if (bitSet.length() > 8) {
        bitSet.clear(12, 16);
    } else {
        bitSet.clear(4, 8);
    }
}

/**
 * Returns true if the byte array input contains all zero bits.
 */
private boolean isEmptyByteArray(byte[] input) {
    BitSet bitSet = BitSet.valueOf(input);
    return bitSet.isEmpty();
}
Unfortunately, this approach produces subpar results. The audio is quite noisy and it is difficult to make out what someone is saying (but you can hear that words are being spoken).
I also have been playing around with just saving the bytes to a file and playing it back on Android through AudioTrack. I noticed that if I just remove the first four bits and do not shift anything, the audio actually sounds good, as such:
private void modifyBitSet(BitSet bitSet) {
    bitSet.clear(0, 4);
}
However, when played through the device, it sounds even worse, and I don't even think I can make out any words.
Clearly, my approach is not working here. Central question is how would one convert a 16 bit chunk into 12 bit audio and maintain audio quality given the requirement that the final four bits must be zero? Additionally, given my larger approach of using AudioRecord to obtain the audio, would such a solution for the prior question fit this use case?
Please let me know if there is anything more I can provide to clarify these questions and my intent.
Given that the audio is 16 bits but must be changed to 12 with four zeros at the end, four bits somewhere do have to be tossed.
Yes, of course and there is no other way, is there?
This is something quick that I could come up with right now. Certainly not fully tested though; only tested with input of 2 and 4 bytes. I'll leave it to you to test it.
//Reminder :: Convert as many as possible.
//Reminder :: To calculate the required size for store:
//if((bytes.length & 1) == 0) Math.round((bytes.length * 6) / 8F) : Math.round(((bytes.length - 1) * 6) / 8F).
//Return :: Amount of converted bytes.
public static final int convert16BitTo12Bit(final byte[] bytes, final byte[] store) {
    final int size = bytes.length;
    int storeIndex = 0;
    //Copy the first 2 bytes into store.
    store[storeIndex++] = bytes[0];
    store[storeIndex] = bytes[1];
    if (size < 4) {
        store[storeIndex] = (byte) (store[storeIndex] & 0xF0);
        return 2;
    }
    int result;  //Reassigned on every loop pass, so these cannot be final.
    byte tmp;
    // 11111111 11110000 00000000 00000000
    //+ 11111111 11110000 (<< 12)
    //= 11111111 11111111 11111111 00000000 (1)
    //-----------------------------------------
    // 11111111 00000000 00000000 00000000 (1)
    //+ 11111111 11110000 (<< 16)
    //= 11111111 11111111 11110000 00000000 (2)
    //-----------------------------------------
    // 11110000 00000000 00000000 00000000 (2)
    //+ 1111 11111111 0000 (<< 20)
    //= 11111111 11111111 00000000 00000000 (3)
    //-----------------------------------------
    // 00000000 00000000 00000000 00000000 (3)
    //+ 11111111 11110000 (<< 24)
    //= 11111111 11110000 00000000 00000000
    for (int i = 2, shiftBits = 12; i < size; i += 2) {
        if (shiftBits == 24) {
            //Copy 2 bytes from bytes[] into store[] and move on.
            store[storeIndex] = bytes[i];
            //Never store byte 0 (Garbage).
            tmp = (byte) (bytes[i + 1] & 0xF0); //Bit order: 11110000.
            if (tmp != 0) store[++storeIndex] = tmp;
            shiftBits = 12; //Reset
        } else if (shiftBits == 20) {
            result = ((store[storeIndex - 1] << 24) | ((store[storeIndex] & 0xFF) << 16))
                    | (((bytes[i] & 0xFF) << 20) | ((bytes[i + 1] & 0xFF) << 12));
            store[storeIndex] = (byte) ((result >> 24) & 0xFF);
            tmp = (byte) ((result >> 16) & 0xFF);
            //Never store byte 0 (Garbage).
            if (tmp != 0) store[++storeIndex] = tmp;
            shiftBits = 24;
        } else if (shiftBits == 16) {
            result = ((store[storeIndex - 1] << 24) | ((store[storeIndex] & 0xFF) << 16))
                    | (((bytes[i] & 0xFF) << 16) | ((bytes[i + 1] & 0xFF) << 8));
            store[storeIndex] = (byte) ((result >> 16) & 0xFF);
            tmp = (byte) ((result >> 8) & 0xF0);
            //Never store byte 0 (Garbage).
            if (tmp != 0) store[++storeIndex] = tmp;
            shiftBits = 20;
        } else {
            result = ((store[storeIndex - 1] << 24) | ((store[storeIndex] & 0xFF) << 16))
                    | (((bytes[i] & 0xFF) << 12) | ((bytes[i + 1] & 0xFF) << 4));
            store[storeIndex] = (byte) ((result >> 16) & 0xFF);
            tmp = (byte) ((result >> 8) & 0xFF);
            //Never store byte 0 (Garbage).
            if (tmp != 0) store[++storeIndex] = tmp;
            shiftBits = 16;
        }
    }
    return ++storeIndex;
}
Explanations
result = ((store[storeIndex - 1] << 24) | ((store[storeIndex] & 0xFF) << 16))
| (((bytes[i] & 0xFF) << 20) | ((bytes[i + 1] & 0xFF) << 12));
What this does is basically merge two integers into one.
((store[storeIndex - 1] << 24) | ((store[storeIndex] & 0xFF) << 16))
The first part builds an integer from the two previously stored bytes, always at the same (constant) bit positions.
(((bytes[i] & 0xFF) << 20) | ((bytes[i + 1] & 0xFF) << 12));
The second part places the two current input bytes, whose bit positions vary depending on shiftBits.
(...) | (...)
The vertical bar (|) in the middle merges these two integers we've just created into one.
Usage
Using this method is pretty straightforward:
byte[] buffer = new byte[1000];
byte[] store;
if ((buffer.length & 1) == 0) { //Even.
    store = new byte[Math.round((buffer.length * 6) / 8F)];
} else { //Odd.
    store = new byte[Math.round(((buffer.length - 1) * 6) / 8F)];
}
audioRecord.read(buffer, 0, buffer.length);
int convertedByteSize = convert16BitTo12Bit(buffer, store);
System.out.println("size: " + convertedByteSize);
I have discovered a solution that produces clear audio. First, it is important to restate the requirements of the use case: 12 bit unsigned mono audio, read by the device in little endian, in packets of 1000 bytes.
The initialization and configuration of the AudioRecord as described in the question is fine.
Once the 1000 bytes of audio are read from AudioRecord, they can be put into a ByteBuffer defined as little endian for modification, and then viewed as a ShortBuffer so manipulation can happen at the 16 bit level:
// Audio specifications of device is in little endian.
ByteBuffer byteBuffer = ByteBuffer.wrap(input).order(ByteOrder.LITTLE_ENDIAN);
// Turn into a ShortBuffer so bitwise manipulation can occur on the 16 bit level.
ShortBuffer shortBuffer = byteBuffer.asShortBuffer();
Next, in a loop, take each short and modify it to 12 bit unsigned:
for (int i = 0; i < shortBuffer.capacity(); i++) {
    short currentShort = shortBuffer.get(i);
    shortBuffer.put(i, convertShortTo12Bit(currentShort));
}
This can be accomplished by shifting the 16 bits four spaces to the right, turning the sample into 12 bit signed. Then, to convert to unsigned, add 2048. As a safety step we also mask so that only the least significant twelve bits remain, since the device requires the final four bits to be zero:
private static short convertShortTo12Bit(short input) {
    int inputAsInt = input;  // sign-extends the short into an int
    inputAsInt >>>= 4;       // drop the 4 least significant bits
    inputAsInt += 2048;      // move the signed range up to unsigned
    // Keep only the low 12 bits. The mask also discards anything the
    // logical shift left in the high bits for negative inputs, so the
    // result matches what an arithmetic shift would have produced.
    input = (short) (inputAsInt & 0B0000111111111111);
    return input;
}
If one wishes to return 12 bits to 16 bits, do the reverse for each short (subtract 2048 and shift four spaces to the left).
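As a sketch of that reverse step (the method name and exact scaling here are my own, not part of the original answer), the inverse could look like this:
// Illustrative inverse of convertShortTo12Bit: subtract the 2048 offset
// to restore a signed 12-bit value, then shift left four spaces to scale
// back to 16 bits. The low four bits of the original sample are lost.
private static short convert12BitTo16Bit(short input) {
    int inputAsInt = input & 0x0FFF; // only the low 12 bits carry data
    inputAsInt -= 2048;              // back to the signed range -2048..2047
    inputAsInt <<= 4;                // scale back up to the 16 bit range
    return (short) inputAsInt;
}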

Combine two 3 byte integers, and one 2 byte integer into one 8 byte integer

Trying to store three integers into one to use for a hash, and decode back into their original values.
The variables:
x = 3 byte integer (Can be negative)
z = 3 byte integer (Can be negative)
y = 2 byte integer (Cannot be negative)
My current code - doesn't work with negatives:
long combined = (y) | (((long) z) << 16) | ((((long) x)) << 40);
int newX = (int) (combined >> 40); // Trim off 40 bits, leaving the leading 24
int newZ = (int) ((combined << 24) >> (40)); // Trim off 24 bits left, and the 16 bits to the right
int newY = (int) ((combined << 48) >> 48); // Trim off all bits other than the first 16
It doesn't work for negatives because your "3 byte integer" or "2 byte integer" is actually a regular 4-byte int. If the number is negative, all the highest bits will be set to "1"; if you binary-or the numbers together, these high 1 bits will overwrite the bits from the other numbers.
You can use bit-masking to encode the number correctly:
long combined = (y & 0xffff) | (((long) z & 0xffffff) << 16) | ((((long) x & 0xffffff)) << 40);
This will cut off the high-bits outside the 16 or 24 bit range that you're interested in.
The decoding already works fine, because the bit-shifting that you perform takes care of sign-extension.
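As a self-contained round-trip sketch (the sample values are mine; note I decode y with a logical shift, since the question defines it as non-negative):
public class PackDemo {
    public static void main(String[] args) {
        int x = -123456, z = 654321, y = 40000; // 3-byte, 3-byte, 2-byte values

        // Encode: mask each field to its width before shifting into place.
        long combined = (y & 0xffff)
                | (((long) z & 0xffffff) << 16)
                | (((long) x & 0xffffff) << 40);

        // Decode: shift left until the field's top bit is the sign bit,
        // then arithmetic-shift right to sign-extend the field back.
        int newX = (int) (combined >> 40);
        int newZ = (int) ((combined << 24) >> 40);
        int newY = (int) ((combined << 48) >>> 48); // unsigned, no sign-extension

        System.out.println(newX == x && newZ == z && newY == y); // prints true
    }
}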

why is the base64 encode java code doing this

So I'm trying to understand base64 encoding better and I came across this implementation on wikipedia
// codes is the 64-character base64 alphabet:
// "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
private static String base64Encode(byte[] in) {
    StringBuffer out = new StringBuffer((in.length * 4) / 3);
    int b;
    for (int i = 0; i < in.length; i += 3) {
        b = (in[i] & 0xFC) >> 2;
        out.append(codes.charAt(b));
        b = (in[i] & 0x03) << 4;
        if (i + 1 < in.length) {
            b |= (in[i + 1] & 0xF0) >> 4;
            out.append(codes.charAt(b));
            b = (in[i + 1] & 0x0F) << 2;
            if (i + 2 < in.length) {
                b |= (in[i + 2] & 0xC0) >> 6;
                out.append(codes.charAt(b));
                b = in[i + 2] & 0x3F;
                out.append(codes.charAt(b));
            } else {
                out.append(codes.charAt(b));
                out.append('=');
            }
        } else {
            out.append(codes.charAt(b));
            out.append("==");
        }
    }
    return out.toString();
}
And I'm following along and I get to the line:
b = (in[i] & 0xFC) >> 2;
and I don't get it... why would you bitwise-AND a number with 252 and then shift it right by 2... wouldn't it be the same if you just shifted the byte itself without doing the bitwise operation? Example:
b = in[i] >> 2;
Say my in[i] was the letter e, represented as 101 in decimal or 01100101 in binary. If I shift that 2 to the right I get 011001 or 25. If I bitwise-AND it I get
01100101
11111100
--------
01100100
but then the shift is going to chop off the last 2 anyway...so why bother doing it?
Can somebody clarify for me please. Thanks.
In in[i] >> 2, in[i] is converted to an int first. If it was a negative byte (with the high bit set) it will be converted to a negative int (with the now-highest 24 bits set as well).
In (in[i] & 0xFC) >> 2, in[i] is converted to an int as above, and then & 0xFC makes sure the extra bits are all reset to 0.
You're partially right, in that (in[i] & 0xFF) >> 2 would give the same result. & 0xFF is a common way to convert a byte to a non-negative int in the range 0 to 255.
The only way to know for sure why the original developer used 0xFC, and not 0xFF, is to ask them - but I speculate that it's to make it more obvious which bits are being used.
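A small sketch of that difference (the byte value 0xE5 is just an arbitrary example with the high bit set):
public class SignExtensionDemo {
    public static void main(String[] args) {
        byte b = (byte) 0xE5; // high bit set, so it sign-extends to a negative int

        int shifted = b >> 2;          // sign-extended: 0xFFFFFFF9 == -7
        int masked  = (b & 0xFC) >> 2; // masked first:  0x39 == 57

        System.out.println(shifted); // -7, would make codes.charAt(b) throw
        System.out.println(masked);  // 57, a valid base64 table index
    }
}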

How to (cheaply) calculate all possible length-r combinations of n possible elements

What is the fastest way to calculate all possible length-r combinations of n possible elements without resorting to brute force techniques or anything that requires STL?
While working on an Apriori algorithm for my final project in my data structures class, I developed an interesting solution that uses bit-shifting and recursion, which I will share in an answer below for anyone who is interested. However, is this the fastest way of achieving this (without using any common libraries)?
I ask more out of curiosity than anything else, as the algorithm I currently have works just fine for my purposes.
Here is the algorithm that I developed to solve this problem. It currently just outputs each combination as a series of ones and zeros, but it can easily be adapted to create data sets based on an array of possible elements.
void r_nCr(const unsigned int &startNum, const unsigned int &bitVal, const unsigned int &testNum) // Should be called with arguments (2^r)-1, 2^(r-1), 2^(n-1)
{
    unsigned int n = (startNum - bitVal) << 1;
    n += bitVal ? 1 : 0;
    for (unsigned int i = log2(testNum) + 1; i > 0; i--) // Prints combination as a series of 1s and 0s
        cout << (n >> (i - 1) & 1);
    cout << endl;
    if (!(n & testNum) && n != startNum)
        r_nCr(n, bitVal, testNum);
    if (bitVal && bitVal < testNum)
        r_nCr(startNum, bitVal >> 1, testNum);
}
How it works:
This function treats each combination of elements as a sequence of ones and zeros, which can then be expressed with respect to a set of possible elements (but is not in this particular example).
For example, the results of 3C2 (all combinations of length-2 from a set of 3 possible elements) can be expressed as 011, 110, and 101. If the set of all possible elements is {A, B, C}, then the results can be expressed with respect to this set as {B, C}, {A, B}, and {A, C}.
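As a quick illustration of that mapping (sketched in Java, the language of the surrounding questions, with names of my own choosing), decoding a combination mask against an element array is a short loop:
// Prints the subset of elements selected by a combination bitmask. The
// leftmost bit (value 2^(n-1)) selects elements[0], matching the
// convention above where 011 over {A, B, C} means {B, C}.
static void printCombination(int mask, String[] elements) {
    int n = elements.length;
    StringBuilder out = new StringBuilder("{");
    for (int i = 0; i < n; i++) {
        if ((mask >> (n - 1 - i) & 1) == 1) {
            if (out.length() > 1) out.append(", ");
            out.append(elements[i]);
        }
    }
    System.out.println(out.append("}"));
}
For example, printCombination(0b011, new String[] {"A", "B", "C"}) prints {B, C}.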
For this explanation, I will be calculating 5C3 (all length-3 combinations composed of 5 possible elements).
This function accepts 3 arguments, all of which are unsigned integers:
The first parameter is the smallest possible integer whose binary representation has a number of 1s equal to the length of the combinations we're creating. This is our starting value for generating combinations. For 5C3, this would be 00111b, or 7 in decimal.
The second parameter is the value of the highest bit that is set to 1 in the starting number. This is the first bit that will be subtracted when creating the combinations. For 5C3, this is the third bit from the right, which has a value of 4.
The third parameter is the value of the nth bit from the right, where n is the number of possible elements that we are combining. This number will be bitwise-anded with the combinations we create to check whether the left-most bit of the combination is a 1 or a 0. For 5C3, we will use the 5th bit from the right, which is 10000b, or 16 in decimal.
Here are the actual steps that the function performs:
Calculate startNum - bitVal, bit-shift one space to the left, and add 1 if bitVal is not 0.
For the first iteration, the result should be the same as startNum. This is so that we can print out the first combination (which is equal to startNum) within the function so we don't have to do it manually ahead of time. The math for this operation occurs as follows:
00111 - 00100 = 00011
00011 << 1 = 00110
00110 + 1 = 00111
The result of the previous calculation is a new combination. Do something with this data.
We are going to be printing the result to the console. This is done using a for-loop whose variable starts out equal to the number of bits we are working with (calculated by taking log2 of the testNum and adding 1; log2(16) + 1 = 4 + 1 = 5) and ends at 0. Each iteration, we bit-shift right by i-1 and print the right-most bit by and-ing the result with 1. Here is the math:
i=5:
00111 >> 4 = 00000
00000 & 00001 = 0
i=4:
00111 >> 3 = 00000
00000 & 00001 = 0
i=3:
00111 >> 2 = 00001
00001 & 00001 = 1
i=2:
00111 >> 1 = 00011
00011 & 00001 = 1
i=1:
00111 >> 0 = 00111
00111 & 00001 = 1
output: 00111
If the left-most bit of n (the result of the calculation in step 1) is 0 and n is not equal to startNum, we recurse with n as the new startNum.
Obviously this will be skipped on the first iteration, as we have already shown that n is equal to startNum. This becomes important in subsequent iterations, which we will see later.
If bitVal is greater than 0 and less than testNum, recurse with the current iteration's original startNum as the first argument. Second argument is bitVal shifted right by 1 (same thing as integer division by 2).
We now recurse with the new bitVal set to the value of the next bit to the right of the current bitVal. This next bit is what will be subtracted in the next iteration.
Continue to recurse until bitVal becomes equal to zero.
Because bitVal is bit-shifted right by one in the second recursive call, we will eventually reach a point when bitVal equals 0. This algorithm expands as a tree, and when bitVal equals zero and the left-most bit is 1, we return to one layer up from our current position. Eventually, this cascades all the way back to the root.
In this example, the tree has 3 subtrees and 6 leaf nodes. I will now step through the first subtree, which consists of 1 root node and 3 leaf nodes.
We will start at the last lines of the first iteration, which are
if (bitVal && bitVal < testNum)
    r_nCr(startNum, bitVal >> 1, testNum);
So we now enter the second iteration with startNum=00111(7), bitVal = 00010(2), and testNum = 10000(16) (this number never changes).
Second Iteration
Step 1:
n = 00111 - 00010 = 00101 // Subtract bitVal
n = 00101 << 1 = 01010 // Shift left
n = 01010 + 1 = 01011 // bitVal is not 0, so add 1
Step 2: Print result.
Step 3: The left-most bit is 0 and n is not equal to startNum, so we recurse with n as the new startNum. We now enter the third iteration with startNum=01011(11), bitVal = 00010(2), and testNum = 10000(16).
Third Iteration
Step 1:
n = 01011 - 00010 = 01001 // Subtract bitVal
n = 01001 << 1 = 10010 // Shift left
n = 10010 + 1 = 10011 // bitVal is not 0, so add 1
Step 2: Print result.
Step 3: The left-most bit is 1, so do not recurse.
Step 4: bitVal is not 0, so recurse with bitVal shifted right by 1. We now enter the fourth iteration with startNum=01011(11), bitVal = 00001(1), and testNum = 10000(16).
Fourth Iteration
Step 1:
n = 01011 - 00001 = 01010 // Subtract bitVal
n = 01010 << 1 = 10100 // Shift left
n = 10100 + 1 = 10101 // bitVal is not 0, so add 1
Step 2: Print result.
Step 3: The left-most bit is 1, so do not recurse.
Step 4: bitVal is not 0, so recurse with bitVal shifted right by 1. We now enter the fifth iteration with startNum=01011(11), bitVal = 00000(0), and testNum = 10000(16).
Fifth Iteration
Step 1:
n = 01011 - 00000 = 01011 // Subtract bitVal
n = 01011 << 1 = 10110 // Shift left
n = 10110 + 0 = 10110 // bitVal is 0, so add 0
// Because bitVal = 0, nothing is subtracted or added; this step becomes just a straight bit-shift left by 1.
Step 2: Print result.
Step 3: The left-most bit is 1, so do not recurse.
Step 4: bitVal is 0, so do not recurse.
Return to Second Iteration
Step 4: bitVal is not 0, so recurse with bitVal shifted right by 1.
This will continue on until bitVal = 0 for the first level of the tree and we return to the first iteration, at which point we will return from the function entirely.
[Diagram: the function's tree-like expansion (image not reproduced here)]
[Diagram: the function's thread of execution (image not reproduced here)]
Here is an alternate version using bitwise-or in place of addition and bitwise-xor in place of subtraction:
void r_nCr(const unsigned int &startNum, const unsigned int &bitVal, const unsigned int &testNum) // Should be called with arguments (2^r)-1, 2^(r-1), 2^(n-1)
{
    unsigned int n = (startNum ^ bitVal) << 1;
    n |= (bitVal != 0);
    for (unsigned int i = log2(testNum) + 1; i > 0; i--) // Prints combination as a series of 1s and 0s
        cout << (n >> (i - 1) & 1);
    cout << endl;
    if (!(n & testNum) && n != startNum)
        r_nCr(n, bitVal, testNum);
    if (bitVal && bitVal < testNum)
        r_nCr(startNum, bitVal >> 1, testNum);
}
What about this?
#include <stdio.h>

#define SETSIZE 3
#define NELEMS 7

#define BYTETOBINARYPATTERN "%d%d%d%d%d%d%d%d"
#define BYTETOBINARY(byte) \
    (byte & 0x80 ? 1 : 0), \
    (byte & 0x40 ? 1 : 0), \
    (byte & 0x20 ? 1 : 0), \
    (byte & 0x10 ? 1 : 0), \
    (byte & 0x08 ? 1 : 0), \
    (byte & 0x04 ? 1 : 0), \
    (byte & 0x02 ? 1 : 0), \
    (byte & 0x01 ? 1 : 0)

int main()
{
    unsigned long long x = (1 << SETSIZE) - 1;  /* smallest mask with SETSIZE bits set */
    unsigned long long N = (1 << NELEMS) - 1;
    while (x < N)
    {
        printf("x: "BYTETOBINARYPATTERN"\n", BYTETOBINARY(x));
        unsigned long long a = x & -x;          /* lowest set bit of x */
        unsigned long long y = x + a;           /* carry flips the lowest run of ones */
        x = ((y & -y) / a >> 1) + y - 1;        /* next mask with the same popcount */
    }
}
It should print 7C3.
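That loop is the classic next-bit-permutation trick (often attributed to Gosper). For readers following along from the Java questions above, here is a rough Java translation of the same step; the class and variable names are mine:
public class Combinations {
    // Prints all masks with setSize bits set out of nElems positions,
    // i.e. 7C3 for setSize = 3, nElems = 7.
    public static void main(String[] args) {
        final int setSize = 3, nElems = 7;
        long x = (1L << setSize) - 1;        // smallest mask with setSize bits set
        final long limit = 1L << nElems;
        while (x < limit) {
            System.out.println(Long.toBinaryString(x));
            long a = x & -x;                 // lowest set bit of x
            long y = x + a;                  // carry flips the lowest run of ones
            x = ((y & -y) / a >> 1) + y - 1; // next mask with the same bit count
        }
    }
}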

RGB 24bit to RGB 8bit bit shifting

How do you store 3 numbers in a single byte using bit shifting in Java, i.e. use the first 3 bits for R, the next 3 bits for G, and the last 2 bits for B? I think I know how to retrieve the numbers from the bytes; however, an example with encoding and decoding would be great.
Thanks Jake
EDIT:
The range of the values would be 0-7 for R and G, and 0-3 for B.
Given r, g and b are in the range 0 - 255:
rgb = (b >>> 6) << 6 | (g >>> 5) << 3 | (r >>> 5);
This is filling out the result in this order:
+--+--+--+--+--+--+--+--+
|B7|B6|G7|G6|G5|R7|R6|R5|
+--+--+--+--+--+--+--+--+
i.e. I've assumed that when you've said "first" you meant least significant. If you want them the other way around it would be:
rgb = (b >>> 6) | (g >>> 5) << 2 | (r >>> 5) << 5;
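Since the question also asked for decoding, here is a sketch of the reverse step for the first layout (the variable names and the bit-replication scaling are my own choices): extract each field, then replicate its bits downward to stretch the value back over 0 - 255.
// Decodes the packed |B7 B6|G7 G6 G5|R7 R6 R5| byte back into rough
// 0-255 channel values. Only the top bits survived the encoding, so the
// missing low bits are approximated by replicating the stored bits.
int r3 = rgb & 0x07;         // 3-bit red   (0-7)
int g3 = (rgb >> 3) & 0x07;  // 3-bit green (0-7)
int b2 = (rgb >> 6) & 0x03;  // 2-bit blue  (0-3)

int r = (r3 << 5) | (r3 << 2) | (r3 >> 1);      // e.g. 7 -> 255
int g = (g3 << 5) | (g3 << 2) | (g3 >> 1);      // e.g. 7 -> 255
int b = (b2 << 6) | (b2 << 4) | (b2 << 2) | b2; // e.g. 3 -> 255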
