I have been working in C++ and Java, and in both languages I have often come across a strange way of assigning variables using bitwise operators. Instead of a simple assignment with the assignment operator, the value is built with bit operators like left shift.
For example, in Java's SelectionKey class we see the following assignments:
public static final int OP_READ = 1 << 0;
public static final int OP_WRITE = 1 << 2;
public static final int OP_CONNECT = 1 << 3;
public static final int OP_ACCEPT = 1 << 4;
I am trying to understand what we gain by using the << operator. We could have assigned the values 1, 4, 8, and 16 directly, as below:
public static final int OP_READ = 1;
public static final int OP_WRITE = 4;
public static final int OP_CONNECT = 8;
public static final int OP_ACCEPT = 16;
What is the added value of using the << operator here?
This is for clarity/readability (when it matters).
At the bytecode level, OP_ACCEPT = 16 and OP_ACCEPT = 1 << 4 are the same thing (you can check via javap -constants <YourClass>).
It's just easier to see exactly how many times the value has been shifted. Usually this matters when you do operations that are bound to powers of two.
One example would be HashMap (and I assume the other HashXXX structures), where, at least in Java, the number of buckets is always the next power of two. That invariant simplifies processing, which is why the number of buckets is declared as:
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
At the same time, where, power of two, does not matter, variables are not declared like this:
static final int TREEIFY_THRESHOLD = 8;
static final int MIN_TREEIFY_CAPACITY = 64;
Think of how a bucket is chosen in HashMap, for example: via (n - 1) & hash, where n is the number of buckets (always a power of two). A default capacity of 16 (or better, 1 << 4) means that the last 4 bits of n are zero, and subtracting 1 makes them all ones. So, in a way, 1 << 4 for HashMap means that only the last 4 bits of the hash are taken into consideration (until the next re-hash). Now think of 1 << 28, for example... without the shift, that would take much longer to reason about.
At least for me, in such cases, an or or an and on such variables is much quicker to reason about.
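A minimal sketch of the bucket-selection arithmetic described above (the hash value here is an arbitrary example):

public class BucketDemo {
    public static void main(String[] args) {
        int n = 1 << 4;              // capacity 16, binary 10000
        int hash = 0x9E3779B9;       // an arbitrary example hash value
        int bucket = (n - 1) & hash; // n - 1 is binary 01111: only the low 4 bits survive
        System.out.println(bucket);  // always lands in [0, 15]
    }
}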
They're the most efficient way of representing something whose state is defined by several "yes or no" properties. ACLs are a good example; if you have, let's say, 4 discrete permissions (read, write, execute, change policy), it's better to store these in 1 byte rather than waste 4.
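A minimal sketch of such permission flags in Java (the constant names are made up for illustration):

public class AclDemo {
    static final int READ          = 1 << 0;
    static final int WRITE         = 1 << 1;
    static final int EXECUTE       = 1 << 2;
    static final int CHANGE_POLICY = 1 << 3;

    public static void main(String[] args) {
        int permissions = READ | WRITE;                   // grant two permissions in one int
        System.out.println((permissions & READ) != 0);    // true: READ is granted
        System.out.println((permissions & EXECUTE) != 0); // false: EXECUTE is not
    }
}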
When you left shift by 1 bit, the number gets multiplied by 2.
For example, take the binary representation of 5, which is 0101.
After left shifting once it becomes 1010, which is equivalent to 10.
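In code:

public class ShiftDemo {
    public static void main(String[] args) {
        int x = 5;                  // binary 0101
        System.out.println(x << 1); // 10 (binary 1010), i.e. 5 * 2
        System.out.println(x << 3); // 40, i.e. 5 * 2^3
    }
}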
I would like to drastically improve the time performance of an operation I would best describe as a bitwise operation.
The following is a constructor for a BitFile class, taking three BitFile objects as parameters. Wherever the first and second parameters (firstContender and secondContender) agree on a bit, that bit is taken from firstContender into the BitFile being constructed. Wherever they disagree, the bit is taken from supportContender.
data is the class-field storing the result and the backbone of the BitFile class.
compare(byte,byte) returns true if both bytes are identical in value.
add(byte,int) takes a byte representing a bit and the index within the byte to extract; a second class field, "index", is used and incremented inside add(byte,int) to put the next bit in place.
BitFile.get(int) returns a byte in which just the requested bit may be one: BitFile.get(9) would return a byte with value 2 if the second bit of the second byte is one, otherwise 0.
A bitwise XOR can quickly tell me which bits differ between the two BitFiles. Is there any quick way to use the result of the XOR, where each of its zeros is replaced by firstContender's corresponding bit and each of its ones by supportContender's corresponding bit, something like a three-operand bitwise operator?
public BitFile(BitFile firstContender, BitFile secondContender, BitFile supportContender)
{
    if (firstContender.getLength() != secondContender.getLength())
    {
        throw new IllegalArgumentException(
            "Error.\n" +
            "In BitFile constructor.\n" +
            "Two BitFiles must have identical lengths.");
    }
    BitFile randomSet = supportContender;
    int length = firstContender.getLength();
    data = new byte[length];
    for (int i = 0; i < length * 8; i++)
    {
        if (compare(firstContender.get(i), secondContender.get(i)))
        {
            add(firstContender.get(i), i % 8);
        }
        else
        {
            add(randomSet.get(i), i % 8);
        }
    }
}
I found this question fairly confusing, but I think what you're computing is like this:
merge(first, second, support) = if first == second then first else support
So just choose where the bit comes from depending on whether the first and second sources agree or not.
something like a three-operand bitwise operator?
Indeed, something like that. But of course we need to implement it manually in terms of operations supported by Java. There are two common patterns in bitwise arithmetic for choosing between two sources based on a third:
1) (a & ~m) | (b & m)
2) a ^ ((a ^ b) & m)
These choose, for each bit, the bit from a where m is zero and the bit from b where m is one. Pattern 1 is easier to understand, so I'll use it, but it's simple to adapt the code to the second pattern.
As you predicted, the mask in this case will be first ^ second, so:
for (int i = 0; i < data.length; i++) {
    int m = first.data[i] ^ second.data[i]; // 1 where first and second disagree
    data[i] = (byte)((first.data[i] & ~m) | (support.data[i] & m));
}
The same thing could easily be done with an array of int or long which would need fewer operations to process the same amount of data.
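A quick self-contained check (with arbitrary sample values) that the two patterns compute the same merge:

public class MergeDemo {
    public static void main(String[] args) {
        int first = 0b11001010, second = 0b10100110, support = 0b01010011;
        int m = first ^ second;                        // 1 where first and second disagree
        int merged1 = (first & ~m) | (support & m);    // pattern 1
        int merged2 = first ^ ((first ^ support) & m); // pattern 2
        System.out.println(Integer.toBinaryString(merged1));
        System.out.println(merged1 == merged2);        // true
    }
}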
I was going through the source code of HashMap and saw something like:
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
I want to know why they are using the shift operator. Does this speed up the calculation or something? So I looked at the bytecode differences between these three initializations:
int DEFAULT_INITIAL_CAPACITY = 0x10;
    L0
    LINENUMBER 52 L0
    BIPUSH 16
    ISTORE 1
int DEFAULT_INITIAL_CAPACITY1 = 1 << 4;
    L1
    LINENUMBER 54 L1
    BIPUSH 16
    ISTORE 2
int test = 16;
    L2
    LINENUMBER 56 L2
    BIPUSH 16
    ISTORE 3
Does it matter how the value is initialized?
Believe it or not, it's actually about readability. The expression 1 << 4 surely doesn't evaluate faster than the expression 16. Plus, whatever the expression is, it is evaluated at compile time.
The point of using the shift representation is that it is a more natural way to express round binary numbers. The invariant for the initial capacity, as for many other things in hashtable implementations, is that it must be a pure power of two. This is communicated more directly by the expression 1 << n (equivalent to 2^n) than by the decimal representation, especially as you go to higher values of n (for example, anything above 16).
As you figured out yourself, the bytecode is identical for the constants 16 and 1 << 4. In this particular case I suppose it is just a matter of readability: to emphasise that the initial capacity must be a power of 2 (by shifting 1 to the left you can only get powers of 2). This is what I have in the sources for HashMap:
/**
* The default initial capacity - MUST be a power of two.
*/
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
From a speed perspective, it is probably not an advantage. However, a HashMap's capacity is always a power of 2 (even specifying the capacity in the constructor results in a call to Collections.roundUpToPowerOfTwo(capacity) in some implementations), so notating it in the form 1 << x keeps this restriction trivially satisfied even if you were to change x. The other forms would be easier to mess up when changing the value if you weren't aware of the restriction.
Below is a timed test of the three initialization methods described in your question.
public static void main(String[] args) {
long time = System.currentTimeMillis();
int test = 0;
for(int i = 0; i < 100000; i++){
test = 16;
}
System.out.println((System.currentTimeMillis() - time) + " : " + test);
time = System.currentTimeMillis();
int test2 = 0;
for(int i = 0; i < 100000; i++){
test2 = 1 << 4;
}
System.out.println((System.currentTimeMillis() - time) + " : " + test2);
time = System.currentTimeMillis();
int test3 = 0;
for(int i = 0; i < 100000; i++){
test3 = 0x10;
}
System.out.println((System.currentTimeMillis() - time) + " : " + test3);
}
Running this yields:
2 : 16
2 : 16
2 : 16
Each execution varies by about ±5 ms. This indicates it's pretty much irrelevant how the value is initialized.
Conclusion:
It makes no programmatic difference which method is used to initialize the value.
It seems that the only reason to use 1 << 4 over 16 or 0x10 is to communicate that the initial value must be a power of 2.
Basically, shift operators work directly on bits and can be faster than arithmetic operators like + and -.
Example:
To multiply two numbers, the CPU internally breaks the multiplication down into additions and subtractions, which in turn come down to shifting and performing AND, OR, NOT, etc. operations on bits.
So if you work directly on bits, you are doing the work the CPU would otherwise only reach after a lot of processing.
Also see:
https://docs.oracle.com/javase/tutorial/java/nutsandbolts/op3.html
So I have a question about an algorithm I'm supposed to "invent"/"find". It's an algorithm which calculates 2^n - 1 in Θ(n^n), Θ(1), and Θ(n) time.
I was thinking for several hours but couldn't find a solution for the first two tasks (the last one was the easiest imo; I posted the algorithm below). I'm just not skilled enough to "invent"/"find" both a very slow and a very fast algorithm.
So far my algorithms are (in pseudocode):
The one for Θ(n):
int f(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    int number = 2;
    while (n > 1) {
        number = number * 2;
        n--;
    }
    return number - 1;
}
A simple and kinda obvious one that uses recursion, though I don't know how fast it is (it would be nice if someone could tell me):
int f(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return 3 * f(n - 1) - 2 * f(n - 2);
}
Assuming n is not bounded by any constant (and the output is not a simple int but a data type that can contain large integers), there is no algorithm that yields 2^n - 1 in Θ(1): the output itself is Θ(n) bits long, since 2^n - 1 is n one-bits in binary. So if we assume there is such an algorithm running in constant time with fewer than C operations, then for n = C + 1 you would already need C + 1 operations just to write the output, which contradicts the assumption that C is an upper bound. So there is no such algorithm.
For Θ(n^n): if you have a more efficient algorithm (Θ(n), for example), you can add a pointless loop that runs an extra n^n iterations and does nothing important; that will make your algorithm Θ(n^n).
There is also a Θ(log(n) * M(n)) algorithm using exponentiation by squaring and then simply subtracting 1 from the result, where M(x) is the complexity of your multiplication for numbers containing x digits.
As commented by @kajacx, you can further improve the multiplications in that approach by applying Fourier-transform-based multiplication.
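For illustration, a sketch of exponentiation by squaring in Java using BigInteger (my sketch, not from the original answer; for base 2 specifically, a plain shift is simpler, as the next answer shows):

import java.math.BigInteger;

public class PowDemo {
    // base^n with Θ(log n) multiplications
    static BigInteger pow(BigInteger base, int n) {
        BigInteger result = BigInteger.ONE;
        while (n > 0) {
            if ((n & 1) == 1)           // if the lowest bit of the exponent is set
                result = result.multiply(base);
            base = base.multiply(base); // square for the next bit
            n >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        // 2^10 - 1 = 1023
        System.out.println(pow(BigInteger.valueOf(2), 10).subtract(BigInteger.ONE));
    }
}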
Something like:
HugeInt h = 1;
h = h << n;
h = h - 1;
Obviously HugeInt is pseudo-code for an integer type that can be of arbitrary size allowing for any n.
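In real Java, BigInteger can express this directly; a minimal version of the pseudocode above:

import java.math.BigInteger;

public class Pow2Minus1 {
    static BigInteger f(int n) {
        return BigInteger.ONE.shiftLeft(n).subtract(BigInteger.ONE); // (1 << n) - 1
    }

    public static void main(String[] args) {
        System.out.println(f(100)); // 1267650600228229401496703205375
    }
}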
=====
Look at amit's answer instead!
The Θ(n^n) one is too tricky for me, but a real Θ(1) algorithm on any "binary" architecture would be:
return n bits filled with 1
(assuming your architecture can allocate and fill n bits in constant time)
;)
I recently started learning Java, and I am now covering the bitwise operators. While studying, I was wondering when these bitwise operators are used, and I would like some examples if possible. Thank you!
A good example is bitwise XOR to swap two numbers (again, very popular in interviews): swapping values quickly without any third variable:
int a = 2; // a = 0010
int b = 11; // b = 1011
a = a ^ b; // a = 0010 ^ 1011 = 1001
b = a ^ b; // b = 1001 ^ 1011 = 0010 (as a at the beginning)
a = a ^ b; // a = 1001 ^ 0010 = 1011 (as b at the beginning)
You can find an article about this on Wikipedia (the XOR swap algorithm).
There are several places, though they aren't things you will use often. You'll just end up using them when you need them.
A good example is checking if a number is even:
if ((num & 1) == 0) { /* num is even */ }
They are also useful in flags, such as having this:
private static final int ENABLE_FOO = 0x0001;
private static final int ENABLE_BAR = 0x0002;
static int mask = (ENABLE_FOO | ENABLE_BAR);
public static void example() {
    if ((mask & ENABLE_FOO) != 0) { // if flag set
        do_foo();
    }
    if ((mask & ENABLE_BAR) != 0) { // if flag set
        do_bar();
    }
}
public static void doFooOnce() {
    if ((mask & ENABLE_FOO) != 0) { // if flag set
        do_foo();
    }
    mask &= ~ENABLE_FOO; // clear the flag: AND the mask with the bitwise complement of ENABLE_FOO
}
There's other places, too. Just know that you won't use them too often, but when you do they will be useful.
Bitwise operators are used for bit manipulation, i.e. in cases when you want to go down to the "gory details" of data structures that, at the end of the day, are sequences of bytes.
There are a lot of tutorials that explain various uses of bitwise operators; however, I will give you only one that (IMHO) is the most useful, at least for me.
Sometimes you want to handle a lot of boolean flags. You can create a Map<String, Boolean> and (for example) pass an instance of such a map to some method (let's call it foo()):
Map<String, Boolean> options = new HashMap<>();
// fill the map
foo(options);
Obviously we can use enum and EnumMap instead of string keys.
Alternatively we can define a series of constants like:
public static final int ONE = 1;
public static final int TWO = 2;
public static final int THREE = 4;
public static final int FOUR = 8;
etc. etc.
Now we can change foo() to take an int parameter and call it as follows:
foo(ONE | TWO);
foo(ONE | FOUR);
etc.
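Inside foo(), each flag would then be tested with a mask; a sketch (the method body here is hypothetical):

static void foo(int flags) {
    if ((flags & ONE) != 0) {
        // handle option ONE
    }
    if ((flags & TWO) != 0) {
        // handle option TWO
    }
    // ... likewise for THREE and FOUR
}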
In some cases this notation is more readable; in most cases it saves memory and gives some performance benefits.
Please note that the JDK's EnumSet is implemented using this very technique (its regular implementation stores the elements as bits of a single long), so in most cases you can just use it and enjoy both efficiency and OOD.
In C++, why does a bool require one byte to store true or false where just one bit is enough for that, like 0 for false and 1 for true? (Why does Java also require one byte?)
Secondly, how much safer is it to use the following?
struct Bool {
bool trueOrFalse : 1;
};
Thirdly, even if it is safe, is the above bit-field technique really going to help? I have heard that we save space there, but the compiler-generated code to access the field is bigger and slower than the code generated to access primitives.
Why does a bool require one byte to store true or false where just one bit is enough
Because every object in C++ must be individually addressable* (that is, you must be able to have a pointer to it). You cannot address an individual bit (at least not on conventional hardware).
How much safer is it to use the following?
It's "safe", but it doesn't achieve much.
is the above field technique really going to help?
No, for the same reasons as above ;)
but still compiler generated code to access them is bigger and slower than the code generated to access the primitives.
Yes, this is true. On most platforms, this requires accessing the containing byte (or int or whatever), and then performing bit-shifts and bit-mask operations to access the relevant bit.
If you're really concerned about memory usage, you can use a std::bitset in C++ or a BitSet in Java, which pack bits.
* With a few exceptions.
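For example, a minimal java.util.BitSet sketch:

import java.util.BitSet;

public class BitSetDemo {
    public static void main(String[] args) {
        BitSet flags = new BitSet(64);    // internally packs bits into long[] words
        flags.set(3);                     // turn bit 3 on
        System.out.println(flags.get(3)); // true
        System.out.println(flags.get(4)); // false
        flags.clear(3);                   // turn bit 3 off again
    }
}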
Using a single bit is much slower and much more complicated to allocate. In C/C++ there is no way to get the address of one bit so you wouldn't be able to do &trueOrFalse as a bit.
Java has BitSet and EnumSet, which both use bitmaps. If you have a very small number of flags it may not make much difference; e.g. objects have to be at least byte-aligned, and in HotSpot they are 8-byte aligned (in C++, a new object can be 8- to 16-byte aligned). This means saving a few bits might not save any space.
In Java at least, bits are not faster unless they fit in cache better.
public static void main(String... ignored) {
BitSet bits = new BitSet(4000);
byte[] bytes = new byte[4000];
short[] shorts = new short[4000];
int[] ints = new int[4000];
for (int i = 0; i < 100; i++) {
long bitTime = timeFlip(bits) + timeFlip(bits);
long bytesTime = timeFlip(bytes) + timeFlip(bytes);
long shortsTime = timeFlip(shorts) + timeFlip(shorts);
long intsTime = timeFlip(ints) + timeFlip(ints);
System.out.printf("Flip time bits %.1f ns, bytes %.1f, shorts %.1f, ints %.1f%n",
bitTime / 2.0 / bits.size(), bytesTime / 2.0 / bytes.length,
shortsTime / 2.0 / shorts.length, intsTime / 2.0 / ints.length);
}
}
private static long timeFlip(BitSet bits) {
long start = System.nanoTime();
for (int i = 0, len = bits.size(); i < len; i++)
bits.flip(i);
return System.nanoTime() - start;
}
private static long timeFlip(short[] shorts) {
long start = System.nanoTime();
for (int i = 0, len = shorts.length; i < len; i++)
shorts[i] ^= 1;
return System.nanoTime() - start;
}
private static long timeFlip(byte[] bytes) {
long start = System.nanoTime();
for (int i = 0, len = bytes.length; i < len; i++)
bytes[i] ^= 1;
return System.nanoTime() - start;
}
private static long timeFlip(int[] ints) {
long start = System.nanoTime();
for (int i = 0, len = ints.length; i < len; i++)
ints[i] ^= 1;
return System.nanoTime() - start;
}
prints
Flip time bits 5.0 ns, bytes 0.6, shorts 0.6, ints 0.6
for sizes of 40000 and 400K
Flip time bits 6.2 ns, bytes 0.7, shorts 0.8, ints 1.1
for 4M
Flip time bits 4.1 ns, bytes 0.5, shorts 1.0, ints 2.3
and 40M
Flip time bits 6.2 ns, bytes 0.7, shorts 1.1, ints 2.4
If you want to store only one bit of information, there is nothing more compact than a char, which is the smallest addressable memory unit in C/C++. (Depending on the implementation, a bool might have the same size as a char but it is allowed to be bigger.)
A char is guaranteed by the C standard to hold at least 8 bits; however, it can also consist of more. The exact number is available via the CHAR_BIT macro defined in limits.h (in C) or climits (C++). Today it is most common that CHAR_BIT == 8, but you cannot rely on it. It is guaranteed to be 8, however, on POSIX-compliant systems and on Windows.
Though it is not possible to reduce the memory footprint for a single flag, it is of course possible to combine multiple flags. Besides doing all bit operations manually, there are some alternatives:
If you know the number of bits at compile time
bitfields (as in your question). But beware, the ordering of fields is not guaranteed, which may result in portability issues.
std::bitset
If you know the size only at runtime
boost::dynamic_bitset
If you have to deal with large bitvectors, take a look at the BitMagic library. It supports compression and is heavily tuned.
As others have pointed out already, saving a few bits is not always a good idea. Possible drawbacks are:
Less readable code
Reduced execution speed because of the extra extraction code.
For the same reason, increases in code size, which may outweigh the savings in data consumption.
Hidden synchronization issues in multithreaded programs. For example, flipping two different bits by two different threads may result in a race condition. In contrast, it is always safe for two threads to modify two different objects of primitive types (e.g., char).
Typically, it makes sense when you are dealing with huge data because then you will benefit from less pressure on memory and cache.
Why don't you just store the state in a byte? I haven't actually tested the code below, but it should give you an idea. You can even use a short or an int for 16 or 32 states. I believe I have a working Java example as well; I'll post it when I find it.
__int8 state = 0x0; // MSVC-specific 8-bit type; int8_t from <cstdint> is the portable equivalent

bool getState(int bit)
{
    return (state & (1 << bit)) != 0x0; // test a single bit
}

void setAllOnline(bool online)
{
    state = -online; // -true == -1 (all bits set), -false == 0
}

void reverseState(int bit)
{
    state ^= (1 << bit); // toggle a single bit
}
Alright, here's the Java version. I've stored the state in an int, since, if I remember correctly, even using a byte would take up 4 bytes anyway. And this one obviously isn't used as an array.
public class State
{
private int STATE;
public State() {
STATE = 0x0;
}
public State(int previous) {
STATE = previous;
}
/*
* #Usage - Used along side the #setMultiple(int, boolean);
* #Returns the value of a single bit.
*/
public static int valueOf(int bit)
{
return 1 << bit;
}
/*
* #Usage - Used along side the #setMultiple(int, boolean);
* #Returns the value of an array of bits.
*/
public static int valueOf(int... bits)
{
int value = 0x0;
for (int bit : bits)
value |= (1 << bit);
return value;
}
/*
* #Returns the value currently stored or the values of all 32 bits.
*/
public int getValue()
{
return STATE;
}
/*
* #Usage - Turns all bits online or offline.
* #Return - <TRUE> if all states are online. Otherwise <FALSE>.
*/
public boolean setAll(boolean online)
{
STATE = online ? -1 : 0;
return online;
}
/*
* #Usage - sets multiple bits at once to a specific state.
* #Warning - DO NOT SET BITS TO THIS! Use setMultiple(State.valueOf(#), boolean);
* #Return - <TRUE> if states were set to online. Otherwise <FALSE>.
*/
public boolean setMultiple(int value, boolean online)
{
STATE |= value;
if (!online)
STATE ^= value;
return online;
}
/*
* #Usage - sets a single bit to a specific state.
* #Return - <TRUE> if this bit was set to online. Otherwise <FALSE>.
*/
public boolean set(int bit, boolean online)
{
STATE |= (1 << bit);
if(!online)
STATE ^= (1 << bit);
return online;
}
/*
* #return = the new current state of this bit.
* #Usage = Good for situations that are reversed.
*/
public boolean reverse(int bit)
{
return ((STATE ^= (1 << bit)) & (1 << bit)) != 0; // toggle, then report the new state of this bit
}
/*
* #return = <TRUE> if this bit is online. Otherwise <FALSE>.
*/
public boolean online(int bit)
{
int value = 1 << bit;
return (STATE & value) == value;
}
/*
* #return = a String contains full debug information.
*/
@Override
public String toString()
{
StringBuilder sb = new StringBuilder();
sb.append("TOTAL VALUE: ");
sb.append(STATE);
for (int i = 0; i < 0x20; i++)
{
sb.append("\nState(");
sb.append(i);
sb.append("): ");
sb.append(online(i));
sb.append(", ValueOf: ");
sb.append(State.valueOf(i));
}
return sb.toString();
}
}
Also, I should point out that you really shouldn't use a special class for this; just store the variable within the class that is most likely to use it. If you plan to have hundreds or even thousands of boolean values, consider an array of bytes (or ints, as below).
E.g. the below example.
boolean[] states = new boolean[4096];
can be converted into the below.
int[] states = new int[128];
Now you're probably wondering how you can access index 4095 in a 128-element array. To simplify what this is doing: 4095 is shifted 5 bits to the right, which is technically the same as dividing by 32. So 4095 / 32 = 127 (rounded down), and we are at index 127 of the array. Then we perform 4095 & 31, which clamps it to a value between 0 and 31. This masking trick only works with powers of two minus 1, e.g. 0, 1, 3, 7, 15, 31, 63, 127, 255, 511, 1023, etc.
So now we can access the bit at that position. As you can see this is very compact and beats having 4096 booleans in a file :) It will also give much faster reads/writes to a binary file. I have no idea what this BitSet stuff is, but it looks like overkill to me; since byte, short, int, and long are already in bit form, you might as well use them as-is instead of creating some complex class to access individual bits from memory, which is what I could gather from reading a few posts.
boolean getState(int index)
{
    return (states[index >> 5] & (1 << (index & 0x1F))) != 0x0; // word = index / 32, bit = index % 32
}
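A matching writer, following the same index >> 5 / index & 0x1F scheme (my addition, not from the original post):

void setState(int index, boolean on)
{
    if (on)
        states[index >> 5] |= 1 << (index & 0x1F);    // set the bit
    else
        states[index >> 5] &= ~(1 << (index & 0x1F)); // clear the bit
}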
Further information...
Basically if the above was a bit confusing here's a simplified version of what's happening.
The types "byte", "short", "int", "long" all are data types which have different ranges.
You can view this link: http://msdn.microsoft.com/en-us/library/s3f49ktz(v=vs.80).aspx
To see the data ranges of each.
So a byte is equal to 8 bits. So an int which is 4 bytes will be 32 bits.
Now there isn't an easy way to raise a value to the Nth power. However, thanks to bit shifting we can simulate it somewhat: performing 1 << N equates to 1 * 2^N, and 2 << N would be 2 * 2^N. So to get powers of two, always do 1 << N.
Now we know that an int has 32 bits, so we can use each bit and simply index them.
To keep things simple, think of the "&" operator as a way to check whether a value contains the bits of another value. Say we have the value 31. To get to 31 we must add bits 0 through 4, which are worth 1, 2, 4, 8, and 16; these all add up to 31. Now when we perform 31 & 16, it returns 16, because bit 4 (2^4 = 16) is present in the value. If we perform 31 & 20, which checks whether bits 2 and 4 are present, it returns 20, since both bits are there (2^2 = 4, 2^4 = 16, and 4 + 16 = 20). Now say we do 31 & 48. This checks for bits 4 and 5, but we don't have bit 5 in 31, so it returns only 16, not 0. So when checking multiple bits at once, you must verify that the result actually equals the full value you asked for, instead of just checking that it isn't 0.
The code below verifies whether an individual bit is 0 or 1, with 0 being false and 1 being true.
bool getState(int bit)
{
return (state & (1 << bit)) != 0x0;
}
Think of it like each bit represents 2^BIT; the examples below show how to check whether values contain certain bits.
I'll quickly go over some of the operators. We've just covered the "&" operator; now for the "|" operator.
When performing the following
int value = 31;
value |= 16;
value |= 16;
value |= 16;
value |= 16;
The value will still be 31. This is because bit 4 or 2^4=16 is already turned on or set to 1. So performing "|" returns that value with that bit turned on. If it's already turned on no changes are made. We utilize "|=" to actually set the variable to that returned value.
Instead of writing "value = value | 16;", we just write "value |= 16;".
Now let's look a bit further into how the "&" and "|" can be utilized.
/*
* This contains bits 0,1,2,3,4,8,9 turned on.
*/
const int CHECK = 1 | 2 | 4 | 8 | 16 | 256 | 512;
/*
* This is some value where we add bits 0 through 9, but we skip 0 and 8.
*/
int value = 2 | 4 | 8 | 16 | 32 | 64 | 128 | 512;
So when we perform the below code.
int return_code = value & CHECK;
The return code will be 2 + 4 + 8 + 16 + 512 = 542
So we were checking for 799, but we received 542. This is because bits 0 and 8 are offline: together they equal 256 + 1 = 257, and 799 - 257 = 542.
The above is a great way to check multiple states at once: say we were making a video game and wanted to check whether certain buttons were pressed. We could check all of those bits with one check, which is many times more efficient than performing a boolean check on every single state.
Now let's say we have a boolean value which is constantly being flipped.
Normally you'd do something like
bool state = false;
state = !state;
Well this can be done with bits as well utilizing the "^" operator.
Just as we performed "1 << N" to select the value of a bit, we can do the same for the reversal. And just like we showed how "|=" stores the result, we do the same with "^=". What this does: if the bit is on, we turn it off; if it's off, we turn it on.
void reverseState(int bit)
{
state ^= (1 << bit);
}
You can even have it return the current state. If you want it to return the previous state, just swap "!=" for "==". So what this does is perform the reversal, then check the new state.
bool reverseAndGet(int bit)
{
return ((state ^= (1 << bit)) & (1 << bit)) != 0x0;
}
Storing multiple values that are larger than a single bit (i.e., not just booleans) in an int can also be done. Let's say we normally write out our coordinate position like the below.
int posX = 0;
int posY = 0;
int posZ = 0;
Now let's say these never went past 1023, so 0 through 1023 is the full range for each of them. I chose 1023 for another reason: as previously mentioned, you can use "&" to force a value into the range 0 to 2^N - 1. So if your range is 0 through 1023, you can perform "value & 1023" and always get a value between 0 and 1023, without any range checks. Keep in mind, as previously mentioned, this only works with powers of two minus 1: 2^10 - 1 = 1023.
E.g. no more if (value >= 0 && value <= 1023).
So 2^10 = 1024, which requires 10 bits in order to hold a number between 0 and 1023.
So 10 x 3 = 30, which is still less than or equal to 32 and is sufficient for holding all three values in an int.
So we can perform the following. The shift amounts are 0, 10, and 20. The reason I put the 0 there is to show visually that 2^0 = 1, so # * 1 = #. The reason we need y << 10 is that x uses up 10 bits (0 through 1023), so we need to multiply y by 1024 to keep the values from overlapping. Then z needs to be multiplied by 2^20, which is 1,048,576.
int position = (x << 0) | (y << 10) | (z << 20);
This makes comparisons fast.
We can now do
return this.position == position;
as opposed to
return this.x == x && this.y == y && this.z == z;
Now what if we wanted the actual positions of each?
For the x we simply do the following.
int getX()
{
return position & 1023;
}
Then for the y we need to perform a right bit shift, then AND it.
int getY()
{
return (position >> 10) & 1023;
}
As you may guess the Z is the same as the Y, but instead of 10 we use 20.
int getZ()
{
return (position >> 20) & 1023;
}
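A quick round-trip check of this packing scheme (sample coordinates chosen arbitrarily):

public class PackDemo {
    public static void main(String[] args) {
        int x = 700, y = 42, z = 1023;                   // each must be within 0..1023
        int position = (x << 0) | (y << 10) | (z << 20); // pack all three into one int
        System.out.println(position & 1023);             // 700
        System.out.println((position >> 10) & 1023);     // 42
        System.out.println((position >> 20) & 1023);     // 1023
    }
}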
I hope whoever views this finds it worthwhile information :).
If you really want to use 1 bit, you can use a char to store 8 booleans and bit-shift to get the value of the one you want. I doubt it will be faster, and it's probably going to give you a lot of headaches working that way, but technically it's possible.
On a side note, an attempt like this could prove useful for systems that don't have a lot of memory available for variables but do have more processing power than you need. I highly doubt you will ever need it, though.