So I saw this one question on the interwebs that was along the lines of
Write a function that counts the number of bits in a character
Now obviously this confused me (or I wouldn't be here).
My very first thought was "aren't all chars 16 bits by default?" but obviously that has to be wrong because this question exists. I have no idea where to start. Maybe I can get the hex value of a char? Is there an easy way to convert from hex to binary or something? Is this something that can be asked about ANY language (I'm curious about Java here) or does it only matter to like C or something?
Here's another approach if you want to avoid recursion.
public static int bitsSet(char arg) {
    int counter = 0;
    for (int oneBit = 1; oneBit <= 0x8000; oneBit <<= 1) {
        if ((arg & oneBit) > 0) {
            counter++;
        }
    }
    return counter;
}
Update
Here's a bit of an explanation. In the loop, oneBit bit-shifts to the left each time, which doubles its value. The <<= operation is a kind of shorthand for oneBit = oneBit << 1. So, the first time through, we have oneBit = 0000000000000001. Then the next time, we have oneBit = 0000000000000010, then oneBit = 0000000000000100, and so on, until we reach the last iteration, when we have oneBit = 1000000000000000 (these are all binary of course).
Now, the value of arg & oneBit will equal oneBit if arg has the matching bit set, or 0 otherwise. So the condition is executing counter++ if it encounters a set bit. By the time the loop has run all 16 times, we've counted all the set bits.
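As a quick sanity check (not part of the original answer), the JDK's built-in Integer.bitCount gives the same result; the class name BitsSetDemo here is just for illustration:

```java
public class BitsSetDemo {
    // Same loop-based approach as in the answer above
    public static int bitsSet(char arg) {
        int counter = 0;
        for (int oneBit = 1; oneBit <= 0x8000; oneBit <<= 1) {
            if ((arg & oneBit) > 0) {
                counter++;
            }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(bitsSet('%'));          // prints 3 ('%' is 100101 in binary)
        System.out.println(Integer.bitCount('%')); // prints 3 as well
    }
}
```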
I'm assuming from your title that you're after the number of "set bits" (that is, bits that are equal to one). You can do it like this.
public static int bitsSet(char arg) {
    return arg == 0 ? 0 : (arg & 1) + bitsSet((char)(arg >>> 1));
}
And yes, all chars in Java are 16 bits.
Update
Here's a bit of an explanation. (arg & 1) will check the rightmost bit of arg and return 0 or 1 depending on whether it is clear or set. So we want to take that 0 or 1, and add it to the number of set bits among the leftmost 15 bits. So to work that out, we shift arg to the right, introducing a zero at the left end. We need >>> rather than >> to make sure that we get a zero at the left end. Then we call bitsSet all over again, with the right-shifted value of arg.
But every time we do that, arg gets smaller, so eventually it's going to reach zero. When that happens, no more bits are set, so we can return 0.
To see the recursion working, take, for example, arg = '%' = 100101. Then we have the following - where all numbers shown are binary -
bitsSet(100101)
= (100101 & 1) + bitsSet(10010)
= (100101 & 1) + (10010 & 1) + bitsSet(1001)
= (100101 & 1) + (10010 & 1) + (1001 & 1) + bitsSet(100)
= (100101 & 1) + (10010 & 1) + (1001 & 1) + (100 & 1) + bitsSet(10)
= (100101 & 1) + (10010 & 1) + (1001 & 1) + (100 & 1) + (10 & 1) + bitsSet(1)
= (100101 & 1) + (10010 & 1) + (1001 & 1) + (100 & 1) + (10 & 1) + (1 & 1) + bitsSet(0)
= 1 + 0 + 1 + 0 + 0 + 1 + 0
= 3
All chars are 1 byte in C; wide characters (for example, Windows TCHAR when compiled for Unicode) are 2 bytes.
To count the bits, you do bit shifting. I am not really that good at binary arithmetic personally.
According to the Oracle Java doc for primitives:
char: The char data type is a single 16-bit Unicode character.
Related
In my Object-Oriented Programming course we discussed a topic that I don't think the professor ever named. I've tried to find out its name so I can find a proper way to solve these, but I have had no luck.
This is not homework, but a question for clarification about the process to solve this problem.
for I = (N + 2) downto -1
for J = (I - 1) to (N + 4)
// Code is run here
The question is "How many times is // Code is run here ran?"
Here is what I have tried to solve this:
1) I = (N + 2), J = [(N + 2) - 1] from this (and what I remember) you use b - a - 1 to solve for the number of times executed, which gives us X = [(N + 2) - 1] - (N + 2) - 1 which can be simplified to X = -2
2) I = -1, J = ((-1) - 1) and X = ((-1) - 1) - (-1) - 1 which simplifies to X = -2
I'm getting lost on dealing with the second for loop and how to finish the problem. I know that we have to end up with an answer such as r(r + 1)/2
I just want to say that I have attempted to look for a name of this type of technique, but he simply called it "Code Counting" which didn't return any searches relating to this topic.
Thank you
EDIT: This course was in Java, so that is why I used the Java tag for this question, if anyone is curious.
EDIT2: To clarify, this was on a written exam, so we are expected to do this via pen-and-paper, I would like an explanation of how to solve this question as I have attempted it many times and still end up with the wrong answer.
Just look at the "code" and start counting logically. In the first iteration of the outer loop (called OL) you execute the inner loop (IL) (N + 4) - (N + 2 - 1) + 1 times = 4 times.
Explanation of the +1: if we run the loop from -1 to 2, we in fact run it 4 times: -1, 0, 1, 2, which in math is 2 - (-1) + 1.
The next time I = N + 1, therefore the IL runs (N + 4) - (N + 1 - 1) + 1 times = 5 times. Same goes for the next step and the step after that, the times the IL is executed increase by one each time : 4 + 5 + 6 + .... The question remaining is how far we go.
The last step is I = -1, there IL gets run (N + 4) - (-1 - 1) + 1 = N + 7 times.
The sum you are looking for therefore seems to be 4 + 5 + 6 + ... + (N + 6) + (N + 7), which in fact is something like r(r + 1)/2 with a few subtractions and additions.
The above numbers assume both loop boundaries to be inclusive.
Note: whenever you come up with some kind of formula, choose the input parameter to be something small (like 0 or 1) and verify that the formula works for that value.
Summing the values using the little Gaussian formula r * (r + 1) / 2 we have r -> N + 7, and therefore (N + 7) * (N + 8) / 2. But then we also count 1, 2 and 3, which are not actually in the above listing, so we need to subtract them (1 + 2 + 3 = 6) and come to the final solution of:
(N + 7) * (N + 8) / 2 - 6
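If you want to double-check the closed form, a brute-force count is easy to write; this little sketch (class name mine) compares the two for a small N:

```java
public class CodeCounting {
    public static void main(String[] args) {
        int n = 3; // any small value works as a check
        int count = 0;
        for (int i = n + 2; i >= -1; i--) {        // "downto" includes -1
            for (int j = i - 1; j <= n + 4; j++) { // "to" includes n + 4
                count++;                           // this is where "Code is run here" runs
            }
        }
        System.out.println(count);                     // prints 49 for n = 3
        System.out.println((n + 7) * (n + 8) / 2 - 6); // formula gives the same
    }
}
```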
The algorithm as shown in the question looks like the good old Basic syntax
for X down/to Y, which includes Y
The outer loop goes from n+2 to -1, so the inner loop goes
n+1 to n+4 => 4 iterations
...
-2 to n+4 => n+7 iterations
Summing all of these, we get
n+3
∑ (4+i) = 4(n+4) + (n+3)(n+4) / 2
i=0
= (n+11)(n+4) / 2
which is also equal to (N + 7)(N + 8) / 2 - 6
public class Operator {
    public static void main(String[] args) {
        byte a = 5;
        int b = 10;
        int c = a >> 2 + b >> 2;
        System.out.print(c); // prints 0
    }
}
5 right-shifted by 2 bits is 1, and 10 right-shifted by 2 bits is 2, so adding the values should give 3, right? How come it prints 0? I am not able to understand it even with debugging.
This table provided in the Java docs will help you understand operator precedence in Java:

additive    +  -           (higher precedence)
shift       <<  >>  >>>    (lower precedence)
So your expression will be
a >> 2 + b >> 2;
a >> 12 >> 2; // hence 0
It's all about operator precedence. The addition operator has higher precedence than the shift operators.
Your expression is same as:
int c = a >> (2 + b) >> 2;
Is this what you want?
int c = ((a >> 2) + b) >> 2;
You were shifting to the right by whatever is 2+b. I assume you wanted to shift 5 by 2 positions, right?
b000101 >> 2 == b000001
Each bit shifts two positions to the right; the two rightmost bits have no valid positions left on their right side, so they simply disappear, and the number becomes what's left, in this case 1. If you shift the number 5 by 12 positions you get zero, because 5 has fewer than 12 significant bits in binary form. In the case of 5 you can shift by 2 positions at most if you want to preserve a non-zero value.
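If the intent was to shift each operand first and then add, parentheses make that explicit; a quick sketch of both readings (class name mine):

```java
public class ShiftPrecedence {
    public static void main(String[] args) {
        byte a = 5;
        int b = 10;
        // Without parentheses: parsed as (a >> (2 + b)) >> 2
        System.out.println(a >> 2 + b >> 2);     // prints 0
        // With parentheses: shift first, then add
        System.out.println((a >> 2) + (b >> 2)); // prints 3 (1 + 2)
    }
}
```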
I need to implement my own F5 algorithm. I have read the F5 documentation and the paper can be found here.
In the section 6.2 Matrix Encoding of the paper, equations 16-18 define the change density, D(k), the embedding rate, R(k) and the efficiency rate, W(k), as follows:
D(k) = 1 / (n + 1) = 1 / 2**k
R(k) = k / n = k / (2**k - 1)
W(k) = R(k) / D(k) = k * 2**k / (2**k - 1)
Where n is the number of modifiable places in a code word and k is the number of embedding bits. W(k) indicates the average number of bits we can embed per change.
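Assuming n = 2**k - 1 as in the paper, here is a short sketch of my own (not from the F5 source) that tabulates D(k), R(k) and W(k) along the lines of Table 1:

```java
public class MatrixEncodingTable {
    public static void main(String[] args) {
        for (int k = 1; k <= 9; k++) {
            int n = (1 << k) - 1;                // code word length, n = 2^k - 1
            double density = 1.0 / (n + 1);      // D(k) = 1 / 2^k
            double rate = (double) k / n;        // R(k) = k / (2^k - 1)
            double efficiency = rate / density;  // W(k) = k * 2^k / (2^k - 1)
            System.out.printf("k=%d  n=%3d  D=%.4f  R=%.4f  W=%.4f%n",
                    k, n, density, rate, efficiency);
        }
    }
}
```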
In the source code we find the number of bits as stated below. Can someone please explain why usable and changed are calculated this way? I simply don't understand the logic.
int _changed = 0;
int _expected = 0;
int _one = 0;
int _large = 0;
int _zero = 0;
for (i = 0; i < coeffCount; i++) {
    if (i % 64 == 0) {
        continue;
    }
    if (coeff[i] == 1) {
        _one++;
    }
    if (coeff[i] == -1) {
        _one++;
    }
    if (coeff[i] == 0) {
        _zero++;
    }
}
_large = coeffCount - _zero - _one - coeffCount / 64;
_expected = _large + (int) (0.49 * _one);
for (i = 1; i < 8; i++) {
    int usable, changed, n;
    n = (1 << i) - 1;
    usable = _expected * i / n - _expected * i / n % n;
    changed = coeffCount - _zero - coeffCount / 64;
    changed = changed * i / n - changed * i / n % n;
    changed = n * changed / (n + 1) / i;
    //
    changed = _large - _large % (n + 1);
    changed = (changed + _one + _one / 2 - _one / (n + 1)) / (n + 1);
    usable /= 8;
    if (usable == 0) {
        break;
    }
    if (i == 1) {
        System.out.print("default");
    } else {
        System.out.print("(1, " + n + ", " + i + ")");
    }
    System.out.println(" code: " + usable + " bytes (efficiency: " + usable * 8 / changed
            + "." + usable * 8 / changed % 10 + " bits per change)");
}
coeff is an array that holds the DCT coefficients, coeffCount is the number of DCT coefficients, _large is the theoretical number of bits from the image that can be encoded, and _expected is the expected capacity of the image (with shrinkage). I don't understand the logic behind the usable and changed variables.
The last paragraph of the section 6.2 in the paper says the following and I quote:
We can find an optimal parameter k for every message to embed and every
carrier medium providing sufficient capacity, so that the message just fits into the
carrier medium. For instance, if we want to embed a message with 1000 bits into
a carrier medium with a capacity of 50000 bits, then the necessary embedding
rate is R = 1000 : 50000 = 2 %. This value is between R(k = 8) and R(k = 9) in
Table 1. We choose k = 8, and are able to embed 50000 : 255 = 196 code words
with a length n = 255. The (1, 255, 8) code could embed 196 · 8 = 1568 bits. If
we chose k = 9 instead, we could not embed the message completely.
I believe this should be straightforward. If you can understand this, you can follow the steps below.
One more preliminary thing is the expression result = var - var % n; throughout the code. This means you make var exactly divisible by n by removing the remainder (modulo operation). Now onto the loop block.
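A tiny illustration of that rounding idiom (the values are my own, chosen to match the distribution example further down):

```java
public class DivisibleDemo {
    public static void main(String[] args) {
        int var = 150, n = 7;
        int result = var - var % n; // largest multiple of n not exceeding var
        System.out.println(result); // prints 147
    }
}
```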
n = (1 << i) - 1;
This is the code word length, as defined in the paper.
usable = _expected * i / n - _expected * i / n % n;
To understand this line, remember that i / n is the embedding rate, R(i). In simple words, the number of possibly available bits (_expected) times the embedding rate (i / n), gives the number of bit we can encode. In the example from the quote that's 50000 / 255 * 8 = 1568 bits.
changed = coeffCount - _zero - coeffCount / 64;
changed = changed * i / n - changed * i / n % n;
changed = n * changed / (n + 1) / i;
The first line says that the number of bits that we can go through (call this total) is the number of coefficients (coeffCount), while skipping any zeros and the DC component of each 8x8 block (coeffCount / 64). Each 8x8 block has 64 coefficients, but only one is the DC coefficient, so every 64 coefficients you have one more DC coefficient to skip.
The second and third lines go together. Notice that in the second line you multiply by the embedding rate and make the result perfectly divisible by the code word length. In the third line you divide by the embedding rate, thereby cancelling the previous step, and then you multiply by the change density, 1 / (n + 1), to find the number of bits to be changed on average.
The reason you go through this whole process is because the order of divisions and multiplications matter. As a straightforward example, consider you have 150 bits and 7 items that you want to distribute evenly into as many bits as possible. How many bits will you need overall?
7 * (150 / 7) = 7 * 21 = 147
Note: The following lines overwrite the currently computed value of changed. The previous 3 lines and the following 2 independently tend to give similar answers when I make up my own _one, _zero, coeffCount values. One of these two versions may be old code which was not removed. Regardless, the logic is the following.
changed = _large - _large % (n + 1);
changed = (changed + _one + _one / 2 - _one / (n + 1)) / (n + 1);
The first line has to do with the change density, D(i), since you make the expression perfectly divisible by n + 1. Because of how _large is defined, this is similar to how changed is computed in the previous version.
_large = coeffCount - _zero - _one - coeffCount / 64;
Which bears close resemblance to this
changed = coeffCount - _zero - coeffCount / 64;
The next line is a little bit hazy to me, but this is what it seems to achieve. It reintroduces the _one it subtracted in _large, plus one half of the ones. This is due to shrinkage, since it replicates the idea in _expected = _large + (int) (0.49 * _one). I don't quite understand why you would subtract _one / (n + 1), but multiplying this whole expression by the change density, 1 / (n + 1), you get the number of bits you expect to change.
Conclusion
The two ways for calculating the expected number of bits to change are not exact and it has to do with not knowing in advance exactly how many will be changed. They both seem to give similar results for given values of _zero, _one and coeffCount. None of this is really necessary as it just estimates the efficiency for different k as in the quote. You just need to find the maximum k for which you use as much of the carrier medium to embed your information. This is done by just calculating usable and breaking the loop as soon as you don't have enough bits to embed your message. And this exact thing is done a bit further down in the source code.
I have a long number like:
long l = Long.parseLong("10*000001111110" , 2) ;
Now I want to insert two bits at a given position (say the 2nd position, marked as *) into the long number.
Like,
long l = Long.parseLong("10*11*000001111110" , 2) ; (given between *)
Can anybody help me with how to do that? Note that I give an example to illustrate what I want. In reality I only have a long l, and I have to work on it.
Edit:
1) position is not constant may be 0, 1 , 2 .. whatever.
2) and msb's can be 0. Means,
long l = Long.parseLong("00000010*000001111110" , 2) ;
long l = Long.parseLong("00000010*11*000001111110" , 2) ;
It sounds like you want something like bitStuffing where masking (&, ~, ^, and |) and shifting (>> and <<) are your instruments of choice.
long insertBit(long p_original, long p_new_bits, int p_starting_position_from_right, int p_ending_position_from_right)
{
    long returnValue = p_original;
    long onlyNewBits = 0;
    // Build a mask with zeros at the target positions
    long mask = 0xFFFFFFFFFFFFFFFFL;
    for (int i = p_starting_position_from_right; i <= p_ending_position_from_right; i++)
    {
        mask ^= (1L << i); // toggle bit i from 1 to 0
    }
    returnValue = returnValue & mask; // clear the target bits
    mask = ~mask;                     // now ones at the target positions
    onlyNewBits = p_new_bits & mask;  // keep only the new bits (assumed pre-shifted into position)
    returnValue |= onlyNewBits;       // drop them in
    return returnValue;
}
Disclaimer: I don't have a Java compiler available to compile this, but it should be something like this.
The first idea I had is the following:
Extract the first x bits that need to stay in the position they are (in your example: 10) -> you could do this by running through a loop which creates the appropriate bitmask:
long bitmask = 1;
for (long bit = 1; bit < index; bit++) {
    bitmask = (bitmask << 1) | 1;
}
Now you can create the long number that gets inserted -> just shift that number index positions to the left.
After that, you can easily build the new number:
long number = (((oldNumber >> index) << index) << insertionLength) | (insertion << index) | (oldNumber & bitmask);
Note: ((oldNumber >> index) << index) clears out the right part of the number (this part gets appended at the end using the bitmask). Then you just need to shift this result by the length of the insertion (make space for it) and or it with the insertion (this needs to get shifted to the left by the index where to insert: (insertion << index)). Finally, or the last part of the number (extracted via the bitmask: oldNumber & bitmask) to this result and you are done.
Note: I haven't tested this code. However, generally it should work but you may need to check my shifts (either it is index or index - 1 or so)!
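For what it's worth, here is a compact sketch of the same idea (class and method names are my own, and the bit index is counted from the least significant bit): split the number at the insertion index, shift the upper part left to make room, and OR the pieces back together.

```java
public class BitInsert {
    // Insert `insertion` (insertLength bits wide) into `value` at bit `index`,
    // counting from the least significant bit; the upper bits shift left.
    static long insertBits(long value, long insertion, int index, int insertLength) {
        long lowMask = (1L << index) - 1; // ones in the low `index` positions
        long low = value & lowMask;       // bits that stay where they are
        long high = value >>> index;      // bits that move left to make room
        return (high << (index + insertLength)) | (insertion << index) | low;
    }

    public static void main(String[] args) {
        long l = Long.parseLong("10000001111110", 2);
        long result = insertBits(l, 0b11L, 12, 2); // insert "11" after the leading "10"
        System.out.println(Long.toBinaryString(result)); // prints 1011000001111110
    }
}
```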
If you only have the long value, say 123, you need to first convert it to a binary string. Like so:
String binaryValue = Long.toBinaryString(123L);
Then we take the string representation and perform a manipulation a specific character like so:
char[] characters = binaryValue.toCharArray();
char desiredCharacter = characters[index];
if (desiredCharacter == '1')
{
    if (newValue == '1')
    {
        desiredCharacter = '0';
    }
}
else
{
    if (newValue == '1')
    {
        desiredCharacter = '1';
    }
}
characters[index] = desiredCharacter; // write the change back, or the array is never modified
finally we convert the modified characters back into a string like so:
String rebuiltString = new String(characters);
I am sure there are more efficient ways to do this.
Well, if you want to set a specific bit in a number:
To turn it on:
number |= (1 << pos)
if pos = 4: (1<<pos) = 00000000 00000000 00000000 00010000
To turn it off:
number &= ~(1 << pos)
if pos = 4: ~(1<<pos) = 11111111 11111111 11111111 11101111
where pos is the position of the bit (with 0 being the low order bit, and 31 being the high order bit of an int; for a long, shift 1L instead and the high order bit is 63).
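A quick demonstration of both operations (class name mine):

```java
public class BitSetClear {
    public static void main(String[] args) {
        int pos = 4;
        long number = 0b101;        // 5
        number |= (1L << pos);      // turn bit 4 on: 10101
        System.out.println(number); // prints 21
        number &= ~(1L << pos);     // turn bit 4 off again: 101
        System.out.println(number); // prints 5
    }
}
```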
I was looking at some code that outputs a number to the binary form with prepended 0s.
byte number = 48;
int i = 256; // max number * 2
while ((i >>= 1) > 0) {
    System.out.print(((number & i) != 0 ? "1" : "0"));
}
and didn't understand what the i >>= 1 does. I know that i >> 1 shifts to the right by 1 bit but didn't understand what the = does and as far as I know, it is not possible to do a search for ">>=" to find out what it means.
i >>= 1 is just shorthand for i = i >> 1, in the same way that i += 4 is short for i = i + 4.
EDIT: Specifically, those are both examples of compound assignment operators.
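A couple of quick examples of compound assignment (my own sketch):

```java
public class CompoundShift {
    public static void main(String[] args) {
        int i = 256;
        i >>= 1;               // same as i = i >> 1
        System.out.println(i); // prints 128
        int j = 10;
        j += 4;                // same as j = j + 4
        System.out.println(j); // prints 14
    }
}
```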