What is the difference between '>>' and '/' (shifting and division) in Java?

We can shift using the >> operator, and we can use '/' to divide in Java. What I am asking is what really happens behind the scenes when we do these operations: are they exactly the same or not?

No, absolutely not the same.
You can use >> to divide, yes, but only by powers of 2, because >> shifts all the bits to the right, which has the effect of dividing the number by 2 for each position shifted.
This is simply a consequence of how the binary representation works. It works cleanly for unsigned numbers; for signed ones it depends on which representation you are using and what kind of shift it is.
eg.
122 = 01111010 >> 1 = 00111101 = 61
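To make that concrete, here is a minimal sketch (the values are just for illustration, pasteable into a main method or jshell); note that for negative odd numbers >> and / round differently:

int a = 122;
System.out.println(a >> 1);   // 61, same as 122 / 2
System.out.println(a / 2);    // 61

int b = -7;
System.out.println(b / 2);    // -3: '/' truncates toward zero
System.out.println(b >> 1);   // -4: '>>' rounds toward negative infinity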

Check this out for an explanation on bit shifting:
What are bitwise shift (bit-shift) operators and how do they work?
Once you understand that, you should understand the difference between that and the division operation.

Related

Bitwise shift operation choice

Hello everyone, I'm learning programming in Java and I wanted to know why the choice for sign bit propagation is ">>" and not ">>>".
I would assume << and >> should have the same implementation.
Sorry if it sounds like a silly question :)
Thanks in advance!
The reason it works this way is because C and C++ used << for left shift and >> for right shift long before Java. Those languages have both signed and unsigned types, and for signed types the sign bit was propagated in the right-shift case.
Java does not have unsigned types, so they kept the behavior of C and C++ so as not to sow confusion and incur the undying wrath of developers the world over. Then they included >>> to provide a right-shift that treated the bit value as unsigned.
This question is really about reading James Gosling's mind :-). But my guess is that << and >> both make sense mathematically: << causes a number to be multiplied by 2 (barring overflow), and >> causes a number to be divided by 2; when you have sign propagation, this works whether the number is positive or negative. Perhaps the language designers thought this would be a more common use of the right shift than the operator that propagates 0's, which is more useful when integers are treated as strings of bits rather than actual numbers. Neither way is "right" or "wrong", and it's possible that if Gosling had had something different for breakfast that morning, he might have seen things your way instead...
Let's start with the questions that you didn't ask :-)
Q: Why is there no <<<?
A1: Because << performs the appropriate reverse operation for both >> and >>>.
>> N is equivalent to dividing by 2^N for a signed integer
>>> N is equivalent to dividing by 2^N for an unsigned integer
<< N is equivalent to multiplying by 2^N for both signed and unsigned integers
A2: Because the sign bit is on the left hand end, so "extending" it when you shift leftwards is nonsensical. (Rotating would make sense, but Java doesn't have any "rotate" operators. Reasons: C precedent, lack of hardware support in some instruction sets, rotation is rarely needed in Java code.)
Q: Why does only one of >> and >>> do sign extension?
A: Because if they both did (or neither did) then you wouldn't need two operators.
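As a small illustration of the difference (the value is chosen arbitrarily, pasteable into a main method or jshell):

int x = -8;                    // bits: 111...11000
System.out.println(x >> 1);    // -4          (sign bit copied in, result stays negative)
System.out.println(x >>> 1);   // 2147483644  (a 0 shifted in, result becomes a large positive int)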
Now for your questions (I think):
Q: Why did they choose >> to do sign extension and not >>>?
A: This is really unanswerable. As far as we know, there is no extant publicly available contemporaneous record of the original Oak / Java language design decisions. At best, we have to rely on the memory of James Gosling ... and his willingness to answer questions. AFAIK, the question has not been asked.
But my conjecture is that since Java integer types are (mostly) signed, it was thought that the >> operator would be used more often. In hindsight, I think Gosling et al got that right.
But this was NOT about copying C or C++. In those languages, there is only one right-shift operator (>>) and its behavior for negative signed integers is implementation defined! The >>> operator in Java was designed to fix that problem; i.e. to remove the portability problem of C / C++ >>.
(Reference: Section 6.5.7 of the draft C11 Language spec.)
Next your comment:
I would assume << and >> should have the same implementation. By same implementation I mean same process but opposite direction.
That is answered above. From the perspective of useful functionality, >> and << do perform the same process for signed numbers but in opposite directions; i.e. division versus multiplication. And for unsigned numbers, << corresponds to >>> in the same way.
Why the difference? It is basically down to the mathematics of 2's complement and unsigned binary representations.
Note that you cannot perform the mathematical inverse of >> or >>>. Intuitively, these operators throw away the bits on the right end. Once thrown away, those bits cannot be recovered.
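For example (using an arbitrary odd value), shifting right and then back loses the low-order bit:

int x = 7;                         // binary ...0111
System.out.println((x >> 1) << 1); // 6, not 7: the lowest bit was discarded by '>>'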
Q: So why don't they make << (or a hypothetical <<<) "extend" the right hand bit?
A: Because:
It is not useful. (I cannot think of any mainstream use-cases for extending the right hand bits of a number.)
There is typically no hardware support (... because it is not useful!)

Alternative for checking modulo before using `>>`?

In Java, C#, and JavaScript:
AFAIU, >> is a shift-right operator which can also deal with signed numbers:
There is no problem with:
12>>2 --> 3
And also for a signed number:
-12>>2 --> -3
But when the exact result is not an integer, the results are different:
10>>2 --> 2
Whereas -10>>2 --> -3
I'm fully aware of why this happens (two's complement), but:
Question:
Does it mean that when I use "the fastest division ever", >>, I must first check whether:
10 % 4 is zero or not?
Am I missing something here?
You can use methods like Integer.numberOfTrailingZeros() and Long.numberOfTrailingZeros() to tell if shifting will be accurate or truncated.
You can also use bitwise AND to test the last bits, for example testing the last 4 bits:
int i = 544;   // binary 1000100000, the last 4 bits are zero
if ((i & 0x0f) == 0)
    System.out.println("Last 4 bits are zeros!");
Although note that it's not worth using bit shift for "fast" division. You're not going to outsmart the compiler because most of today's compilers are intelligent enough to optimize these cases.
More on this: Is multiplication and division using shift operators in C actually faster?
Edit:
The answer to your question is that bit shifting is not defined as "the fastest division ever"; it is defined as what its name says: bit shifting, which in the case of negative numbers gives (or might give) a different result.
You're not missing anything. If your input can be negative, your 2 options are:
Either check the value, and if it might give a different result, correct it or use division. A simple check might be to test whether it's negative, or to test the last bits (described above).
Completely avoid using bit shift for division purposes.
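As a rough sketch of the checks mentioned above (the value and the divisor 4 are only illustrative), assuming you want to know whether >> will match /:

int n = -10;

// '>> 2' matches '/ 4' exactly when the two lowest bits are zero (or n is non-negative).
boolean lowBitsClear = (n & 0b11) == 0;                         // false for -10
boolean enoughZeros  = Integer.numberOfTrailingZeros(n) >= 2;   // false for -10

System.out.println(n / 4);    // -2 (division truncates toward zero)
System.out.println(n >> 2);   // -3 (shift rounds toward negative infinity)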

Simple bitwise operation in Java

I'm writing code in Java using short-typed variables. A short is 16 bits, but unfortunately Java doesn't have unsigned primitive types, so I'm using the 15 lower bits instead and ignoring the sign bit. Please don't suggest changes to this part as I'm already quite far into this implementation... Here is my question:
I have a variable which I need to XOR.
In C++ I would just write
myunsignedshort = myunsignedshort ^ 0x2000;
0x2000 (hex) = 0010000000000000 (binary)
However, in Java, I have to deal with the sign bit also, so I'm trying to change my mask so that it doesn't affect the xor...
mysignedshort = mysignedshort ^ 0xA000;
0xA000 (hex) = 1010000000000000 (binary)
This isn't having the desired effect and I'm not sure why. Can anyone see where I'm going wrong?
Regards.
EDIT: OK, you guys are right, that bit wasn't causing the issue.
The issue comes when I'm shifting bits to the left.
I accidentally shift bits into the sign bit.
mysignedshort = mysignedshort << 1;
Any ideas how to avoid this new problem, so that if a bit shifts into the MSB then nothing happens at all? Or should I just do a manual test? There's a lot of this shifting in the code though, so I would prefer a more terse solution.
Regards.
Those operations don't care about signedness, as mentioned in the comments. But I can expand on that.
Operations for which the signed and unsigned versions are the same:
addition/subtraction
and/or/xor
multiplication
left shift
equality testing
Operations for which they are different:
division/remainder
right shift, there's >> and >>>
ordered comparison, you can make a < b as (a ^ 0x80000000) < (b ^ 0x80000000) to change from signed to unsigned, or unsigned to signed.
You can also use (a & 0xffffffffL) < (b & 0xffffffffL) to get an unsigned comparison, but that doesn't generalize to longs (see the sketch below).
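A minimal sketch of the comparison trick (values are illustrative; since Java 8, Integer.compareUnsigned does the same job):

int a = -1;   // as an unsigned 32-bit value this is 4294967295
int b = 1;

System.out.println(a < b);                                  // true  (signed order)
System.out.println((a ^ 0x80000000) < (b ^ 0x80000000));    // false (unsigned order)
System.out.println((a & 0xffffffffL) < (b & 0xffffffffL));  // false (unsigned order, via long)
System.out.println(Integer.compareUnsigned(a, b) < 0);      // false (library equivalent)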

What is the arithmetic left shift of 01001001?

I would think it is 00010010
i.e. it tries to maintain the sign bit as is
On the other hand, the logical left shift by 1 pos would be
10010010
Is this correct?
For left shift, arithmetic and logical shift are the same.
The difference is for right shift only, where an arithmetic right shift will copy the old MSB to the new MSB after having shifted, thus keeping a negative number from being converted to a positive when shifting.
Wikipedia has a more detailed explanation.
In Java << is a logical left-shift. 0 is always added as the LSB.
(Do note that Java will promote the [byte] value in question, so care must be taken to mask back to an octet! Otherwise you'll keep the shifted bit(s), which might have included a "1".)
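A small sketch of that promotion pitfall (the value is arbitrary): the byte is widened to int before the shift, so without a mask the sign-extended high bits survive.

byte b = (byte) 0b1100_0001;                                  // -63 as a signed byte
System.out.println(Integer.toBinaryString(b << 1));           // 1111...10000010 (sign-extended to 32 bits)
System.out.println(Integer.toBinaryString((b << 1) & 0xFF));  // 10000010 (masked back to 8 bits)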
However, the Wikipedia article on Arithmetic shift indicates that an arithmetic left shift may result in an overflow error:
...Note that arithmetic left shift may cause an overflow; this is the only way it differs from logical left shift.
(This is not the case in Java, but just to keep in mind.)
Happy coding.
Yes it is correct.
The arithmetic left shift of x by n places is equal to x * (2^n). So in your example, the arithmetic left shift of 01001001 by 1 place equals 10010010 (73 * 2^1 = 146).
You are correct when you left shift by 1 bit position. It equals 10010010.
when you shift 4 bits to the left as follows, you get the following answer.
01001001 << 4 = 10010000
when you shift 4 bits to the right as follows, you get the following answer.
01001001 >> 4 = 00000100
Bits that are left empty as a result of shifting are filled with zeros.
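A quick sketch reproducing those results in Java (the operands are promoted to int, so the left-shift result is masked back to 8 bits here):

int x = 0b0100_1001;                                          // 73
System.out.println(Integer.toBinaryString((x << 4) & 0xFF));  // 10010000
System.out.println(Integer.toBinaryString(x >> 4));           // 100, i.e. 00000100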

Java Bit Operations (In Radix Sort)

The other day I decided to write an implementation of radix sort in Java. Radix sort is supposed to be O(k*N) but mine ended up being O(k^2*N) because of the process of breaking each number down into its digits. I broke down each digit by modding (%) out the preceding digits and dividing by ten to eliminate the succeeding digits. I asked my professor if there would be a more efficient way of doing this and he said to use bit operators. Now for my question: which method would be the fastest at breaking down each number in Java: 1) the method stated above, 2) converting the number to a String and using substrings, or 3) using bit operations?
If 3) then how would that work?
As a hint, try using a radix other than 10, since computers handle binary arithmetic better than decimal.
x >>> n is equivalent to x / 2^n
x & (2^n - 1) is equivalent to x % 2^n
By the way, Java's >> performs sign extension, which is probably not what you want in this case. Use >>> instead.
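For example, a sketch of digit extraction with a radix of 256 (the helper name and radix choice are mine, not from any particular implementation):

static int digit(int x, int pass) {
    int shift = pass * 8;          // 8 bits per digit, so the radix is 256
    return (x >>> shift) & 0xFF;   // '>>>' drops the lower digits; '& 0xFF' keeps only this digit
}
// e.g. digit(0x12345678, 1) == 0x56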
Radix_sort_(Java)
The line of code that does this:
int key = (a[p] & mask) >> rshift;
is the bit manipulation part.
& is the operator to do a bitwise AND and >> is a right-shift.
