I'm new to low-level operations like this, so I'm hoping someone can point out the obvious mistake I must be making here.
//Input value - 00111100
//I want to get the value of the bits at indexes 1-3, i.e. 0111.
byte mask = (byte)0x00001111; // This gives 17 not the 15 I'd expect
byte shifted = (byte)(headerByte >> 3);
//shifted is 7 as expected
byte frameSizeValue = (byte)(shifted & mask); //Gives 1 not 7
It looks like the problem lies with the way the mask is defined, but I can't see how to fix it.
First of all, 0x00001111 is hex, which is larger than 255: 16^3 + 16^2 + 16 + 1 = 4369, so the cast to byte truncates it to 0x11, which is the 17 you are seeing. Look up how to write binary numbers in Java, or just use shifted & 15.
Your mask needs to be binary 00001111, which is equal to hex 0x0F.
byte mask = (byte)0x0F;
With Java 7 you can use binary literals:
byte binaryLit = (byte)0b00001111;
Anything written as 0x... is a hex literal, and before Java 7 there is no support for binary literals.
You say you want to mask the first three bits, but as Petar says, 0x00001111 is hex, not bits. If you want to keep three bits, you need to mask with 7 (binary 111).
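Putting the answers together, a minimal sketch of the corrected code (assuming headerByte holds the 00111100 example input) might look like this:
byte headerByte = (byte) 0b00111100;     // 0x3C, the example input
byte mask = (byte) 0b00001111;           // what 0x00001111 was meant to be; same as (byte) 0x0F
byte shifted = (byte) (headerByte >> 3); // 00000111
byte frameSizeValue = (byte) (shifted & mask);
System.out.println(frameSizeValue);      // prints 7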
I'm trying to make a linear congruential random number generator, and I want to select, say, the first 16 bits of a 64-bit value written in hexadecimal. How can I do this in Java? I've already created a (very) basic generated number based on the time of day.
My formula:
seed = 0x5D588B656C078965L * cal.get(Calendar.HOUR_OF_DAY) + 0x0000000000269EC3;
I just want to select the first 16 bits of this. I tried to think of how I would do this with an integer, but I don't think I can apply the same concepts here. Thanks!
If you want a long that has the first 16 bits and zeroes in the other positions, you can either use a bit mask or shifting.
Assuming by "first 16 bits" you mean the highest-order bits, then the mask looks like this:
long mask = 0xffff000000000000L;
so that a '1' is in each bit position you want to retain, and a 0 elsewhere. Then do a bitwise 'and' of this with your original value:
long result = seed & mask;
The other way is to shift your original to the right 48 bits, then left again 48 bits.
Bit shift:
long value = seed >>> 48;
or you can save it in an int:
int value = (int)(seed >> 48);
Usually, you would use the last bits of the seed to produce the random number, which amounts to a one-way mod operation (i.e. "more random"), so:
seed & 0xFFFF
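As a rough sketch showing both approaches side by side (the hour value here is just an example):
long seed = 0x5D588B656C078965L * 13 + 0x0000000000269EC3L; // e.g. hour 13

// Mask: keep the high 16 bits in their original positions, zero everything else.
long highInPlace = seed & 0xFFFF000000000000L;

// Shift: move the high 16 bits down into the low positions.
int high16 = (int) (seed >>> 48);

// And the low 16 bits, as in the last suggestion above.
int low16 = (int) (seed & 0xFFFF);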
I've been getting a weird problem with Java, and have tested on both a Windows and Mac system and have this problem come up consistently.
Take a look at this code
int a = -32; //11100000 as per two's complement
System.out.println(a >>> 2);
I expect this code to produce 56, which is 00111000 in binary. However, it produces the value 1073741816, which is 111111111111111111111111111000 in binary. I understand ints are 32 bits in Java, but declaring the variable as a byte does the same thing.
However if I declare a binary literal like this
int b = 0b11100000;
System.out.println(b);
System.out.println(b >>> 2);
The first statement produces the value 224 (I expected -32), while the second statement produces the expected value of 56.
Am I going insane?
The bit pattern of a is not 11100000 like you say, but 11111111 11111111 11111111 11100000.
You're using an int which is 32 bits - not a byte - and using binary literals doesn't change that.
Even if you used byte rather than int it wouldn't fix your problem, because the Java bitshift operators first promote any type less than 32 bits to 32 bits.
So the resulting bit pattern is 00111111 11111111 11111111 11111000 after the shift.
If you want to mimic working with an 8-bit value, you need to mask the 32-bit value down to its lower 8 bits before applying the shift (you may as well use >> rather than >>> now):
System.out.println((a & 0xff) >>> 2);
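For reference, a short sketch with the value each print should produce:
int a = -32;        // bit pattern 11111111 11111111 11111111 11100000
int b = 0b11100000; // bit pattern 00000000 00000000 00000000 11100000, i.e. 224

System.out.println(a >>> 2);          // 1073741816: the high 1-bits shift along too
System.out.println(b >>> 2);          // 56
System.out.println((a & 0xff) >>> 2); // 56: masking to the low 8 bits first fixes it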
I have made a variable in java, byte a = 0xA6; //10100110
then I made this :
System.out.println(Integer.toHexString( ((short)a<<8)&0xFFFF ));
The result is 0xA600. This is the right result. But when I tried
System.out.println(Integer.toHexString( ((short)a<<3)&0xFFFF ));
The expected result should be 0x530 (10100110000),
but I got 0xFD30 (1111110100110000). Can somebody explain how I got that wrong result?
Thanks! :-)
The byte value A6 represents a negative number (bytes are signed in Java). When you cast to a short it gets sign extended to FFA6. Moreover the shift operation is executed with integer values so it is again sign extended to FFFFFFA6. Shift left by three bits gives FFFFFD30 and taking the lower 16 bits gives 0000FD30.
This does not matter when you shift by 8 bits, because the extra 1 bits are shifted up above bit 15 and then masked away.
When you initialize a byte variable with a value like 0xA6, you have to cast it down from int:
byte a = (byte) 0xA6;
So when a is used in the shift expression, it gets sign-extended: instead of 10100110 you've got 11111111111111111111111110100110.
And because of this, the left shifts work out this way:
((short)a<<8)&0xFFFF
returns 1010011000000000
((short)a<<3)&0xFFFF
returns 1111110100110000
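If the goal is the expected 0x530, one option (a sketch) is to mask the byte back to its unsigned 8-bit value before shifting:
byte a = (byte) 0xA6;

// a & 0xFF clears the sign-extension bits, leaving 00000000 00000000 00000000 10100110.
System.out.println(Integer.toHexString((a & 0xFF) << 3)); // prints 530
System.out.println(Integer.toHexString((a & 0xFF) << 8)); // prints a600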
I need to convert a value declared as a byte data type into a string of 8 bits. Is there any Java library method that can do this? For example, it should take in -128 and output "10000000". Also, input -3 should give "11111101". (I converted these by hand.)
Before you assume this has been answered many times, please let me explain why I am confused.
The name "byte" is a little ambiguous. So, it's been difficult following other answers. For my question, byte is referring to the java data type that takes up 8 bits and whose value ranges from -128 to 127. I also don't mean an "array of bytes". My question is about converting a single value into its 8-bit representation only. Nothing more.
I've tried these:
byte b = -128; //i want 10000000
Integer.toBinaryString(b); //prints 11111111111111111111111110000000
Integer.toString(b, 2); //prints -10000000
If there's no built-in method, can you suggest any approach (maybe bit shifting)?
Try
Integer.toBinaryString(b & 0xFF);
This gives a variable-length result, e.g. 4 -> 100. There seems to be no standard method to get a fixed-length format, that is, 4 -> 00000100. Here is a one-line custom solution (with 0b prepended):
String s ="0b" + ("0000000" + Integer.toBinaryString(0xFF & b)).replaceAll(".*(.{8})$", "$1");
I need a solution for the following: receive a decimal value, convert it to 32-bit hex, then split that 32-bit hex value into its high 16 bits and low 16 bits. I have been digging around the net and cannot find much info.
Not sure why you are converting to hex; you can turn a 32-bit value straight into two 16-bit values:
int x = ...
short high = (short) (x >> 16);
short low = (short) (x & 0xFFFF);
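For example, with an arbitrary value (re-masking with 0xFFFF when printing avoids sign extension of the shorts):
int x = 0x12345678;
short high = (short) (x >> 16);    // 0x1234
short low  = (short) (x & 0xFFFF); // 0x5678

System.out.println(Integer.toHexString(high & 0xFFFF)); // prints 1234
System.out.println(Integer.toHexString(low & 0xFFFF));  // prints 5678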
I expect this is a homework problem. Accordingly, I will give you information that can help you solve it rather than a solution.
Convert the number to hexadecimal. This can be done with Integer's toHexString() method.
Add enough zeroes to the left to make it eight characters long (8 hexadecimal characters represent 32 bits). You can do this by adding zeroes one by one in a loop until it's 8 characters long, or (better approach) just add 7 zeroes to the left and only deal with the rightmost 8 characters.
Take the rightmost 4 characters as the lower 16 bits and the 4 characters immediately to the left of that as the higher 16 bits. This can be done with String's substring() method along with length() and some simple subtraction.
Some APIs you might find useful:
http://download.oracle.com/javase/6/docs/api/java/io/DataInputStream.html
http://download.oracle.com/javase/6/docs/api/java/lang/Integer.html#parseInt(java.lang.String, int)
http://download.oracle.com/javase/6/docs/api/java/lang/Integer.html#toHexString(int)
http://commons.apache.org/codec/apidocs/org/apache/commons/codec/binary/Hex.html
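If it helps, here is a rough sketch of those string-based steps (the sample value and variable names are my own):
int value = 305419896; // 0x12345678, an arbitrary example

// Convert to hex and left-pad to 8 characters.
String hex = "0000000" + Integer.toHexString(value);
hex = hex.substring(hex.length() - 8);

// Split into the high and low 16-bit halves.
String highHex = hex.substring(0, 4); // "1234"
String lowHex  = hex.substring(4);    // "5678"

int high = Integer.parseInt(highHex, 16); // 4660
int low  = Integer.parseInt(lowHex, 16);  // 22136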