Java Bitshift error with negatives? - java

http://www.fastcgi.com/devkit/doc/fcgi-spec.html
In section 3.4:
typedef struct {
    unsigned char nameLengthB0; /* nameLengthB0 >> 7 == 0 */
    unsigned char valueLengthB0; /* valueLengthB0 >> 7 == 0 */
    unsigned char nameData[nameLength];
    unsigned char valueData[valueLength];
} FCGI_NameValuePair11;

typedef struct {
    unsigned char nameLengthB0; /* nameLengthB0 >> 7 == 0 */
    unsigned char valueLengthB3; /* valueLengthB3 >> 7 == 1 */
    unsigned char valueLengthB2;
    unsigned char valueLengthB1;
    unsigned char valueLengthB0;
    unsigned char nameData[nameLength];
    unsigned char valueData[valueLength
            ((B3 & 0x7f) << 24) + (B2 << 16) + (B1 << 8) + B0];
} FCGI_NameValuePair14;

typedef struct {
    unsigned char nameLengthB3; /* nameLengthB3 >> 7 == 1 */
    unsigned char nameLengthB2;
    unsigned char nameLengthB1;
    unsigned char nameLengthB0;
    unsigned char valueLengthB0; /* valueLengthB0 >> 7 == 0 */
    unsigned char nameData[nameLength
            ((B3 & 0x7f) << 24) + (B2 << 16) + (B1 << 8) + B0];
    unsigned char valueData[valueLength];
} FCGI_NameValuePair41;

typedef struct {
    unsigned char nameLengthB3; /* nameLengthB3 >> 7 == 1 */
    unsigned char nameLengthB2;
    unsigned char nameLengthB1;
    unsigned char nameLengthB0;
    unsigned char valueLengthB3; /* valueLengthB3 >> 7 == 1 */
    unsigned char valueLengthB2;
    unsigned char valueLengthB1;
    unsigned char valueLengthB0;
    unsigned char nameData[nameLength
            ((B3 & 0x7f) << 24) + (B2 << 16) + (B1 << 8) + B0];
    unsigned char valueData[valueLength
            ((B3 & 0x7f) << 24) + (B2 << 16) + (B1 << 8) + B0];
} FCGI_NameValuePair44;
I'm implementing this in Java, and in order to do the valueLengthB3 >> 7 == 1, etc, part, I'm just setting it negative. This doesn't work. How do negatives work in Java, and how do you do this operation in Java?
My current code:
public void param(String name, String value) throws IOException {
    if (fp) {
        throw new IOException("Params are already finished!");
    }
    if (name.length() < 128) {
        dpout.write(name.length());
    } else {
        dpout.writeInt(-name.length());
    }
    if (value.length() < 128) {
        dpout.write(value.length());
    } else {
        dpout.writeInt(-value.length());
    }
    dpout.write(name.getBytes());
    dpout.write(value.getBytes());
}

Java uses pretty routine integer operations. The two main peculiarities relative to C and C++ are
Java has no unsigned integer types other than char (which is 16 bits wide), and
Java has separate arithmetic (>>) and logical (>>>) right-shift operators. The former preserves sign by filling in the needed most-significant bits of the result with copies of the most-significant bit of the left operand, whereas the latter fills in the most-significant bits of the result with zeroes.
Java has the advantage that all primitive types have well-known, consistent sizes and signedness on all platforms, and that its two right-shift operators have well-defined semantics for all valid operands. In contrast, in C, the result of performing a right shift on a negative value is implementation-defined, all of the standard data types have implementation-defined sizes, and some types (char) have implementation-defined signedness.
Now that you have posted some code, however, it appears that none of that is actually your problem. I am at a loss to understand why you think that negating a number would perform any kind of shifting, or indeed, why you think shifting is required at all for what you are trying to do.
Note especially that Java uses two's complement integer representation (as is by far the most common choice of C compilers, too), so negating a number modifies more than just the sign bit. If instead you want to set only the sign bit of an int, then you could spell that
value.length() | 0x80000000
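For the encoding the spec above describes, that usually means: write a single byte when the length is below 128, and otherwise write four bytes with the top bit of the first byte set. A minimal sketch of that, assuming dpout is the DataOutputStream from your code and using an illustrative helper name:
// Sketch of the FCGI length encoding, assuming dpout is a DataOutputStream.
// Lengths < 128 fit in one byte with the high bit clear; anything longer is
// written as four big-endian bytes with the high bit of the first byte set.
void writeLength(int length) throws IOException {
    if (length < 128) {
        dpout.write(length);                 // single byte, B0 >> 7 == 0
    } else {
        dpout.writeInt(length | 0x80000000); // four bytes, B3 >> 7 == 1
    }
}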

If you were to receive bytes over the wire, they'd be signed meaning that the most significant bit will be the sign bit. If you want to extract the sign bit from byte, there are two sensible ways that come to mind: Test negativity by comparing against 0 or use the >>> operator, rather than the >> operator.
The following code shows how I'd deserialise such an array of signed chars in C. I can't imagine why this wouldn't work in Java, assuming data is instead an array of bytes... though I'm sure it'd be quite hideous.
long offset = 0;
long nameLength, valueLength;
if (data[offset] >= 0) {
    nameLength = data[offset++];
} else {
    nameLength  = (long) (data[offset++] & 0x7f) << 24; /* top bit flags the 4-byte form */
    nameLength += (long) (data[offset++] & 0xff) << 16;
    nameLength += (long) (data[offset++] & 0xff) << 8;
    nameLength +=         data[offset++] & 0xff;
}
if (data[offset] >= 0) {
    valueLength = data[offset++];
} else {
    valueLength  = (long) (data[offset++] & 0x7f) << 24;
    valueLength += (long) (data[offset++] & 0xff) << 16;
    valueLength += (long) (data[offset++] & 0xff) << 8;
    valueLength +=         data[offset++] & 0xff;
}
for (long x = 0; x < nameLength; x++) {
    /* XXX: Copy data[offset++] into name */
}
for (long x = 0; x < valueLength; x++) {
    /* XXX: Copy data[offset++] into value */
}
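In Java the same thing might look roughly like this (a sketch, assuming data is a byte[]; the & 0x7f / & 0xff masks keep the signed bytes from sign-extending, and Java's strict left-to-right evaluation keeps the offset++ side effects in order):
int offset = 0;
long nameLength = data[offset] >= 0
        ? data[offset++]                         // one-byte form, high bit clear
        : ((long) (data[offset++] & 0x7f) << 24) // drop the flag bit of B3
        | ((long) (data[offset++] & 0xff) << 16)
        | ((long) (data[offset++] & 0xff) << 8)
        |          (data[offset++] & 0xff);
// valueLength is decoded exactly the same way, then nameData and valueData follow.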

Related

How to convert 18-bit two's complement into a float number using java

The data is uploaded by an 18-bit ADC. One sample is split into three bytes and the last 6 bits are unused. The reference voltage is 1 volt, which means 0x1FFFF represents 1 and 0x3FFFF represents -1. How do I convert 18-bit two's complement into float using Java? I have written one and it works, but I think it is not efficient enough. My Java is terrible.
float data;
int value = ((byte0 & 0xff) << 10) | ((byte1 & 0xff) << 2) | ((byte2 & 0xff) >> 6); // combine 3 bytes into int
int tmp = value & 0x2000; // judge positive or negative
if (tmp != 0) {
    value = value - 262144 /* 2^18 */;
    data = ((float)value) * 2 / 262143 /* 2^18-1 */;
} else {
    data = ((float)value) * 2 / 262143;
}
You could try
double data = (value << 14) / (double) (0x1FFFF << 14);
This will use a shifted 32-bit 2s complement value.
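The left shift by 14 moves the ADC's bit 17 into the int's sign bit, so the numerator carries the right sign while the denominator stays the full-scale positive value. An equivalent way to spell it, using the same shift-up/shift-down sign-extension trick that appears further down this page, might be:
int signed18 = (value << 14) >> 14;      // sign-extend the 18-bit two's-complement value
float data = signed18 / (float) 0x1FFFF; // scale so 0x1FFFF maps to full scale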
NOTE: If it really is 2s complement, 0x3ffff is -1 (the negative value closest to zero) and 0x20000 is the most negative value, so 0x3ffff cannot also represent full-scale -1.
Put the sign bit of the ADC in the same place as the sign bit of a 32-bit integer and you can simplify by using the native sign.
public static final float ADC_RANGE = 1.0f; // From -1V to +1V
public static final int ADC_BITS = 18;
public static final int ADC_MAX = 1 << (ADC_BITS - 1);
public static final int ADC_MASK = (ADC_MAX - 1) << (32 - ADC_BITS);

int bits = ((byte0 & 0xFF) << 24) | ((byte1 & 0xFF) << 16) | ((byte2 & 0xC0) << 8); // Combine 3 bytes into int, left-aligned
float value = bits / (float) ADC_MASK * ADC_RANGE;
[edit] use constants for 'magic numbers'
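A quick sanity check with hypothetical input bytes, using the constants above: the full-scale positive code 0x1FFFF, packed left-aligned into three bytes, should come out as +1.0.
int byte0 = 0x7F, byte1 = 0xFF, byte2 = 0xC0; // 18-bit code 0x1FFFF, last 6 bits unused
int bits = ((byte0 & 0xFF) << 24) | ((byte1 & 0xFF) << 16) | ((byte2 & 0xC0) << 8);
float value = bits / (float) ADC_MASK * ADC_RANGE; // 1.0f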

Python bitshifting to java

I found some Python code on GitHub, and I need to do the same thing in Java. I have almost converted it to Java, but I'm getting a warning saying Shift operation '>>' by overly large constant value.
this is the python code that I'm trying to convert
if i > 32:
    return (int(((j >> 32) & ((1 << i))))) >> i
return (int((((1 << i)) & j))) >> i
and this is the java code I made trying to convert from the python code
if (i > 32) {
    return (j >> 32) & (1 << i) >> i;
}
return ((1 << i) & j) >> i;
the warning is in this line (j >> 32)
Since Java's int is 32 bits (see here), the shift distance is taken modulo 32, so j >> 32 is the same as j >> 0 and simply returns j unchanged; that no-op is what the warning is about.
Shifting an int by 32 therefore doesn't do what the Python code does, because Python integers are not limited to 32 bits. However, if you want to implement the same method using long (whose shift distances are taken modulo 64), here is the code I wrote to do so.
public int bitShift(long j, int i) {
    return i > 32 ? (int) (((j >> 32) & (1L << i)) >> i) : (int) ((j & (1L << i)) >> i);
}
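To see the masking rule behind the warning in action, here is a small sketch:
int j = 0x12345678;
System.out.println(j >> 32);        // 305419896: the shift distance is masked to 0, nothing moves
System.out.println((long) j >> 32); // 0: widen to long first and the shift really happens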

Sending Java int to C over TCP

I'm trying to send Java's signed integers over TCP to a C client.
At the Java side, I write the integers to the outputstream like so:
static ByteBuffer wrapped = ByteBuffer.allocateDirect(4); // big-endian by default

public static void putInt(OutputStream out, int nr) throws IOException {
    wrapped.rewind();
    wrapped.putInt(nr);
    wrapped.rewind();
    for (int i = 0; i < 4; i++)
        out.write(wrapped.get());
}
At the C side, I read the integers like so:
int cnt = 0;
char buf[1];
char sizebuf[4];
while (cnt < 4) {
    iResult = recv(ConnectSocket, buf, 1, 0);
    if (iResult <= 0) continue;
    sizebuf[cnt] = buf[0];
    cnt++;
}
However, how do I convert the char array to an integer in C?
Edit
I have tried the following (and the reverse):
int charsToInt(char* array) {
    return (array[3] << 24) | (array[2] << 16) | (array[1] << 8) | array[0];
}
Edited again, because I forgot the tags.
Data
As an example of what happens currently:
I receive:
char 0
char 0
char 12
char -64
the int becomes 2448
and use the function for creating the int from the char array:
int charsToInt(char* array) {
    return ntohl(*((int*) array));
}
I expect the signed integer: 3264
Update
I will investigate more after some sleep..
Update
I have a Java client which interprets the integers correctly and receives the exact same bytes:
0
0
12
-64
That depends on endianness, but you want either:
int x = sizebuf[0] +
(sizebuf[1] << 8) +
(sizebuf[2] << 16) +
(sizebuf[3] << 24);
or:
int x = sizebuf[3] +
(sizebuf[2] << 8) +
(sizebuf[1] << 16) +
(sizebuf[0] << 24);
Note that sizebuf needs to have an unsigned type for this to work correctly. Otherwise you need to mask off any sign-extended values you don't want:
int x = (sizebuf[3] & 0x000000ff) +
((sizebuf[2] << 8) & 0x0000ff00) +
((sizebuf[1] << 16) & 0x00ff0000) +
((sizebuf[0] << 24) & 0xff000000);
The classical C library has the method you want already, and it is independent from the machine endianness: ntohl!
// buf is a char * / uint8_t *
uint32_t from_network = *((uint32_t *) buf);
uint32_t ret = ntohl(from_network);
This, and htonl for the reverse etc expect that the "network order" is big endian.
(the code above presupposes that buf has at least 4 bytes; the return type, and argument type, of ntohl and htonl are uint32_t; the JLS defines an int as 4 bytes so you are guaranteed the result)
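For what it's worth, nothing extra is needed on the Java side: ByteBuffer and DataOutputStream.writeInt both emit big-endian (network order), which is exactly what ntohl undoes. A minimal sender sketch, assuming out is the socket's OutputStream (java.io classes, exceptions omitted):
DataOutputStream data = new DataOutputStream(out);
data.writeInt(3264); // goes over the wire as 0x00 0x00 0x0C 0xC0, i.e. 0, 0, 12, -64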
To convert your char array, one possibility is to cast it to int* and store the result:
int result = *((int*) sizebuf);
This is valid and a one-liner. The other possibility is to compute the integer from the chars:
for (i = 0; i < 4; i++)
    result = result << sizeof(char) + sizebuf[i];
Choose the one that you prefer.
Alexis.
Edit:
sizeof(char) is 1 because sizeof gives a size in bytes, not bits, and + binds tighter than <<, so the right line is:
result = (result << (sizeof(char) * 8)) + sizebuf[i];

how to read signed int from bytes in java?

I have a spec which reads the next two bytes are signed int.
To read that in Java I have the following.
When I read a signed int in Java using the following code I get a value of 65449.
Logic for calculation of unsigned
int a =(byte[1] & 0xff) <<8
int b =(byte[0] & 0xff) <<0
int c = a+b
I believe this is wrong because if I AND with 0xff I get an unsigned equivalent,
so I removed the & 0xff and the logic is as given below:
int a = byte[1] <<8
int b = byte[0] << 0
int c = a+b
which gives me the value -343
byte[1] =-1
byte[0]=-87
I tried to offset these values the way the spec reads, but this looks wrong, since the size of the heap doesn't fall under this.
Which is the right way to do the signed int calculation in Java?
Here is how the spec goes
somespec() { xtype 8 uint8 xStyle 16 int16 }
xStyle: A signed integer that represents an offset (in bytes) from the start of this Widget() structure to the start of an xStyle() structure that expresses inherited styles defined by the page widget as well as styles that apply specifically to this widget.
If your value is a signed 16-bit number you want a short; an int is 32-bit and can also hold the same values, but not so naturally.
It appears you want a signed, little-endian 16-bit value.
byte[] bytes =
short s = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).getShort();
or
short s = (short) ((bytes[0] & 0xff) | (bytes[1] << 8));
BTW: You can use an int but its not so simple.
// to get a sign extension.
int i = ((bytes[0] & 0xff) | (bytes[1] << 8)) << 16 >> 16;
or
int i = (bytes[0] & 0xff) | (short) (bytes[1] << 8);
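Plugging in the bytes from the question (bytes[0] = -87, bytes[1] = -1) is a quick way to check the expressions above:
byte[] bytes = { -87, -1 };                              // 0xA9, 0xFF
short s = (short) ((bytes[0] & 0xff) | (bytes[1] << 8)); // 0xFFA9
System.out.println(s);                                   // -87, not 65449 or -343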
Assuming that bytes[1] is the MSB, and bytes[0] is the LSB, and that you want the answer to be a 16 bit signed integer:
short res16 = (short) ((bytes[1] << 8) | (bytes[0] & 0xff));
Then to get a 32 bit signed integer:
int res32 = res16; // sign extends.
By the way, the specification should say which of the two bytes is the MSB, and which is the LSB. If it doesn't and if there aren't any examples, you can't implement it!
Somewhere in the spec it will say how an "int16" is represented. Paste THAT part. Or paste a link to the spec so that we can read it ourselves.
Take a look at DataInputStream.readInt(). You can either steal code from there or just use DataInputStream: wrap your input stream with it and then read typed data easily.
For your convenience this is the code:
public final int readInt() throws IOException {
    int ch1 = in.read();
    int ch2 = in.read();
    int ch3 = in.read();
    int ch4 = in.read();
    if ((ch1 | ch2 | ch3 | ch4) < 0)
        throw new EOFException();
    return ((ch1 << 24) + (ch2 << 16) + (ch3 << 8) + (ch4 << 0));
}
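The same idea for your two bytes, using the stream wrapper rather than hand-rolled shifts (a sketch with java.io classes, exceptions omitted; DataInputStream only reads big-endian, so swap the bytes first if your spec is little-endian):
byte[] raw = { (byte) 0xFF, (byte) 0xA9 };  // -1, -87 in big-endian order
DataInputStream in = new DataInputStream(new ByteArrayInputStream(raw));
short xStyle = in.readShort();              // sign-extends correctly
System.out.println(xStyle);                 // -87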
I can't compile it right now, but I would do (assuming byte1 and byte0 are really of byte type):
int result = byte1;
result = result << 8;
result = result | (byte0 & 0xFF); // binary OR; the mask stops byte0 from sign-extending
if ((result & 0x8000) == 0x8000) { // sign extension
    result = result | 0xFFFF0000;
}
If byte1 and byte0 are ints, you will need to apply the & 0xFF mask to both of them.
UPDATE: the test is parenthesised because Java forces the expression of an if to be a boolean.
Do you have a way of finding the correct output for a given input?
Technically, an int is 4 bytes, so with just 2 bytes you can't reach its sign bit.
I ran across this same problem reading a MIDI file. A MIDI file has signed 16 bit as well as signed 32 bit integers. In a MIDI file, the most significant bytes come first (big-endian).
Here's what I did. It might be crude, but it maintains the sign. If the least significant bytes come first (little-endian), reverse the order of the indexes.
pos is the position in the byte array where the number starts.
length is the length of the integer, either 2 or 4. Yes, a 2 byte integer is a short, but we all work with ints.
private int convertBytes(byte[] number, int pos, int length) {
    int output = 0;
    if (length == 2) {
        output = ((int) number[pos]) << 24;
        output |= convertByte(number[pos + 1]) << 16;
        output >>= 16;
    } else if (length == 4) {
        output = ((int) number[pos]) << 24;
        output |= convertByte(number[pos + 1]) << 16;
        output |= convertByte(number[pos + 2]) << 8;
        output |= convertByte(number[pos + 3]);
    }
    return output;
}

private int convertByte(byte number) {
    return (int) number & 0xff;
}
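For example, the 2-byte path applied to the big-endian bytes 0xFF 0xA9 (the same value discussed above) keeps the sign:
byte[] data = { (byte) 0xFF, (byte) 0xA9 };
int v = convertBytes(data, 0, 2); // ((0xFF << 24) | (0xA9 << 16)) >> 16  ==  -87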

How can I convert a 4-byte array to an integer?

I want to perform a conversion without resorting to some implementation-dependent trick. Any tips?
You need to know the endianness of your bytes.
Assuming (like #WhiteFang34) that bytes is a byte[] of length 4, then...
Big-endian:
int x = java.nio.ByteBuffer.wrap(bytes).getInt();
Little-endian:
int x = java.nio.ByteBuffer.wrap(bytes).order(java.nio.ByteOrder.LITTLE_ENDIAN).getInt();
Assuming bytes is a byte[4] of an integer in big-endian order, typically used in networking:
int value = ((bytes[0] & 0xFF) << 24) | ((bytes[1] & 0xFF) << 16)
| ((bytes[2] & 0xFF) << 8) | (bytes[3] & 0xFF);
The & 0xFF masks are necessary because byte is signed in Java and would otherwise be sign-extended when promoted to int, clobbering the higher bytes. You can reverse the process with this:
bytes[0] = (byte) ((value >> 24) & 0xFF);
bytes[1] = (byte) ((value >> 16) & 0xFF);
bytes[2] = (byte) ((value >> 8) & 0xFF);
bytes[3] = (byte) (value & 0xFF);
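A quick round-trip check of the two snippets above (the value is arbitrary):
int original = 0xCAFEBABE;
byte[] bytes = new byte[4];
bytes[0] = (byte) ((original >> 24) & 0xFF);
bytes[1] = (byte) ((original >> 16) & 0xFF);
bytes[2] = (byte) ((original >> 8) & 0xFF);
bytes[3] = (byte) (original & 0xFF);
int back = ((bytes[0] & 0xFF) << 24) | ((bytes[1] & 0xFF) << 16)
        | ((bytes[2] & 0xFF) << 8) | (bytes[3] & 0xFF);
System.out.println(back == original); // true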
Not sure if this is correct Java syntax, but how about:
int value = 0;
for (int i = 0; i <= 3; i++)
    value = (value << 8) + (bytes[i] & 0xFF);
You need to specify the byte order of the array, but assuming that the bytes[0] is the most significant byte then:
int res = ((bytes[0] & 0xff) << 24) | ((bytes[1] & 0xff) << 16) |
((bytes[2] & 0xff) << 8) | (bytes[3] & 0xff);
This code is 100% portable, assuming that you use the reverse algorithm to create the byte array in the first place.
Byte order problems arise in languages where you can cast between a native integer type and byte array type ... and then discover that different architectures store the bytes of an integer in different orders.
You can't do that cast in Java. So for Java to Java communication, this should not be an issue.
However, if you are sending or receiving packets to some remote application that is implemented in (say) C or C++, you need to "know" what byte order is being used in the network packets. Some alternative strategies for knowing / figuring this out are:
Everyone uses "network order" (big-endian) for stuff on the wire as per the example code above. Non-java applications on little-endian machines need to flip the bytes.
The sender finds out what order the receiver expects and uses that order when assembling the data.
The receiver figures out what order the sender used (e.g. via a flag in the packet) and decodes accordingly.
The first approach is simplest and most widely used, though it does result in 2 unnecessary endian-ness conversions if both the sender and receiver are little-endian.
See http://en.wikipedia.org/wiki/Endianness
Assuming your byte[] come from somewhere e.g. a stream you can use
DataInputStream dis = ... // can wrap a new ByteArrayInputStream(bytes)
int num = dis.readInt(); // assume big-endian.
or
ByteChannel bc = ... // can be a SocketChannel
ByteBuffer bb = ByteBuffer.allocate(64*1024);
bc.read(bb);
bb.flip();
if (bb.remaining() < 4) {
    // not enough data; read again before calling getInt()
}
int num = bb.getInt();
When you send data, you should know if you are sending big-endian or little endian. You have to assume other things such as whether you are sending a 4-byte signed integer. A binary protocol is full of assumptions. (Which makes it more compact and faster, but more brittle than text)
If you don't want to be making as many assumptions, send text.
We can also use the following to handle a dynamic byte array size.
BigEndian format:
public static int parseAsBigEndianByteArray(byte[] bytes) {
    int factor = bytes.length - 1;
    int result = 0;
    for (int i = 0; i < bytes.length; i++) {
        result |= (bytes[i] & 0xFF) << (8 * factor--); // mask so negative bytes don't sign-extend
    }
    return result;
}
Little Endian format:
public static int parseAsLittleEndianByteArray(byte[] bytes) {
    int result = 0;
    for (int i = 0; i < bytes.length; i++) {
        result |= (bytes[i] & 0xFF) << (8 * i); // mask so negative bytes don't sign-extend
    }
    return result;
}
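For instance (hypothetical input), the big-endian bytes 0, 0, 12, -64 decode to 3264:
byte[] bytes = { 0x00, 0x00, 0x0C, (byte) 0xC0 };
System.out.println(parseAsBigEndianByteArray(bytes));    // 3264
System.out.println(parseAsLittleEndianByteArray(bytes)); // same bytes read the other way round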
This will also help you convert bytes to int values:
public static int toInt(byte[] bytes) {
    int result = 0;
    for (int i = 0; i < 4; i++) {
        result = (result << 8) | (bytes[i] & 0xFF); // take each byte's unsigned value
    }
    return result;
}
