I have some C++ code I'm trying to port to Java, that looks like this:
struct foostruct {
    unsigned char aa : 3;
    bool ab : 1;
    unsigned char ba : 3;
    bool bb : 1;
};

static void foo(const unsigned char* buffer, int length)
{
    const unsigned char *end = buffer + length;
    while (buffer < end)
    {
        const foostruct bar = *(reinterpret_cast<const foostruct*>(buffer++));
        //read some values from struct and act accordingly
    }
}
What is the reinterpret_cast doing?
It's basically saying that the 8 bits at the current pointer should be interpreted as a "foostruct".
In my opinion it would be better written as follows:
const unsigned char aa = *buffer & 0x07;
const bool ab = (*buffer & 0x08) != 0;
const unsigned char ba = (*buffer & 0x70) >> 4;
const bool bb = (*buffer & 0x80) != 0;
I think it is far more obvious what is being done that way. You may well find it easier to port to Java like this too ...
It does what a classic C-style (const foostruct *)buffer cast would do: it tells C++ to ignore all safety and trust that you really know what you are doing; in this case, that the buffer actually consists of foostructs, which in turn are bit fields overlaid on single 8-bit characters. Essentially, you can do the same in Java by getting the bytes and doing the shift and mask operations yourself.
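A rough Java port of that masking approach might look like the sketch below (the method shape is made up for illustration; bit-field layout is implementation-defined in C++, so the masks assume the same low-bits-first layout used above):
static void foo(byte[] buffer) {
    for (byte b : buffer) {
        int v = b & 0xFF;                 // widen to int and drop the sign
        int aa = v & 0x07;                // low three bits
        boolean ab = (v & 0x08) != 0;     // bit 3
        int ba = (v & 0x70) >> 4;         // bits 4 to 6
        boolean bb = (v & 0x80) != 0;     // bit 7
        // read some values and act accordingly
    }
}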
You have a pointer to unsigned char, right? Now imagine that the bits pointed to by the pointer are treated as though they were an object of type foostruct. That's what reinterpret_cast does: it reinterprets the bit pattern as the memory representation of another type...
In Java I have a byte array, which on iOS I have represented as NSData.
Everything works fine - it's just that some access methods seem rather clumsy in iOS compared to Java.
Accessing a single byte in Java: byteArray[i]
while on iOS I keep using this, where byteArray is an NSData:
Byte b;
[byteArray getBytes: &b range: NSMakeRange( i, 1 )];
Isn't there a more direct way of writing this similar to Java?
Well, if you don't mind dropping down from the NSData object, you can get at its contents as a const void * (treated here as const char *) like this.
NSData *data = your data stuff
NSUInteger i = 1;
const char * array = [data bytes];
char c = array[i];
Note
This kind of array is read only! (const void *)
Otherwise you'll have to use the functions you already mentioned or some others Apple provides.
Edit
Or you could add a category to NSData:
@interface NSData (NSDataAdditions)
- (char)byteAtIndex:(NSUInteger)index;
@end

@implementation NSData (NSDataAdditions)
- (char)byteAtIndex:(NSUInteger)index {
    char c;
    [self getBytes:&c range:NSMakeRange(index, 1)];
    return c;
}
@end
And then access your array like this:
NSData *data = your data stuff
char c = [data byteAtIndex:i];
Java:
byte[] arr1=new byte[]{0x01};
String aa = "helloo";
short[] s1 = new short[1]; // **
s1[0] = 246; // **
Object[] obj = new Object[]{s1, arr1, aa};
C:
signed char a1[] = {0x01};
char *str = "helloo";
short int st1[] = {246}; // **
char* c [] = {st1,str1,c2};
Is short int st1[] = {246} correct? And I am getting this error:
"illegal implicit conversion from 'short *' to 'char *'".
How to assign short to char?
char* c []
is an array of pointers, not an array of chars.
Use something like
short st1[] = { 246 };
char* str = "helloo";
char c [] = {st1[0], str[0], str[1], str[2], str[3], str[4], str[5]};
str[i] gets individual characters, since 'char* str' points to the first element of an array.
If you need an array of strings, then make it
char tmp[1024];
// itoa is the conversion of st1[0] to string
char* c[] = { itoa(st1[0], tmp, 10), str };
Cast st1 to a char*. I.e.:
char* c [] = {(char*)st1,str1,c2};
Note that you'll have to cast the pointer back to short* when accessing the elements it points to if you want to get the correct data.
C++ doesn't have a base Object type. You will have to convert all your values to one specific type.
std::string wtf[]= { std::string(a1, a1+ 1), std::string(st1, st1+ 1), std::string(str) }; // don't forget to #include <string>
In C or C++ there is no common base class for all types (like Java's Object). The best you can use is void* c[] = ...; (void* is an untyped pointer, so it can hold anything), or explicitly cast to the desired type (but a char pointer into a short array only sees the raw bytes of the shorts, which is probably not what you want).
Although it's strongly discouraged, the rough equivalent of the last line in C is:
void* c [] = {st1,str1,c2};
I get hex strings of 14 bytes, e.g. a55a0b05000000000022366420ec.
I use javax.xml.bind.DatatypeConverter.parseHexBinary(String s) to get an array of 14 bytes.
Unfortunately those are unsigned byte values, like the last one, 0xEC = 236 for example.
But I would like to compare them to bytes like this:
if(byteArray[13] == 0xec)
Since 236 is bigger than the maximum value of a signed byte, this if statement would fail.
Any idea how to solve this in java?
Thx!
Try if(byteArray[13] == (byte)0xec)
You can promote the byte to an int:
if((byteArray[13] & 0xff) == 0xec)
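To see why the mask is needed, here is a minimal sketch of what happens to 0xEC in a Java byte:
byte b = (byte) 0xEC;
System.out.println(b);                    // -20, because byte is signed
System.out.println(b & 0xff);             // 236, the unsigned value again
System.out.println(b == 0xec);            // false: -20 compared with the int 236
System.out.println((b & 0xff) == 0xec);   // true
System.out.println(b == (byte) 0xec);     // true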
Since Java doesn't support unsigned primitive data types, you should look at using int as the data type when parsing the string:
String str = "a55a0b05000000000022366420ec";
int[] arrayOfValues = new int[str.length() / 2];
int counter = 0;
for (int i = 0; i < str.length(); i += 2) {
    String s = str.substring(i, i + 2);
    arrayOfValues[counter] = Integer.parseInt(s, 16);
    counter++;
}
work with the arrayOfValues...
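With the values stored as ints, the comparison from the question then works as written, for example:
if (arrayOfValues[13] == 0xec) {
    // matches: the last byte of "a55a0b05000000000022366420ec"
}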
I am working on network programming on a Linux machine using C++, and I was wondering how I can set a 4-byte bit pattern for a magic number in Java (the client). Also, how do I set the same bit pattern in C++ so I can compare the one from the client with the one on the server side?
Thanks in advance.
Edit
So now I have this
byte[] signature = new byte[4];
for (int i = 0; i < 4; i++) {
    signature[i] = (byte) 0xA9;
}
And if I look inside the array after the for loop in the debugger, I see
{-87, -87, -87, -87}
And I did something like this in C++
uint8_t m_magicNumberBuffer[4];
magicKeyRead = read(m_fd, m_magicNumberBuffer, SIZE_OF_HEADER);
if (m_magicNumberBuffer[0] == 0xA9 && m_magicNumberBuffer[1] == 0xA9 && m_magicNumberBuffer[2] == 0xA9 && m_magicNumberBuffer[3] == 0xA9) {
    printf("SocketClient::recvMagicKey, Magic key has been found \n");
    break;
}
It somehow works, but I'm not sure about it: I declared m_magicNumberBuffer as unsigned 8-bit integers, yet the values showed up as -87 in Java. Is it OK to do it this way?
Thanks in advance.
Java has bitwise operators, for example bitwise OR |, bitwise AND & and bit shift operators >>>, >> and <<, very similar to what C++ has. You can use those to manipulate bits exactly as you want.
Since you don't explain in more detail what you want to do, I cannot give you a more detailed answer.
In Java, you would represent it as
byte[] signature=new byte[4];
in C++, it would be
uint8_t signature[4];
You can then access each of the bytes individually as elements of the array.
Both languages support hex literals, so for example, you could do
signature[0] = (byte) 0xA9;
in Java (the cast is needed because 0xA9 does not fit in a signed byte), or signature[0] = 0xA9; in C++, and it will set the first byte to A9 in hexadecimal (which is 10101001 in binary).
When you write to a DataOutputStream you write 8-bit bytes (sign is not important)
String filename = "text.dat";
DataOutputStream dos = new DataOutputStream(new FileOutputStream(filename));
for (int i = 0; i < 4; i++)
    dos.write(0xA9);
dos.close();

DataInputStream dis = new DataInputStream(new FileInputStream(filename));
for (int i = 0; i < 4; i++)
    System.out.println(Integer.toHexString(dis.readUnsignedByte()));
dis.close();
prints
a9
a9
a9
a9
Java treats a byte as signed by default, but it is just 8 bits of data; used correctly it can be unsigned or mean whatever you want it to.
What you're really dealing with is four bytes of raw memory; all you're
concerned with is the bit pattern, not the numeric values. In eight
bits (the size of a byte in Java), -87 and 0xA9 both have the same
bit pattern: 10101001. Because byte is signed in Java, dumping the
value will show a negative number, which is rather counter-intuitive, but
Java doesn't have an eight-bit unsigned type.
(Technically, 0xA9 isn't representable in a signed byte, and in C++ converting it to a signed char is implementation-defined rather than guaranteed to preserve the bit pattern. Java doesn't worry about such niceties: the cast to byte simply keeps the low eight bits.)
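For completeness, here is a sketch of what the receiving check from the question could look like on the Java side (the DataInputStream named in is an assumption standing in for whatever stream wraps your socket):
byte[] magic = new byte[4];
in.readFully(magic);   // 'in' is an assumed DataInputStream wrapping the socket
boolean found = (magic[0] & 0xff) == 0xA9 && (magic[1] & 0xff) == 0xA9
             && (magic[2] & 0xff) == 0xA9 && (magic[3] & 0xff) == 0xA9;
// equivalently, compare each element against (byte) 0xA9 without the mask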
I am accessing an ICU4C function through JNI which returns a UChar * (i.e. a Unicode character array). I was able to convert that to a jbyteArray by assigning each member of the UChar array to a local jbyte[] array that I created, and then I returned it to Java using the env->SetByteArrayRegion() function. Now I have the byte[] array in Java, but it's pretty much all gibberish, weird symbols at best. I am not sure where the problem might be. I am working with Unicode characters, if that matters. How do I convert the byte[] to a char[] in Java properly? Something is not being mapped right. Here is a snippet of the code:
--- JNI code (altered slightly to make it shorter) ---
static jint testFunction(JNIEnv* env, jclass c, jcharArray srcArray, jbyteArray destArray) {
    jchar* src = env->GetCharArrayElements(srcArray, NULL);
    int n = env->GetArrayLength(srcArray);
    UChar *testStr = new UChar[n];
    jbyte destChr[n];
    //calling ICU4C function here
    icu_function (src, testStr); //takes source characters and returns UChar*
    for (int i=0; i<n; i++)
        destChr[i] = testStr[i]; //is this correct?
    delete[] testStr;
    env->SetByteArrayRegion(destArray, 0, n, destChr);
    env->ReleaseCharArrayElements(srcArray, src, JNI_ABORT);
    return (n); //anything for now
}
-- Java code --
String wohoo = "ABCD bal bla bla";
char[] myChars = wohoo.toCharArray();
byte[] myICUBytes = new byte[myChars.length];
int value = MyClass.testFunction (myChars, myICUBytes);
System.out.println(new String(myICUBytes)) ;// produces gibberish & weird symbols
I also tried: System.out.println(new String(myICUBytes, Charset.forName("UTF-16"))) and it's just as much gibberish...
Note that the ICU function does return the proper Unicode characters in the UChar *... somewhere between the conversion to jbyteArray and Java it is getting messed up...
Help!
destChr[i] = testStr[i]; //is this correct?
This looks like an issue all right.
JNI types:
byte jbyte signed 8 bits
char jchar unsigned 16 bits
ICU4C types:
Define UChar to be wchar_t if that is
16 bits wide; always assumed to be
unsigned.
If wchar_t is not 16 bits wide, then
define UChar to be uint16_t or
char16_t because GCC >=4.4 can handle
UTF16 string literals. This makes the
definition of UChar platform-dependent
but allows direct string type
compatibility with platforms with
16-bit wchar_t types.
So, aside from anything icu_function might be doing, you are trying to fit a 16-bit value into an 8-bit-wide type.
If you must use a Java byte array, I suggest converting to the 8-bit char type by transcoding to a Unicode encoding.
To paraphrase some C code:
UChar *utf16 = (UChar*) malloc(len16 * sizeof(UChar));
//TODO: fill data
// convert to UTF-8
UConverter *encoding = ucnv_open("UTF-8", &status);
int len8 = ucnv_fromUChars(encoding, NULL, 0, utf16, len16, &status);
char *utf8 = (char*) malloc(len8 * sizeof(char));
ucnv_fromUChars(encoding, utf8, len8, utf16, len16, &status);
ucnv_close(encoding);
//TODO: char to jbyte
You can then transcode this to a Java String using new String(myICUBytes, "UTF-8").
I used UTF-8 because it was already in my sample code and you don't have to worry about endianness. Convert my C to C++ as appropriate.
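On the Java side, decoding those UTF-8 bytes is then a one-liner; a minimal sketch, assuming the native code sized and filled myICUBytes with the len8 UTF-8 bytes produced above:
String result = new String(myICUBytes, java.nio.charset.StandardCharsets.UTF_8);
// or, as noted above: new String(myICUBytes, "UTF-8")
System.out.println(result);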
Have you considered using ICU4J?
Also, when converting your bytes to a string, you will need to specify a character encoding. I'm not familiar with the library in question, so I can't advise you further, but perhaps this will be "UTF-16" or similar?
Oh, and it's also worth noting that you might simply be getting display errors because the terminal you're printing to isn't using the correct character set and/or doesn't have the right glyphs available.