Converting Java byte array into iOS NSData - java

In Java I have a byte array, which in iOS I represent as NSData.
Everything works fine - it's just that some access methods seem rather clumsy in iOS compared to Java.
Accessing a single byte in Java: byteArray[i]
while in iOS I keep using this, where byteArray is an NSData:
Byte b;
[byteArray getBytes: &b range: NSMakeRange( i, 1 )];
Isn't there a more direct way of writing this similar to Java?

Well, if you are willing to step outside the NSData object, you can get at the raw bytes as a const pointer like this:
NSData *data = ...; // your data
NSUInteger i = 1;
const char * array = [data bytes];
char c = array[i];
Note
This kind of array is read-only! (const void *)
Otherwise you'll have to use the functions you already mentioned or some others Apple provides.
Edit
Or you could add some category to NSData
@interface NSData (NSDataAdditions)
- (char)byteAtIndex:(NSUInteger)index;
@end
@implementation NSData (NSDataAdditions)
- (char)byteAtIndex:(NSUInteger)index {
    char c;
    [self getBytes: &c range: NSMakeRange( index, 1 )];
    return c;
}
@end
And then access your Array like this:
NSData *data = ...; // your data
char c = [data byteAtIndex:i];

Related

Warning: while trying to convert java byte[] to C unsigned char*

I am writing a JNI bridge. My Java program reads an image into a byte array using a ByteArrayOutputStream, and this array is used to call a C function that converts the byte array to an unsigned char*. Here is the code:
JNIEXPORT void JNICALL Java_ImageConversion_covertBytes(JNIEnv *env, jobject obj, jbyteArray array)
{
    unsigned char* flag = (*env)->GetByteArrayElements(env, array, NULL);
    jsize size = (*env)->GetArrayLength(env, array);
    for (int i = 0; i < size; i++) {
        printf("%c", flag[i]);
    }
}
In this I keep getting a warning when I compile:
warning: initializing 'unsigned char *' with an expression of type 'jbyte *' (aka 'signed char *') converts between pointers to integer types with different sign [-Wpointer-sign]
unsigned char* flag = (*env)->GetByteArrayElements(env, array, NULL);
How can I remove this warning? I want to print all the characters.
The warning exists because the sign change might be important. In JNI, jbyte corresponds to Java's byte, which is a signed 8-bit integer; in C it is explicitly a signed char.
However, it is OK to access any object with any character pointer, so you can cast to unsigned char explicitly:
unsigned char* flag = (unsigned char*)(*env)->GetByteArrayElements(env, array, NULL);
Alternatively, you can declare flag as signed char:
signed char* flag = (*env)->GetByteArrayElements(env, array, NULL);
This is fine for printf("%c\n", flag[i]); because %c expects an int (char arguments are promoted); printf then converts that value to unsigned char before printing it, so both signed and unsigned char will do.
A third option would be to use neither: if you just want to write the bytes to the terminal, use a void * pointer and fwrite:
JNIEXPORT void JNICALL
Java_ImageConversion_covertBytes(JNIEnv *env, jobject obj, jbyteArray array)
{
    void *flag = (*env)->GetByteArrayElements(env, array, NULL);
    jsize size = (*env)->GetArrayLength(env, array);
    fwrite(flag, 1, size, stdout);
    /* when done, release the pinned array:
       (*env)->ReleaseByteArrayElements(env, array, (jbyte*) flag, JNI_ABORT); */
}
and let fwrite worry about the looping.
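The same signedness question shows up on the Java side of the boundary: Java's byte is signed, so a byte written as 0xFF in C reads back as -1 in Java. A minimal pure-Java sketch (the class name is mine, not from the question) of recovering the unsigned 0-255 value with a mask:
public class UnsignedBytes {
    public static void main(String[] args) {
        byte[] data = { (byte) 0xFF, (byte) 0x80, 0x7F };
        for (byte b : data) {
            int unsigned = b & 0xFF; // promotes to int and masks off the sign extension
            System.out.println(b + " -> " + unsigned); // -1 -> 255, -128 -> 128, 127 -> 127
        }
    }
}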

Taking a string representation of a large integer and converting it to a byte array in Java

Basically, my problem is two-fold, and refers pretty specifically to the Bitcoin RPC. I am writing a miner in Java for Litecoin (a spinoff of BTC) and need to take a string that looks like:
000000000000000000000000000000000000000000000000000000ffff0f0000
Convert it to look like
00000fffff000000000000000000000000000000000000000000000000000000
(which I believe is switching from little-endian to big-endian).
I then need to turn that string into a byte array --
I've looked at the Hex class from org.apache, String.toByte(), and a piece of code that looks like:
public static byte[] toByta(char[] data) {
    if (data == null) return null;
    // ----------
    byte[] byts = new byte[data.length * 2];
    for (int i = 0; i < data.length; i++)
        System.arraycopy(toByta(data[i]), 0, byts, i * 2, 2); // relies on a toByta(char) overload (not shown)
    return byts;
}
So essentially: What is the best way, in Java to change endianness? And what is the best way to take a string representation of a number and convert it to a byte array to be hashed?
EDIT: I had the wrong result after changing the endian.
Integer and BigInteger both have toString methods that take a radix, so you can get the hex String. You can make a StringBuffer from that String and call reverse(). You then convert back to a String using toString(), and get the bytes via getBytes().
Don't know if this is "best", but it requires little work on your part.
Beware, though: reversing the string character by character also swaps the two hex digits within each byte. To change byte order you need to reverse two-character pairs (or reverse the decoded byte array), which is likely why your first attempt gave the wrong result.
If you need better speed, call getBytes() on the original wrong-direction hex string (from step 1) and reverse it in place using a for loop, e.g.
for (int i = 0; i < bytes.length / 2; i++) {
    byte temp = bytes[i];
    bytes[i] = bytes[bytes.length - 1 - i]; // note the -1: without it the first swap reads past the end
    bytes[bytes.length - 1 - i] = temp;
}
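For completeness, here is a small self-contained Java sketch (method names are my own) of the byte-array route: parse the hex string two characters at a time, then reverse the decoded bytes in place. Unlike reversing the string itself, this keeps the two hex digits within each byte together, matching the expected output in the question:
public class Endian {
    // decode a hex string into bytes, two hex digits per byte
    static byte[] hexToBytes(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(hex.substring(i * 2, i * 2 + 2), 16);
        return out;
    }

    // swap byte order in place
    static void reverse(byte[] bytes) {
        for (int i = 0; i < bytes.length / 2; i++) {
            byte temp = bytes[i];
            bytes[i] = bytes[bytes.length - 1 - i];
            bytes[bytes.length - 1 - i] = temp;
        }
    }

    public static void main(String[] args) {
        byte[] target = hexToBytes(
            "000000000000000000000000000000000000000000000000000000ffff0f0000");
        reverse(target);
        // target now begins 00 00 0f ff ff ..., i.e. the 00000fffff00... form above
    }
}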

Return Arabic from JNI call

I have been trying to return an ARABIC string from a JNI call.
The java method is as follows
private native String ataTrans_CheckWord(String lpszWord, String lpszDest, int m_flag, int lpszReserved);
lpszWord: Input English
lpszDest: Ignore
m_flag: Ignore
lpszReserved: Ignore
Now when I use javah to generate the header file I get a C++ header file with this signature
JNIEXPORT jstring JNICALL Java_MyClass_ataTrans_1CheckWord (JNIEnv* env, jobject, jstring, jstring, jint , jint)
Now in this C++ code I have statements such as this
JNIEXPORT jstring JNICALL Java_MyClass_ataTrans_1CheckWord(JNIEnv* env, jobject, jstring jstrInput, jstring, jint m_flag, jint lpszReserved)
{
    char aa[10];
    char* bb;
    char** cc;
    bb = aa;
    cc = &bb;
    jstring tempValue;
    jboolean blnIsCopy;
    const char* strCIn = (env)->GetStringUTFChars(jstrInput, &blnIsCopy);
    int retVal = pDllataTrans_CheckWord(strCIn, cc, m_flag, lpszReserved);
    printf("Original Arabic Conversion Index 0: %s \n", cc[0]); // This prints ARABIC properly
    tempValue = (env)->NewString((jchar*) cc[0], 10); // convert char array to jstring
    printf("JSTRING UNICODE Created : %s \n", tempValue); // This prints junk
    return tempValue;
}
I believe the ARABIC content is inside the pointer to a pointer “cc”. Finally in my java code I have a call like this
String temp = myclassInstance.ataTrans_CheckWord("ABCDEFG", "",1, 0);
System.out.println("FROM JAVE OUTPUT : "+temp); //Prints Junk
I just can't manage to return ARABIC characters to my Java code. Is there something I am doing wrong? I have tried various other alternatives such as
tempValue = env->NewStringUTF("شسيشسيشسيشس");
and returning tempValue, but no luck. It's always garbage on the Java side.
Java strings are internally UTF-16, an encoding which uses 2 or 4 bytes per character. Your translation system seems to return strings encoded in a MBCS (Multi-Byte Character Set) - 1-N bytes per character.
The JNI NewString function expects data encoded as UTF-16, and you're passing it something else - so in java you get garbage data. The one thing that is lacking from your information is which encoding your translation system uses. I'll assume it's UTF-8, and use MultiByteToWideChar to convert to the format java expects. The below code assumes that you're doing this on Windows - if not, specify platform, and look at e.g. the iconv library.
int Len = strlen(cc[0]) * 2 + 2;
wchar_t* Buffer = (wchar_t*) malloc(Len);
MultiByteToWideChar(CP_UTF8, 0, cc[0], -1, Buffer, Len / sizeof(wchar_t)); // capacity is in wide characters, not bytes
tempValue = (env)->NewString((jchar*) Buffer, wcslen(Buffer));
free(Buffer);
If you get strings as some other codepage, replace CP_UTF8 above.
As a side note, if the encoding actually is UTF-8, you can simply pass your cc[0] to NewStringUTF instead - this function handles the UTF-8 to UTF-16 conversion internally.
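To see the difference between the two encodings from the Java side, here is a small pure-Java sketch (separate from the JNI code above) that encodes the same Arabic text both ways and round-trips it:
import java.nio.charset.StandardCharsets;

public class ArabicEncodings {
    public static void main(String[] args) {
        String arabic = "شسيشسيشسيشس";
        byte[] utf8 = arabic.getBytes(StandardCharsets.UTF_8);     // Arabic letters take 2 bytes each in UTF-8
        byte[] utf16 = arabic.getBytes(StandardCharsets.UTF_16LE); // 2 bytes per char, no byte-order mark
        System.out.println("chars: " + arabic.length()
                + ", UTF-8 bytes: " + utf8.length
                + ", UTF-16LE bytes: " + utf16.length);
        // decoding with the matching charset restores the original text
        System.out.println(new String(utf8, StandardCharsets.UTF_8));
    }
}
NewString expects the UTF-16 representation and NewStringUTF expects the (modified) UTF-8 one; handing either function the other encoding produces exactly the kind of garbage described above.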

Convert ICU4C byte to java char

I am accessing an ICU4C function through JNI which returns a UChar* (i.e. a Unicode character array). I was able to convert that to a jbyteArray by assigning each member of the UChar array to a local jbyte[] array that I created, and then I returned it to Java using the env->SetByteArrayRegion() function. Now I have the byte[] array in Java, but it's pretty much all gibberish - weird symbols at best. I am not sure where the problem might be. I am working with Unicode characters, if that matters. How do I convert the byte[] to a char[] in Java properly? Something is not being mapped right. Here is a snippet of the code:
--- JNI code (altered slightly to make it shorter) ---
static jint testFunction(JNIEnv* env, jclass c, jcharArray srcArray, jbyteArray destArray) {
    jchar* src = env->GetCharArrayElements(srcArray, NULL);
    int n = env->GetArrayLength(srcArray);
    UChar* testStr = new UChar[n];
    jbyte destChr[n];
    // calling ICU4C function here
    icu_function(src, testStr); // takes source characters and returns UChar*
    for (int i = 0; i < n; i++)
        destChr[i] = testStr[i]; //is this correct?
    delete[] testStr;
    env->SetByteArrayRegion(destArray, 0, n, destChr);
    env->ReleaseCharArrayElements(srcArray, src, JNI_ABORT);
    return n; // anything for now
}
-- Java code --
String wohoo = "ABCD bal bla bla";
char[] myChars = wohoo.toCharArray();
byte[] myICUBytes = new byte[myChars.length];
int value = MyClass.testFunction(myChars, myICUBytes);
System.out.println(new String(myICUBytes)); // produces gibberish & weird symbols
I also tried System.out.println(new String(myICUBytes, Charset.forName("UTF-16"))) and it's just as gibberish...
Note that the ICU function does return the proper Unicode characters in the UChar*; it's somewhere between the conversion to jbyteArray and Java that it is getting messed up...
Help!
destChr[i] = testStr[i]; //is this correct?
This looks like an issue all right.
JNI types:
byte    jbyte    signed 8 bits
char    jchar    unsigned 16 bits
ICU4C types:
Define UChar to be wchar_t if that is 16 bits wide; always assumed to be unsigned. If wchar_t is not 16 bits wide, then define UChar to be uint16_t or char16_t, because GCC >= 4.4 can handle UTF-16 string literals. This makes the definition of UChar platform-dependent but allows direct string type compatibility with platforms with 16-bit wchar_t types.
So, aside from anything icu_function might be doing, you are trying to fit a 16-bit value into an 8-bit-wide type.
If you must use a Java byte array, I suggest converting to the 8-bit char type by transcoding to a byte-oriented Unicode encoding such as UTF-8.
To paraphrase some C code:
UErrorCode status = U_ZERO_ERROR;
UChar *utf16 = (UChar*) malloc(len16 * sizeof(UChar));
//TODO: fill data
// convert to UTF-8
UConverter *encoding = ucnv_open("UTF-8", &status);
int len8 = ucnv_fromUChars(encoding, NULL, 0, utf16, len16, &status);
status = U_ZERO_ERROR; // the sizing call above sets U_BUFFER_OVERFLOW_ERROR; reset it
char *utf8 = (char*) malloc(len8 * sizeof(char));
ucnv_fromUChars(encoding, utf8, len8, utf16, len16, &status);
ucnv_close(encoding);
//TODO: char to jbyte
You can then transcode this to a Java String using new String(myICUBytes, "UTF-8").
I used UTF-8 because it was already in my sample code and you don't have to worry about endianness. Convert my C to C++ as appropriate.
Have you considered using ICU4J?
Also, when converting your bytes to a string, you will need to specify a character encoding. I'm not familiar with the library in question, so I can't advise you further, but perhaps this will be "UTF-16" or similar?
Oh, and it's also worth noting that you might simply be getting display errors because the terminal you're printing to isn't using the correct character set and/or doesn't have the right glyphs available.
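On the Java side, the decode step the answers describe might look like the following sketch (assuming, as in the transcoding example above, that the native code filled myICUBytes with UTF-8):
import java.nio.charset.StandardCharsets;

public class DecodeIcuBytes {
    public static void main(String[] args) {
        // stand-in for the bytes returned through SetByteArrayRegion
        byte[] myICUBytes = "ABCD bal bla bla".getBytes(StandardCharsets.UTF_8);
        // name the charset explicitly; new String(bytes) uses the platform
        // default encoding, which is one more way to end up with gibberish
        String decoded = new String(myICUBytes, StandardCharsets.UTF_8);
        System.out.println(decoded);
    }
}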

How does this reinterpret_cast work? (Porting C++ to Java)

I have some C++ code I'm trying to port to Java, that looks like this:
struct foostruct {
    unsigned char aa : 3;
    bool ab : 1;
    unsigned char ba : 3;
    bool bb : 1;
};

static void foo(const unsigned char* buffer, int length)
{
    const unsigned char *end = buffer + length;
    while (buffer < end)
    {
        const foostruct bar = *(reinterpret_cast<const foostruct*>(buffer++));
        // read some values from the struct and act accordingly
    }
}
What is the reinterpret_cast doing?
It's basically saying that the 8 bits at the current pointer should be interpreted as a foostruct.
In my opinion it would be better written as follows:
const unsigned char aa = *buffer & 0x07;
const bool ab = (*buffer & 0x08) != 0;
const unsigned char ba = (*buffer & 0x70) >> 4;
const bool bb = (*buffer & 0x80) != 0;
I think it is far more obvious what is being done that way, and you may well find it easier to port to Java too; a sketch of the Java version follows.
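Since the question is about porting to Java, here is a sketch of those same masks as a small Java class (the names are mine; note also that C++ bit-field layout is implementation-defined, so this assumes the low-bits-first layout used above):
public class FooStruct {
    final int aa;     // 3 bits
    final boolean ab; // 1 bit
    final int ba;     // 3 bits
    final boolean bb; // 1 bit

    FooStruct(byte b) {
        aa = b & 0x07;        // low three bits
        ab = (b & 0x08) != 0; // bit 3
        ba = (b & 0x70) >> 4; // bits 4-6
        bb = (b & 0x80) != 0; // high bit (the mask also discards sign extension)
    }

    static void foo(byte[] buffer) {
        for (byte b : buffer) {
            FooStruct bar = new FooStruct(b);
            // read some values from the struct and act accordingly
        }
    }
}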
It does what a classic C-style (const foostruct *)buffer cast would do at worst: it tells C++ to ignore all safety and trust that you really know what you are doing - in this case, that the buffer actually consists of foostructs, which in turn are bit fields overlain on single 8-bit characters. Essentially, you can do the same in Java by just getting the bytes and doing the shift and mask operations yourself.
You have a pointer to unsigned char, right? Now imagine that the bits pointed to are treated as though they were an object of type foostruct. That's what reinterpret_cast does - it reinterprets the bit pattern as the memory representation of another type...
