I am making an app that communicates with a specific Bluetooth Low Energy device. It requires a specific handshake, and this is all working perfectly in Objective-C for iOS; however, I am having trouble recreating this functionality in Java.
Any thoughts greatly appreciated!
WORKING Objective-C code:
uint8_t bytes[] = {0x04,0x08,0x0F,0x66,0x99,0x41,0x52,0x43,0x55,0xAA};
NSData *data = [NSData dataWithBytes:bytes length:sizeof(bytes)];
[_btDevice writeValue:data forCharacteristic:_dataCommsCharacteristic type:CBCharacteristicWriteWithResponse];
So far for Android I have the following as an equivalent:
byte[] handshake = {0x04,0x08,0x0F,0x66,(byte)0x99,0x41,0x52,0x43,0x55,(byte)0xAA};
characteristic.setValue(handshake);
boolean writeStatus = gatt.writeCharacteristic(characteristic);
Log.d(TAG,"Handshake sent: " + writeStatus);
As mentioned, iOS works great, but the equivalent Java code gets no response from the device, leading me to think that the data being sent is wrong or not recognised.
UPDATE
So, after plenty of wrestling with this, I have a little more insight into what is going on (I think!).
As Scary Wombat mentioned below, the maximum value of a byte in Java is 127, so the two values 0x99 and 0xAA in the array are of course out of this range.
The below is where I am at with the values:
byte bytes[] = {0x04,0x08,0x0F,0x66,(byte)0x99,0x41,0x52,0x43,0x55,(byte)0xAA};
Log.d(TAG, Arrays.toString(bytes));
Produces
[4, 8, 15, 102, -103, 65, 82, 67, 85, -86]
However the expected values need to be
[4, 8, 15, 102, 153, 65, 82, 67, 85, 170]
I have tried casting these troublesome bytes explicitly and have also tried the below:
byte bytes[] = {0x04,0x08,0x0F,0x66,(byte)(0x99 & 0xFF),0x41,0x52,0x43,0x55,(byte)(0xAA & 0xFF)};
However the resulting values in the array are always the same.
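For reference, the negative numbers are just Java's signed rendering of the same bit patterns; masking with & 0xFF when logging shows the unsigned view without changing the stored bytes. A minimal sketch:

byte b = (byte) 0x99;
System.out.println(b);                             // -103 (signed view)
System.out.println(b & 0xFF);                      // 153  (unsigned view)
System.out.println(Integer.toHexString(b & 0xFF)); // 99   (same bits either way)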
Please help!!! :)
UPDATE 2
After a day of digging, it appears that although the values log incorrectly, the values received by the Bluetooth device SHOULD still be correct, so I have modified this question and am continuing over here
Why are you not doing it the same way as in the Objective-C code?
In this code
String handshakeString = "0x04,0x08,0x0F,0x66,0x99,0x41,0x52,0x43,0x55,0xAA";
byte[] value = handshakeString.getBytes();
this is just making a text String where the first char is '0' and the second is 'x', etc.
try
byte arr[] = {0x04,0x08,0x0F,0x66,(byte)0x99,0x41,0x52,0x43,0x55,(byte)0xAA}; // casts needed for values above 127
edit
You may need to reconsider values such as 0x99 (hence the (byte) casts above), since in Java byte values are, per the Javadocs:
It has a minimum value of -128 and a maximum value of 127 (inclusive).
See Can we make unsigned byte in Java
String handshakeString = "0x04,0x08,0x0F,0x66,0x99,0x41,0x52,0x43,0x55,0xAA";
byte[] value = handshakeString.getBytes();
will also include the commas and so produces far too many bytes, and will not give the same bytes as in your Objective-C code.
Try to use a byte[] directly.
byte[] value = new byte[]{0x04,0x08,0x0F,0x66,(byte)0x99,0x41,0x52,0x43,0x55,(byte)0xAA};
Related
I'm currently working with Apache POI to create an Excel file. I want to send this file to AWS S3 via multipart upload.
I'm using the SXSSFWorkbook combined with the substitution technique used by the BigGridDemo in order to create the document itself and stream the sheet data. This is where it gets a little tricky. I have something mostly working, but am generating an invalid Excel file due to NULs being written into the XML file that composes the sheet data.
In trying to track down why this happens, I've stumbled onto this:
import java.io._
import java.util.zip._
val bo = new ByteArrayOutputStream()
val zo = new ZipOutputStream(bo)
zo.putNextEntry(new ZipEntry("1"))
zo.write("hello".getBytes())
zo.write("\nhello".getBytes())
val bytes1 = bo.toByteArray()
// bytes1: Array[Byte] = Array(80, 75, 3, 4, 20, 0, 8, 8, 8, 0, 107, -121, -9, 76, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 49)
bo.reset()
zo.write("hello".getBytes())
val bytes2 = bo.toByteArray() // bytes2: Array[Byte] = Array()
zo.flush()
val bytes2 = bo.toByteArray() // bytes2: Array[Byte] = Array()
bo.size //res11: Int = 0
zo.putNextEntry(new ZipEntry("2")) // If I make a new entry it works, but I can't do this in real code...
bo.size // res17: Int = 66
It seems that when I reset the underlying byte output stream, the ZipOutputStream no longer writes anything. This surprised me, so I went looking into the underlying source code of ZipOutputStream. I noticed the default method is DEFLATED, which just calls DeflaterOutputStream#write; I then looked into the Deflater code itself, thinking that maybe there's something deeper in the compression algorithm that I don't understand that requires the stream not to be reset, or that is somehow affected by it. I found a reference to FULL_FLUSH and noted:
The compression state is reset so that the inflater that works on the compressed output data can restart from this point if previous compressed data has been damaged or if random access is desired.
Which sounded good to me, since I could imagine that a reset byte stream could perhaps be seen as damaged data. So I repeated my minimal experiment:
import java.io._
import java.util.zip._
val bo = new ByteArrayOutputStream()
val zo = new ZipOutputStream(bo)
zo.setLevel(Deflater.FULL_FLUSH)
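// (Note: setLevel expects a compression level 0-9; Deflater.FULL_FLUSH == 3, so this sets compression level 3 rather than a flush mode.)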
zo.putNextEntry(new ZipEntry("1"))
zo.write("hello".getBytes())
val bytes1 = bo.toByteArray()
// bytes1: Array[Byte] = Array(80, 75, 3, 4, 20, 0, 8, 8, 8, 0, 84, 75, -8, 76, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 49)
zo.flush()
bo.reset()
zo.write("\nhello".getBytes())
zo.flush()
val bytes2 = bo.toByteArray() // bytes2: Array[Byte] = Array()
So no dice. My goal here was to keep everything in memory (hence the byte arrays) and keep the memory pressure low by removing the bytes I had already handed off to the UploadPartRequest, but this really throws a wrench into things, since I'm under the impression that the XML file must be compressed: the Excel file format is effectively a zip file. My full code is obviously a bit more complicated, uses the Play framework and Scala 2.12.6, and is on GitHub here with some additional comments, if you'd like to look at it or run it.
I know I could upload this file to S3 in parts by writing the Excel file out to disk first and then uploading it, but for my purposes I'm hoping for an all-in-memory solution so I don't have to deal with disk-space problems on web servers when large temp files are generated. By uploading the rows as they're generated, I was thinking the memory pressure should stay fairly constant per upload. Here's what the current code generates in the XML sheet data:
...
Which implies to me that, despite my experiment showing no bytes, at some point more bytes do get written to the file, since the NULs end eventually.
So... why does this happen? Why does ByteArrayOutputStream.reset() cause a problem for writing on the ZipOutputStream? If I don't call .reset(), it seems that the ByteArrayOutputStream will expand until it's huge and causes Out of Memory errors. Or should I not worry, since the data is getting compressed anyway?
I don't think it's the fault of ByteArrayOutputStream.reset().
Similar to CipherStreams and other filter streams, DeflaterOutputStream, and thus ZipOutputStream, does not actually write to the underlying stream (your ByteArrayOutputStream) until it needs to (sometimes not even when you flush).
I believe in the case of a ZipOutputStream it might only write to the underlying stream at certain block sizes or upon closing of the ZipEntry; I'm not exactly sure, but that's my guess.
Example:
val bo = new ByteArrayOutputStream()
val zo = new ZipOutputStream(bo)
zo.putNextEntry(new ZipEntry("example entry"))
// v prints the entry header bytes v
println(bo.toString())
zo.write("hello".getBytes())
zo.flush();
// v still only the entry header bytes v
println(bo.toString())
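To illustrate the same point in plain Java (a sketch of my own, not from the original answer): it is closing the entry, not flush(), that forces the deflater's buffered output into the underlying stream.

import java.io.ByteArrayOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipFlushDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bo = new ByteArrayOutputStream();
        ZipOutputStream zo = new ZipOutputStream(bo);
        zo.putNextEntry(new ZipEntry("1"));
        int headerSize = bo.size();                   // only the entry header so far
        zo.write("hello".getBytes());
        zo.flush();
        System.out.println(bo.size() == headerSize);  // true: the deflater is still buffering
        zo.closeEntry();                              // finishes the entry, emits compressed data
        System.out.println(bo.size() > headerSize);   // true
        zo.close();
    }
}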
One thing I noticed in ExcelStreamingToS3Service - line 155: you might want to change it to zos.write(byteBuffer, 0, bytesRead), or something similar, so that only the bytes actually read are written. Writing the full buffer could certainly be what is causing all those NUL characters, since your buffer may not have been filled during the read and may still have many empty indices. After all, it looks like the XML continues where it left off before the NULs, like here: <c r="C1 ... 940" t="inlineStr">, so it does seem like you're writing all the data, just interspersing it with NULs.
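A hedged sketch of the loop I mean (the stream and variable names here are illustrative, not from the linked code):

// Illustrative helper: copy a stream into the ZipOutputStream without
// writing unread buffer slack (the likely source of the NULs).
static void copy(java.io.InputStream in, java.util.zip.ZipOutputStream zos)
        throws java.io.IOException {
    byte[] byteBuffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = in.read(byteBuffer)) != -1) {
        zos.write(byteBuffer, 0, bytesRead); // only bytesRead bytes, not the full buffer
    }
}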
Check this PDF, Practical Guidelines for Boosting Java Server Performance, from Bell Laboratories: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.3674&rep=rep1&type=pdf
It talks about everything, including the use of the reset method.
Also, take a look at this post: http://java-performance.info/java-io-bytearrayoutputstream/
Finally, you should always have a try/catch for issues such as Out of Memory.
If I don't call .reset() it seems that the ByteArrayOutputStream will expand until it's huge and cause Out of Memory errors?
Let me know if this helps or not.
We are implementing a feature to support non-printable UTF-8 characters in our database. Our system stores them in the database and retrieves them. We collect input in the form of Base64, convert it into a byte array, and store it in the database. During retrieval, the database gives us the byte array and we convert it back to Base64.
During the retrieval process (after the DB gives us the byte array), all the attributes are converted to string arrays, later converted back to byte arrays, and then converted to Base64 again to give back to the user.
The below piece of code compiles and works properly on our Windows JDK (Java 8). But when it is run in the SuSE Linux environment, we see strange characters.
public class Tewst {
    public static void main(String[] args) {
        byte[] attributeValues;
        String utfString;
        attributeValues = new byte[]{-86, -70, -54, -38, -6};
        if (attributeValues != null) {
            utfString = new String(attributeValues);
            System.out.println("The string is " + utfString);
        }
    }
}
The output given is
"The string is ªºÊÚú"
Now when the same file is run on the SuSE Linux distribution, it gives me:
"The string is �����"
We are using Java 8 on both Windows and Linux. Why doesn't it execute properly on Linux?
We have also tried utfString = new String(attributeValues,"UTF-8");. It didn't help in any way. What are we missing?
The characters ªºÊÚú are Unicode 00AA 00BA 00CA 00DA 00FA.
In character set ISO-8859-1, that is bytes AA BA CA DA FA.
In decimal, that would be {-86, -70, -54, -38, -6}, as you have in your code.
So, your string is encoded in ISO-8859-1, not UTF-8, which is also why it doesn't work on Linux: the default charset there is UTF-8, while on your Windows machine it is ISO-8859-1 (or the compatible windows-1252).
Never use new String(byte[]), unless you're absolutely sure you want the default character set of the JVM, whatever that might be.
Change code to new String(attributeValues, StandardCharsets.ISO_8859_1).
And of course, in the reverse operation, use str.getBytes(StandardCharsets.ISO_8859_1).
Then it should work consistently on various platforms, since the code is no longer using platform defaults.
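A self-contained round trip to check this (a sketch of mine, using only java.nio.charset.StandardCharsets):

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CharsetRoundTrip {
    public static void main(String[] args) {
        byte[] attributeValues = {-86, -70, -54, -38, -6};
        // Decode and re-encode with an explicit charset; no platform default involved.
        String s = new String(attributeValues, StandardCharsets.ISO_8859_1);
        byte[] back = s.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println("The string is " + s);                 // ªºÊÚú on any platform
        System.out.println(Arrays.equals(attributeValues, back)); // true
    }
}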
I have a 48 character AES-192 encryption key which I'm using to decrypt an encrypted database.
However, it tells me the key length is invalid, so I logged the results of getBytes().
When I execute:
final String string = "346a23652a46392b4d73257c67317e352e3372482177652c";
final byte[] utf32Bytes = string.getBytes("UTF-32");
System.out.println(utf32Bytes.length);
Using BlueJ on my Mac (Java Virtual Machine), I get 192 as the output.
However, when I use:
Log.d(C.TAG, "Key Length: " + String.valueOf("346a23652a46392b4d73257c67317e352e3372482177652c".getBytes("UTF-32").length));
I get 196 as the output.
Does anybody know why this is happening, and where Dalvik is getting an additional 4 bytes from?
You should specify the endianness on both machines:
final byte[] utf32Bytes = string.getBytes("UTF-32BE");
Note that "UTF-32BE" is a different encoding, not special .getBytes parameter. It has fixed endianess and doesn't need BOM. More info: http://www.unicode.org/faq/utf_bom.html#gen6
Why would you UTF-32 encode a plain hexadecimal number? That's 8x larger than it needs to be. :P
String s = "346a23652a46392b4d73257c67317e352e3372482177652c";
byte[] bytes = new BigInteger(s, 16).toByteArray();
String s2 = new BigInteger(1, bytes).toString(16);
System.out.println("Strings match is "+s.equals(s2)+" length "+bytes.length);
prints
Strings match is true length 24
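One caveat if you adapt this (my note, not from the answer above): BigInteger.toByteArray() is sign-aware, so a key with a leading 00 byte or a high first bit can come back 23 or 25 bytes long. A fixed-length decoder avoids that; a minimal sketch with a hypothetical helper name:

// Hypothetical fixed-length decoder: every two hex chars become exactly one byte.
static byte[] hexToBytes(String hex) {
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
        out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
}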
I have a Java servlet that receives data from an upstream system via an HTTP GET request. This request includes a parameter named "text" and another named "charset" that indicates how the text parameter was encoded:
If I instruct the upstream system to send me the text TĀ and debug the servlet request params, I see the following:
request.getParameter("charset") == "UTF-16LE"
request.getParameter("text").getBytes() == [0, 84, 1, 0]
The code points (in hex) for the two characters in this string are:
[T] 0054
[Ā] 0100
I cannot figure out how to convert this byte[] back to the String "TĀ". I should mention that I don't entirely trust the charset parameter, and suspect the data may actually be UTF-16BE.
Use the String(byteArray, charset) constructor:
byte[] bytes = { 0, 84, 1, 0 };
String string = new String(bytes, "UTF-16BE");
Why are you calling getBytes() at all? You already have the parameter as a String. Calling getBytes() without specifying a charset is just an opportunity to mangle the data.
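If you do need to check the endianness suspicion, decoding the same bytes both ways makes it obvious (a sketch using java.nio.charset.StandardCharsets):

import java.nio.charset.StandardCharsets;

public class EndianCheck {
    public static void main(String[] args) {
        byte[] bytes = {0, 84, 1, 0};
        // Big-endian: 0x0054 = 'T', 0x0100 = 'Ā' -- the expected text.
        System.out.println(new String(bytes, StandardCharsets.UTF_16BE));
        // Little-endian: 0x5400, 0x0001 -- a CJK character plus an unprintable control.
        System.out.println(new String(bytes, StandardCharsets.UTF_16LE));
    }
}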
I'm using the JSpeex library for audio encoding.
The encoding seems to work fine, but decoding doesn't (i.e. I get all zeros as decoded data).
// Encoding
SpeexEncoder enc = new SpeexEncoder();
// If I use 1 channel instead of 2, even encoding doesn't work.
enc.init(mode, quality, 44100, 2);
enc.processData(b, 0, b.length); // b is the byte array I'm trying to encode and then decode
enc.getProcessedData(temp, 0);   // save the encoded data to temp (a byte array)

// Decoding
SpeexDecoder dec = new SpeexDecoder();
dec.init(mode, 44100, 2, true);
dec.processData(temp, 0, temp.length);
dec.getProcessedData(decoded, 0); // decoded is the output byte array, which comes out all zeros
If anyone has any info on this please reply.
Thanks
I realize this post is a bit old, but I ran into a similar problem with Speex.js (a JavaScript port).
Not sure if the issue is the same for you, but I found that there was an implicit conversion from Float32Array to Int16Array that didn't actually convert the data. This meant that all of the (-1.0, 1.0) float data was essentially truncated to integer zeros and encoded as such.
I just needed to do the conversion to Int16Array before passing in the data (so the library wouldn't need to do any data conversion), and the output sprang to life :)
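In Java terms, the equivalent explicit conversion would look something like this (a hedged sketch; the exact scaling and clamping rules depend on the library):

// Convert normalized float PCM samples in (-1.0, 1.0) to 16-bit PCM.
static short[] floatToPcm16(float[] in) {
    short[] out = new short[in.length];
    for (int i = 0; i < in.length; i++) {
        float clamped = Math.max(-1.0f, Math.min(1.0f, in[i]));
        out[i] = (short) (clamped * 32767); // without this scaling, floats truncate to 0
    }
    return out;
}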
Hope that helps. cheers!