My Android app needs to receive int values from an Arduino analog sensor over USB and plot them on a real-time graph; I receive a byte[] from a callback function.
I have tried many ways to convert the byte[] to a String or an int, including new String, new Integer, BigInteger, parseInt, and code from other topics, but nothing works: only about half of the values come out correct, and the rest are much bigger or smaller.
The byte[] length varies from 1 to 4 and some bytes are empty; the log looks like this:
How can I convert it to the correct values? Where is the problem?
Ideally I need to receive int values between 230 and 300 from the sensor.
It seems that your sensor is using a text protocol. If I convert your bytes to ASCII chars, they are:
..
10-LF
50-2
53-5
56-8
..
13-CR
10-LF
50-2
53-5
..
54-6
13-CR
10-LF
etc.
Interpreted as
258
256
So I think the best solution is to accumulate the received bytes as chars and, when CRLF is received, parse the whole string (with CRLF stripped) as an int, probably via Integer.parseInt.
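A minimal sketch of that approach (the callback and graph-update method names here are just placeholders for whatever your USB-serial library and charting code actually use):

// Field in your activity/service: accumulates chars until a full line arrives
private final StringBuilder line = new StringBuilder();

// Called from the USB-serial read callback with each received chunk
void onNewData(byte[] data) {
    for (byte b : data) {
        char c = (char) (b & 0xFF);        // bytes are ASCII digits, CR or LF
        if (c == '\n') {                   // LF terminates one reading
            String s = line.toString().trim();
            line.setLength(0);
            if (!s.isEmpty()) {
                try {
                    int value = Integer.parseInt(s);  // e.g. 258, 256
                    addPointToGraph(value);           // placeholder: update your graph here
                } catch (NumberFormatException e) {
                    // incomplete or garbled line, ignore it
                }
            }
        } else if (c != '\r') {
            line.append(c);
        }
    }
}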
Arduino code segment?
Guessing badly: on the Arduino, an int is a 16-bit value and a byte is 8 bits.
An int8_t is -128 to 127; uint8_t (0-255) is not supported by Java as far as I know, but you can use the char type (unsigned 16 bits, needs a cast).
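For illustration, treating a received byte as unsigned in Java might look like this (the & 0xFF mask and the char cast both give the same result; note the mask is needed before the char cast as well, or sign extension kicks in):

byte b = (byte) 200;              // bit pattern 11001000, prints as -56
int asInt = b & 0xFF;             // 200, masked to the unsigned value
char asChar = (char) (b & 0xFF);  // char is an unsigned 16-bit type
System.out.println(asInt);        // 200
System.out.println((int) asChar); // 200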
Related
I have been reading about encoding, Unicode, and Java 9 compact Strings for the last two days and I am getting along quite well, but there is something that I don't understand.
About byte data type
1). It is 8-bit storage, ranging from -128 to 127.
Questions
1). Why didn't Java implement it as unsigned, like char (an unsigned 16-bit type)? I mean, it would then have a range of 0 to 255; from 0 to 127 I can only hold an ASCII value, but what happens if I set the value 200, an extended ASCII character? It would overflow to -56.
2). Does the negative value mean something? I mean, I have tried a simple example using Java 11:
final char value = (char) 200;  // in a byte this would overflow
final String stringValue = new String(new char[]{value});
System.out.println(stringValue); // prints the same value as in Java 8
I have checked the String.value field and I see a byte array; printing its first element gives:
System.out.println(value[0]); // -56
The same question as before arises: does the -56 (the negative value) mean something? In other languages, is this overflow detected so you can get back to the value 200? How can Java know that the -56 value is the same as 200 in a char?
I have tried harder examples, like codepoint 128048, and I see in the String.value field a byte array like this:
0 = 61
1 = -40
2 = 48
3 = -36
I know this codepoint takes 4 bytes, and I see how the char[] is transformed into a byte[], but I don't know how String handles this byte[] data.
Sorry if this question is simple, and sorry for any typos; English is not my native language. Thanks a lot.
Why didn't Java implement it as unsigned, like char (an unsigned 16-bit type)? I mean, it would then have a range of 0 to 255; from 0 to 127 I can only hold an ASCII value, but what happens if I set the value 200, an extended ASCII character? It would overflow to -56.
Java’s primitive data types were settled with Java 1.0, a quarter of a century ago. Compact strings were introduced in Java 9, less than two years ago. This new feature, which is merely an implementation detail, did not justify fundamental changes to Java’s type system.
Besides that, you are looking at only one interpretation of the data stored in a byte. For the sake of representing iso-latin-1 units, it is entirely irrelevant whether interpreting the same data as Java’s built-in signed byte would result in a positive or negative number.
Likewise, Java’s I/O API allows reading a file into a byte[] array and writing byte[] arrays back to files, and these two operations are already sufficient to copy a file losslessly, regardless of its file format, which would matter only when interpreting its content.
So the following works since Java 1.1:
byte[] bytes = "È".getBytes("iso-8859-1");
System.out.println(bytes[0]);
System.out.println(bytes[0] & 0xff);
-56
200
The two numbers, -56 and 200 are just different interpretations of the bit pattern 11001000 whereas the iso-latin-1 interpretation of a byte containing the bit pattern 11001000 is the character È.
A char value is also just an interpretation of a two byte quantity, i.e. as UTF-16 code unit. Likewise, a char[] array is a sequence of bytes in the computer’s memory with a standard interpretation.
We can also interpret other byte sequences this way.
StringBuilder sb = new StringBuilder().appendCodePoint(128048);
byte[] array = new byte[4];
StandardCharsets.UTF_16LE.newEncoder()
.encode(CharBuffer.wrap(sb), ByteBuffer.wrap(array), true);
System.out.println(Arrays.toString(array));
will print the value you’ve seen, [61, -40, 48, -36].
The advantage of using a byte[] array inside the String class is that the interpretation can now be chosen: iso-latin-1 when all characters are representable in that encoding, or utf-16 otherwise.
The possible numeric interpretations are irrelevant to the string. However, when you ask “How can Java know that -56 value is the same as 200”, you should ask yourself, how does it know that the bit pattern 11001000 of a byte is -56 in the first place?
System.out.println(value[0]);
bears an operation that is actually expensive compared to ordinary computer arithmetic: the conversion of a byte (or an int) to a String. This conversion is often overlooked because it has been defined as the default way of printing a byte, but it is no more natural than a conversion to a String that interprets the value as an unsigned quantity. For further reading, I recommend Two's complement.
This is because not all bytes in a string are interpreted the same way; it depends on the string's character encoding.
Example:
if a string is a UTF-8 string, its characters will be 8 bits (one byte) in size, at least for the ASCII range (UTF-8 is variable-width beyond that).
in a UTF-16 string, its characters will be 16 bits (two bytes) in size.
etc...
This means that if the string is to be read as UTF-8, the characters are made by reading 1 byte at a time; if as UTF-16, the characters are made by reading 2 bytes at a time.
Look at this code: a single byte array data is transformed to string using UTF-8 and UTF-16.
byte[] data = new byte[] {97, 98, 99, 100};
System.out.println(new String(data, StandardCharsets.UTF_8));
System.out.println(new String(data, StandardCharsets.UTF_16));
The output of this code is:
abcd // 4 bytes = 4 chars, 1 byte per char
慢捤 // 4 bytes = 2 chars, 2 bytes per char
Going back to the question, what motivated the developers to do this was to reduce the memory footprint of strings. Not all strings use all 16 bits a char offers.
Let's say I have a byte array and I try to convert it to a String as UTF-8 using the following:
String tekst = new String(result2, StandardCharsets.UTF_8);
System.out.println(tekst);
//where result2 is the byte array
Then I get the bytes using getBytes(), expecting values from 0 to 128:
byte[] orig = tekst.getBytes();
And then I wish to do a frequency count on my byte[] orig using the following:
int[] frequencies = new int[256];
for (byte b: orig){
frequencies[b]++;
}
Everything goes well till I encounter an error which states
java.lang.ArrayIndexOutOfBoundsException: -61
Does that mean that my byte still contains negative values despite converting it to UTF-8? Is there something wrong with what I'm doing? Can someone please give me some clarity on this, as I'm still a beginner on the subject? Thank you.
Answering the specific question
Does that mean that my byte still contains negative values despite converting it to UTF-8?
Yes, absolutely. That's because byte is signed in Java. A byte value of -61 would be 195 as an unsigned value. You should expect to get bytes which aren't in the range 0-127 when you encode any non-ASCII text with UTF-8.
The fix is easy: just clamp the range to 0-255 with a bit mask:
frequencies[b & 0xff]++;
Addressing what you're attempting to do
This line:
String tekst = new String(result2, StandardCharsets.UTF_8);
... is only appropriate if result2 is genuinely UTF-8-encoded text. It's not appropriate if result2 is some arbitrary binary data such as an image, compressed data, or even text encoded in some other encoding.
If you want to preserve arbitrary binary data as a string, you should use something like Base64 or hex. Basically, you need to determine whether your data is inherently textual (in which case, you should use strings for as much of the time as possible, and use an appropriate Charset to convert to binary where necessary) or inherently binary (in which case you should use bytes for as much of the time as possible, and use base64 or hex to convert to text where necessary).
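For the binary case, a round trip through java.util.Base64 (available since Java 8) is lossless; a minimal sketch, with the input bytes invented purely for illustration:

import java.util.Base64;

byte[] original = { (byte) 0xC3, (byte) 0xA9, 0x00, 0x7F }; // arbitrary bytes, not text
String encoded = Base64.getEncoder().encodeToString(original);
byte[] roundTripped = Base64.getDecoder().decode(encoded);
// roundTripped is byte-for-byte identical to original, and encoded is safe to store or log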
This line:
byte[] orig = tekst.getBytes();
... is almost always a bad idea. It uses the platform-default encoding to convert a string to bytes. If you really, really want to use the platform-default encoding, I would make that explicit:
byte[] orig = tekst.getBytes(Charset.defaultCharset());
... but this is an extremely unusual requirement these days. It's almost always better to stick to UTF-8 everywhere.
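For example, making UTF-8 explicit on both conversions keeps textual data intact regardless of the platform default (StandardCharsets is the same class already used above):

byte[] utf8 = tekst.getBytes(StandardCharsets.UTF_8);
String back = new String(utf8, StandardCharsets.UTF_8); // equal to tekst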
I am receiving some numerical data on a C++ server from a Java client via a socket connection. When I receive 4-byte int data, all I need to do is use the ntohl() function (or reverse the byte order) to convert it to a C++ int. However, I'm having trouble converting the long data type from Java. No matter what I tried, I could not recover the correct value. I used LONG64, ULONG64, and int64_t as well, and none of them worked.
For example, when I send long s = 1 from Java, on C++ side I did:
int64_t size;
recv(client, (char *)&size, sizeof(int64_t), 0);
If I do
size = ntohl(size);
then size becomes 0, whatever the original long value in Java was!
If I don't do the ntohl() conversion, then size = 72057594037927936 for s = 1.
I have hardly found any useful information on this topic and I would appreciate any suggestion.
The value 72057594037927936 is 0x0100000000000000 in hex. As you may have guessed, that's simply backwards byte ordering: the 1 is at the front instead of the back.
ntohl() is 32-bit, so it is throwing away those top four bytes (the first 8 hex digits), giving you zero. You could possibly use htonll instead, but it isn't standard. The best thing is to reverse the order of the bytes yourself.
int64_t size;
recv(client, (char *)&size, sizeof(int64_t), 0);
// swap all 8 bytes in place; std::reverse comes from <algorithm>
char *start = (char *)&size, *end = start + sizeof(size);
std::reverse(start, end);
There are a ton of ways of reversing the bytes, and a ton of ways of dealing with little/big endian problems in general.
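For what it's worth, if the Java client writes the value with a DataOutputStream, the eight bytes already go out most-significant byte first (network order), so the C++ side only needs the reversal above on little-endian hosts. A sketch of the sending end (the socket variable name is assumed; DataOutputStream is in java.io):

DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeLong(1L); // writeLong always writes the high byte first (big-endian)
out.flush();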
I am working with smart cards, where there is this method in javax.smartcardio.CommandAPDU:
CommandAPDU(int cla, int ins, int p1, int p2, byte[] data, int ne)
I need to send data as a byte[] (the 5th argument). Now my problem is that, as Java primitive data types are signed, the maximum value of a byte cannot exceed 127, and I need to send a value bigger than 127; to be precise, the hex value 0x94, which is equal to 148.
Some solutions suggest that we can cast it to an integer:
byte b = -108;
int i = b & 0xff;
I can't do that, as the CommandAPDU() constructor doesn't take an int[]. So how do I do it?
Depending on how it is interpreted by the smart card, you could just send the corresponding negative value. If the smart card interprets the value as unsigned, you could, for example, send -1 for 255.
You're calculating the APDU with unsigned bytes, while Java uses signed bytes.
It's just a matter of how the data is interpreted, sending -108 to the smart card will be interpreted in exactly the same way as sending 148 from a platform using unsigned bytes. The bit combination is exactly the same.
Java can even do the conversion itself so that you can write the code using unsigned numbers;
byte data = (byte)0x94; // stores -108 in "data", which will be interpreted
// as 148 on an unsigned platform
For long blocks of data, it is probably best to use a hexadecimal encoder/decoder. But be sure that you handle the data as bytes internally (decode the hex String directly to bytes and don't convert back). The Apache Commons Codec library contains a good encoder/decoder, or you can use Bouncy Castle or Guava, or one of the many examples on SO.
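Putting that together, a sketch of building the APDU (the CLA/INS/P1/P2 values and expected response length below are made up for illustration; only the 0x94 data byte comes from the question):

import javax.smartcardio.CommandAPDU;

byte[] data = { (byte) 0x94 };  // 0x94 == 148 unsigned, stored as -108 in Java
CommandAPDU apdu = new CommandAPDU(0x00, 0xA4, 0x04, 0x00, data, 256);
for (byte b : apdu.getBytes()) {
    System.out.printf("%02X ", b & 0xFF); // print each byte as an unsigned hex value
}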
How do I convert the following int to a byte array? I have been reading other SO questions and everything is confusing; could someone explain what is happening in the code?
int val = 1023; // the int will vary from 0 to 1023 (it's the analogRead value from an Arduino board)
The purpose of wanting this as a byte array is so that I can use it with Arduino's server.write().
This is what I've come up with so far:
int val = analogRead(A0);
Serial.println(val);
byte value[2];
value[1] = val & 0x000000ff;
value[0] = (val & 0x0000ff00) >> 8;
server.write(value[0]);
server.write(value[1]);
I am trying to communicate over TCP with an Android application I have written; here is the receiving end:
mmInStream = mmSocket.getInputStream();
final byte[] buffer = new byte[16384]; // two bytes
int bytes;
bytes = mmInStream.read(buffer);
Log.d(null,buffer[0]+buffer[1]);
is this correct?
The Arduino console is spitting out the values, example:
870
870
870
872
However my Android application is spitting out the following, example:
3102
3105
1033
1035
I must be doing something wrong here!
ANSWER: Arduino sends unsigned bytes, Java receives only signed bytes. I fixed it with a little code on the Java end.
Whenever you're going to be dealing with some binary communications protocol, be it a TCP connection, Serial Port, USB, etc. you need to save yourself some future headaches and define your protocol.
Specifically, this means field widths, and byte order. When sending binary data over a network protocol, we almost always send it in "network order", which is "big-endian", meaning the most-significant byte first.
Example: I want to send a four-byte int a = 0x12345678 over the network. If you do it correctly, the bytes go out in the order 12 34 56 78.
However, I believe your ATmega chip is little-endian, which means that bytes are stored (in memory) with the least significant byte first. So if you were to just cast that int to an unsigned char* and send 4 bytes, they would go out in the order 78 56 34 12.
So in order to send that value out, you should first decide how many bytes it's going to be. Since you've limited it to the range 0 - 1023, you've observed that it will fit into just two bytes. Good. So your protocol is:
Offset 0: value (size: 2 bytes)
Now, you need to send it in network order. Your example code:
byte value [2];
value[1] = val & 0x000000ff;
value[0] = (val & 0x0000ff00) >> 8;
is putting the most significant byte at position 0, and the least significant byte at position 1. So a value of 0x1234 will go out in the order 12 34. Perfect.
Your code is correct (on the Arduino side).
Now, on the receiving side, you need to make sure that you're receiving data in network order also. I haven't done any Java network programming, so you'll need to check with the documentation to see how it handles network binary streams. In particular, when you go to read that "16-bit unsigned integer" from the network, the byte order needs to be respected.
Perhaps this will help you on the Java side:
network byte order to host byte order in java
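As a concrete sketch of the Java side (assuming mmInStream is the InputStream from the question, and with an arbitrary log tag), a java.io.DataInputStream reads multi-byte values in network order:

DataInputStream in = new DataInputStream(mmInStream);
int val = in.readUnsignedShort(); // reads 2 bytes, most-significant byte first: 0..65535
Log.d("Sensor", String.valueOf(val));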
This will encode an int as a byte array.
// Encoding
byte value[2] = { highByte(val), lowByte(val) };
This will convert a byte array back to an integer
// Decoding
int val = (value[0] << 8) + value[1]; // parentheses needed: + binds tighter than <<
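On the Java/Android side, where byte is signed, each byte also needs an & 0xFF mask before combining; this is the manual equivalent of the DataInputStream approach above (buffer is the byte[] filled by mmInStream.read(buffer) in the question, and the log tag is arbitrary):

int high = buffer[0] & 0xFF;          // mask: Java bytes are signed (-128..127)
int low  = buffer[1] & 0xFF;
int val  = (high << 8) | low;         // 0..65535, e.g. 870
Log.d("Sensor", String.valueOf(val));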