Here is my code:

int availableBytes = inputStream.available();
if (availableBytes > 0) {
    inputStream.read(readBuffer, 0, availableBytes);
    System.out.println(new String(readBuffer, 0, availableBytes));
    Reponse = new String(readBuffer, "UTF-8");
    System.out.println(Reponse);
}
My question:
In my "Reponse" variable, of type String, I get what I think are raw ASCII values, because when I print "Reponse" it shows me three "squares with a question mark in them".
So is it possible to convert this String of ASCII values into an integer?
A Java String is not a sequence of bytes (signed, 8-bit) but of chars (unsigned, 16-bit). Also read the javadoc for the constructor you are calling:
Constructs a new String by decoding the specified subarray of bytes
using the platform's default charset. The length of the new String is
a function of the charset, and hence may not be equal to the length of
the subarray.
It does not work as you probably expect - it does not convert each byte into a character!
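To see why the lengths can differ, here is a minimal sketch: a two-byte UTF-8 sequence decodes to a single char, so one byte does not become one character.

```java
import java.nio.charset.StandardCharsets;

public class DecodeLengthDemo {
    public static void main(String[] args) {
        // Two bytes that form ONE character ('é') in UTF-8
        byte[] bytes = {(byte) 0xC3, (byte) 0xA9};
        String s = new String(bytes, StandardCharsets.UTF_8);
        System.out.println(bytes.length); // 2
        System.out.println(s.length());   // 1 -- not one char per byte
    }
}
```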
Related
I have a function for hashing passwords that returns a byte[] whose entries use the full range of the byte datatype, from -128 to 127. I have tried to convert the byte[] to a String using new String(byte_array, StandardCharsets.UTF_8);. This does return a String; however, it cannot properly decode the negative values, so it turns them into the "�" character. When comparing two of those strings using new String(new byte[]{-1}, StandardCharsets.UTF_8).equals(new String(new byte[]{-2}, StandardCharsets.UTF_8)), it turns out the String representation of all negative values is equal, as the expression above returns true. While this doesn't fully ruin my hashing functionality, since the hash of the same input will still always yield the same result, it is obviously not what I want, as it drastically increases the chance of two different inputs yielding the same output.
Is there an easy fix for this, or any alternative way to convert the byte[] to a String? For context: I want to write the String to a file to store it, and later read it back to compare it to other hashes.
Edit: After a bit of trying around with the tips from the comments, my solution is to convert the byte[] to a char[], adding 128 to every value. The char array can then easily be converted to a String or written to a file directly (byteHash is the byte[]):

char[] charHash = new char[byteHash.length];
for (int i = 0; i < byteHash.length; i++) {
    charHash[i] = (char) (byteHash[i] + 128);
}
return new String(charHash);
I do not really like the solution but it works.
The appropriate solution to this is to use an encoding like hexadecimal (https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/HexFormat.html) or Base64 (https://docs.oracle.com/javase/8/docs/api/java/util/Base64.html) to convert an arbitrary byte sequence to a string reversibly.
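For instance, a minimal Base64 round trip might look like this (the byte values here are arbitrary, chosen only to include negatives):

```java
import java.util.Arrays;
import java.util.Base64;

public class HashEncodeDemo {
    public static void main(String[] args) {
        byte[] hash = {-1, -2, 0, 127, -128}; // stand-in for a real hash
        // Encode to a pure-ASCII string, safe to write to a file...
        String encoded = Base64.getEncoder().encodeToString(hash);
        // ...and decode back to the exact same bytes for comparison.
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(encoded);
        System.out.println(Arrays.equals(hash, decoded)); // true
    }
}
```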
I have tried numerous Strings with random characters and, except for the empty string "", their .getBytes() byte arrays never seem to contain any 0 values (like {123, -23, 54, 0, -92}).
Is it always the case that these .getBytes() byte arrays contain no zero, except for an empty string?
Edit: the test code is below. I have now learned that in Java 8 the result is always "contains no 0" if the String is made up of (char) (random.nextInt(65535) + 1), and "contains 0" if the String contains (char) 0.
private static String randomString(int length) {
    Random random = new Random();
    char[] chars = new char[length];
    for (int i = 0; i < length; i++) {
        int integer = random.nextInt(65535) + 1; // 1..65535, never 0
        chars[i] = (char) integer;
    }
    return new String(chars);
}

public static void main(String[] args) throws Exception {
    for (int i = 1; i < 100000; i++) {
        String s1 = randomString(10);
        byte[] bytes = s1.getBytes();
        for (byte b : bytes) {
            if (b == 0) {
                System.out.println("contains 0");
                System.exit(0);
            }
        }
    }
    System.out.println("contains no 0");
}
It does depend on your platform's default encoding. But in many encodings, the '\0' (null) character will result in getBytes() returning an array with a zero in it.
System.out.println("\0".getBytes()[0]);
This will work with the US-ASCII, ISO-8859-1 and the UTF-8 encodings:
System.out.println("\0".getBytes("US-ASCII")[0]);
System.out.println("\0".getBytes("ISO-8859-1")[0]);
System.out.println("\0".getBytes("UTF-8")[0]);
If you have a byte array and you want the string that corresponds to it, you can also do the reverse:
byte[] b = { 123, -23, 54, 0, -92 };
String s = new String(b);
However this will give different results for different encodings, and in some encodings it may be an invalid sequence.
And the characters in it may not be printable.
Your best bet is the ISO-8859-1 encoding, in which every byte value maps to a character; only control characters such as null will not print visibly:
byte[] b = { 123, -23, 54, 0, -92 };
String s = new String(b, "ISO-8859-1");
System.out.println(s);
System.out.println((int) s.charAt(3));
Edit
In the code that you posted, it's also easy to get "contains 0" if you specify the UTF-16 encoding:
byte[] bytes = s1.getBytes("UTF-16");
It's all about the encoding, and you haven't specified one. When you don't pass an encoding as an argument to the getBytes method, it uses your platform's default encoding.
To find out what that is on your platform, run this:
System.out.println(System.getProperty("file.encoding"));
On macOS it's UTF-8; on Windows it's likely to be one of the Windows code pages, such as Cp1252. You can also set the platform default on the command line when you run Java:
java -Dfile.encoding=UTF-16 <the rest>
If you run your code that way you'll also see that it contains 0.
Is it always the case that these .getBytes() byte arrays contain no zero, except for an empty string?
No, there is no such guarantee. First, and most importantly, .getBytes() returns "a sequence of bytes using the platform's default charset". As such there is nothing preventing you from defining your own custom charset that explicitly encodes certain values as 0s.
More practically, many common encodings will include zero-bytes, notably to represent the NUL character. But even if your strings don't include NULs, it's possible for the byte sequence to include 0s. In particular, UTF-16 (the encoding Java uses internally for char values) represents every character in the Basic Multilingual Plane with two bytes, meaning ASCII characters (which only need one) are paired with a 0 byte.
You could also very easily test this yourself by trying to construct a String from a sequence of bytes containing 0s with an appropriate constructor, such as String(byte[] bytes) or String(byte[] bytes, Charset charset). For example (notice my system's default charset is UTF-8):
System.out.println("Default encoding: " + System.getProperty("file.encoding"));
System.out.println("Empty string: " + Arrays.toString("".getBytes()));
System.out.println("NUL char: " + Arrays.toString("\0".getBytes()));
System.out.println("String constructed from {0} array: " +
Arrays.toString(new String(new byte[]{0}).getBytes()));
System.out.println("'a' in UTF-16: " +
Arrays.toString("a".getBytes(StandardCharsets.UTF_16)));
prints:
Default encoding: UTF-8
Empty string: []
NUL char: [0]
String constructed from {0} array: [0]
'a' in UTF-16: [-2, -1, 0, 97]
I'm writing a Simplified DES algorithm to encrypt and subsequently decrypt a string. Suppose I have the initial character (, which has the binary value 00101000 and which I get using the following algorithm:
public void getBinary() throws UnsupportedEncodingException {
    byte[] plaintextBinary = text.getBytes("UTF-8");
    for (byte b : plaintextBinary) {
        int val = b;
        int[] tempBinRep = new int[8];
        for (int i = 0; i < 8; i++) {
            tempBinRep[i] = (val & 128) == 0 ? 0 : 1;
            val <<= 1;
        }
        binaryRepresentations.add(tempBinRep);
    }
}
After I perform the various permutations and shifts, ( and its binary equivalent are transformed into 10001010, whose extended-ASCII equivalent is Š. When I come around to decryption and pass that same character through the getBinary() method, I now get the binary string 11000010 and another binary string 10001010, which translates to ASCII as x(.
Where is this rogue x coming from?
Edit: The full class can be found here.
You haven't supplied the decrypting code, so we can't know for sure, but I would guess you missed the encoding when populating your String. Java Strings are encoded in UTF-16 internally. Since you're forcing UTF-8 when encrypting, I'm assuming you're doing the same when decrypting. The problem is, when you convert your encrypted bytes to a String for storage, if you let the charset default, you probably end up with a two-byte character, because 10001010 is 138, which is beyond the 0-127 range of ASCII characters that are represented with a single byte.
So the "x" you're getting is the byte for the code page, followed by the actual character's byte. As suggested in the comments, you'd do better to just store the encrypted bytes as bytes, and not convert them to Strings until they're decrypted.
The problem I am facing occurs when I try to type cast some ASCII values to char.
For example:
(char)145 //returns ?
(char)129 //also returns ?
but it is supposed to return a different character. It happens to many other values as well.
I hope I have been clear enough.
ASCII is a 7-bit encoding, covering values 0-127. Some programs even use this to detect whether a file is binary or textual. Characters below 32 are control characters, used as directives (for instance newline, carriage return).
The program will still work, however. A char is stored as an unsigned 16-bit value, so (char) 145 and (char) 129 are perfectly valid; they just have no printable interpretation, which is why the textual output of both values shows a substitution character. On the other hand, comparisons like (char) 145 == (char) 129 still work (and return false), because to the processor a char is simply a number.
If you want to convert your value so that only the lowest seven bits count (thus modifying the value so that it is in the valid ASCII range), you can use masking:
int value = 145;
value &= 0x7f;
char c = (char) value;
The char type is a 16-bit Unicode (UTF-16) code unit. So you could do (char) 265 for ĉ (c with circumflex). ASCII is 7 bits, 0-127.
String s = "" + ((char)145) + ((char)129);
The above is a string of two Unicode characters (each 2 bytes, UTF-16).
byte[] bytes = s.getBytes(StandardCharsets.US_ASCII); // ASCII with '?' as 7bit
s = new String(bytes, StandardCharsets.US_ASCII); // "??"
byte[] bytes = s.getBytes(StandardCharsets.ISO_8859_1); // ISO-8859-1 with Latin1
byte[] bytes = s.getBytes("Windows-1252"); // With Windows Latin1
byte[] bytes = s.getBytes(StandardCharsets.UTF_8); // No information loss.
s = new String(bytes, StandardCharsets.UTF_9); // Orinal string.
In Java, String/char/Reader/Writer handle text (in Unicode), whereas byte[]/InputStream/OutputStream handle binary data, i.e. bytes.
Bytes must always be associated with an encoding to yield text.
Answer: as soon as there is a conversion from text to some encoding that cannot represent a given char, a question mark may be written instead.
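A minimal sketch of that substitution (the byte value of '?' is 63):

```java
import java.nio.charset.StandardCharsets;

public class SubstitutionDemo {
    public static void main(String[] args) {
        String s = "" + (char) 145; // U+0091, not representable in US-ASCII
        byte[] bytes = s.getBytes(StandardCharsets.US_ASCII);
        System.out.println(bytes[0]); // 63, the code for '?'
    }
}
```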
These expressions evaluate to true:
((char) 145) == '\u0091';
((char) 129) == '\u0081';
These UTF-16 values map to the Unicode code points U+0091 and U+0081:
0091;<control>;Cc;0;BN;;;;;N;PRIVATE USE ONE;;;;
0081;<control>;Cc;0;BN;;;;;N;;;;;
These are both control characters without visible graphemes (the question mark acts as a substitution character) and one of them is private use so has no designated purpose. Neither are in the ASCII set.
Is it possible to convert a byte array to a string but where the length of the string is exactly the same length as the number of bytes in the array? If I use the following:
byte[] data; // Fill it with data
data.toString();
The length of the string is different than the length of the array. I believe that this is because Java and/or Android takes some kind of default encoding into account. The values in the array can be negative as well. Theoretically it should be possible to convert any byte to some character. I guess I need to figure out how to specify an encoding that generates a fixed single byte width for each character.
EDIT:
I tried the following but it didn't work:
byte[] textArray; // Fill this with some text.
String textString = new String(textArray, "ASCII");
textArray = textString.getBytes("ASCII"); // textArray ends up with different data.
You can use the String constructor String(byte[] data) to create a string from the byte array. If you want to specify the charset as well, you can use String(byte[] data, Charset charset) constructor.
Try your code sample with US-ASCII or ISO-8859-1 in place of ASCII ("ASCII" is not a guaranteed charset name in Java or Android, but those two are part of the standard set). They are single-byte encodings, with the caveat that characters not in the character set are silently replaced (typically with '?'). That means only ISO-8859-1, which maps all 256 byte values, will round-trip arbitrary bytes; US-ASCII loses any byte outside 0-127.
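A sketch of the lossless round trip with ISO-8859-1 (which maps each of the 256 byte values to exactly one character):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class RoundTripDemo {
    public static void main(String[] args) {
        byte[] data = {123, -23, 54, 0, -92}; // includes negative values
        String s = new String(data, StandardCharsets.ISO_8859_1);
        byte[] back = s.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(s.length() == data.length); // true: one char per byte
        System.out.println(Arrays.equals(data, back)); // true: lossless
    }
}
```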
This should work fine!
public static byte[] stringToByteArray(String pStringValue) {
    int length = pStringValue.length();
    byte[] bytes = new byte[length];
    for (int index = 0; index < length; index++) {
        char ch = pStringValue.charAt(index);
        bytes[index] = (byte) ch;
    }
    return bytes;
}
Since JDK 1.6 you can also use stringValue.getBytes(charset) (the Charset overload), which will return you a byte array.
In case a null string is passed, you need to handle that, either by letting the NullPointerException propagate or by handling it inside the method itself.
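Note that the casting approach above is lossy for any char above 255, since (byte) ch keeps only the low 8 bits. A quick sketch of both cases:

```java
public class CastDemo {
    public static void main(String[] args) {
        // Low chars survive the char -> byte cast...
        System.out.println((byte) 'A');      // 65
        // ...but higher code points are truncated to their low 8 bits.
        System.out.println((byte) '\u0141'); // 65, same as 'A' (0x141 & 0xFF == 0x41)
    }
}
```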