Java: Display unicode chars as chars when printing string [duplicate]

I have a string with escaped Unicode characters, \uXXXX, and I want to convert it to regular Unicode letters. For example:
"\u0048\u0065\u006C\u006C\u006F World"
should become
"Hello World"
I know that when I print the first string it already shows Hello World. My problem is that I read file names from a file, and then I search for them. The file names in the file are escaped with Unicode encoding, and when I search for the files, I can't find them, since it searches for a file with \uXXXX in its name.

The Apache Commons Lang StringEscapeUtils.unescapeJava() can decode it properly.
import org.apache.commons.lang.StringEscapeUtils;
@Test
public void testUnescapeJava() {
String sJava="\\u0048\\u0065\\u006C\\u006C\\u006F";
System.out.println("StringEscapeUtils.unescapeJava(sJava):\n" + StringEscapeUtils.unescapeJava(sJava));
}
output:
StringEscapeUtils.unescapeJava(sJava):
Hello

Technically doing:
String myString = "\u0048\u0065\u006C\u006C\u006F World";
automatically converts it to "Hello World", so I assume you are reading the string in from some file. In order to convert it to "Hello" you'll have to parse the text into the separate Unicode digits (take the \uXXXX and just get XXXX), then do Integer.parseInt(XXXX, 16) to get the hex value, and then cast that to char to get the actual character.
Edit: Some code to accomplish this:
String str = myString.split(" ")[0];
str = str.replace("\\","");
String[] arr = str.split("u");
String text = "";
for(int i = 1; i < arr.length; i++){
int hexVal = Integer.parseInt(arr[i], 16);
text += (char)hexVal;
}
// Text will now have Hello

You can use StringEscapeUtils from Apache Commons Lang, i.e.:
String Title = StringEscapeUtils.unescapeJava("\\u0048\\u0065\\u006C\\u006C\\u006F");

This simple method will work for most cases, but it would trip up over something like "\u005Cu0048", which should decode to the string "\u0048" but would actually decode to "H": the first pass produces "\u0048" as the working string, which then gets processed again by the while loop.
static final String decode(final String in)
{
String working = in;
int index;
index = working.indexOf("\\u");
while(index > -1)
{
int length = working.length();
if(index > (length-6))break;
int numStart = index + 2;
int numFinish = numStart + 4;
String substring = working.substring(numStart, numFinish);
int number = Integer.parseInt(substring,16);
String stringStart = working.substring(0, index);
String stringEnd = working.substring(numFinish);
working = stringStart + ((char)number) + stringEnd;
index = working.indexOf("\\u");
}
return working;
}

Shorter version:
public static String unescapeJava(String escaped) {
if(escaped.indexOf("\\u")==-1)
return escaped;
String processed="";
int position=escaped.indexOf("\\u");
while(position!=-1) {
if(position!=0)
processed+=escaped.substring(0,position);
String token=escaped.substring(position+2,position+6);
escaped=escaped.substring(position+6);
processed+=(char)Integer.parseInt(token,16);
position=escaped.indexOf("\\u");
}
processed+=escaped;
return processed;
}

StringEscapeUtils from the org.apache.commons.lang3 library is deprecated as of 3.6.
So you can use their newer commons-text library instead:
compile 'org.apache.commons:commons-text:1.9'
OR
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-text</artifactId>
<version>1.9</version>
</dependency>
Example code:
org.apache.commons.text.StringEscapeUtils.unescapeJava(escapedString);

With Kotlin you can write your own extension function for String
fun String.unescapeUnicode() = replace("\\\\u([0-9A-Fa-f]{4})".toRegex()) {
String(Character.toChars(it.groupValues[1].toInt(radix = 16)))
}
and then
fun main() {
val originalString = "\\u0048\\u0065\\u006C\\u006C\\u006F World"
println(originalString.unescapeUnicode())
}

It's not totally clear from your question, but I'm assuming you're saying that you have a file where each line of that file is a filename, and each filename is something like this:
\u0048\u0065\u006C\u006C\u006F
In other words, the characters in the file of filenames are \, u, 0, 0, 4, 8 and so on.
If so, what you're seeing is expected. Java only translates \uXXXX sequences in string literals in source code (and when reading in stored Properties objects). When you read the contents of your file, you will have a string consisting of the characters \, u, 0, 0, 4, 8 and so on, not the string Hello.
So you will need to parse that string to extract the 0048, 0065, etc. pieces, convert them to chars, build a string from those chars, and then pass that string to the routine that opens the file.
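A minimal sketch of that flow, assuming one escaped file name per line; the names.txt path and the small decoder below are illustrative, not part of the original answer:
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class FindEscapedFiles {
    // Decode \uXXXX sequences into the characters they stand for (assumes well-formed escapes)
    static String unescape(String s) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            if (s.startsWith("\\u", i) && i + 6 <= s.length()) {
                out.append((char) Integer.parseInt(s.substring(i + 2, i + 6), 16));
                i += 6;
            } else {
                out.append(s.charAt(i));
                i++;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("names.txt"), StandardCharsets.UTF_8);
        for (String line : lines) {
            Path candidate = Paths.get(unescape(line.trim()));
            System.out.println(candidate + " exists? " + Files.exists(candidate));
        }
    }
}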

For Java 9+, you can use the new replaceAll method of the Matcher class.
private static final Pattern UNICODE_PATTERN = Pattern.compile("\\\\u([0-9A-Fa-f]{4})");
public static String unescapeUnicode(String unescaped) {
return UNICODE_PATTERN.matcher(unescaped).replaceAll(r -> String.valueOf((char) Integer.parseInt(r.group(1), 16)));
}
public static void main(String[] args) {
String originalMessage = "\\u0048\\u0065\\u006C\\u006C\\u006F World";
String unescapedMessage = unescapeUnicode(originalMessage);
System.out.println(unescapedMessage);
}
I believe the main advantage of this approach over unescapeJava from StringEscapeUtils (besides not needing an extra library) is that you can convert only the Unicode escapes if you wish, since the latter converts all escaped Java characters (like \n or \t). If you prefer to convert all escaped characters, the library really is the best option.
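For illustration, a minimal self-contained sketch (the sample string is made up) showing that the pattern-based method leaves other Java escapes such as \n alone:
import java.util.regex.Pattern;

public class UnicodeOnlyDemo {
    private static final Pattern UNICODE_PATTERN = Pattern.compile("\\\\u([0-9A-Fa-f]{4})");

    static String unescapeUnicode(String s) {
        return UNICODE_PATTERN.matcher(s)
                .replaceAll(r -> String.valueOf((char) Integer.parseInt(r.group(1), 16)));
    }

    public static void main(String[] args) {
        String s = "\\u0048ello\\nWorld"; // contains one Unicode escape and one literal \n escape
        System.out.println(unescapeUnicode(s)); // Hello\nWorld - the \n stays as two characters
        // org.apache.commons.text.StringEscapeUtils.unescapeJava(s) would also turn \n into a real newline
    }
}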

Update regarding the answers suggesting Apache Commons Lang's StringEscapeUtils.unescapeJava(): it has been deprecated. The Javadoc says:
Deprecated. As of 3.6, use Apache Commons Text StringEscapeUtils instead.
The replacement is Apache Commons Text's StringEscapeUtils.unescapeJava().
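A minimal sketch of the replacement call, assuming the commons-text dependency shown earlier is on the classpath:
import org.apache.commons.text.StringEscapeUtils;

public class UnescapeWithCommonsText {
    public static void main(String[] args) {
        String escaped = "\\u0048\\u0065\\u006C\\u006C\\u006F World";
        // same call as before, but from org.apache.commons.text instead of the deprecated lang3 class
        System.out.println(StringEscapeUtils.unescapeJava(escaped)); // Hello World
    }
}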

Just wanted to contribute my version, using regex:
private static final String UNICODE_REGEX = "\\\\u([0-9a-f]{4})";
private static final Pattern UNICODE_PATTERN = Pattern.compile(UNICODE_REGEX);
...
String message = "\u0048\u0065\u006C\u006C\u006F World";
Matcher matcher = UNICODE_PATTERN.matcher(message);
StringBuffer decodedMessage = new StringBuffer();
while (matcher.find()) {
matcher.appendReplacement(
decodedMessage, String.valueOf((char) Integer.parseInt(matcher.group(1), 16)));
}
matcher.appendTail(decodedMessage);
System.out.println(decodedMessage.toString());

I wrote a performant and error-tolerant solution:
public static final String decode(final String in) {
int p1 = in.indexOf("\\u");
if (p1 < 0)
return in;
StringBuilder sb = new StringBuilder();
sb.append(in, 0, p1); // keep any text that appears before the first \u escape
while (true) {
int p2 = p1 + 6;
if (p2 > in.length()) {
sb.append(in.subSequence(p1, in.length()));
break;
}
try {
int c = Integer.parseInt(in.substring(p1 + 2, p1 + 6), 16);
sb.append((char) c);
p1 += 6;
} catch (Exception e) {
sb.append(in.subSequence(p1, p1 + 2));
p1 += 2;
}
int p0 = in.indexOf("\\u", p1);
if (p0 < 0) {
sb.append(in.subSequence(p1, in.length()));
break;
} else {
sb.append(in.subSequence(p1, p0));
p1 = p0;
}
}
return sb.toString();
}

One easy way I know of is using JSONObject: wrap the escaped text in a small JSON document, and parsing it decodes the \uXXXX escapes (this assumes the text contains no unescaped quotes or other backslash sequences):
try {
    JSONObject json = new JSONObject("{\"string\":\"" + myString + "\"}");
    String converted = json.getString("string");
} catch (JSONException e) {
    e.printStackTrace();
}

A fast Kotlin version:
fun unicodeDecode(unicode: String): String {
val stringBuffer = StringBuilder()
var i = 0
while (i < unicode.length) {
if (i + 1 < unicode.length)
if (unicode[i].toString() + unicode[i + 1].toString() == "\\u") {
val symbol = unicode.substring(i + 2, i + 6)
val c = Integer.parseInt(symbol, 16)
stringBuffer.append(c.toChar())
i += 5
} else stringBuffer.append(unicode[i])
i++
}
return stringBuffer.toString()
}

UnicodeUnescaper from Apache Commons Text does exactly what you want, and ignores any other escape sequences.
String input = "\\u0048\\u0065\\u006C\\u006C\\u006F World";
String output = new UnicodeUnescaper().translate(input);
assert("Hello World".equals(output));
assert("\u0048\u0065\u006C\u006C\u006F World".equals(output));
Where input would be the string you are reading from a file.
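For completeness, a minimal self-contained sketch of that usage; UnicodeUnescaper lives in the org.apache.commons.text.translate package of the commons-text dependency shown earlier:
import org.apache.commons.text.translate.UnicodeUnescaper;

public class UnescaperDemo {
    public static void main(String[] args) {
        String input = "\\u0048\\u0065\\u006C\\u006C\\u006F World"; // as read from the file
        String output = new UnicodeUnescaper().translate(input);
        System.out.println(output); // Hello World
    }
}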

Try:
private static final Charset UTF_8 = Charset.forName("UTF-8");
private String forceUtf8Coding(String input) { return new String(input.getBytes(UTF_8), UTF_8); }

Actually, I wrote an open-source library that contains some utilities. One of them converts a Unicode sequence to a String and vice versa. I found it very useful. Here is the quote from the article about this library's Unicode converter:
Class StringUnicodeEncoderDecoder has methods that can convert a
String (in any language) into a sequence of Unicode characters and
vice versa. For example, the String "Hello World" will be converted into
"\u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064"
and may be restored back.
Here is the link to the entire article that explains what utilities the library has and how to get it. It is available as a Maven artifact or as source from GitHub. It is very easy to use. Open Source Java library with stack trace filtering, Silent String parsing Unicode converter and Version comparison
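A short sketch of how such a converter is typically called; the package and method names below are assumptions based on the linked article, so verify them against the library's javadoc before relying on them:
// Assumed API of the StringUnicodeEncoderDecoder class described above (not verified here)
import com.mgnt.utils.StringUnicodeEncoderDecoder;

public class LibraryDemo {
    public static void main(String[] args) {
        String encoded = StringUnicodeEncoderDecoder.encodeStringToUnicodeSequence("Hello World");
        String decoded = StringUnicodeEncoderDecoder.decodeUnicodeSequenceToString(encoded);
        System.out.println(encoded); // a \uXXXX sequence for each character
        System.out.println(decoded); // Hello World
    }
}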

Here is my solution...
String decodedName = JwtJson.substring(startOfName, endOfName);
StringBuilder builtName = new StringBuilder();
int i = 0;
while ( i < decodedName.length() )
{
if ( decodedName.substring(i).startsWith("\\u"))
{
i=i+2;
builtName.append(Character.toChars(Integer.parseInt(decodedName.substring(i,i+4), 16)));
i=i+4;
}
else
{
builtName.append(decodedName.charAt(i));
i = i+1;
}
};

I found that many of the answers did not address the issue of "Supplementary Characters". Here is the correct way to support it. No third-party libraries, pure Java implementation.
http://www.oracle.com/us/technologies/java/supplementary-142654.html
public static String fromUnicode(String unicode) {
String str = unicode.replace("\\", "");
String[] arr = str.split("u");
StringBuffer text = new StringBuffer();
for (int i = 1; i < arr.length; i++) {
int hexVal = Integer.parseInt(arr[i], 16);
text.append(Character.toChars(hexVal));
}
return text.toString();
}
public static String toUnicode(String text) {
StringBuffer sb = new StringBuffer();
for (int i = 0; i < text.length(); i++) {
int codePoint = text.codePointAt(i);
// Skip over the second char in a surrogate pair
if (codePoint > 0xffff) {
i++;
}
String hex = Integer.toHexString(codePoint);
sb.append("\\u");
for (int j = 0; j < 4 - hex.length(); j++) {
sb.append("0");
}
sb.append(hex);
}
return sb.toString();
}
@Test
public void toUnicode() {
System.out.println(toUnicode("😊"));
System.out.println(toUnicode("🥰"));
System.out.println(toUnicode("Hello World"));
}
// output:
// \u1f60a
// \u1f970
// \u0048\u0065\u006c\u006c\u006f\u0020\u0057\u006f\u0072\u006c\u0064
@Test
public void fromUnicode() {
System.out.println(fromUnicode("\\u1f60a"));
System.out.println(fromUnicode("\\u1f970"));
System.out.println(fromUnicode("\\u0048\\u0065\\u006c\\u006c\\u006f\\u0020\\u0057\\u006f\\u0072\\u006c\\u0064"));
}
// output:
// 😊
// 🥰
// Hello World

@NominSim
There may be other characters after the four hex digits, so I should detect them by length.
private String forceUtf8Coding(String str) {
str = str.replace("\\","");
String[] arr = str.split("u");
StringBuilder text = new StringBuilder();
for(int i = 1; i < arr.length; i++){
String a = arr[i];
String b = "";
if (arr[i].length() > 4){
a = arr[i].substring(0, 4);
b = arr[i].substring(4);
}
int hexVal = Integer.parseInt(a, 16);
text.append((char) hexVal).append(b);
}
return text.toString();
}

An alternate way of accomplishing this is to make use of chars(), available on CharSequence since Java 8. It can be used to iterate over the characters, and any char which maps to a surrogate code point is passed through uninterpreted. It can be used as:
String myString = "\u0048\u0065\u006C\u006C\u006F World";
myString.chars().forEach(a -> System.out.print((char)a));
// would print "Hello World"
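Note that chars() yields the two halves of a surrogate pair separately; a codePoints() variant (a sketch, not part of the original answer) keeps supplementary characters such as emoji intact:
public class PrintCodePoints {
    public static void main(String[] args) {
        String myString = "\u0048\u0065\u006C\u006C\u006F \uD83D\uDE0A"; // "Hello" followed by an emoji
        // codePoints() yields whole code points, so the surrogate pair is printed as one character
        myString.codePoints().forEach(cp -> System.out.print(Character.toChars(cp)));
        System.out.println();
    }
}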

Solution for Kotlin:
val sourceContent = File("test.txt").readText(Charset.forName("windows-1251"))
val result = String(sourceContent.toByteArray())
Kotlin uses UTF-8 everywhere as the default encoding.
The toByteArray() method has Charsets.UTF_8 as its default argument.

Related

convert string contains ISO 8859-1 hex characters code to UTF-8 java

I have a string which I believe contains some ISO-8859-1 hex character codes:
String doc = "#xC1;o thun b#xE9; g#xE1;i c#x1ED9;t d#xE2;y xanh bi#x1EC3;n"
And I want to change it into this,
Áo thun bé gái cột dây xanh biển
I have tried this method, but with no luck:
byte[] isoBytes = doc.getBytes("ISO-8859-1");
System.out.println(new String(isoBytes, "UTF-8"));
What is the proper way to convert it? Many thanks for your help!
On the assumption that the #xNNNN; sequences are plain old Unicode character representations, I suggest the following approach.
class Cvt {
static String convert(String in) {
String str = in;
int curPos = 0;
while (curPos < str.length()) {
int j = str.indexOf("#x", curPos);
if (j < 0) // no more #x
curPos = str.length();
else {
int k = str.indexOf(';', curPos + 2);
if (k < 0) // unterminated #x
curPos = str.length();
else { // convert #xNNNN;
int n = Integer.parseInt(str.substring(j+2, k), 16);
char[] ch = { (char)n };
str = str.substring(0, j) + new String(ch) + str.substring(k+1);
curPos = j + 1; // after ch
}
}
}
return str;
}
static public void main(String... args) {
String doc = "#xC1;o thun b#xE9; g#xE1;i c#x1ED9;t d#xE2;y xanh bi#x1EC3;n";
System.out.println(convert(doc));
}
}
This is very similar to the approach of the previous answer, except for the assumption that the character is a Unicode codepoint and not an 8859-1 codepoint.
And the output is
Áo thun bé gái cột dây xanh biển
There is no hex literal syntax for strings in Java. If you need to support that String format, I would make a helper function which parses that format and builds up a byte array and then parse that as ISO-8859-1.
import java.io.ByteArrayOutputStream;
public class translate {
private static byte[] parseBytesWithHexLiterals(String s) throws Exception {
final ByteArrayOutputStream baos = new ByteArrayOutputStream();
while (!s.isEmpty()) {
if (s.startsWith("#x")) {
s = s.substring(2);
while (s.charAt(0) != ';') {
int i = Integer.parseInt(s.substring(0, 2), 16);
baos.write(i);
s = s.substring(2);
}
} else {
baos.write(s.substring(0, 1).getBytes("US-ASCII")[0]);
}
s = s.substring(1);
}
return baos.toByteArray();
}
public static void main(String[] args) throws Exception {
String doc = "#xC1;o thun b#xE9; g#xE1;i c#x1ED9;t d#xE2;y xanh bi#x1EC3;n";
byte[] parsedAsISO88591 = parseBytesWithHexLiterals(doc);
doc = new String(parsedAsISO88591, "ISO-8859-1");
System.out.println(doc); // Print out the string, which is in Unicode internally.
byte[] asUTF8 = doc.getBytes("UTF-8"); // Get a UTF-8 version of the string.
}
}
This is a case where the code can really obscure the requirements. The requirements are a bit uncertain but seem to be to decode a specialized Unicode character entity reference similar to HTML and XML, as documented in the comments.
It is also a somewhat rare case where the advantage of the regular expression engine outweighs any studying needed to understand the pattern language.
String input = "#xC1;o thun b#xE9; g#xE1;i c#x1ED9;t d#xE2;y xanh bi#x1EC3;n";
// Hex digits between "#x" and ";" are a Unicode codepoint value
String text = java.util.regex.Pattern.compile("(#x([0-9A-Fa-f]+);)")
.matcher(input)
// group 2 is the matched input between the 2nd ( in the pattern and its paired )
.replaceAll(x -> new String(Character.toChars(Integer.parseInt(x.group(2), 16))));
System.out.println(text);
The matcher function finds candidate strings to replace that match the pattern. The replaceAll function replaces them with the calculated Unicode codepoint. Since a Unicode codepoint might be encoded as two char (UTF-16) values the desired replacement string must be constructed from a char[].
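A tiny illustration of that last point, using an arbitrary codepoint above U+FFFF:
public class SurrogateDemo {
    public static void main(String[] args) {
        int codePoint = 0x1F600;                     // an emoji, outside the Basic Multilingual Plane
        char[] units = Character.toChars(codePoint); // expands to two UTF-16 code units (a surrogate pair)
        System.out.println(units.length);            // 2
        System.out.println(new String(units));       // prints the emoji
    }
}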

java string replaceall first character after certain string to lower case

I have a requirement to lower-case the character within a string that follows some prefix like "is", removing the prefix.
For example:
String a = "name=xyz,isSalaried=Y,address=abc,isManager=N,salary=1000";
it should get converted to
"name=xyz,salaried=Y,address=abc,manager=N,salary=1000"
I am not very good at regular expressions, but I think I can use them to achieve the required output.
It will be great if someone can help me out.
Your solution requires a basic understanding of String and String methods in Java.
Here is one working example, although it might not be the most efficient one.
Note: you asked for a regex solution, but this one uses pure String methods.
public class CheckString{
public static void main(String[] ar){
String s = "name=xyz,isSalaried=Y,address=abc,isManager=N,salary=1000";
String[] arr = s.split(",");
String ans = "";
int i = 0;
for(String text : arr){
int index = text.indexOf("=");
String before = text.substring(0,index).replace("is","").toLowerCase();
String after = text.substring(index);
if(i!=(arr.length-1)){
ans += before + after + ",";
i++;
}
else{
ans += before + after;
}
}
System.out.println(ans);
}
}
Try this: first match the string, then replace in a loop.
String a = "name=xyz,isSalaried=Y,address=abc,isManager=N,salary=1000";
Matcher matcher = Pattern.compile("is(.*?)=").matcher(a);//.matcher(a).replaceAll(m -> m.group(1).toLowerCase());
while (matcher.find()) {
String matchedString = matcher.group(1);
a = a.replace("is"+matchedString,matchedString.toLowerCase());
}
System.out.printf(a);

Remove Arabic non-alpha-numeric characters from a string in Java

How can I remove all non-alpha-numeric Arabic characters from a string in Java?
Use the regex [^A-Za-z0-9 ]. It will only allow letters from A to Z and a to z, plus numerals from 0 to 9 and spaces; nothing else.
Here is the complete answer:
String patternString = "";
Pattern pattern = null;
Matcher matcher = null;
String normalizedString = "";
patternString = "[^A-Za-zأ-ْ-9 ]";
pattern = Pattern.compile(patternString);
matcher = pattern.matcher(string);
normalizedString = matcher.replaceAll("");
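Applying such a pattern is a one-line replaceAll; a minimal sketch using the ASCII-only pattern from the first suggestion:
public class StripNonAlphanumeric {
    public static void main(String[] args) {
        String input = "abc, 123! some-text";
        // keeps only A-Z, a-z, 0-9 and spaces; everything else is removed
        String cleaned = input.replaceAll("[^A-Za-z0-9 ]", "");
        System.out.println(cleaned); // abc 123 sometext
    }
}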
I tried multiple solutions and nothing worked reliably. I tried all the solutions from the current thread as well as from here - how could i remove arabic punctuation form a String in java.
As no other solution works completely, I have created a method which retains only Arabic characters; all other chars are removed, as below:
public static String findArabicString(String s) {
StringBuilder finalValue = new StringBuilder();
if (null != s) {
for (int i = 0; i < s.length();) {
int c = s.codePointAt(i);
if ((c >= 0x0600 && c <= 0x06E0))
finalValue.append((char) c);
i += Character.charCount(c);
}
}
System.out.println(finalValue.toString());
return finalValue.toString();
}
The method can be customized as required. For example, if I want to retain spaces as well as Arabic characters, there is a slight change required in the test condition, as below:
public static String findArabicString(String s) {
StringBuilder finalValue = new StringBuilder();
if (null != s) {
for (int i = 0; i < s.length();) {
int c = s.codePointAt(i);
// 32 is unicode for white space
if ((c >= 0x0600 && c <= 0x06E0) || c == 32)
finalValue.append((char) c);
i += Character.charCount(c);
}
}
System.out.println(finalValue.toString());
return finalValue.toString();
}
I hope this helps anyone facing a similar issue.
To remove Arabic letters from a string you can use the method below:
public void removeArabicChars() {
String input = "This string contains Arabic characters هذا النص يحتوي على حروف عربية";
String output = input.replaceAll("\\p{InArabic}", "");
System.out.println(output);
}

Insert in between the letters of String

I have to split it and put %20 in between the digits:
var code="247834"
to
2%204%207%208%203%204
It looks simple but I am not able to convert it.
Any answer in Scala or Java is appreciated.
In Scala,
code.mkString("%20")
In whatever language, your general technique should be to split the string into an array and then join it with the string '%20'. In PHP this would be
$array = str_split( $code);
$result = join( '%20', $array);
In Javascript:
var code="247834";
var myArray = code.split('');
var result = myArray.join( '%20');
In one statement:
var result = code.split('').join('%20');
// or "247834".split('').join('%20');
Java??:
String[] myArray = "247844".split("");
String result = StringUtils.join(myArray, "%20");
(the latter may need minor changes e.g. a join method is in TextUtils for Android and swaps the parameter order)
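A plain-JDK sketch of the same split-and-join idea, with no Commons Lang or Android TextUtils dependency:
public class InsertSeparator {
    public static void main(String[] args) {
        String code = "247834";
        // split("") yields one-character strings (Java 8+), which are then joined with %20
        String result = String.join("%20", code.split(""));
        System.out.println(result); // 2%204%207%208%203%204
    }
}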
In Javascript
var code = "247834";
var output = '';
for (var i = 0; i < code.length; ++i)
{
output = output + code.charAt(i) + ((i < code.length - 1) ? '%20' : '');
}
alert(output);
http://jsfiddle.net/sNrU7/1/
Java solution
Starting from the end, you can insert a delimiter using a StringBuffer. This is the easiest solution and may not be the fastest.
Output
2%204%207%208%203%204
2-4-7-8-3-4
Code
import java.lang.StringBuffer;
public class Test {
public static void main(String[] args) {
String str = "247834";
System.out.println(separate(str, "%20"));
System.out.println(separate(str, "-"));
}
public static String separate(String str, String delim) {
StringBuffer buff = new StringBuffer(str);
for (int i = str.length(); i > 0 ; i--) {
if (i < str.length()) {
buff.insert(i, delim);
}
}
return buff.toString();
}
}
