Reading a text file with a strange encoding? - java

I have a text file with a strange encoding, "UCS-2 Little Endian", whose contents I want to read using Java.
The file contents appear fine in Notepad++, but when I read the file using this code, just garbage is printed to the console:
String filePath = "c:\\strange_file_encoding.txt";
BufferedReader reader = new BufferedReader( new InputStreamReader( new FileInputStream( filePath ), "UTF8" ) );
String line = "";
while ( ( line = reader.readLine() ) != null ) {
    System.out.println( line ); // Prints garbage characters
}
The main point is that the user selects the file to read, so it can have any encoding. Since I can't detect the file encoding, I decode it using "UTF8", but as in the example above it fails to read the file right.
Is there a way to read such strange files correctly? Or can I at least detect when my code will fail to read a file right?

You are using UTF-8 as your encoding in the InputStreamReader constructor, so it will try to interpret the bytes as UTF-8 instead of UCS-2 Little Endian. Here is the documentation: Charset
According to it, you need to use UTF-16LE.
Here is more info on the supported character sets and their Java names:
Supported Encodings

You're providing the wrong encoding to InputStreamReader. Have you tried using UTF-16LE instead of UTF8?
BufferedReader reader = new BufferedReader( new InputStreamReader( new FileInputStream( filePath ), "UTF-16LE" ) );
According to Charset:
UTF-16LE: Sixteen-bit UCS Transformation Format, little-endian byte order

You cannot use UTF-8 for all files, especially when you do not know which file encoding to expect. Use a library that can detect the file encoding before you read the file, for example: juniversalchardet or jChardet
For more info see Java : How to determine the correct charset encoding of a stream
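A rough sketch of what detection with juniversalchardet might look like (using its UniversalDetector class; detection is heuristic and returns null when it isn't confident, so keep a fallback):
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.mozilla.universalchardet.UniversalDetector;

static String detectCharset(String path) throws IOException {
    UniversalDetector detector = new UniversalDetector(null);
    try (InputStream in = new FileInputStream(path)) {
        byte[] buf = new byte[4096];
        int nread;
        while ((nread = in.read(buf)) > 0 && !detector.isDone()) {
            detector.handleData(buf, 0, nread); // feed bytes to the detector
        }
    }
    detector.dataEnd();
    String detected = detector.getDetectedCharset(); // may be null
    return detected != null ? detected : "UTF-8";    // assumed fallback
}
You can then pass the detected charset name straight into the InputStreamReader constructor.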

Related

Shift_JIS to UTF-8 conversion of full-width tilde [~] character returns a thicker character

I'm processing Shift_JIS files and outputting UTF-8 files. Most of the characters display as expected when viewed in a text editor, except for the full-width tilde character [~]. It becomes thicker, similar to this: [~].
(Note: this is not the same character; I just don't know how to type it here, so I bolded it.)
When I type it manually into the UTF-8 file, I get the regular version.
Here is my code:
try (BufferedReader in = new BufferedReader(new InputStreamReader(
        new FileInputStream(inFile), Charset.forName("Shift_JIS")))) {
    try (BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
            new FileOutputStream(outFile), StandardCharsets.UTF_8))) {
        IOUtils.copy(in, out);
    }
}
I also tried using "MS932", and tried not using IOUtils.
To read Shift_JIS files made with Windows, you have to use Charset.forName("Windows-31J") rather than Charset.forName("Shift_JIS").
Java distinguishes between Shift_JIS and Windows-31J. "Shift_JIS" in documentation for Windows means Windows-31J (MS932) in Java. On the other hand, "Shift_JIS" in documentation for AIX means Shift_JIS in Java.
The character mappings for Windows-31J and Shift_JIS differ slightly. For example, ~ (0x8160 in Shift_JIS) is mapped to U+301C by Shift_JIS and to U+FF5E by Windows-31J. Microsoft IME uses U+FF5E (FULLWIDTH TILDE) to represent the character ~.
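You can see the mapping difference by decoding the two bytes 0x81 0x60 with each charset; a small demo (the printed code points come from Java's own mapping tables):
byte[] tilde = { (byte) 0x81, 0x60 }; // the full-width tilde in both encodings
String sjis = new String(tilde, Charset.forName("Shift_JIS"));
String w31j = new String(tilde, Charset.forName("Windows-31J"));
System.out.printf("Shift_JIS   -> U+%04X%n", (int) sjis.charAt(0)); // U+301C (WAVE DASH)
System.out.printf("Windows-31J -> U+%04X%n", (int) w31j.charAt(0)); // U+FF5E (FULLWIDTH TILDE)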

character ° encoding and visualization in txt file

I have a field in a table that contains the string "Address Pippo p.2 °".
My program reads this value and writes it into a txt file, but the output is:
"Address Pippo p.2 Â°" (the Â is unwanted)
This is a problem because the txt file is a positional file.
I open the file with these Java instructions:
FileWriter fw = new FileWriter(file, true);
PrintWriter pw = new PrintWriter(fw);
I want to write the string without the strange characters. Any help for me?
Thanks in advance
Try encoding the string into UTF-8 like this:
File file = new File("D://test.txt");
FileWriter fw = new FileWriter(file, true);
PrintWriter pw = new PrintWriter(fw);
String test = "Address Pippo p.2 °";
ByteBuffer byteBuffer = Charset.forName("UTF-8").encode(test);
test = StandardCharsets.UTF_8.decode(byteBuffer).toString();
pw.write(test);
pw.close();
Java uses Unicode. When you write text to a file, it gets encoded using a particular character encoding. If you don't specify it explicitly, it will use a "system default encoding" which is whatever is configured as default for your particular JVM instance. You need to know what encoding you've used to write the file. Then you need to use the same encoding to read and display the file content. The funny characters you are seeing are probably due to writing the file using UTF-8 and then trying to read and display it in e.g. Notepad using Windows-1252 ("ANSI") encoding.
Decide what encoding you want and stick to it for both reading and writing. To write using Windows-1252, use:
Writer w = new OutputStreamWriter(new FileOutputStream(file, true), "windows-1252");
And if you write in UTF-8, then tell Notepad that you want it to read the file in UTF-8. One way to do that is to write the character '\uFEFF' (Byte Order Mark) at the beginning of the file.
If you use UTF-8, be aware that non-ASCII characters will throw the subsequent bytes out of position. So if, for example, a telephone field must always start at byte position 200, then having a non-ASCII character in an address field before it will make the telephone field start at byte position 201 or 202. Using windows-1252 encoding you won't have this issue, but that encoding can't encode all Unicode characters.
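For instance, a minimal sketch of writing UTF-8 with a BOM so that Notepad detects the encoding (the file name is made up):
try (Writer w = new OutputStreamWriter(
        new FileOutputStream("positional.txt"), StandardCharsets.UTF_8)) {
    w.write('\uFEFF');                // Byte Order Mark so editors detect UTF-8
    w.write("Address Pippo p.2 °");
}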

Reading file with bad encoding. CP1252 vs UTF-8

I have a byte array which I put into an InputStreamReader and do some manipulations with:
Reader reader = new InputStreamReader(new ByteArrayInputStream(byteArr));
The JVM's default encoding is cp1252, but the file I'm turning into a byte array is UTF-8 encoded and contains German umlauts. When I put the byte array into the InputStreamReader, Java decodes the umlauts to wrong symbols: for example, ü comes out as ü. I tried passing "UTF-8" and Charset.forName("UTF-8").newDecoder() to the InputStreamReader constructor, and converting the strings from the reader via new String(oldStr.getBytes("cp1252"), "UTF-8"), but it didn't help. In the debugger I can see that the reader's StreamDecoder holds a decoder of type MS1252$Decoder. Maybe that is the cause of my problem, but I don't understand how to fix it.
Try using the InputStreamReader(InputStream in, String charsetName) constructor and set the charset yourself:
Reader reader = new InputStreamReader(new ByteArrayInputStream(byteArr), "UTF-8");
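To see why the wrong charset produces exactly those symbols, decode the UTF-8 bytes of ü both ways (a small demo):
byte[] bytes = "ü".getBytes(StandardCharsets.UTF_8);                    // 0xC3 0xBC
System.out.println(new String(bytes, Charset.forName("windows-1252"))); // ü (mojibake)
System.out.println(new String(bytes, StandardCharsets.UTF_8));          // ü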
I had exactly the same error and finally solved the issue by adding this to the JVM startup options:
-Dfile.encoding=UTF8

Java Unicode to readable text conversion decoding

I am developing a Java application where I am consuming a web service. The web service is created using a SAP server, which encodes the data automatically in Unicode. I get a Unicode string from the web service.
"
倥䙄ㄭ㌮਍쿣ී㈊〠漠橢਍圯湩湁楳湅潣楤杮਍湥潤橢਍″‰扯൪㰊഼┊敄瑶灹⁥佐呓′†䘠湯⁴佃剕䕉⁒渠牯慭慌杮䔠ൎ⼊祔数⼠潆瑮਍匯扵祴数⼠祔数റ⼊慂敳潆瑮⼠潃牵敩൲⼊慎敭⼠う㄰਍䔯据摯湩⁧′‰൒㸊ാ攊摮扯൪㐊〠漠橢਍㰼਍䰯湥瑧⁨‵‰൒㸊ാ猊牴慥൭ 䘯〰‱⸱2
"
Above is the response.
I want to convert it to readable text format like String. I am using core Java.
倥䙄ㄭ㌮਍쿣ී㈊〠漠橢਍圯湩湁楳湅潣楤杮਍湥潤橢਍″‰扯൪㰊഼┊敄瑶灹⁥佐呓′†䘠湯⁴佃剕䕉⁒渠牯慭慌杮䔠ൎ⼊祔数⼠潆瑮਍匯扵祴数⼠祔数റ⼊慂敳潆瑮⼠潃牵敩൲⼊慎敭⼠う㄰਍䔯据摯湩⁧′‰൒㸊ാ攊摮扯൪㐊〠漠橢਍㰼਍䰯湥瑧⁨‵‰൒㸊ാ猊牴慥൭ 䘯〰‱⸱2
That's a PDF file that has been interpreted as UTF-16LE.
You need to look at what component is receiving the response and how it's dealing with the input to stop it being decoded as UTF-16LE, but ultimately there isn't a 'readable' version of it as such, as it's a binary file. Extracting the document text out of a PDF file is a much bigger problem!
(Note: Unicode is a character set, UTF-16LE is an encoding of that set into bytes. Microsoft call the UTF-16LE encoding "Unicode" due to a historical accident, but that's misleading.)
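You can reproduce the effect yourself: the ASCII bytes of the PDF header "%PDF-1.3", decoded as UTF-16LE, pair up into exactly the CJK characters at the start of the response (a small demo):
byte[] header = "%PDF-1.3".getBytes(StandardCharsets.US_ASCII);
System.out.println(new String(header, StandardCharsets.UTF_16LE)); // prints 倥䙄ㄭ㌮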
If you have a byte[] or an InputStream (both binary data) you can get a String or a Reader (both text) with:
final String encoding = "UTF-8"; // or "UTF-16LE" or "UTF-16BE"

byte[] b = ...;
String s = new String(b, encoding);

InputStream is = ...;
BufferedReader reader = new BufferedReader(new InputStreamReader(is, encoding));
String line;
while ((line = reader.readLine()) != null) {
    // process line
}
The reverse process uses:
byte[] b = s.getBytes(encoding);

OutputStream os = ...;
BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(os, encoding));
writer.write(s);
writer.newLine();
Unicode is a numbering system for all characters. The UTF variants implement Unicode as bytes.
Your problem:
Normally (from a web service) you would already have received a String. You could, for instance, write that string to a file using the Writer above, either to check it yourself with a full Unicode font or to pass the file on for a check.
You need to check which UTF variant the text is in. For Asian scripts, UTF-16 (little endian or big endian) is optimal. In XML the encoding would already be declared.
Addition:
FileWriter writes to a file using the default encoding (taken from the operating system of your machine). Instead use:
new OutputStreamWriter(new FileOutputStream(new File("...")), "UTF-8")
If it is a binary PDF, as @bobince said, just use a FileOutputStream on the byte[] or InputStream.
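For example, a minimal sketch of saving the raw payload unchanged (responseBytes is hypothetical, standing in for whatever byte[] your client returns):
try (OutputStream os = new FileOutputStream("response.pdf")) {
    os.write(responseBytes); // hypothetical byte[] holding the raw response
}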
This is definitely not a valid string. It looks like mangled UTF-16.
UPDATE
Indeed @bobince is right: this is a PDF file (most probably plain ASCII / UTF-8) displayed as UTF-16. When displayed as UTF-8 this string indeed shows PDF source code. Good catch.

BufferedReader returns ISO-8859-15 String - how to convert to UTF16 String?

I have an FTP client class which returns an InputStream pointing to the file. I would like to read the file row by row with a BufferedReader. The issue is that the client returns the file in binary mode, and the file has ISO-8859-15 encoding.
If the file/stream/whatever really contains ISO-8859-15 encoded text, you just need to specify that when you create the InputStreamReader:
BufferedReader br = new BufferedReader(
new InputStreamReader(ftp.getInputStream(), "ISO-8859-15"));
Then readLine() will create valid Strings in Java's native encoding (which is UTF-16, not UTF-8).
Try this:
BufferedReader br = new BufferedReader(
new InputStreamReader(
ftp.getInputStream(),
Charset.forName("ISO-8859-15")
)
);
String row = br.readLine();
The original string is in ISO-8859-15, so the byte stream read by your InputStreamReader will be in this encoding. So read in using that encoding (specify this in the InputStreamReader constructor). That tells the InputStreamReader that the incoming byte stream is in ISO-8859-15 and to perform the appropriate byte-to-character conversions.
Now it will be in the standard Java UTF-16 format, and you can then do what you wish.
I think the current problem is that you're reading it using your default encoding (by not specifying an encoding in the InputStreamReader constructor), and then trying to convert it, by which time it's too late.
Using the default behaviour for these sorts of classes often ends in grief. It's a good idea to specify encodings wherever you can, and/or to set the default VM encoding via -Dfile.encoding.
Have you tried:
BufferedReader r = new BufferedReader(new InputStreamReader(ftp.getInputStream(), "ISO-8859-15"));
...
