I have a text file with Chinese words written on a single line. The line is surrounded with "\r\n", and written using fileOutputStream.write(string.getBytes()).
I have no problems reading lines of English words; my BufferedReader parses them with readLine() perfectly. However, it treats the Chinese sentence as multiple lines, which breaks my program flow.
Any solutions?
Using string.getBytes() encodes the String using the platform default encoding. That is rarely what you want, especially when you're trying to write characters that are not native to your current locale.
Specify the encoding instead (using string.getBytes("UTF-8"), for example).
A cleaner and more Java-esque way would be to wrap your OutputStream in an OutputStreamWriter like this:
Writer w = new OutputStreamWriter(out, "UTF-8");
Then you can simply call writer.write(string) and don't need to repeat the encoding each time you want to write a String.
And, as commented below, specify the same encoding when reading the file (using a Reader, preferably).
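Putting both halves together, here is a minimal sketch of the round trip with an explicit charset (the file name and sample string are just placeholders):

import java.io.*;
import java.nio.charset.StandardCharsets;

public class ChineseRoundTrip {
    public static void main(String[] args) throws IOException {
        // Write with an explicit charset instead of the platform default
        try (Writer w = new OutputStreamWriter(
                new FileOutputStream("words.txt"), StandardCharsets.UTF_8)) {
            w.write("你好，世界\r\n");
        }

        // Read back with the same charset
        try (BufferedReader r = new BufferedReader(new InputStreamReader(
                new FileInputStream("words.txt"), StandardCharsets.UTF_8))) {
            System.out.println(r.readLine()); // one line, as expected
        }
    }
}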
If you're outputting the text via fileOutputStream.write(string.getBytes()), you're outputting with the default encoding for the platform. It's important to ensure you're then reading with the appropriate encoding, and using methods that are encoding-aware. The problem won't be in your BufferedReader instance, but in whatever Reader you have under it that's converting bytes into characters.
This article may be of use: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
I have a file which contains the following string:
AAdοbe Dοcument Clοud
if viewed in Notepad++. In the hex view, the ο characters show up as the byte pair CE BF.
If I read the file with Java the string looks like this:
AAdÎ¿be DÎ¿cument ClÎ¿ud
How I can get the same encoding in Java as with Notepad++?
Your file is encoded as UTF-8, and the byte pair CE BF is the UTF-8 encoding of the character ο ('GREEK SMALL LETTER OMICRON' (U+03BF)).
If you use the Encoding pull-down menu in Notepad++ to specify UTF-8, you should see the content as:
AAdοbe Dοcument Clοud
You might want to replace those Greek ο's with regular Latin o's ('LATIN SMALL LETTER O' (U+006F)).
If you decide to keep the Greek ο's, you need to make sure your Java program reads the file using UTF-8, which is best done using one of these:
BufferedReader reader = Files.newBufferedReader(Paths.get("file.txt")); // UTF-8 is the default
BufferedReader reader = Files.newBufferedReader(Paths.get("file.txt"), StandardCharsets.UTF_8);
If you look at the text with a debugger, you should see that it is now read correctly. If you print the text, make sure the console window you're using can handle UTF-8 characters, otherwise it might just print wrong, even though it was read correctly.
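If you do go the replacement route suggested above, a minimal sketch (assumes Java 11+ for Files.readString; the file name is just an example):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FixOmicrons {
    public static void main(String[] args) throws Exception {
        String text = Files.readString(Path.of("file.txt"), StandardCharsets.UTF_8);
        // 'GREEK SMALL LETTER OMICRON' (U+03BF) -> 'LATIN SMALL LETTER O' (U+006F)
        String cleaned = text.replace('\u03BF', 'o');
        System.out.println(cleaned); // AAdobe Document Cloud
    }
}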
You must set the encoding in the FileReader like this (the FileReader(String, Charset) constructor is available since Java 11):
new FileReader(fileName, StandardCharsets.UTF_8)
You must read the file in Java using the same encoding the file was written with.
If you are working with non-standard encodings, even trying to detect the encoding with something like:
InputStreamReader r = new InputStreamReader(new FileInputStream(theFile));
r.getEncoding()
can report the wrong value.
There's a small library which handles encoding detection a bit better: https://code.google.com/archive/p/juniversalchardet/
It also has some gaps in detecting the proper encoding, but I've used it.
And while using it I found out that most non-standard encodings can be read with UTF-16, like:
new FileReader(fileName, StandardCharsets.UTF_16)
Java has supported the UTF-16 encoding for a long time; it's defined in the standard API as StandardCharsets.UTF_16. That character set covers lots of language-specific characters and emoji.
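Going back to the detection step, this is roughly how the library's UniversalDetector is used before you open a Reader (a sketch based on the library's documented usage pattern; details may vary between versions, and the file name is just an example):

import java.io.FileInputStream;
import java.io.IOException;
import org.mozilla.universalchardet.UniversalDetector;

public class DetectEncoding {
    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[4096];
        UniversalDetector detector = new UniversalDetector(null);

        try (FileInputStream fis = new FileInputStream("file.txt")) {
            int nread;
            while ((nread = fis.read(buf)) > 0 && !detector.isDone()) {
                detector.handleData(buf, 0, nread);
            }
        }
        detector.dataEnd();

        // May be null if the detector couldn't recognize the encoding
        String encoding = detector.getDetectedCharset();
        System.out.println("Detected encoding: " + encoding);
        detector.reset();
    }
}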
I am trying to determine whether to use
PrintWriter pw = new PrintWriter(outputFilename, "ISO-8859-1");
or
PrintWriter pw = new PrintWriter(outputFilename, "US-ASCII");
I was reading All about character sets to determine the character set of an example file which I must create in the same encoding via Java code.
When my example file contains "European" letters (Norwegian: å ø æ), then the following command tells me the file encoding is "iso-8859-1"
file -bi example.txt
However, when I take a copy of the same example file and modify it to contain different data, without any Norwegian text (let's say, I replace "Bjørn" with "Bjorn"), then the same command tells me the file encoding is "us-ascii".
file -bi example-no-european-letters.txt
What does this mean? Is ISO-8859-1 in practise the same as US-ASCII if there are no "European" characters in it?
Should I just use the charset "ISO-8859-1" and everything will be OK?
If the file contains only 7-bit US-ASCII characters, it can be read as US-ASCII. That doesn't tell you anything about which charset was intended; it may be just a coincidence that there were no characters that would require a different encoding.
ISO-8859-1 (and -15) is a common European encoding, able to encode äöåéü and other characters, with the first 128 characters being the same as in US-ASCII (as is often the case, for compatibility reasons).
However, you can't just pick an encoding and assume that "everything will be OK". The very common UTF-8 encoding also contains US-ASCII as a subset, but it will encode characters such as äöå as two bytes each, instead of ISO-8859-1's one byte.
TL;DR: Don't assume things with encodings. Find out what was intended and use that. If you can't find it out, observe the data to try to figure out what is a correct charset to use (as you noted yourself, multiple encodings may work at least temporarily).
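You can see the difference for yourself with the question's "Bjørn" example; this little sketch just prints the encoded byte counts:

import java.nio.charset.StandardCharsets;

public class EncodingSizes {
    public static void main(String[] args) {
        String s = "Bjørn";
        // 5 bytes, but ø is silently replaced with '?' because US-ASCII can't represent it
        System.out.println(s.getBytes(StandardCharsets.US_ASCII).length);
        // 5 bytes, ø is a single byte (0xF8) in ISO-8859-1
        System.out.println(s.getBytes(StandardCharsets.ISO_8859_1).length);
        // 6 bytes, ø becomes two bytes (0xC3 0xB8) in UTF-8
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);
    }
}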
It depends on the types of characters used in the document. ASCII is a 7-bit charset and ISO-8859-1 is an 8-bit charset which supports some additional characters. But mostly, if you are going to reproduce the document from an InputStream, I recommend the ISO-8859-1 charset; it will work for text files from editors like Notepad and MS Word.
If you are using other international characters, you need to check for a charset which supports those particular characters, such as UTF-8.
I am writing a program that implements a file structure; the program prints out a product file based on that structure. Product names include the letters Æ, Ø and Å. These letters are not displayed correctly in the output file. I use
PrintWriter printer = new PrintWriter(new FileOutputStream(new File("products.txt")));
ISO 8859-1 or Windows ANSI (CP 1252) are the character sets that the implementation requires.
There are two possibilities:
Java is using the wrong encoding when outputting the file.
The file is actually correct, and whatever you are using to display the file is using the wrong encoding.
Assuming that the problem is the first one, the root cause is that Java has figured out that the default encoding for the platform is something other than the one you want / expect. There are three ways to solve this:
Figure out why Java has got the default locale and encoding "wrong" and remedy that. It will be something to do with your operating system's locale settings ...
Read this FAQ for details on how you can override the default locale settings at the command line.
Use a PrintWriter constructor that specifies the encoding explicitly so that your application doesn't rely on the default encoding. For example:
PrintWriter pw = new PrintWriter("filename", "ISO-8859-1");
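Applied to the question's products.txt case, a minimal sketch (the product name is made up):

import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.io.UnsupportedEncodingException;

public class ProductFileWriter {
    public static void main(String[] args)
            throws FileNotFoundException, UnsupportedEncodingException {
        try (PrintWriter printer = new PrintWriter("products.txt", "ISO-8859-1")) {
            // Æ, Ø and Å each encode to a single byte in ISO-8859-1
            printer.println("Blåbærsyltetøy ÆØÅ");
        }
    }
}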
In response to this comment:
Don’t PrintWriters all have the bug that you can’t know you had an error with them?
It is not a bug, it is a design feature.
You can find out if there was an error. You just can't find out what it was.
If you don't like it, you can use Writer instead.
They won’t raise an exception or even return failure if you try to shove a codepoint at them that can’t fit in the designated encoding.
Neither will a regular Writer I believe ... unless you specifically construct it to do this. The normal behaviour is to replace any unmappable codepoint with a specific character, though this is not specified in the javadocs (IIRC).
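For reference, this is the kind of construction being alluded to: build the Writer on a CharsetEncoder configured to REPORT errors, so an unmappable character raises an exception instead of being silently replaced. A sketch (the file name and sample text are made up):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class StrictWriterDemo {
    public static void main(String[] args) throws IOException {
        CharsetEncoder encoder = StandardCharsets.ISO_8859_1.newEncoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);

        try (Writer w = new OutputStreamWriter(new FileOutputStream("strict.txt"), encoder)) {
            // Not representable in ISO-8859-1: the encoder reports an error
            // (an IOException) instead of writing '?'
            w.write("日本語");
        }
    }
}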
Do they even tell you if the filesystem fills up? I seem to recall that they don't.
That is true. However:
For the kind of file you typically write using a PrintWriter this is not a critical issue.
If it is a critical issue AND you still want to use PrintWriter, you can always call checkError() (IIRC) to find out if there was an error.
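For completeness, a sketch of the checkError() pattern (the file name is just an example):

import java.io.IOException;
import java.io.PrintWriter;

public class CheckErrorDemo {
    public static void main(String[] args) throws IOException {
        PrintWriter pw = new PrintWriter("out.txt");
        pw.println("some text");
        // println() never throws; checkError() flushes and reports whether
        // any error occurred somewhere along the way (but not what it was)
        if (pw.checkError()) {
            System.err.println("An error occurred while writing");
        }
        pw.close();
    }
}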
I always end up writing out my OutputStreamWriter constructor with the explicit Charset.forName("UTF-8").newEncoder() second argument. It's kind of tedious, so perhaps there's a better way.
I dunno.
In previous versions of Java, I would read a file by creating a buffered reader like this:
BufferedReader in = new BufferedReader(new FileReader("file.txt"));
In Java 7, I would like to use Files.newBufferedReader, but I need to pass in a charset as well. For example:
BufferedReader in = Files.newBufferedReader(Paths.get("file.txt"),
Charset.forName("US-ASCII"));
Previously, I did not have to worry about charsets when reading plain text files. What charset shall I use? Do you know what charset was used by default in previous versions of Java? I simply want to be able to find and replace the old statement with the new one.
Previously, I did not have to worry about charsets when reading plain text files.
Well, you should have done. If you were just using FileReader, it was using the default character encoding for the system. That was a bad idea, which is why I always used FileInputStream and an InputStreamReader. You should always be explicit about it. If you really want the default character encoding for the system, you should use Charset.defaultCharset() - but I strongly suggest that you don't.
If you're going to read a file, you should know the character encoding, and specify that. If you get to decide what character encoding to use when writing a file, UTF-8 is a good default choice.
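If you literally just want a find-and-replace, here is a sketch of the explicit equivalent of the old FileReader behaviour next to the recommended explicit-UTF-8 version (the file name is an example):

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReaderMigration {
    public static void main(String[] args) throws IOException {
        // Exactly what new FileReader("file.txt") used to do: platform default charset
        try (BufferedReader legacy = new BufferedReader(new InputStreamReader(
                new FileInputStream("file.txt"), Charset.defaultCharset()))) {
            System.out.println(legacy.readLine());
        }

        // The explicit version: you decide the encoding, and UTF-8 is a good default
        try (BufferedReader explicit = Files.newBufferedReader(
                Paths.get("file.txt"), StandardCharsets.UTF_8)) {
            System.out.println(explicit.readLine());
        }
    }
}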
PrintWriter/PrintStream in Java use Charset.defaultCharset() by default:
java.nio.charset.Charset.defaultCharset()
I understand that Java character streams wrap byte streams such that the underlying byte stream is interpreted as per the system default or an otherwise specifically defined character set.
My systems default char-set is UTF-8.
If I use a FileReader to read in a text file, everything looks normal as the default char-set is used to interpret the bytes from the underlying InputStreamReader. If I explicitly define an InputStreamReader to read the UTF-8 encoded text file in as UTF-16, everything obviously looks strange. Using a byte stream like FileInputStream and redirecting its output to System.out, everything looks fine.
So, my questions are;
Why is it useful to use a character stream?
Why would I use a character stream instead of directly using a byte stream?
When is it useful to define a specific char-set?
Code that deals with strings should only "think" in terms of text - for example, reading an input source line by line, you don't want to care about the nature of that source.
However, storage is usually byte-oriented - so you need to create a conversion between the byte-oriented view of a source (encapsulated by InputStream) and the character-oriented view of a source (encapsulated by Reader).
So a method which (say) counts the lines of text in an input source should take a Reader parameter. If you want to count the lines of text in two files, one of which is encoded in UTF-8 and one of which is encoded in UTF-16, you'd create an InputStreamReader around a FileInputStream for each file, specifying the appropriate encoding each time.
(Personally I would avoid FileReader completely - the fact that it doesn't let you specify an encoding makes it useless IMO.)
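A sketch of that separation (the file names are made up): the line-counting method thinks purely in terms of text, while the callers decide how bytes become characters.

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class LineCounter {
    // Encoding-agnostic: works on any source of characters
    static int countLines(Reader source) throws IOException {
        try (BufferedReader br = new BufferedReader(source)) {
            int lines = 0;
            while (br.readLine() != null) {
                lines++;
            }
            return lines;
        }
    }

    public static void main(String[] args) throws IOException {
        // The callers choose the appropriate encoding for each file
        int a = countLines(new InputStreamReader(
                new FileInputStream("utf8.txt"), StandardCharsets.UTF_8));
        int b = countLines(new InputStreamReader(
                new FileInputStream("utf16.txt"), StandardCharsets.UTF_16));
        System.out.println(a + " lines and " + b + " lines");
    }
}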
An InputStream reads bytes, while a Reader reads characters. Because of the way bytes map to characters, you need to specify the character set (or encoding) when you create an InputStreamReader, the default being the platform character set.
When you are reading/writing text which contains characters that could be > 127, use a character stream. When you are reading/writing binary data, use a byte stream.
You can read text as binary if you wish, but unless you make a lot of assumptions it rarely gains you much.