Encoding conversion in Java

Is there any free java library which I can use to convert string in one encoding to other encoding, something like iconv? I'm using Java version 1.3.

You don't need a library beyond the standard one - just use Charset. (You can just use the String constructors and getBytes methods, but personally I don't like just working with the names of character encodings. Too much room for typos.)
EDIT: As pointed out in comments, you can still use Charset instances but have the ease of use of the String methods: new String(bytes, charset) and String.getBytes(charset).
See "URL Encoding (or: 'What are those "%20" codes in URLs?')".

CharsetDecoder should be what you are looking for, no?
Many network protocols and files store their characters with a byte-oriented character set such as ISO-8859-1 (ISO-Latin-1).
However, Java's native character encoding is Unicode UTF-16BE (Sixteen-bit UCS Transformation Format, big-endian byte order).
See Charset. That doesn't mean UTF-16 is the default charset (i.e.: the default "mapping between sequences of sixteen-bit Unicode code units and sequences of bytes"):
Every instance of the Java virtual machine has a default charset, which may or may not be one of the standard charsets.
[US-ASCII, ISO-8859-1 a.k.a. ISO-LATIN-1, UTF-8, UTF-16BE, UTF-16LE, UTF-16]
The default charset is determined during virtual-machine startup and typically depends upon the locale and charset being used by the underlying operating system.
This example demonstrates how to convert ISO-8859-1 encoded bytes in a ByteBuffer to a string in a CharBuffer and vice versa.
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;

// Create the encoder and decoder for ISO-8859-1
Charset charset = Charset.forName("ISO-8859-1");
CharsetDecoder decoder = charset.newDecoder();
CharsetEncoder encoder = charset.newEncoder();
try {
    // Convert a string to ISO-8859-1 bytes in a ByteBuffer.
    // The new ByteBuffer is ready to be read.
    ByteBuffer bbuf = encoder.encode(CharBuffer.wrap("a string"));
    // Convert the ISO-8859-1 bytes in the ByteBuffer back to a CharBuffer and then to a string.
    // The new CharBuffer is ready to be read.
    CharBuffer cbuf = decoder.decode(bbuf);
    String s = cbuf.toString();
} catch (CharacterCodingException e) {
    // Handle the coding error here (e.g. log it) rather than swallowing it silently.
}

I would just like to add that if the String was originally decoded using the wrong encoding, it might be impossible to convert it to another encoding without errors.
The question does not say the conversion here goes from a wrong encoding to a correct one, but I personally stumbled upon this question because of exactly that situation, so this is just a heads-up for others as well.
This answer to another question explains why the conversion does not always yield correct results:
https://stackoverflow.com/a/2623793/4702806

It is a whole lot easier if you think of Unicode as a character set (which it actually is: at its most basic, it is the numbered set of all known characters). You can encode it as UTF-8 (1 to 4 bytes per character, depending on the code point) or as UTF-16 (2 bytes per character, or 4 bytes using surrogate pairs).
Back in the mists of time Java used UCS-2 to encode the Unicode character set. This could only handle 2 bytes per character and is now obsolete. It was a fairly obvious hack to add surrogate pairs and move up to UTF-16.
A lot of people think Java should have used UTF-8 in the first place; Unicode has long since grown far beyond 65535 characters anyway...
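A small sketch of how this shows up in Java, where a char is a UTF-16 code unit rather than a full Unicode character (the sample string is only an illustrative assumption):
import java.nio.charset.StandardCharsets;

String s = "A𝌎"; // 'A' (U+0041) plus U+1D30E, which needs a surrogate pair in UTF-16
System.out.println(s.length());                      // 3 - UTF-16 code units (chars)
System.out.println(s.codePointCount(0, s.length())); // 2 - Unicode characters (code points)
System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 5 - UTF-8 bytes (1 + 4)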

Related

Converting String from One Charset to Another

I am working on converting a string from one charset to another. I have read many examples and finally found the code below, which looks nice to me; as a newbie to charset encoding, I want to know whether it is the right way to do it.
public static byte[] transcodeField(byte[] source, Charset from, Charset to) {
    return new String(source, from).getBytes(to);
}
To convert String from ASCII to EBCDIC, I have to do:
System.out.println(new String(transcodeField(ebytes,
Charset.forName("US-ASCII"), Charset.forName("Cp1047"))));
And to convert from EBCDIC to ASCII, I have to do:
System.out.println(new String(transcodeField(ebytes,
Charset.forName("Cp1047"), Charset.forName("US-ASCII"))));
The code you found (transcodeField) doesn't convert a String from one encoding to another, because a String doesn't have an encoding¹. It converts bytes from one encoding to another. The method is only useful if your use case satisfies 2 conditions:
Your input data is bytes in one encoding
Your output data needs to be bytes in another encoding
In that case, it's straightforward:
byte[] out = transcodeField(inbytes, Charset.forName(inEnc), Charset.forName(outEnc));
If the input data contains characters that can't be represented in the output encoding (such as converting UTF-8 text with non-ASCII characters to ASCII), those characters will be replaced with the ? replacement symbol and the data will be corrupted.
However a lot of people ask "How do I convert a String from one encoding to another", to which a lot of people answer with the following snippet:
String s = new String(source.getBytes(inputEncoding), outputEncoding);
This is complete bull****. The getBytes(String encoding) method returns a byte array with the characters encoded in the specified encoding (if possible; again, invalid characters are converted to ?). The String constructor with the 2nd parameter creates a new String from a byte array, where the bytes are in the specified encoding. Now since you just used source.getBytes(inputEncoding) to get those bytes, they're not encoded in outputEncoding (except if the encodings use the same values, which is common for "normal" characters like abcd, but differs with more complex ones such as accented characters éêäöñ).
So what does this mean? It means that when you have a Java String, everything is great. Strings are unicode, meaning that all of your characters are safe. The problem comes when you need to convert that String to bytes, meaning that you need to decide on an encoding. Choosing a unicode compatible encoding such as UTF8, UTF16 etc. is great. It means your characters will still be safe even if your String contained all sorts of weird characters. If you choose a different encoding (with US-ASCII being the least supportive) your String must contain only the characters supported by the encoding, or it will result in corrupted bytes.
Now finally some examples of good and bad usage.
String myString = "Feng shui in chinese is 風水";
byte[] bytes1 = myString.getBytes("UTF-8"); // Bytes correct
byte[] bytes2 = myString.getBytes("US-ASCII"); // Last 2 characters are now corrupted (converted to question marks)
String nordic = "Här är några merkkejä";
byte[] bytes3 = nordic.getBytes("UTF-8"); // Bytes correct, "weird" chars take 2 bytes each
byte[] bytes4 = nordic.getBytes("ISO-8859-1"); // Bytes correct, "weird" chars take 1 byte each
String broken = new String(nordic.getBytes("UTF-8"), "ISO-8859-1"); // Contains now "HÃ¤r Ã¤r nÃ¥gra merkkejÃ¤" (mojibake)
The last example demonstrates that even though both of the encodings support the nordic characters, they use different bytes to represent them and using the wrong encoding when decoding results in Mojibake. Therefore there's no such thing as "converting a String from one encoding to another", and you should never use the broken example.
Also note that you should always specify the encoding used (with both getBytes() and new String()), because you can't trust that the default encoding is always the one you want.
As a last issue, Charset and Encoding aren't the same thing, but they're very much related.
¹ Technically the way a String is stored internally in the JVM is in UTF-16 encoding up to Java 8, and variable encoding from Java 9 onwards, but the developer doesn't need to care about that.
NOTE
It's possible to have a corrupted String and be able to uncorrupt it by fiddling with the encoding, which may be where this "convert String to other encoding" misunderstanding originates from.
// Input comes from network/file/other place and we have misconfigured the encoding
String input = "HÃ¤r Ã¤r nÃ¥gra merkkejÃ¤"; // UTF-8 bytes of "Här är några merkkejä", wrongly decoded as ISO-8859-1
byte[] bytes = input.getBytes("ISO-8859-1"); // Get each char as single byte
String asUtf8 = new String(bytes, "UTF-8"); // Recreate String as UTF-8
If no characters were corrupted in input, the string would now be "fixed". However the proper approach is to use the correct encoding when reading input, not fix it afterwards. Especially if there's a chance of it becoming corrupted.
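For completeness, a hedged sketch of that proper approach, decoding with the correct charset at the point of reading (the file name is a placeholder and the example assumes Java 7+ for Files and StandardCharsets):
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

// Decode as UTF-8 when reading, instead of repairing a mis-decoded String afterwards.
try (BufferedReader reader = Files.newBufferedReader(Paths.get("input.txt"), StandardCharsets.UTF_8)) {
    String line = reader.readLine(); // already correct Unicode text
    System.out.println(line);
} catch (IOException e) {
    e.printStackTrace();
}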

Java internal String representation: is it UTF-16?

I have found on SO, that Java strings are represented as UTF-16 internally. Out of curiosity I have developed and ran following snippet (Java 7):
import java.io.UnsupportedEncodingException;
import java.util.Arrays;

public class StringExperiment {
    public static void main(String... args) throws UnsupportedEncodingException {
        System.out.println(Arrays.toString("ABC".getBytes()));
    }
}
which resulted in:
[65, 66, 67]
being printed to the console output.
How does it match with UTF-16?
Update: Is there a way to write a program that prints the internal bytes of the string as they are?
Java's internal string representation is based on its char type and is thus UTF-16.
Unless it isn't: A modern VM (since Java 6 Update 21 Performance Release) might try to save space by using basic ASCII (single-byte-encoding) where that suffices.
And serialization / java-native-interface is done in a modified CESU-8 (a surrogate-agnostic variant of UTF-8) encoding, with NUL represented as two bytes to avoid embedded zeroes.
All of that is irrelevant for your "test" though:
You are asking Java to encode the string in the platform's default charset, and that's not the internal charset:
public byte[] getBytes()
Encodes this String into a sequence of bytes using the platform's default charset, storing the result into a new byte array.
The behavior of this method when this string cannot be encoded in the default charset is unspecified. The CharsetEncoder class should be used when more control over the encoding process is required.
You seem to be misunderstanding something.
For all the system cares, and, MOST OF THE TIME, the developer cares, chars could just as well be carrier pigeons, and Strings sequences of said carrier pigeons. Although yes, internally, strings are sequences of chars (which are, more precisely, UTF-16 code units), that is not the problem at hand here.
You don't write chars into files, neither do you read chars from files. You write, and read, bytes.
And in order to read a sequence of bytes as a sequence of chars/carrier pigeons, you need a decoder; similarly (and this is what you do here), in order to turn chars/carrier pigeons into bytes, you need an encoder. In Java, both of these are available from a Charset.
String.getBytes() just happens to use an encoder with the default platform character coding (obtained using Charset.defaultCharset()), and it happens that for your input string "ABC" and your JRE implementation, the sequence of bytes generated is 65, 66, 67. Hence the result.
Now try String.getBytes(Charset.forName("UTF-32LE")) instead, and you'll get a different result.
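As a quick illustration of how the choice of encoder changes the bytes (this assumes the JRE provides the optional UTF-32LE charset, which standard JDKs do):
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

String s = "ABC";
System.out.println(Arrays.toString(s.getBytes(StandardCharsets.UTF_8)));    // [65, 66, 67]
System.out.println(Arrays.toString(s.getBytes(StandardCharsets.UTF_16BE))); // [0, 65, 0, 66, 0, 67]
System.out.println(Arrays.toString(s.getBytes(Charset.forName("UTF-32LE")))); // [65, 0, 0, 0, 66, 0, 0, 0, 67, 0, 0, 0]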
Java Strings are indeed represented as UTF-16 internally, but you are calling the getBytes method, which does the following (my emphasis)
public byte[] getBytes()
Encodes this String into a sequence of bytes using the platform's default charset, storing the result into a new byte array.
And your platform's default encoding is probably not UTF-16.
If you use the variant that lets you specify an encoding, you can see how the string would look in other encodings:
public byte[] getBytes(Charset charset)
If you look at the source code for java.lang.String, you can see that the String is stored internally as an array of (16-bit) chars.

Platform Dependent Encoding issues in Java

Noticed this behavior while troubleshooting a file generation issue in a piece of Java code that moved from an AIX to a Linux server.
Charset.defaultCharset();
returns ISO-8859-1 on AIX, UTF-8 on Linux, and windows-1252 on my Windows 7. With that said, I am trying to figure out why on the Linux machine, nlength = 24 (3 bytes per alphanumeric character) whereas on AIX and Windows it is 8.
String inString = "ABC12345";
byte[] ebcdicByte = new byte[inString.length()];
System.out.println("Length:" + inString.getBytes("Cp1047").length);
ebcdicByte = inString.getBytes("Cp1047");
String ebcdicString = new String(ebcdicByte);
int nlength = ebcdicString.getBytes().length;
You are misunderstanding things.
This is Java.
There are bytes. There are chars. And there is the default encoding.
When translating from bytes to chars, you have to decode.
When translating from chars to bytes, you have to encode.
And of course, apart from very limited charsets you will never have a 1-1 char-byte mapping.
If you see problems with encoding/decoding, the cause is pretty simple: somewhere in your code (with luck, in only one place; if not lucky, in several places) you failed to specify the charset to use when decoding and encoding.
Also note that by default, the encoding/decoding behaviour on failure is to replace unmappable char/byte sequences.
All this to say: a String does not have an encoding. Sure, it is a series of chars and a char is a primitive type; but it could just as well have been a stream of carrier pigeons, the two basic processes remain the same: you need to decode from bytes and you need to encode to bytes; if either part fails you end with meaningless byte sequences/mutant carrier pigeons.
Building on fge's answer...
Your observation is occurring because new String(ebcdicByte) and ebcdicString.getBytes() use the platform's default charset.
ISO-8859-1 and windows-1252 are one-byte charsets. In those charsets, one byte always represents one character. So in AIX and Windows, when you do new String(ebcdicByte), you will always get a String whose character count is identical to your byte array's length. Similarly, converting a String back to bytes will use a one-to-one mapping.
But in UTF-8, one character does not necessarily correspond to one byte. In UTF-8, bytes 0 through 127 are single-byte representations of characters, but all other values are part of a multi-byte sequence.
However, not just any sequence of bytes with their high bit set is a valid UTF-8 sequence. If you give a UTF-8 decoder a sequence of bytes that isn't a properly encoded UTF-8 byte sequence, it is considered malformed. new String will simply map malformed sequences to a special default character, usually "�" ('\ufffd'). That behavior can be changed by explicitly creating your own CharsetDecoder and calling its onMalformedInput method, rather than just relying on new String(byte[]).
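For instance, a sketch (not from the original answer) of a decoder configured to fail loudly instead of substituting "�":
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

CharsetDecoder strictUtf8 = StandardCharsets.UTF_8.newDecoder()
        .onMalformedInput(CodingErrorAction.REPORT)       // throw instead of substituting
        .onUnmappableCharacter(CodingErrorAction.REPORT);
try {
    String s = strictUtf8.decode(ByteBuffer.wrap(ebcdicByte)).toString();
} catch (CharacterCodingException e) {
    // The EBCDIC bytes are not valid UTF-8, so decoding fails here instead of producing "�".
}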
So, the ebcdicByte array contains this EBCDIC representation of "ABC12345":
C1 C2 C3 F1 F2 F3 F4 F5
None of those are valid UTF-8 byte sequences, so ebcdicString ends up as "\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd" which is "��������".
Your last line of code calls ebcdicString.getBytes(), which again does not specify a character set, which means the default charset will be used. Using UTF-8, "�" gets encoded as three bytes, EF BF BD. Since there are eight of those in ebcdicString, you get 3×8=24 bytes.
You have to specify the charset in the second to last line.
String ebcdicString = new String(ebcdicByte, "Cp1047");
As already pointed out, you always have to specify the charset when encoding/decoding.
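Putting it together, a sketch of the question's snippet with the charset spelled out for every conversion (assuming the JRE provides the Cp1047 charset, as the question's own code does); the round trip then yields 8 bytes on every platform:
import java.nio.charset.Charset;

Charset ebcdic = Charset.forName("Cp1047");            // IBM EBCDIC code page 1047
String inString = "ABC12345";
byte[] ebcdicByte = inString.getBytes(ebcdic);          // encode to EBCDIC explicitly
String ebcdicString = new String(ebcdicByte, ebcdic);   // decode from EBCDIC explicitly
int nlength = ebcdicString.getBytes(ebcdic).length;     // 8 on AIX, Linux and Windows alike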

Will String.getBytes("UTF-16") return the same result on all platforms?

I need to create a hash from a String containing a user's password. To create the hash, I use a byte array which I get by calling String.getBytes(). But when I call this method with a specified encoding (such as UTF-8) on a platform where this is not the default encoding, the non-ASCII characters get replaced by a default character (if I understand the behaviour of getBytes() correctly), and therefore on such a platform I will get a different byte array, and eventually a different hash.
Since Strings are internally stored in UTF-16, will calling String.getBytes("UTF-16") guarantee me that I get the same byte array on every platform, regardless of its default encoding?
Yes. Not only is it guaranteed to be UTF-16, but the byte order is defined too:
When decoding, the UTF-16 charset interprets the byte-order mark at the beginning of the input stream to indicate the byte-order of the stream but defaults to big-endian if there is no byte-order mark; when encoding, it uses big-endian byte order and writes a big-endian byte-order mark.
(So the encoded form starts with a big-endian byte-order mark, but that mark is produced identically on every platform.)
So long as you have the same string content - i.e. the same sequence of char values - then you'll get the same bytes on every implementation of Java, barring bugs. (Any such bug would be pretty surprising, given that UTF-16 is probably the simplest encoding to implement in Java...)
The fact that UTF-16 is the native representation for char (and usually for String) is only relevant in terms of ease of implementation, however. For example, I'd also expect String.getBytes("UTF-8") to give the same results on every platform.
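For the underlying use case (hashing a password), a hedged sketch that pins down both the charset and the digest algorithm so the result is reproducible across platforms (SHA-256 is just an illustrative choice here; real password storage should use a dedicated scheme such as PBKDF2 or bcrypt):
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public static byte[] hashPassword(String password) throws NoSuchAlgorithmException {
    // An explicit charset means the same String always yields the same bytes, on every platform.
    byte[] passwordBytes = password.getBytes(StandardCharsets.UTF_8);
    return MessageDigest.getInstance("SHA-256").digest(passwordBytes);
}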
It is true that Java uses Unicode internally, so it can combine any script/language. String and char use UTF-16BE, but .class files store their String constants in UTF-8. In general it is irrelevant what String does internally, as there is always a conversion to bytes that specifies the encoding the bytes should be in.
If that byte encoding cannot represent some of the Unicode characters, a placeholder character or question mark is substituted. Also, fonts might not contain all Unicode characters; 35 MB is a normal size for a full Unicode font. For missing code points you might then see a square with 2x2 hex digits, or on Linux another font might substitute the glyph.
Hence UTF-8 is a perfectly fine choice.
String s = ...;
if (!s.startsWith("\uFEFF")) { // Add a Unicode BOM
    s = "\uFEFF" + s;
}
byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
Both UTF-16 (in both byte orders) and UTF-8 always are present in the JRE, whereas some Charsets are not. Hence you can use a constant from StandardCharsets not needing to handle any UnsupportedEncodingException.
Above I added a BOM especially for Windows Notepad, so that it recognizes the text as UTF-8. It certainly is not good practice, but it can be a small help here.
There is no real disadvantage to UTF-16LE or UTF-16BE. I think UTF-8 is a bit more universally used, as UTF-16 also cannot store all Unicode code points in 16 bits. Text in Asian scripts would be more compact in UTF-16, but HTML pages are already more compact in UTF-8 because of the HTML tags and other Latin-script content.
For Windows, UTF-16LE might be more native.
Problems with placeholders might occur on non-Unicode platforms, especially Windows.
I just found this:
https://github.com/facebook/conceal/issues/138
which seems to answer your question negatively.
As per Jon Skeet's answer: the specification is clear. But I guess Android/Mac implementations of Dalvik/JVM don't agree.

Is there a simple way to append a byte to a StringBuffer and specify the encoding?

Question
What is the simplest way to append a byte to a StringBuffer (i.e. cast a byte to a char) and specify the character encoding used (ASCII, UTF-8, etc)?
Context
I want to append a byte to a StringBuffer. Doing so requires casting the byte to a char:
myStringBuffer.append((char)nextByte);
However, the code above uses the default character encoding for my machine (which is MacRoman). Meanwhile, other components in the system/network require UTF-8. So I need to so something like:
try {
    myStringBuffer.append(new String(new byte[]{nextByte}, "UTF-8"));
} catch (UnsupportedEncodingException e) {
    // handle error
}
Which, frankly, is pretty ugly.
Surely, there's a better way (other than breaking the same code into multiple lines)?
The simple answer is 'no'. What if the byte is the first byte of a multi-byte sequence? Nothing would maintain the state.
If you have all the bytes of a logical character in hand, you can do:
sb.append(new String(bytes, charset));
But if you have one byte of UTF-8, you can't do this at all with stock classes.
It would not be terribly difficult to build a juiced-up StringBuffer that uses java.nio.charset classes to implement byte appending, but it would not be one or two lines of code.
Comments indicate that there's some basic Unicode knowledge needed here.
In UTF-8, 'a' is one byte, 'á' is two bytes, '丧' is three bytes, and '𝌎' is four bytes. The job of CharsetDecoder is to convert these sequences into Unicode characters. Viewed as a sequential operation over bytes, this is obviously a stateful process.
If you create a CharsetDecoder for UTF-8, you can feed it one byte at a time (in a ByteBuffer) via its decode(ByteBuffer, CharBuffer, boolean) method. The UTF-16 characters will accumulate in the output CharBuffer.
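A minimal sketch of that stateful, byte-at-a-time decoding (the buffer sizes and the sample string are illustrative assumptions):
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;

byte[] utf8 = "aá丧".getBytes(StandardCharsets.UTF_8); // 1-, 2- and 3-byte sequences
CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();
ByteBuffer in = ByteBuffer.allocate(4);   // big enough for the longest UTF-8 sequence
CharBuffer out = CharBuffer.allocate(16); // decoded chars accumulate here

for (byte b : utf8) {
    in.put(b);
    in.flip();
    decoder.decode(in, out, false); // consumes only complete sequences
    in.compact();                   // keep any partial sequence for the next byte
}
in.flip();
decoder.decode(in, out, true);      // signal end of input
decoder.flush(out);
out.flip();
System.out.println(out);            // prints aá丧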
I think the error here is in dealing with bytes at all. You want to deal with strings of characters instead.
Just interpose a reader on the input and output stream to do the mapping between bytes and characters for you. Use the InputStreamReader(InputStream in, CharsetDecoder dec) form of the constructor for the input, though, so that you can detect input encoding errors via an exception. Now you have strings of characters instead of buffers of bytes. Put an OutputStreamWriter on the other end.
Now you no longer have to worry about bytes or encodings. It’s much simpler this way.
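A hedged sketch of that arrangement (the stream parameters are placeholders; the CharsetDecoder is configured to throw on malformed input, as suggested above):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

static void copyAsText(InputStream in, OutputStream out) throws IOException {
    CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
            .onMalformedInput(CodingErrorAction.REPORT); // bad input raises an exception
    BufferedReader reader = new BufferedReader(new InputStreamReader(in, decoder));
    Writer writer = new OutputStreamWriter(out, StandardCharsets.UTF_8);
    String line;
    while ((line = reader.readLine()) != null) {
        writer.write(line);
        writer.write(System.lineSeparator());
    }
    writer.flush(); // characters in, characters out; no byte handling in between
}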
