When is encoding relevant in Java? - java

This might be a bit of a beginner question, but it's fairly relevant when debugging encoding in Java: at what point does an encoding become relevant to a String object?
Suppose I have a String object that I want to save to a file. Does the String object itself use some sort of encoding I should manipulate, or is the encoding only supplied by me when I create the stream of bytes to save?
The same applies to importing: when I open a file and get its bytes, I assume there is no encoding at hand, only bytes. When I parse these bytes into a String, I have to use an encoding to understand what characters they are. After I parse those bytes, does the String (in memory) carry some sort of meta information about the encoding, or is this handled only by the JVM?
This is vital because I'm having file import/export issues and I need to understand at which point I should worry about getting the encoding right.
I hope I explained my doubt well, and thank you in advance!

Java strings do not have explicit encoding information. They don't know where they came from, and they don't know where they are going. Internally, Java strings are stored as UTF-16 (or, since Java 9's compact strings, as Latin-1 when the contents allow it, but that representation is invisible to your code).
You (optionally) specify what encoding to use whenever you want to turn a String into a sequence of bytes (e.g., to save to a file), or when you want to turn a sequence of bytes (e.g., read from a file) into a String.
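To make the boundary concrete, here is a minimal sketch of both conversions; only at these byte boundaries does a charset come into play:

import java.nio.charset.StandardCharsets;

public class EncodingBoundary {
    public static void main(String[] args) {
        String text = "café";

        // String -> bytes: the encoding is chosen here, not stored in the String.
        byte[] utf8Bytes = text.getBytes(StandardCharsets.UTF_8);        // 5 bytes, é -> 0xC3 0xA9
        byte[] latin1Bytes = text.getBytes(StandardCharsets.ISO_8859_1); // 4 bytes, é -> 0xE9
        System.out.println(utf8Bytes.length);   // 5
        System.out.println(latin1Bytes.length); // 4

        // bytes -> String: the decoder must match the encoder, or you get mojibake.
        System.out.println(new String(utf8Bytes, StandardCharsets.UTF_8));      // café
        System.out.println(new String(utf8Bytes, StandardCharsets.ISO_8859_1)); // cafÃ©
    }
}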

Encoding is important to a String when you are serializing it to or deserializing it from disk or the web. There are multiple text encodings: ASCII, Latin-1, UTF-8, and UTF-16 (and UTF-16 does indeed come in two flavors, big-endian and little-endian).
See InputStreamReader for how to load a String from text encoded in a non-default format.
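For example, a minimal sketch of reading a Latin-1 file (the file name is hypothetical):

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReadLatin1 {
    public static void main(String[] args) throws IOException {
        // InputStreamReader decodes the raw bytes using the charset named here.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream("data.txt"),
                                      StandardCharsets.ISO_8859_1))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}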

Related

decoding and encoding strings, ISO-8859-1 to UTF-8 in Java

I have read the other posts on this issue, but the solutions they presented did not work for me. Actually, the official Java documentation also did not work as intended (I am using Java 11): https://docs.oracle.com/javase/tutorial/i18n/text/string.html
My problem is that I am reading one byte at a time from a byte buffer, putting those bytes in a byte array, and making a String out of that byte array. The bytes I read come from an embedded system that can only send ISO-8859-1 bytes, so I end up with a byte array of ISO-8859-1 bytes, and the Java String I get is thus ISO-8859-1 encoded. No problem here. The String in IntelliJ looks like this: [screenshot]
The bytes I am trying to convert from ISO-8859-1 to UTF-8 are the ones highlighted in yellow. I want them to be UTF-8, so in the end the "C9" byte should be replaced by the "C3A9" bytes.
The first step works correctly. I do this: maintenanceResponseString.getBytes(StandardCharsets.UTF_8) and I get exactly the bytes I want, the UTF-8 encoding of the string, so that's good: [screenshot]
The problem comes in when I try to make a String out of these new (and good) bytes, like this:
new String(maintenanceResponseString.getBytes(StandardCharsets.UTF_8), StandardCharsets.UTF_8)
The old bytes are back?! It's like the getBytes(UTF_8) call never actually happened. That is not what the documentation says should happen... What am I missing here? I have done tests, and the string really is still ISO-8859-1 encoded... I don't know what is going on. Where are the bytes from getBytes?
How do you convert a String that contains ISO-8859-1 bytes to UTF-8 bytes? I'm out of alternatives, and I need this done badly for a professional project... this should be easy!
Note : I have tried alternatives like
ByteBuffer buffer = StandardCharsets.UTF_8.encode(s);
return StandardCharsets.UTF_8.decode(buffer).toString();
But the exact same thing happens.
Thank you in advance for your help.
EDIT :
With some info in the comments about how Strings in Java 9+ are no longer represented internally as UTF-16 only, but possibly as Latin-1 (why...), I think that is what made me believe the Strings were "internally encoded in Latin-1", when that is just the default internal representation of the String if we don't specify the encoding we want when writing it out.
From what I understand now, the String itself is not bound to any encoding, and you can CHOOSE the encoding you want when it gets written.
Actually, my issue is that the String ends up written to an XML file via JAXB marshalling in LATIN-1, and I now think the issue lies over there... I will dig further when I have access to my work computer again and report back here.
It turns out there was nothing wrong with Strings and "their encoding". What happened is I got really confused because the debugger shows the contents of the String in a "default internal storage encoding", which is ISO-8859-1 (but can be UTF-16, depending on the contents of the String).
Quote from JEP 254:
We propose to change the internal representation of the String class
from a UTF-16 char array to a byte array plus an encoding-flag field.
The new String class will store characters encoded either as
ISO-8859-1/Latin-1 (one byte per character), or as UTF-16 (two bytes
per character), based upon the contents of the string. The encoding
flag will indicate which encoding is used.
But the internal storage encoding actually doesn't matter. When it is time to be written, the String will be encoded with whatever encoding you choose at the time of writing.
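Since the asker is on Java 11, a minimal sketch of choosing the encoding at write time (the file names are hypothetical):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class WriteDemo {
    public static void main(String[] args) throws IOException {
        String s = "é";
        // The encoding is chosen at write time; the String itself carries none.
        Files.writeString(Path.of("utf8.txt"), s, StandardCharsets.UTF_8);        // file holds 0xC3 0xA9
        Files.writeString(Path.of("latin1.txt"), s, StandardCharsets.ISO_8859_1); // file holds 0xE9
    }
}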
My issue actually was when I was sending the String in an HTTP request with Spring RestTemplate. I didn't have the header specifying the "charset" to use in the request, and RestTemplate defaults to ISO-8859-1 if not told otherwise. I added the charset=utf-8, and the String was correctly written as UTF-8 in the request.
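A sketch of what that fix can look like, assuming Spring's RestTemplate API (the URL and payload are placeholders):

import java.nio.charset.StandardCharsets;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

public class Utf8Request {
    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        // Without an explicit charset, the body may default to ISO-8859-1.
        headers.setContentType(new MediaType(MediaType.APPLICATION_XML, StandardCharsets.UTF_8));
        HttpEntity<String> entity = new HttpEntity<>("<name>é</name>", headers);
        restTemplate.postForEntity("https://example.com/api", entity, String.class);
    }
}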
Thank you to @VGR, @Eugene, and @skomisa for the help.

difference between bytes returned from reading a file and getBytes from String in Java?

Reading file directly into byte array gives a different output compared to reading data into a String and then getting bytes from it.
What form do the bytes read directly from a file take, and how is that different from the bytes you get from a String?
Reading file directly into byte array gives a different output compared to reading data into a String and then getting bytes from it.
Well, it might. And it might not. It depends on how you've read the file as text, and how you've converted the text back into bytes.
If you use the same encoding in both directions and the file originally contained text in that encoding, then you're likely to get the same bytes back. But if you use the wrong encoding (e.g. you read ISO-8859-1-encoded text as UTF-8) or if you use different encodings for the two conversions, then you're very likely to get back different results.
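A small sketch of both cases:

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class RoundTrip {
    public static void main(String[] args) {
        byte[] original = "héllo".getBytes(StandardCharsets.ISO_8859_1);

        // Same encoding in both directions: the bytes survive the round trip.
        String ok = new String(original, StandardCharsets.ISO_8859_1);
        System.out.println(Arrays.equals(original, ok.getBytes(StandardCharsets.ISO_8859_1))); // true

        // Wrong decoding: the lone 0xE9 is not valid UTF-8, so it decodes to U+FFFD,
        // and re-encoding produces different bytes.
        String bad = new String(original, StandardCharsets.UTF_8);
        System.out.println(Arrays.equals(original, bad.getBytes(StandardCharsets.UTF_8)));     // false
    }
}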
Think of text as being a bit like an image format - if you read a .png file and then write out a .jpeg file, you wouldn't expect that to have the same bytes, would you? Likewise if you tried to read a .png file with a JPEG decoder, you'd expect to get garbage out (or more likely an error).
Basically, don't think of text as a sequence of bytes - it's not. Think of it as being entirely separate, with encodings used to convert between text and binary representations. See Marc Gravell's blog post on IO for more details.

Java library to fix incorrectly encoded text using heuristics

I'm dealing with an external web service that is giving me incorrectly encoded (and/or corrupted) Strings (UTF-8) that were most likely either ISO Latin or WINDOWS-1252 but are now UTF-8 (and/or a mixture of ISO/WINDOWS/UTF-8). Lovely A hats (Â) abound.
I obviously cannot fix how the external web service stores its strings so the information is lost. Thus hopes of a 100% translation I know are not possible.
But I was hoping that someone had written a heuristic character-mapping library in Java (it's unlikely someone would deliberately type A hats).
If not, I guess I can port this guy's PHP code: https://stackoverflow.com/a/3521340/318174
UPDATE and explanation: A simple conversion like @VGR answered with will not work. I do not have the original bytes. The data was converted incorrectly at the endpoint (on the SOAP server maybe a getBytes(/*without the correct encoding*/) was done, or maybe the data is stored in the incorrect format). When you convert bytes to Strings and back in Java, the data is not retained unless the encoding is the same everywhere. This is easy to understand if you think of something like ASCII <-> UTF-8. With Windows-1252 or ISO Latin it is much more complicated, because data is not lost but often confused: characters from those encodings can become two bytes, and they are not a subset of UTF-8.
If you don't believe me, you can try doing getBytes() back and forth with various encodings, and you will see data corruption and data loss.
I may be misunderstanding the nature of the incorrectly encoded data, but that PHP code seems like overkill to me. If you have UTF-8 bytes that were passed as individual characters, you should be able to just do:
String fix(String s) {
    // Re-encode the mis-decoded characters back to their original bytes,
    // then decode those bytes as the UTF-8 they really were.
    byte[] bytes = s.getBytes(Charset.forName("windows-1252"));
    return new String(bytes, StandardCharsets.UTF_8);
}
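For example, if the service handed back "Ã©" (the windows-1252 reading of the UTF-8 byte pair 0xC3 0xA9), then fix("Ã©") re-encodes those two characters back to 0xC3 0xA9 and decodes them as UTF-8, returning "é".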

Can a file be encoded in multiple charsets in Java?

I'm working on a Java plugin that would allow people to write to and read from a file by specifying a charset encoding they wish to use. However, I was confused as to how I would use multiple encodings in a single file. For example, suppose that the A characters come from one charset and the B characters come from another: would it be possible to write "AAAAABBBBBAAAAA" to a file?
If it is not possible, is this generally true for any programming language, or specifically for Java? And if it is possible, how would I then proceed to read (decode) the file?
I do not want to use the encode() and decode() methods of Charset, since my tests with them have failed (some charsets were not decoded properly). I also don't want to use third-party programs, for various reasons, so the scope of this question is purely the standard Java packages/code.
Thanks a lot!
N.S.
You'd need to read it as a byte stream and know beforehand at which byte positions the characters start and end, or use some special separator character/byte range which indicates the start and end of the character group. This way you can get the bytes of the specific character group and finally decode them using the desired character encoding.
This problem is not specific to Java. The requirement is just strange. I wonder how it makes sense to mix character encodings like that. Just use one uniform encoding all the time, for example UTF-8, which supports practically all characters mankind is aware of.
Of course it is in principle possible to write text encoded in different character sets into one file, but why would you ever want to do that?
A character encoding is simply a mapping from text characters to bytes and vice versa. A file consists of bytes. When writing a file, the character encoding determines how the characters are converted to bytes, and when reading, it determines how the bytes are converted back to characters.
You could have one part of the file encoded with one character encoding, and another part with another character encoding. You'd have to have some mechanism to keep track of what parts are encoded with what encoding, because the file doesn't automatically keep track of that for you.
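A minimal sketch of that bookkeeping, assuming the writer and reader agree on the segment byte lengths (the file name is hypothetical):

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class MixedEncodings {
    public static void main(String[] args) throws IOException {
        Charset latin1 = StandardCharsets.ISO_8859_1;
        Charset utf8 = StandardCharsets.UTF_8;

        // Write: two segments, each with its own encoding; we track the byte lengths ourselves.
        byte[] a = "AAAAA".getBytes(latin1);
        byte[] b = "BBBBB".getBytes(utf8);
        byte[] all = new byte[a.length + b.length];
        System.arraycopy(a, 0, all, 0, a.length);
        System.arraycopy(b, 0, all, a.length, b.length);
        Files.write(Path.of("mixed.bin"), all);

        // Read: slice the bytes at the positions we tracked, decoding each slice separately.
        byte[] read = Files.readAllBytes(Path.of("mixed.bin"));
        String first = new String(read, 0, a.length, latin1);
        String second = new String(read, a.length, b.length, utf8);
        System.out.println(first + second); // AAAAABBBBB
    }
}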
I was wondering about this as well, because my client just asked a similar question. Like BalusC mentioned, this is not a Java-specific problem.
After a bit of back and forth, I found the real question might be 'multiple encodings of information' instead of a multiple-encoding file.
That is: we have an XML string whose text needs to be encoded as 8859-1, and if we save it as a file, we need to encode it. The default encoding for XML is UTF-8, so we do not necessarily need to encode the whole XML document as 8859-1. The XML node is just a vehicle for passing information over to another system; it is the content (the value of the XML node) that needs to be persisted as 8859-1. So do we need multiple encodings in this case? Probably not. We can still encode the XML as UTF-8 and pass it over; once the client receives the XML, they read the information out of the UTF-8 encoded file and persist the value of the XML node as 8859-1.

How can I find the byte encoding of a TIBCO Rendezvous message?

In my Java application, I am archiving TIBCO RV messages to a file as bytes.
I am writing a small utility app that will play the messages back. This way I can just create a TibrvMsg object from the bytes without having to parse the file and construct the object manually.
The problem I am having is that I am reading a file that was created on a Linux box, and attempting to run my app on a Windows machine. I get an error due to the different charset the file was written in.
So now what I want to do is log each message in a specific charset (UTF-8), so that I don't care what platform I run my playback app on. The app should just read in the file, knowing beforehand the charset the file is written in. I am planning on using the java.nio packages for this, to transform the bytes from one charset to another.
Do I need to know what charset the TIBRV message bytes are encoded in to do the transformation? If so, how can I find this out?
You are taking opaque data and, it would appear, attempting to write it to a file as textual data without escaping the non-textual portions of it (alternatively, you are writing it as raw bytes and then trying to read it as if it were character-based, which is much the same problem).
This is flawed from the very start.
Opaque data should be treated as meaningless and simply stored without modification to give back to an API that does know how to deal with it. If the data must be stored in a textual form then you must losslessly convert the bytes into text. Appropriate encodings are things like base64. Encoding in the sense of character set encoding is NOT lossless if you apply it to raw binary data.
Simply storing the bytes in a file as bytes (not characters) along with a fixed length prefix indicating the length of the message and the subject it was sent on is sufficient to replay RV messages through the system.
As for any text-based fields inside the message: if the encoding matters (I strongly suggest designing the app so that it doesn't), then you have the same problem on replay as you would have had at the original receipt time, namely converting from the source encoding to the desired encoding (hopefully using exactly the same code), so this should be a non-issue for the replaying.
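A minimal sketch of that record format, with a hypothetical subject and payload (extracting the bytes from a real TibrvMsg is out of scope here):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class RawArchive {
    // One record: subject length, subject bytes, payload length, payload bytes.
    static void writeRecord(DataOutputStream out, String subject, byte[] msgBytes) throws IOException {
        byte[] subjectBytes = subject.getBytes(StandardCharsets.UTF_8); // the subject is text, so pick one charset
        out.writeInt(subjectBytes.length);
        out.write(subjectBytes);
        out.writeInt(msgBytes.length);
        out.write(msgBytes); // the payload stays opaque: raw bytes, no charset applied
    }

    static byte[] readRecord(DataInputStream in, StringBuilder subjectOut) throws IOException {
        byte[] subjectBytes = new byte[in.readInt()];
        in.readFully(subjectBytes);
        subjectOut.append(new String(subjectBytes, StandardCharsets.UTF_8));
        byte[] msgBytes = new byte[in.readInt()];
        in.readFully(msgBytes);
        return msgBytes;
    }
}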
This is probably related to Java string encoding, not TIBRV. Though there's this in the documentation:
Strings and Character Encodings
--------------------------------------------------------------------------------
Rendezvous software uses strings in several roles:
* String data inside message fields
* Field names
* Subject names (and other associated strings that are not
strictly inside the message)
* Certified delivery correspondent names
* Group names (fault tolerance)
All these strings (both in C and in wire format) use the character
encoding appropriate to the ISO locale of the sender. For example,
the United States is locale en_US, and uses the Latin-1 character
encoding (also called ISO 8859-1); Japan is locale ja_JP, and uses
the Shift-JIS character encoding.
When two programs exchange messages within the same locale, strings
are always correct. However, when a message sender and receiver use
different character encodings, the receiving program must convert
between encodings as needed. Rendezvous software does not convert
automatically.
EBCDIC
For information about string encoding in EBCDIC environments,
see tibrv_SetCodePages().
So you might want to look at the locale of the machines.
As this (admittedly rather old) mailing list message indicates, little is known about the internal structure of that network protocol. This might make it quite a challenge to do what you're after.
That said, if the messages are just binary blocks of data (as captured from the network), they shouldn't even have a charset. Charsets are for textual data, where they matter since a single character can be encoded in many different ways. Binary data is not composed of characters, so there cannot be an encoding in that sense.
Do I need to know what charset the TIBRV message bytes are encoded in to do the transformation?
Yes. A charset is a method of transforming text into a byte stream and vice versa. Your network data is a byte stream, so when you interpret parts of it as text, you ARE (implicitly or explicitly) using a charset - the question is whether it is the correct one.
Transforming bytes from one charset to another basically means converting them to text using one charset and then back to bytes using another. Note that this can result in the length of the data changing, since many charsets use more than one byte for some characters. In the context of network messages, this could be problematic when it invalidates length fields or causes text fields to overflow. It's probably better not to do any transformation and instead teach the reading app how to deal with varying charsets.
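A small sketch of that length change:

import java.nio.charset.StandardCharsets;

public class LengthChange {
    public static void main(String[] args) {
        String s = "é";
        // The same text occupies different byte counts in different charsets,
        // which is exactly what can break fixed-length fields in a message.
        System.out.println(s.getBytes(StandardCharsets.ISO_8859_1).length); // 1
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length);      // 2
    }
}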
If so, how can I find this out?
Look at the protocol specification.
Read everything into a byte[] from an InputStream, then write the byte[] to a FileOutputStream.
No Reader or Writer should be involved; they do character conversion, and that is wrong here.
Stay away from java.nio until you understand java.io.
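A minimal sketch of that byte-for-byte copy (the file names are hypothetical; InputStream.transferTo requires Java 9+):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ByteCopy {
    public static void main(String[] args) throws IOException {
        try (InputStream in = new FileInputStream("message.bin");
             OutputStream out = new FileOutputStream("copy.bin")) {
            in.transferTo(out); // copies raw bytes; no Reader/Writer, so no charset conversion
        }
    }
}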
