How do I decode data from a TCP socket - java

I am trying to make a very simplistic chat program with a server made in python and the client in java. However I have no idea how to decode the data which the server receives from the client. The client sends and encodes to UTF-8.
Just printing it looks like this: http://i.imgur.com/0usK6j7.jpg
And decoding from UTF-8 first it looks like this: http://i.imgur.com/Ctwivl4.jpg
I assume that the NUL character or \x00 can be removed. The same goes for the b'' which wraps the entire message. The second character seems to specify the length of the message. But how do I decode this? Should I just remove characters manually? I know this is quite a basic question and has probably been asked before, but I don't even know what to search for.

In the Java client I have a DataOutputStream object which I use with this method: out.writeUTF(input);
According to the documentation of that method, it doesn't write UTF-8 to the output stream. It says "First, two bytes are written to the output stream", which explains the 16-bit lengths that precede your strings. And even after that it doesn't write UTF-8; it writes Java's own idiosyncratic encoding, which it calls Modified UTF-8 and which is actually a variant of CESU-8, not UTF-8.
So first of all, you need to clarify what format exactly you wish to use to communicate between the client and server: the protocol. Is it plain UTF-8? Is it the bizarre structured encoding that writeUTF emits? Is it something else? Then write both your client and server to follow that specification.
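For example, if you decide the protocol is simply newline-delimited UTF-8 text, a minimal sketch of the Java side could look like this (socket is assumed to be your connected Socket; the names are made up):
Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8);
out.write(input);
out.write('\n'); // newline marks the end of one chat message
out.flush();
The Python side would then just decode whatever it receives with .decode('utf-8') and split on '\n', with no length bytes or Modified UTF-8 to worry about.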

Related

Comparing strings passed through socket UTF8

I have an interesting problem here.
First I have a UI in Java. The UI at one point connects to a rpi4 on the network via a socket. From there data is sent over the socket using .writeUTF(string).
On the rpi4 side, I'm running a simple Python 3 script. Its sole purpose is to spit out anything that comes over the socket and it does. But before it does I use recv.decode('utf-8') to decode the string.
From Java I send "fillOpen"
In python after decoding it prints "fillOpen"
The issue:
Performing a string compare in the python script on the decoded string always results in false. I have set it up as such:
Command = recv.decode('utf-8')
if Command == "fillOpen":
    # do work
I have also tried to not decode the string and compare to an encoded string. As such:
Command = recv
fillOpenCommand = "fillOpen".encode('utf-8')
if fillOpenCommand == Command:
    # do work
None of these comparisons result in true.
I have read that the Java writeUTF is a UTF8 encoding but slightly "different"?
Can I adjust the .writeUTF to work with the Python 3 decoder? Is there an alternative for sending data that can be parsed then have a string comp applied via Python that would work?
Thank you guys.
Assuming you are using the writeUTF method as defined in the Java DataOutput interface:
The output from writeUTF starts with two bytes of length information. You can skip it or you can use it to make sure you have received a complete message.
The easiest thing to do is to skip it:
Command = recv[2:].decode('utf-8')
If your commands are simply ASCII and don't contain things like user input, emojis, musical notation, this is good enough. Otherwise, you still have a problem. The way writeUTF handles "surrogate pair" characters is not valid "utf-8", and decode('utf-8') will throw a UnicodeDecodeError. If I were you, in this case I would stop using writeUTF and start using methods that produce standard UTF-8 encoded data.
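If you want to keep a two-byte length prefix but with standard UTF-8, a sketch of the Java side might look like the following (assuming out is your DataOutputStream and input is the message string):
byte[] payload = input.getBytes(StandardCharsets.UTF_8);
out.writeShort(payload.length); // two-byte length prefix like writeUTF, but counting real UTF-8 bytes (messages under 32 KB)
out.write(payload);
out.flush();
Your Python recv[2:].decode('utf-8') would keep working, and the length bytes could also be used to frame complete messages, without ever hitting the surrogate-pair problem.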

ISO-8859-1 to UTF-8 in Java

An XML containing 哈瓦那 (UTF-8) is sent to Service A.
Service A sends it to Service B.
The string was encoded to å“ˆç“¦é‚£ (ISO-8859-1).
How do I encode it back to 哈瓦那? Considering that all strings in Java are UTF-16. Service B has to compare it as 哈瓦那, not å“ˆç“¦é‚£.
Thanks.
When you read a text file, you have to read it using the actual encoding used to create the file. If you specify the appropriate encoding, you'll get the correct characters in memory. So, if the same file (semantically) exists in two versions (UTF-8 encoded and ISO-8859-1), reading the first one with UTF-8 and the second one with ISO-8859-1 will lead to exactly the same chars in memory.
The above is true only if it made sense to encode the file in ISO-8859-1 in the first place. UTF-8 is able to store every unicode character. But ISO-8859-1 is able to encode only a small subset of the unicode characters (western languages characters). The characters you posted literally look like Chinese to me, and I don't think encoding them in ISO-8859-1 is even possible without losing everything.
I think you are misdiagnosing the problem:
An XML containing 哈瓦那 (UTF-8) is sent to Service A.
OK ...
Service A sends it to Service B.
OK ...
The string was converted to å“ˆç“¦é‚£ (ISO-8859-1).
This is not correct. The string has not been "converted". Rather, it has been decoded with the wrong character encoding. Specifically, it looks very much like something has taken UTF-8 encoded bytes, and assumed that they are ISO-8859-1 encoded, and decoded them accordingly.
Can you unpick this? It depends where the mistaken decoding first occurred. If it happens in Service B, then you should be able to relabel the data source as UTF-8, and then decode it correctly. On the other hand, if the first mistaken decoding happens in service A, then you could be out of luck. A mistaken decoding can result in loss of data as unrecognized codes are replaced with some other character. If that happens, the original data will be gone forever.
In either case, the best way to deal with this is to figure out what is getting the wrong character encoding mixed up, and fix that. Perhaps the XML needs to be fixed to specify the charset / encoding. Perhaps, the transport mechanism (e.g. HTTP request or response) needs to be corrected to include the proper document encoding.
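If the mistaken decode used ISO-8859-1 (which maps every byte to a character, so no bytes are dropped) and it is Service B that holds the garbled String, a sketch of the reversal might look like this (garbled is a placeholder name, and this only works if no characters were replaced or lost earlier in the chain):
String repaired = new String(garbled.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
Re-encoding with ISO-8859-1 recovers the original UTF-8 bytes, which can then be decoded correctly.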
Use writers and readers to encode/decode your input/output streams:
String yourText = "...";
OutputStream yourOutputStream = ...; // e.g. the socket or file output stream you are writing to
Writer out = new OutputStreamWriter(yourOutputStream, "UTF-8");
out.write(yourText);
out.flush();
Same for reader.

How to "fix" broken Java Strings (charset-conversion)

I'm running a Servlet that takes POST requests from websites that aren't necessarily encoded in UTF-8. These requests get parsed with GSON and information (mainly strings) end up in objects.
Client side charset doesn't seem to be used for any of this, as Java just stores Strings in Unicode internally.
Now if a page sending a request has a non-unicode-charset, the information in the strings is garbled up and doesn't represent what was sent - it seems to be misinterpreted somewhere either in the process of being stringified by the servlet, or parsed by gson.
Assuming there is no easy way of fixing the root of the issue, is there a way of recovering that information, given the (misinterpreted) Java Strings and the charset identifier (i.e. "Shift_JIS", "Windows-1255") used to display it on the client's side?
I haven't had need to do this before, but I believe that
final String realCharsetName = "Shift_JIS"; // for example
new String(brokenString.getBytes(), realCharsetName);
stands a good chance of doing the trick.
(This does however assume that encoding issues were entirely ignored while reading, and so the platform's default character set was used (a likely assumption since if people thought about charsets they probably would have got it right). It also assumes you're decoding on a machine with the same default charset as the one that originally read the bytes and created the String.)
If you happen to know exactly which charset was incorrectly used to read the string, you can pass it into the getBytes() call to make this 100% reliable.
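For example, if you knew the bytes had been wrongly decoded as ISO-8859-1 (an illustrative choice here) while the client actually sent Shift_JIS, the fully explicit version would be something like:
String fixed = new String(brokenString.getBytes("ISO-8859-1"), "Shift_JIS");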
Assuming that it's obtained as a POST request parameter the following way
String string = request.getParameter("name");
then you need to URL-encode the string back to the original query string parameter value using the charset which the server itself was using to decode the parameter value
String original = URLEncoder.encode(string, "UTF-8");
and then URL-decode it using the intended charset
String fixed = URLDecoder.decode(original, "Shift_JIS");
As the better alternative, you could also just instruct the server to use the given charset directly before obtaining any request parameter by ServletRequest#setCharacterEncoding().
request.setCharacterEncoding("Shift_JIS");
String string = request.getParameter("name");
There's by the way no way to know which charset the client used to URL-encode the POST request body. Almost none of the clients specify it in the Content-Type request header, otherwise the ServletRequest#setCharacterEncoding() call would already be done implicitly by the servlet API based on that. You can determine whether it was specified by checking getCharacterEncoding(); if it returns null, then the client has specified none.
However, this does of course not work if the client has already properly encoded the value as UTF-8 or any other charset; the Shift_JIS treatment would break it again. There exist tools/APIs to guess the original charset based on the obtained byte sequence, but that's not 100% reliable. If your servlet concerns a public API, then you should document properly that it only accepts UTF-8 encoded parameters whenever the charset is not specified in the request header. You can then move the problem to the client side and point out their mistake.
Am I correct that what you get is a string that was parsed as if it were UTF-8 but was encoded in Windows-1255? The solution would be to encode your string in UTF-8 and decode the result as Windows-1255.
The correct way to fix the problem is to ensure that when you read the content, you do so using the correct character encoding. Most frameworks and libraries will take care of this for you, but if you're manually writing servlets, it's something you need to be aware of. This isn't a shortcoming of Java. You just need to pay attention to the encodings. Specifically, the Content-Type header should contain useful information.
Any time you convert from a byte stream to a character stream in Java, you should supply a character encoding so that the bytes can be properly decoded into characters. See for example the InputStreamReader constructors.
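For instance, a minimal sketch of reading a servlet request body with an explicit charset (UTF-8 is assumed here; in practice use whatever the Content-Type header declares):
BufferedReader reader = new BufferedReader(
        new InputStreamReader(request.getInputStream(), StandardCharsets.UTF_8));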

Problem transmitting null character over sockets

I am writing a small Java server, and a matching client in C++, which implement a simple IM service over the STOMP protocol.
The protocol specifies that every frame (message that passes between server and client, if you will) must end with a null character, which in code I refer to as '\0', both in Java and in C++.
However, when I transmit a frame over TCP via sockets, the null character simply does not show up, on either side. I am working with UTF-8 encoding, and tried switching to ASCII, didn't help.
What am I doing wrong?
Whether you are encoding text as ASCII or UTF-8, you are converting your "letters" to a stream of bytes. You need to add a zero byte to the end of the message strings yourself.
[Guessing] You may be using a high-level library with a method like "WriteLine(String line)" to send the data over the network. The documentation for that method will describe what bytes are actually sent, which is typically the message text in the current encoding (ASCII, UTF-8, etc.) followed by a line termination sequence, usually byte 13, byte 10, or a combination of them ('\n', '\r\n').
Use the low-level Write() method or WriteBytes() method (depending on your libraries). Convert the text to ASCII or UTF-8, add the zero byte to the end, and send exactly what you want to send.
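On the Java side, a sketch of sending one frame with an explicit terminator might look like this (socket and frameText are placeholder names):
OutputStream out = socket.getOutputStream();
out.write(frameText.getBytes(StandardCharsets.UTF_8)); // the frame text itself
out.write(0); // the explicit NUL terminator the protocol requires
out.flush();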
I'd recommend downloading Wireshark and monitoring the transmission to see if the problem is on the sending or the receiving end.
Are you transmitting a buffer or a string? If you transmit a string, the null character will be terminating the string and won't be transmitted. Using a buffer, you can specify how many bytes you want to transmit and include the null character.
Of course, the problem can be both on the transmission and the receiving side.
The first thing you need to do is use Wireshark (or something similar) as suggested by Spencer.
If it's a transmit-side issue, double check that you are encoding the message properly by adding appropriate diagnostic traces to your code.
If it's a receive-side issue, is there a way to set up the delimiting character on the receive socket? There might be a socket option that says whether to include or exclude the delimiting character. Maybe it's being transmitted properly, but the receive socket is stripping it off.
You are in C++?
A newbie mistake is putting the NUL at the end of the string: "Foo!\x00".
Of course, the thing that writes the string treats that first NUL as the end of the string, and does not transmit it. You need to write the NUL character '\x00' explicitly as a character (with putChar or however C++ does it), not as part of a string.

Decoding split 16-bit character in Java

In my application, I receive a URL-encoded UTF-8 string, which is split up by the sending client. After splitting, each message part includes some header information which is meant to be used to reconstruct the message.
With English characters, it's pretty straightforward
String content = new String(request.getParameter("content").getBytes("UTF-8"));
I store this along with the header information in a buffer for each received part. When all parts have been received, I simply recompose the message by concatenating the individual parts according to the header information.
With languages that use 16-bit encodings this is sometimes not working as expected. Everything works fine if the split does NOT happen in the middle of a single character.
For instance here's a string of three Hebrew characters being sent by the client:
%D7%93%D7%99%D7%91
If this winds up split as follows: {%D7%93%D7%99} {%D7%91}, reconstruction isn't a problem.
However sometimes the client splits it up in the middle (example: {%D7%93%D7} {%99%D7%91})
When this happens, after reconstruction I get two � characters at the boundary point instead of the single correct Hebrew character.
I thought the inability to correctly retain the single-byte information was related to passing around Strings, so I tried passing the byte array from request.getParameter("content").getBytes("UTF-8") to the buffer without wrapping it in a String. In the buffer I joined all these arrays BEFORE converting the final array to a string.
Even after doing this, it appears I still "lost" that information held by the single bytes. I'm guessing this is because the getBytes("UTF-8") method can't correctly resolve the single bytes since they are not valid characters. Is that right?
Is there any way I can get around this and preserve these tail/head bytes?
Your client is the problem here. Apparently it treats the text data as a byte array for the purpose of splitting it up, and then sending the invalid fragments as text (HTTP request parameters are inherently textual). At that point, you have already lost.
You either have to change the client to split the data as text (i.e. along character boundaries), or change your protocol to send the fragments as binary data, i.e. not as a parameter but as the request body, to be retrieved via ServletRequest.getInputStream() - then, concatenating the data before decoding it should work.
(Caveat: the above assumes that you are indeed writing Servlet code, which I inferred from the request.getParameter() method; but even if that's a coincidence the same principles apply: either split the data as a String before any conversion to byte[] happens on the client side, or make sure you concatenate the byte arrays on the server before any conversion to String happens.)
You must first collect all bytes and then convert them all at once into a string.
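A sketch of that approach in Java, assuming the fragments arrive as raw bytes (for example via request.getInputStream()) and receivedParts is a placeholder list of those fragments:
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
for (byte[] part : receivedParts) {
    buffer.write(part, 0, part.length); // accumulate raw bytes, no decoding yet
}
String message = new String(buffer.toByteArray(), StandardCharsets.UTF_8); // decode once, at the end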
The following scheme is a hack, but it should work in your case.
Set your server/page to Latin-1 mode. If this is a GET, the client has no way to set the encoding; you have to do this on the server's end. For example, on Tomcat you need to add URIEncoding="iso-8859-1" to the connector.
Get the content as Latin-1. It will be the wrong value at this point, but don't worry:
String content = request.getParameter("content");
Concatenate the string as Latin-1.
data = data + content;
When you get the whole thing, you need to re-encode the string as UTF-8 like this,
String value = new String(data.getBytes("iso-8859-1"), "utf-8");
The value should contain the correct characters.
You never need to convert a String to bytes and then back to a String in Java; it is completely pointless. Once a series of bytes has been decoded to a String, it is in Java's internal String encoding (UTF-16).
The problem you have is that the application server is making an assumption about the encoding of the incoming HTTP request, usually the platform encoding. You can give the application server a hint as to the expected encoding by calling ServletRequest.setCharacterEncoding(String) before anything else calls getParameter().
Browsers assume that form fields should be submitted back to the server using the same encoding that the page was served with. This is a general rule, as the HTTP spec doesn't have a way to specify the encoding of the incoming request, only the response.
Spring has a nice Filter to do this for you, CharacterEncodingFilter; if you define it as the very first filter in web.xml, most of your encoding issues will go away.
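If you are not using Spring, a minimal hand-rolled equivalent could look like the sketch below (this is not Spring's implementation, and the class name is made up); register it as the first filter in web.xml:
import java.io.IOException;
import javax.servlet.*;

public class ForceUtf8Filter implements Filter {
    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // only force a charset when the client did not declare one itself
        if (request.getCharacterEncoding() == null) {
            request.setCharacterEncoding("UTF-8");
        }
        chain.doFilter(request, response);
    }

    public void destroy() { }
}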
