Java: toLowerCase messes up the Unicode symbols

My Code:
// Read the Turkish file contents into the variable currentLine
currentLine = currentLine + "\n\n" + currentLine.toLowerCase();
// Write the contents to a new file
Output:
Yukar mavi gök asağı yağız yer yaratıldık iki arası in oğlu yaratılmış İnsan oğulları üzer ecdadı Bumın haka İste haka tah oturmuş oturarak Türk millet ülke türe idar edivermiş tanz edivermis Dört taraf hep düşman imiş Asker sevk edip dört taraf kavmi hep itaa altına almış hep muti kılmış Başlı baş eğdirmiş dizli diz çöktürmüş
yukar mavi g�k asa��� ya���z yer yarat�ld�k iki aras� in o��lu yarat�lm��� �nsan o��ullar� �zer ecdad� bum�n haka �ste haka tah oturmu�� oturarak t�rk millet �lke t�re idar edivermi�� tanz edivermis d�rt taraf hep d���man imi�� asker sevk edip d�rt taraf kavmi hep itaa alt�na alm��� hep muti k�lm��� ba��l� ba�� e��dirmi�� dizli diz ��kt�rm���
I tried toLowerCase(Locale.getDefault()) and toLowerCase(Locale.ROOT). I still get the same output.
Why is the function returning invalid symbols?
Thanks.

I think the problem comes from not declaring the character encoding when reading and writing the file. In this case Java assumes your platform default character set, which may not be appropriate.
If unsure, use UTF-8, which also covers Turkish (of course, it needs to match the encoding of the file you actually read from).
You may also have to specify the Turkish Locale when calling toLowerCase, since the exact rules may depend on the language the text is in (I'm not familiar with Turkish; it may already work with the defaults).
But then how is it that half of the file has proper encoding?
The first line has the same symbols that you read in. There was no computation done. That can work even with the wrong encoding. For lowercase transformation, Java needs to know the proper encoding.
Now the strange characters have vanished. New '?' characters appeared all over the output
Half-way there. Now that you specified the input character set on your Reader, Java can understand your Turkish characters. But it still cannot output them, so it replaces them with "?". You also need to set the output character set on your Writer.
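For reference, here is a minimal sketch of the complete fix (the file names and UTF-8 are assumptions; use whatever encoding your input file actually has):
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.Locale;

// Read and write with explicit charsets instead of the platform default
try (BufferedReader reader = new BufferedReader(new InputStreamReader(
             new FileInputStream("turkish.txt"), StandardCharsets.UTF_8));
     Writer writer = new OutputStreamWriter(
             new FileOutputStream("out.txt"), StandardCharsets.UTF_8)) {
    String line;
    while ((line = reader.readLine()) != null) {
        // Lowercase with the Turkish locale so the ı/İ dotting rules apply
        writer.write(line + "\n\n" + line.toLowerCase(new Locale("tr", "TR")) + "\n");
    }
}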

I think you will need to pass locale info to the toLowerCase() method. There is an example in the official Java documentation that uses Turkish. Without locale info, toLowerCase() will use the default locale.
Here is how to create a Turkish Locale (note that forLanguageTag expects a BCP 47 tag with a hyphen, not an underscore):
Locale trLocale = Locale.forLanguageTag("tr-TR");
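With that locale, the Turkish dotting rules kick in, for example:
System.out.println("DİYARBAKIR".toLowerCase(trLocale)); // diyarbakır (dotless ı)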

Related

U+FFFD is not available in this font's encoding: WinAnsiEncoding

I'm using PDFBox 2.0.1.
I try to dynamically add some (user provided) UTF-8 text to the form fields and show the result to the user. Unfortunately, either the PDF library is not capable of properly encoding special characters such as "äöü"... or I was not able to find any useful documentation that could help me with this issue.
Can someone tell me what is wrong with the given code sample?
try (PDDocument document = PDDocument.load(pdfTemplate)) {
    PDDocumentCatalog catalog = document.getDocumentCatalog();
    PDAcroForm form = catalog.getAcroForm();
    List<PDField> fields = form.getFields();
    for (PDField field : fields) {
        switch (field.getPartialName()) {
            case "devices":
                // Frontend (JS): userInput = btoa('Gerät')
                String userInput = ...
                // Decode the Base64-encoded user input as UTF-8
                String name = new String(Base64.getDecoder().decode(userInput), "UTF-8");
                field.setValue(name);
                field.setReadOnly(true);
                break;
        }
    }
    form.flatten(fields, true);
    document.save(bos);
}
And here the stacktrace of the error:
java.lang.IllegalArgumentException: U+FFFD is not available in this font's encoding: WinAnsiEncoding
org.apache.pdfbox.pdmodel.font.PDTrueTypeFont.encode(PDTrueTypeFont.java:368)
org.apache.pdfbox.pdmodel.font.PDFont.encode(PDFont.java:286)
org.apache.pdfbox.pdmodel.font.PDFont.getStringWidth(PDFont.java:315)
org.apache.pdfbox.pdmodel.interactive.form.PlainText$Paragraph.getLines(PlainText.java:169)
org.apache.pdfbox.pdmodel.interactive.form.PlainTextFormatter.format(PlainTextFormatter.java:182)
org.apache.pdfbox.pdmodel.interactive.form.AppearanceGeneratorHelper.insertGeneratedAppearance(AppearanceGeneratorHelper.java:373)
org.apache.pdfbox.pdmodel.interactive.form.AppearanceGeneratorHelper.setAppearanceContent(AppearanceGeneratorHelper.java:237)
org.apache.pdfbox.pdmodel.interactive.form.AppearanceGeneratorHelper.setAppearanceValue(AppearanceGeneratorHelper.java:144)
org.apache.pdfbox.pdmodel.interactive.form.PDTextField.constructAppearances(PDTextField.java:263)
org.apache.pdfbox.pdmodel.interactive.form.PDAcroForm.refreshAppearances(PDAcroForm.java:324)
org.apache.pdfbox.pdmodel.interactive.form.PDAcroForm.flatten(PDAcroForm.java:213)
my.application.service.PDFService.generatePDF(PDFService.java:201)
I also found those (related) issues on SO:
pdfbox: ... is not available in this font's encoding
But that does not help me choose the right encoding, or show me how to apply it. IIRC Java uses UTF-16 internally for its strings, so why is the default not enough?
Is that an issue of the PDF document itself, or of the code I use to set the field?
PdfBox encode symbol currency euro
Well, it's dynamic user input, so there are way too many things I would have to replace myself.
Thus, if the PDFBox people decided to fix the broken PDFBox method, this seemingly clean work-around code here would start to fail as it would then feed the fixed method broken input data.
Admittedly, I doubt they will fix this bug before 2.0.0 (and in 2.0.0 the fixed method has a different name), but one never knows...
Unfortunately I was not able to find this other setter method, but it might also apply to a different scope.
EDIT
Updated example code to better represent the problem.
U+FFFD is used to replace an incoming character whose value is unknown or unrepresentable in Unicode; compare the use of U+001A as a control character to indicate the substitute function (source).
That said, it is likely that the character gets mangled somewhere along the way. Maybe the input is not actually UTF-8, and that's why the character ends up broken.
As a general rule, you should write only ASCII characters in source code. You can still represent the whole Unicode range using the escaped form \uXXXX; in this case, ä -> \u00E4.
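For example, the two literals below produce the same string, but the second survives compilation regardless of the source file's encoding:
String a = "Gerät";      // depends on the source file's encoding being declared correctly
String b = "Ger\u00E4t"; // ASCII-only source, always the same string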
-- UPDATE --
Apparently the problem is in how the user input gets encoded/decoded between client and server using the JS function btoa. A solution to this problem can be found at this link:
Using Javascript's atob to decode base64 doesn't properly decode utf-8 strings
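For illustration, here is the round trip as it should look on the Java side (a sketch; plain btoa() in the browser encodes the string's Latin-1 code units rather than UTF-8 bytes, which is exactly what mangles characters like 'ä'):
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// What the client should send: Base64 over the UTF-8 bytes of the text
String sent = Base64.getEncoder()
        .encodeToString("Gerät".getBytes(StandardCharsets.UTF_8)); // "R2Vyw6R0"
// Server side: Base64-decode, then interpret the bytes as UTF-8
String received = new String(Base64.getDecoder().decode(sent), StandardCharsets.UTF_8);
System.out.println(received); // Gerät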

How do I detect if a text file is encoded using Windows-1256?

I would really like to know whether the file is Windows-1256 or not. Is there a way in Java to recognize whether a text file is Windows-1256?
You could use this API to check the encoding:
http://jchardet.sourceforge.net/
And have a look at this question:
Java : How to determine the correct charset encoding of a stream
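For what it's worth, here is roughly how jchardet is driven, adapted from the sample detector that ships with the library (a sketch; verify the API against your jchardet version, and note that the detector reports a guess, not a certainty):
import java.io.*;
import org.mozilla.intl.chardet.nsDetector;
import org.mozilla.intl.chardet.nsICharsetDetectionObserver;

nsDetector det = new nsDetector();
det.Init(new nsICharsetDetectionObserver() {
    public void Notify(String charset) {
        System.out.println("Detected: " + charset); // e.g. "windows-1256"
    }
});
try (InputStream in = new BufferedInputStream(new FileInputStream("input.txt"))) {
    byte[] buf = new byte[1024];
    int len;
    boolean done = false, isAscii = true;
    while ((len = in.read(buf)) != -1) {
        if (isAscii) isAscii = det.isAscii(buf, len);
        if (!isAscii && !done) done = det.DoIt(buf, len, false);
    }
}
det.DataEnd(); // fires Notify if the detector is confident
String[] guesses = det.getProbableCharsets(); // otherwise, ranked candidates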
Add an encoding header to the file. Many text editors do this:
# -*- coding: cp1256 -*-
Other than that, there is no reliable way to do this.
The problem is that the cp12xx encodings aren't very different from each other. They look different on screen, but in the bytes of the files there is nothing that says whether 0x8a means the Arabic ٹ (1256), Š (1250 and 1252), or nothing at all (1255).
PS: the last sentence looks wrong because of right-to-left rendering; the code "(1256)" actually comes after the Arabic character.
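You can see the ambiguity directly; the very same byte decodes to different characters depending on the codepage you assume (a small sketch; windows-1255 may require the JDK's extended charsets):
import java.nio.charset.Charset;

byte[] b = { (byte) 0x8A };
System.out.println(new String(b, Charset.forName("windows-1256"))); // Arabic letter
System.out.println(new String(b, Charset.forName("windows-1252"))); // Š
System.out.println(new String(b, Charset.forName("windows-1255"))); // unmapped -> U+FFFD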
Say you have the choice of Windows-1256 (Arabic), UTF-8, and Windows-1252 (Western Europe). Then you can register proofs of wrong encoding, say for UTF-8 (a nonsensical byte sequence) and Windows-1252. Many Windows-1252 byte sequences are unparsable as UTF-8 anyway, so a strict decode will reject them:
byte[] bytes = Files.readAllBytes(file.toPath());
String content;
try {
    // A strict decoder throws instead of silently substituting U+FFFD
    content = StandardCharsets.UTF_8.newDecoder()
            .onMalformedInput(CodingErrorAction.REPORT)
            .decode(ByteBuffer.wrap(bytes))
            .toString();
} catch (CharacterCodingException e) {
    content = new String(bytes, Charset.forName("windows-1256"));
}
(a concrete version of the pseudo-code, using java.nio's strict decoding)

I have UTF-8 - but still get "Invalid byte 1 of 1-byte UTF-8 sequence"

I create an XML string on the fly (NOT reading from a file). Then I use Cocoon 3 to transform it via FOP to a PDF. Somewhere in the middle, Xerces runs. When I use the hardcoded stuff, everything works. As soon as I put a German umlaut into the database and enrich my XML with that data, I get:
Caused by: org.apache.cocoon.pipeline.ProcessingException: Can't parse the XML string.
at org.apache.cocoon.sax.component.XMLGenerator$StringGenerator.execute(XMLGenerator.java:326)
at org.apache.cocoon.sax.component.XMLGenerator.execute(XMLGenerator.java:104)
at org.apache.cocoon.pipeline.AbstractPipeline.invokeStarter(AbstractPipeline.java:146)
at org.apache.cocoon.pipeline.AbstractPipeline.execute(AbstractPipeline.java:76)
at de.grobmeier.tab.webapp.modules.documents.InvoicePipeline.generateInvoice(InvoicePipeline.java:74)
... 87 more
Caused by: com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: Invalid byte 1 of 1-byte UTF-8 sequence.
at com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.invalidByte(UTF8Reader.java:684)
at com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.read(UTF8Reader.java:554)
I have then debugged my app and found out that my "Ä" (which comes from the database) has the byte value of 196, which is C4 in hex. This is what I expected according to this: http://www.utf8-zeichentabelle.de/
I do not know why my code fails.
I have then tried to add a BOM manually, like that:
byte[] bom = new byte[3];
bom[0] = (byte) 0xEF;
bom[1] = (byte) 0xBB;
bom[2] = (byte) 0xBF;
String myString = new String(bom) + inputString;
I know this is not exactly good, but I tried it - of course it failed. I have tried to add an XML header in front:
<?xml version="1.0" encoding="UTF-8"?>
Which failed too. Then I combined it. Failed.
After all I tried something like that:
xmlInput = new String(xmlInput.getBytes("UTF8"), "UTF8");
Which is doing nothing in fact, because it is already UTF-8. Still it fails.
So... any ideas what I am doing wrong and what Xerces is expecting from me?
Thanks
Christian
If your database contains only a single byte (with value 0xC4) then you aren't using UTF-8 encoding.
The character "LATIN CAPITAL LETTER A WITH DIAERESIS" has a code-point value U+00C4, but UTF-8 can't encode that in a single byte. If you check the third column "UTF-8 (hex.)" on UTF8-zeichentabelle.de you'll see that UTF-8 encodes that as 0xC3 84 (two bytes).
Please read Joel's article "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)" for more info.
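You can verify the two-byte encoding in a couple of lines (a quick sketch):
byte[] utf8 = "\u00C4".getBytes(java.nio.charset.StandardCharsets.UTF_8); // "Ä"
System.out.printf("%02X %02X%n", utf8[0] & 0xFF, utf8[1] & 0xFF); // prints: C3 84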
EDIT: Christian found the answer himself; it turned out to be a problem in the Cocoon 3 SAX component (I guess in the alpha 3 version). If you pass XML as a String into the XMLGenerator class, something goes wrong during SAX parsing, causing this mess.
I looked up the code to find the actual problem in Cocoon-stax:
if (XMLGenerator.this.logger.isDebugEnabled()) {
    XMLGenerator.this.logger.debug("Using a string to produce SAX events.");
}
XMLUtils.toSax(new ByteArrayInputStream(this.xmlString.getBytes()), XMLGenerator.this.getSAXConsumer());
As you can see, the call to getBytes() creates a byte array in the JRE's default encoding, which then fails to parse: the XML declares itself to be UTF-8, but the bytes are now likely in your Windows codepage.
As a workaround, one can use the following:
new org.apache.cocoon.sax.component.XMLGenerator(xmlInput.getBytes("UTF-8"), "UTF-8");
This will trigger the right internal actions (as Christian found out by experimenting with the API).
I've opened an issue in Apache's bug tracker.
EDIT 2: The issue is fixed and will be included in an upcoming release.
The C4 you see on that page refers to the Unicode code point, U+00C4. The byte sequence used to represent such a code point in UTF-8 is NOT "\xC4". What you want is what's in the UTF-8 (hex.) column, namely "\xC3\x84".
Therefore, your data is not in UTF-8.
You can read about how data is encoded in UTF-8 here.
I'm running Windows 7 with TextPad as a text editor for manually building the XML data file. I was getting the MalformedByteSequenceException. My encoding declaration in the XML file was UTF-8. After poking around, I found that my editor had a tool "Tools ... Convert to DOS". I did that, re-saved the file, the exception went away, and my code ran fine.
I then looked at the default encoding for that file type in my editor. It was ASCII, though when I changed the XML encoding parameter to ASCII, I got another, different MalformedByteSequenceException.
So on Windows systems, you might try keeping the XML encoding declaration at UTF-8 but saving the file DOS-encoded. I did not dig any further into why this works.

Converting from Java String to Windows-1252 Format

I want to send a URL request, but the parameter values in the URL can contain French characters (e.g. è). How do I convert from a Java String to Windows-1252 format (which supports the French characters)?
I am currently doing this:
String encodedURL = new String (unencodedUrl.getBytes("UTF-8"), "Windows-1252");
However, it makes:
param=Stationnement extèrieur into param=Stationnement extèrieur.
How do I fix this? Any suggestions?
Edit for further clarification:
The user chooses values from a drop down. When the language is French, the values from the drop down sometimes include French characters, like 'è'. When I send this request to the server, it fails, saying it is unable to decipher the request. I have to figure out how to send the 'è' as a different format (preferably Windows-1252) that supports French characters. I have chosen to send as Windows-1252. The server will accept this format. I don't want to replace each character, because I could miss a special character, and then the server will throw an exception.
Use URLEncoder to encode parameter values as application/x-www-form-urlencoded data:
String param = "param="
+ URLEncoder.encode("Stationnement extr\u00e8ieur", "cp1252");
See here for an expanded explanation.
Try using
String encodedURL = new String(unencodedUrl.getBytes("UTF-8"), Charset.forName("Windows-1252"));
As per McDowell's suggestion, I tried encoding with:
URLEncoder.encode("stringValueWithFrenchCharacters", "cp1252") but it didn't work perfectly. I replaced "cp1252" with HTTP.ISO_8859_1, because I believe Android does not support Windows-1252 yet. It does allow ISO_8859_1, and after reading here, this supports MOST of the French characters, with the exception of 'Œ', 'œ', and 'Ÿ'.
So doing this made it work:
URLEncoder.encode(frenchString, HTTP.ISO_8859_1);
Works perfectly!

UTF-8 character encoding in Java

I am having some problems getting some French text to convert to UTF8 so that it can be displayed properly, either in a console, text file or in a GUI element.
The original string is
HANDICAP╔ES
which is supposed to be
HANDICAPÉES
Here is a code snippet that shows how I am using the jackcess database driver to read the Access MDB file in an Eclipse/Linux environment.
Database database = Database.open(new File(filepath));
Table table = database.getTable(tableName, true);
Iterator<Map<String, Object>> rowIter = table.iterator();
while (rowIter.hasNext()) {
    Map<String, Object> row = rowIter.next();
    // convert fields to UTF
    Map<String, Object> rowUTF = new HashMap<String, Object>();
    try {
        for (String key : row.keySet()) {
            Object o = row.get(key);
            if (o != null) {
                String valueCP850 = o.toString();
                // String nameUTF8 = new String(valueCP850.getBytes("CP850"), "UTF8"); // does not work!
                String valueISO = new String(valueCP850.getBytes("CP850"), "ISO-8859-1");
                String valueUTF8 = new String(valueISO.getBytes(), "UTF-8"); // works!
                rowUTF.put(key, valueUTF8);
            }
        }
    } catch (UnsupportedEncodingException e) {
        System.err.println("Encoding exception: " + e);
    }
}
In the code you'll see where I want to convert directly to UTF8, which doesn't seem to work, so I have to do a double conversion. Also note that there doesn't seem to be a way to specify the encoding type when using the jackcess driver.
Thanks,
Cam
New analysis, based on new information.
It looks like your problem is with the encoding of the text before it was stored in the Access DB. It seems it had been encoded as ISO-8859-1 or windows-1252, but decoded as cp850, resulting in the string HANDICAP╔ES being stored in the DB.
Having correctly retrieved that string from the DB, you're now trying to reverse the original encoding error and recover the string as it should have been stored: HANDICAPÉES. And you're accomplishing that with this line:
String valueISO = new String(valueCP850.getBytes("CP850"), "ISO-8859-1");
getBytes("CP850") converts the character ╔ to the byte value 0xC9, and the String constructor decodes that according to ISO-8859-1, resulting in the character É. The next line:
String valueUTF8 = new String(valueISO.getBytes(), "UTF-8");
...does nothing. getBytes() encodes the string in the platform default encoding, which is UTF-8 on your Linux system. Then the String constructor decodes it with the same encoding. Delete that line and you should still get the same result.
More to the point, your attempt to create a "UTF-8 string" was misguided. You don't need to concern yourself with the encoding of Java's strings--they're always UTF-16. When bringing text into a Java app, you just need to make sure you decode it with the correct encoding.
And if my analysis is correct, your Access driver is decoding it correctly; the problem is at the other end, possibly before the DB even comes into the picture. That's what you need to fix, because that new String(getBytes()) hack can't be counted on to work in all cases.
Original analysis, based on no information. :-/
If you're seeing HANDICAP╔ES on the console, there's probably no problem. Given this code:
System.out.println("HANDICAPÉES");
The JVM converts the (Unicode) string to the platform default encoding, windows-1252, before sending it to the console. Then the console decodes that using its own default encoding, which happens to be cp850. So the console displays it wrong, but that's normal. If you want it to display correctly, you can change the console's encoding with this command:
CHCP 1252
To display the string in a GUI element, such as a JLabel, you don't have to do anything special. Just make sure you use a font that can display all the characters, but that shouldn't be a problem for French.
As for writing to a file, just specify the desired encoding when you create the Writer:
OutputStreamWriter osw = new OutputStreamWriter(
new FileOutputStream("myFile.txt"), "UTF-8");
String s = "HANDICAP╔ES";
System.out.println(new String(s.getBytes("CP850"), "ISO-8859-1")); // HANDICAPÉES
This shows the correct string value, which means the text was originally encoded with ISO-8859-1 and then incorrectly decoded as CP850 (as pointed out in a comment, CP1252 a.k.a. Windows ANSI is also a possible original encoding, since É has the same code point there as in ISO-8859-1).
Align your environment and binary pipelines so that they all use one and the same character encoding. You can't and shouldn't convert between encodings after the fact; you would risk losing information in the non-ASCII range that way.
Note: do NOT use the above code snippet to "fix" the problem! That would not be the right solution.
Update: you are apparently still struggling with the problem. I'll repeat the important parts of the answer:
Align your environment and binary pipelines so that they all use one and the same character encoding.
You can not and should not convert between encodings after the fact. You would risk losing information in the non-ASCII range that way.
Do NOT use the above code snippet to "fix" the problem! That would not be the right solution.
To fix the problem you need to choose character encoding X which you'd like to use throughout the entire application. I suggest UTF-8. Update MS Access to use encoding X. Update your development environment to use encoding X. Update the java.io readers and writers in your code to use encoding X. Update your editor to read/write files with encoding X. Update the application's user interface to use encoding X. Do not use Y or Z or whatever at some step. If the characters are already corrupted in some datastore (MS Access, files, etc), then you need to fix it by manually replacing the characters right there in the datastore. Do not use Java for this.
If you're actually using the "command prompt" as the user interface, then you're lost. It doesn't support UTF-8. As suggested in the comments and in the article linked there, you need to create a Swing application instead of relying on the restricted command prompt environment.
You can specify the encoding when opening the database. This approach was perfect and solved my encoding problem:
DatabaseImpl open = DatabaseImpl.open(new File("main.mdb"), true, null,
        Database.DEFAULT_AUTO_SYNC,
        java.nio.charset.Charset.availableCharsets().get("windows-1251"), null, null);
Table table = open.getTable("FolderInfo");
Using "ISO-8859-1" helped me deal with the French charactes.
