The problem I'm trying to fix is this:
Users of our application are copying and pasting text from Windows applications such as Word, and our application is not recognizing the single/double smart quotes or bullet characters.
These are the steps I've taken so far to get this data into UTF format:
Inside server.xml, in the Connector tag, I added the attribute URIEncoding="UTF-8".
In the bean charged with storing the input, I converted the String holding the input note text into a UTF-8 byte[], then built a new String from those bytes and assigned it back to inputNoteText. Please see directly below for condensed code on this.
byte[] bytesInUTF8inputNoteText = inputNoteText.getBytes("UTF-8");
inputNoteText = new String(bytesInUTF8inputNoteText, "UTF-8");
this.var = inputNoteText;
In the setter charged with holding the result of the DB query:
setNoteText(noteText) converts the note data coming from the database query into UTF-8 bytes, then back into a String, which it assigns to the noteText property. Also below:
public void setNoteText(String noteText) throws UnsupportedEncodingException {
    byte[] bytesInUTF8inputNoteText = noteText.getBytes("UTF-8");
    String noteTextUTF8 = new String(bytesInUTF8inputNoteText, "UTF-8");
    this.noteText = noteTextUTF8;
}
In SQL Server I changed the column's data type from text to nvarchar(MAX) to store the data as Unicode, even though nvarchar holds UTF-16 (UCS-2), a different Unicode encoding.
What I see when I copy/paste from an MS Word doc into our JSF input textbox:
In Eclipse, if I set a watch on the property in the bean, once the data in that String property has been converted to UTF-8, all characters show correctly. When I post to SQL Server, the string held in the nvarchar(max) column shows all characters correctly too. Then, when the ResultSet is returned and the holding property is populated with the String from the DB query, it also shows as correctly formatted.

BUT... somewhere between the correct string value sitting in the property tied to the JSF page (JSF 1.2, by the way) and the rendered page, the value gets mangled, so I see question marks where I should see single/double quotes and bullet points. I hope someone has run into this type of issue before and can shed some light on what I need to do to fix it. It seems kind of like a JSF bug. Thanks in advance for your input!
Try this:
noteText = new String(noteText.getBytes("ISO-8859-1"), "UTF-8");
When you copy/paste from Windows documents, the encoding is not UTF-8 but [Windows-1252](http://en.wikipedia.org/wiki/Windows-1252). Note the cells marked with thick green borders in the linked table: those characters are not encoded the same way in UTF-8, so you will have to use Windows-1252 when decoding the incoming bytes.
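For illustration, here is a self-contained sketch of that repair; the byte value, the class name, and the assumption that the container mis-decoded the bytes as ISO-8859-1 are mine, not from the question:

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class SmartQuoteDemo {
    public static void main(String[] args) {
        // Word's left "smart quote" is byte 0x93 in Windows-1252.
        byte[] wireBytes = {(byte) 0x93};
        // What a mis-configured container does: decode the bytes as ISO-8859-1,
        // yielding the invisible control character U+0093.
        String misDecoded = new String(wireBytes, StandardCharsets.ISO_8859_1);
        // The repair: ISO-8859-1 maps characters 0-255 back to the original bytes
        // one-to-one, so we can recover them and decode with the correct charset.
        String fixed = new String(misDecoded.getBytes(StandardCharsets.ISO_8859_1),
                Charset.forName("windows-1252"));
        System.out.println(fixed); // prints the left double quote
    }
}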
I'm using PDFBox 2.0.1.
I'm trying to dynamically add some (user-provided) UTF-8 text to the form fields and show the result to the user. Unfortunately, either the PDF library is not capable of properly encoding special characters such as "äöü", or I was not able to find any useful documentation that would help me with this issue.
Can someone tell me what is wrong with the given code sample?
try (PDDocument document = PDDocument.load(pdfTemplate)) {
    PDDocumentCatalog catalog = document.getDocumentCatalog();
    PDAcroForm form = catalog.getAcroForm();
    List<PDField> fields = form.getFields();
    for (PDField field : fields) {
        switch (field.getPartialName()) {
            case "devices":
                // Frontend (JS): userInput = btoa('Gerät')
                String base64devices = ...
                String name = new String(Base64.getDecoder().decode(base64devices), "UTF-8");
                field.setValue(name);
                field.setReadOnly(true);
                break;
        }
    }
    form.flatten(fields, true);
    document.save(bos);
}
And here is the stack trace of the error:
java.lang.IllegalArgumentException: U+FFFD is not available in this font's encoding: WinAnsiEncoding
org.apache.pdfbox.pdmodel.font.PDTrueTypeFont.encode(PDTrueTypeFont.java:368)
org.apache.pdfbox.pdmodel.font.PDFont.encode(PDFont.java:286)
org.apache.pdfbox.pdmodel.font.PDFont.getStringWidth(PDFont.java:315)
org.apache.pdfbox.pdmodel.interactive.form.PlainText$Paragraph.getLines(PlainText.java:169)
org.apache.pdfbox.pdmodel.interactive.form.PlainTextFormatter.format(PlainTextFormatter.java:182)
org.apache.pdfbox.pdmodel.interactive.form.AppearanceGeneratorHelper.insertGeneratedAppearance(AppearanceGeneratorHelper.java:373)
org.apache.pdfbox.pdmodel.interactive.form.AppearanceGeneratorHelper.setAppearanceContent(AppearanceGeneratorHelper.java:237)
org.apache.pdfbox.pdmodel.interactive.form.AppearanceGeneratorHelper.setAppearanceValue(AppearanceGeneratorHelper.java:144)
org.apache.pdfbox.pdmodel.interactive.form.PDTextField.constructAppearances(PDTextField.java:263)
org.apache.pdfbox.pdmodel.interactive.form.PDAcroForm.refreshAppearances(PDAcroForm.java:324)
org.apache.pdfbox.pdmodel.interactive.form.PDAcroForm.flatten(PDAcroForm.java:213)
my.application.service.PDFService.generatePDF(PDFService.java:201)
I also found these (related) issues on SO:
pdfbox: ... is not available in this font's encoding
But those do not help me choose the right encoding, or show how to apply it. IIRC Java uses UTF-16 internally for character encoding, so why isn't the default enough?
Is this an issue with the PDF document itself or with the code I use to set it?
PdfBox encode symbol currency euro
Well, it's dynamic user input, so there are way too many things I would have to replace myself.
Thus, if the PDFBox people decided to fix the broken PDFBox method, this seemingly clean work-around code here would start to fail as it would then feed the fixed method broken input data.
Admittedly, I doubt they will fix this bug before 2.0.0 (and in 2.0.0 the fixed method has a different name), but one never knows...
Unfortunately I was not able to find this other setter method, but it might also apply to a different scope.
EDIT
Updated example code to better represent the problem.
U+FFFD is used to replace an incoming character whose value is unknown or unrepresentable in Unicode (compare the use of U+001A as a control character to indicate the substitute function) (source).
That said, it is likely that the character gets messed up somewhere. Maybe the encoding of the file is not UTF-8, and that's why the character is mangled.
As a general rule you should only write ASCII characters in the source code. You can still represent the whole Unicode range using the escaped form \uXXXX. In this case ä -> \u00E4.
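For instance, a string literal written with the escape survives any source-file encoding (a trivial sketch):

String device = "Ger\u00e4t"; // identical to "Gerät", but immune to source-encoding issues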
-- UPDATE --
Apparently the problem is in how the user input gets encoded/decoded between the client and server side using the JS function btoa. A solution to this problem can be found at this link:
Using Javascript's atob to decode base64 doesn't properly decode utf-8 strings
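The gist of that fix, sketched on the Java side: have the client base64-encode the UTF-8 bytes of the text (e.g. btoa(unescape(encodeURIComponent(text)))) and decode them as UTF-8 on the server. The sample value below is the base64 a client would send for "Gerät" under that scheme:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Utf8Demo {
    public static void main(String[] args) {
        // "R2Vyw6R0" is base64 over the UTF-8 bytes of "Gerät".
        String base64devices = "R2Vyw6R0";
        String name = new String(Base64.getDecoder().decode(base64devices),
                StandardCharsets.UTF_8);
        System.out.println(name); // Gerät
    }
}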
I have the following code in my JSP
...
<%
    out.println(request.getAttribute("textFromDB"));
%>
...
When the JSP is called, it just prints question marks (????...) instead of the actual text stored in a MySQL database, which is not in English. What can I do to make it display the text correctly? I tried changing the charset and pageEncoding to UTF-8, but it didn't help.
Isn't encoding nice? Unfortunately it's hard to tell where it goes wrong: your database might store in a character set other than UTF-8. Or your database driver might be configured to work in another encoding. Or your server defaults to another one. Or your HTTP connection assumes a different encoding, and your charset change comes too late.
You'll have to go through all those layers, and keep in mind that everything might look fine now and it's long-past write operations to your database that already messed up the data beyond repair.
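One layer you can pin down concretely is the HTTP connection; below is a hedged sketch of a servlet filter that forces UTF-8 there (the filter class itself is illustrative, not part of the question's code). The MySQL driver layer is typically handled with the JDBC URL options useUnicode=true&characterEncoding=UTF-8.

import java.io.IOException;
import javax.servlet.*;

public class Utf8Filter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        req.setCharacterEncoding("UTF-8");  // decode POST parameters as UTF-8
        res.setCharacterEncoding("UTF-8");  // write the response as UTF-8
        chain.doFilter(req, res);
    }

    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}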
System.out.println(this.request.getHeader("Content-Encoding")); // check the content encoding
String data = new String(this.request.getParameter("data").getBytes("ISO-8859-1"), "UTF-8"); // re-encode to bytes, then decode as UTF-8
If the above method didn't work, you might want to check whether the database encodes characters as UTF-8.
Or you can configure URIEncoding="UTF-8" in your Tomcat setup and let request.getAttribute("textFromDB") do the rest.
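For reference, that attribute goes on the Connector element in Tomcat's server.xml; a minimal sketch (the port and timeout values are the stock defaults, adjust to your setup):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           URIEncoding="UTF-8" />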
Issue: character encoding in the Play! 1.2.4 framework becomes garbled.
Context: we are trying to store the text "《我叫MT繁體版》台港澳專屬伺服器上線!" from an input text field into MySQL using the Play! 1.2.4 framework.
Steps that we followed:
1) A UI to get the input from the user: just any-language text, so we tried Japanese characters. Note: the page is set to UTF-8 character encoding.
2) On submission, the Play! controller just reads the input and stores it using a Play! model. Snippet mentioned below:
public static void text_create() throws UnsupportedEncodingException,
        ParseException {
    System.out.println("params :: text string value :: " + params.get("text"));
    String oldString = params.get("text");
    // Converting the input string (which is in UTF-8 format) and parsing it as Windows-1252
    String newString = new String(oldString.getBytes(), "WINDOWS-1252");
    // 1. Passing the encoded text to MySQL.
    // 2. The TextCheck table's 'text' column has UTF-8 encoding and collation.
    // 3. The TextCheck > text column is declared as String in the model.
    TextCheck a = new TextCheck(newString);
    List<Object> text = TextCheck.TextList();
    render(a, text);
}
It stores the TEXT value as "《我�MT�體版》�港澳專屬伺�器上線�".
The problem is that there are � characters in the middle of the value. When I read this raw data from MySQL using another platform such as Java, Ruby, or some other language, it converts, but those � characters come out as junk. Just junk.
Note: interestingly, when I read it back from the same Play! framework, it all looks fine; even the junk characters are read back correctly.
Question: why those junk characters?
The problem is the following line:
String newString = new String(oldString.getBytes(), "WINDOWS-1252");
This looks like nonsense to me. Java stores all strings internally using UTF-16, so you can't adjust the encoding of a Java string in the manner you've attempted here.
The getBytes() method returns the bytes of the string using the default platform encoding. You then convert these bytes into a new string using a (probably) different charset. The result is almost certain to be broken.
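A minimal sketch of that failure mode, assuming the platform default encoding is UTF-8 (the class name is illustrative):

import java.io.UnsupportedEncodingException;

public class MojibakeDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String oldString = "《我叫MT繁體版》";
        // getBytes() without an argument uses the platform default (assumed UTF-8 here)...
        byte[] platformBytes = oldString.getBytes();
        // ...and decoding those UTF-8 bytes as Windows-1252 scrambles the text.
        String newString = new String(platformBytes, "WINDOWS-1252");
        System.out.println(newString); // mojibake, not the original text
        // The fix is no conversion at all: store params.get("text") as-is and make
        // sure the request, the JDBC connection and the column all use UTF-8.
    }
}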
I want to send a URL request, but the parameter values in the URL can have french characters (eg. è). How do I convert from a Java String to Windows-1252 format (which supports the French characters)?
I am currently doing this:
String encodedURL = new String(unencodedUrl.getBytes("UTF-8"), "Windows-1252");
However, it turns:
param=Stationnement extèrieur into param=Stationnement extÃ¨rieur.
How do I fix this? Any suggestions?
Edit for further clarification:
The user chooses values from a drop-down. When the language is French, the values from the drop-down sometimes include French characters, like 'è'. When I send this request to the server, it fails, saying it is unable to decipher the request. I have to figure out how to send the 'è' in a different format (preferably Windows-1252) that supports French characters; I have chosen Windows-1252 because the server will accept that format. I don't want to replace each character individually, because I could miss a special character, and then the server would throw an exception.
Use URLEncoder to encode parameter values as application/x-www-form-urlencoded data:
String param = "param="
        + URLEncoder.encode("Stationnement ext\u00e8rieur", "cp1252");
See here for an expanded explanation.
Try using:
String encodedURL = new String(unencodedUrl.getBytes("UTF-8"), Charset.forName("Windows-1252"));
As per McDowell's suggestion, I tried encoding with:
URLEncoder.encode("stringValueWithFrenchCharacters", "cp1252") but it didn't work perfectly. I replaced "cp1252" with HTTP.ISO_8859_1 because I believe Android does not support Windows-1252 yet. It does allow ISO_8859_1, and after reading here, that covers MOST of the French characters, with the exception of 'Œ', 'œ', and 'Ÿ'.
So doing this made it work:
URLEncoder.encode(frenchString, HTTP.ISO_8859_1);
Works perfectly!
I am having some problems getting some French text to convert to UTF-8 so that it can be displayed properly, whether in a console, a text file, or a GUI element.
The original string is
HANDICAP╔ES
which is supposed to be
HANDICAPÉES
Here is a code snippet that shows how I am using the jackcess Database driver to read in the Access MDB file in an Eclipse/Linux environment.
Database database = Database.open(new File(filepath));
Table table = database.getTable(tableName, true);
Iterator<Map<String, Object>> rowIter = table.iterator();
while (rowIter.hasNext()) {
    Map<String, Object> row = rowIter.next();
    // convert fields to UTF
    Map<String, Object> rowUTF = new HashMap<String, Object>();
    try {
        for (String key : row.keySet()) {
            Object o = row.get(key);
            if (o != null) {
                String valueCP850 = o.toString();
                // String nameUTF8 = new String(valueCP850.getBytes("CP850"), "UTF8"); // does not work!
                String valueISO = new String(valueCP850.getBytes("CP850"), "ISO-8859-1");
                String valueUTF8 = new String(valueISO.getBytes(), "UTF-8"); // works!
                rowUTF.put(key, valueUTF8);
            }
        }
    } catch (UnsupportedEncodingException e) {
        System.err.println("Encoding exception: " + e);
    }
}
In the code you'll see where I want to convert directly to UTF-8, which doesn't seem to work, so I have to do a double conversion. Also note that there doesn't seem to be a way to specify the encoding when using the jackcess driver.
Thanks,
Cam
New analysis, based on new information.
It looks like your problem is with the encoding of the text before it was stored in the Access DB. It seems it had been encoded as ISO-8859-1 or windows-1252, but decoded as cp850, resulting in the string HANDICAP╔ES being stored in the DB.
Having correctly retrieved that string from the DB, you're now trying to reverse the original encoding error and recover the string as it should have been stored: HANDICAPÉES. And you're accomplishing that with this line:
String valueISO = new String(valueCP850.getBytes("CP850"), "ISO-8859-1");
getBytes("CP850") converts the character ╔ to the byte value 0xC9, and the String constructor decodes that according to ISO-8859-1, resulting in the character É. The next line:
String valueUTF8 = new String(valueISO.getBytes(), "UTF-8");
...does nothing. getBytes() encodes the string in the platform default encoding, which is UTF-8 on your Linux system. Then the String constructor decodes it with the same encoding. Delete that line and you should still get the same result.
More to the point, your attempt to create a "UTF-8 string" was misguided. You don't need to concern yourself with the encoding of Java's strings--they're always UTF-16. When bringing text into a Java app, you just need to make sure you decode it with the correct encoding.
And if my analysis is correct, your Access driver is decoding it correctly; the problem is at the other end, possibly before the DB even comes into the picture. That's what you need to fix, because that new String(getBytes()) hack can't be counted on to work in all cases.
Original analysis, based on no information. :-/
If you're seeing HANDICAP╔ES on the console, there's probably no problem. Given this code:
System.out.println("HANDICAPÉES");
The JVM converts the (Unicode) string to the platform default encoding, windows-1252, before sending it to the console. Then the console decodes that using its own default encoding, which happens to be cp850. So the console displays it wrong, but that's normal. If you want it to display correctly, you can change the console's encoding with this command:
CHCP 1252
To display the string in a GUI element, such as a JLabel, you don't have to do anything special. Just make sure you use a font that can display all the characters, but that shouldn't be problem for French.
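For instance, a bare-bones Swing sketch (the class name is just for the demo):

import javax.swing.*;

public class LabelDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            // No encoding work needed: the label receives an ordinary Java (UTF-16) string.
            frame.add(new JLabel("HANDICAPÉES"));
            frame.pack();
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}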
As for writing to a file, just specify the desired encoding when you create the Writer:
OutputStreamWriter osw = new OutputStreamWriter(
new FileOutputStream("myFile.txt"), "UTF-8");
String s = "HANDICAP╔ES";
System.out.println(new String(s.getBytes("CP850"), "ISO-8859-1")); // HANDICAPÉES
This shows the correct string value. This means that it was originally encoded with ISO-8859-1 and then incorrectly decoded with CP850 (CP1252, a.k.a. Windows ANSI, as pointed out in a comment, is indeed also possible as the original encoding, since É has the same code point there as in ISO-8859-1).
Align your environment and binary pipelines so that they all use one and the same character encoding. You can't and shouldn't convert between them; you would risk losing information in the non-ASCII range that way.
Note: do NOT use the above code snippet to "fix" the problem! That would not be the right solution.
Update: you are apparently still struggling with the problem. I'll repeat the important parts of the answer:
Align your environment and binary pipelines so that they all use one and the same character encoding.
You can not and should not convert between them. You would risk losing information in the non-ASCII range that way.
Do NOT use the above code snippet to "fix" the problem! That would not be the right solution.
To fix the problem you need to choose character encoding X which you'd like to use throughout the entire application. I suggest UTF-8. Update MS Access to use encoding X. Update your development environment to use encoding X. Update the java.io readers and writers in your code to use encoding X. Update your editor to read/write files with encoding X. Update the application's user interface to use encoding X. Do not use Y or Z or whatever at some step. If the characters are already corrupted in some datastore (MS Access, files, etc), then you need to fix it by manually replacing the characters right there in the datastore. Do not use Java for this.
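For the java.io step, here is a minimal sketch with the charset made explicit at every read and write, assuming UTF-8 is the chosen encoding X (the file names are placeholders):

import java.io.*;
import java.nio.charset.StandardCharsets;

public class CopyWithExplicitCharset {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new FileInputStream("input.txt"), StandardCharsets.UTF_8));
             PrintWriter out = new PrintWriter(new OutputStreamWriter(
                new FileOutputStream("output.txt"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line); // never relies on the platform default encoding
            }
        }
    }
}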
If you're actually using the "command prompt" as user interface, then you're actually lost. It doesn't support UTF-8. As suggested in the comments and in the article linked in the comments, you need to create a Swing application instead of relying on the restricted command prompt environment.
You can specify the encoding when establishing the connection. This worked perfectly and solved my encoding problem:
DatabaseImpl open = DatabaseImpl.open(new File("main.mdb"), true, null,
        Database.DEFAULT_AUTO_SYNC,
        java.nio.charset.Charset.availableCharsets().get("windows-1251"),
        null, null);
Table table = open.getTable("FolderInfo");
Using "ISO-8859-1" helped me deal with the French charactes.