I have an Oracle table in which I am storing an XML file; the column is of CLOB type. We then pick that XML file up for further processing, and somewhere it breaks with the exception below:
"com.ctc.wstx.exc.WstxIOException: Invalid UTF-8 start byte 0xa0 (at char #931, byte #20)"
When we copy the content into Notepad++, it doesn't show any invalid UTF-8 characters.
Could anyone help with how to find the invalid UTF-8 character in the XML stored in the Oracle column, keeping in mind that the column is of CLOB type?
Any help is greatly appreciated.
Do you have access to Unix? You can use iconv -f utf-8 -t utf-8 -c yourfile.xml. You can find more possible options in this thread.
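If you would rather locate the problem from Java (where the file is apparently being processed), a strict decoder can report the byte offset of the first illegal sequence instead of just failing. This is only a sketch using the standard java.nio charset API; it assumes you can get hold of the raw bytes that are handed to the XML parser, and the class/method names are made up for illustration:

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class FindBadUtf8 {
    // Returns the byte offset of the first invalid UTF-8 sequence, or -1 if the data is clean.
    static int firstInvalidByte(byte[] data) {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        ByteBuffer in = ByteBuffer.wrap(data);
        CharBuffer out = CharBuffer.allocate(1024);
        while (in.hasRemaining()) {
            CoderResult result = decoder.decode(in, out, true);
            if (result.isMalformed() || result.isUnmappable()) {
                return in.position();   // position of the offending bytes
            }
            out.clear();                // we only care about the position, not the decoded text
        }
        return -1;
    }
}

Knowing the exact offset usually makes it easy to see which element in the stored XML carries the bad byte.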
I am running this command via a Java ProcessBuilder:
plpgsql = "\"path_to_psql_executable\\psql.exe\" -U myuser -w -h myhost -d mydb -a -f \"some_path\\copy_7133.sql\" 2> \"log_path\\plsql_7133.log\"";
ProcessBuilder pb = new ProcessBuilder("C:\\Windows\\System32\\cmd.exe", "/c", plpgsql);
Process p = pb.start();
p.getOutputStream().close();
p.waitFor();
This is returning me the following error:
ERROR: invalid byte sequence for encoding "UTF8": 0xbd
CONTEXT: COPY copy_7133, line 4892
The catch is that if I run the SQL command manually in cmd, it copies all of the data successfully and tells me the number of rows inserted. I am not able to figure out the reason.
NOTE: The code causes a problem only for one particular file; for the rest it works fine.
EDIT:
Copy command being run:
\copy s_m_asset_7140 FROM 'C:\ER\ETL\Unzip_7140\asset.csv' csv HEADER QUOTE '"' ENCODING 'UTF8';
The last error the command gave:
psql:C:/ER/ETL/Unzip_7140/copy_s_m_asset_7140.sql:1: ERROR: invalid byte sequence for encoding "UTF8": 0xa0
CONTEXT: COPY s_m_asset_7140, line 10282
But there doesn't seem to be any special character on that line except a '-'. Not sure what it is unable to read.
A few more details about the DB:
show client_encoding;
"UNICODE"
show server_encoding;
"UTF8"
It worked, but I still don't understand why UTF8 did not.
I changed the encoding to LATIN1 and it worked:
\copy s_m_asset_7140 FROM 'C:\ER\ETL\Unzip_7140\asset.csv' csv HEADER QUOTE '"' ENCODING 'LATIN1';
Can somebody please explain why UTF8 did not work?
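Most likely the CSV was never UTF-8 to begin with: 0xA0 is a non-breaking space in Latin-1/windows-1252, but in UTF-8 that byte can only appear inside a multi-byte sequence, so a strict UTF-8 reader rejects it. Declaring ENCODING 'LATIN1' simply matched the file's real encoding. If you prefer to keep loading with ENCODING 'UTF8', one option is to re-encode the file first; a minimal sketch (the target file name is made up for this example):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReencodeCsv {
    public static void main(String[] args) throws IOException {
        // Source path is taken from the question; the target name is made up for this example.
        Path source = Paths.get("C:\\ER\\ETL\\Unzip_7140\\asset.csv");
        Path target = Paths.get("C:\\ER\\ETL\\Unzip_7140\\asset_utf8.csv");

        // Interpret the existing bytes as Latin-1 (every byte value is a valid Latin-1 character),
        // then write them back out as UTF-8 so \copy ... ENCODING 'UTF8' can read the file.
        String content = new String(Files.readAllBytes(source), StandardCharsets.ISO_8859_1);
        Files.write(target, content.getBytes(StandardCharsets.UTF_8));
    }
}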
Can anyone tell me what could be causing this problem?
I tried to post an XML file with post.jar; I have copied the server log below:
118208 [qtp760665089-18] ERROR org.apache.solr.servlet.SolrDispatchFilter - null:java.lang.RuntimeException: [was class java.io.CharConversionException] Invalid UTF-8 middle byte 0x6c (at char #139212, byte #136949)
    at com.ctc.wstx.util.ExceptionUtil.throwRuntimeException(ExceptionUtil.java:18)
    at com.ctc.wstx.sr.StreamScanner.throwLazyError(StreamScanner.java:731)
    at com.ctc.wstx.sr.BasicStreamReader.safeFinishToken(BasicStreamReader.java:3657)
    at com.ctc.wstx.sr.BasicStreamReader.getText(BasicStreamReader.java:809)
    at org.apache.solr.handler.loader.XMLLoader.readDoc(XMLLoader.java:397)
    at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:246)
[...]
Caused by: java.io.CharConversionException: Invalid UTF-8 middle byte 0x6c (at char #139212, byte #136949)
    at com.ctc.wstx.io.UTF8Reader.reportInvalidOther(UTF8Reader.java:313)
    at com.ctc.wstx.io.UTF8Reader.read(UTF8Reader.java:204)
    at com.ctc.wstx.io.ReaderSource.readInto(ReaderSource.java:84)
    at com.ctc.wstx.io.BranchingReaderSource.readInto(BranchingReaderSource.java:57)
    ...
You have one or more illegal (i.e. not valid UTF-8) characters in your document:
http://www.coderanch.com/t/433718/XML/Invalid-UTF-middle-byte-error
I'd take a close look at the document and consider stripping/filtering out anything that is not valid UTF-8.
This previous Stack Overflow answer has a couple of code snippets in Perl and Java for filtering out non-UTF-8 characters:
How to remove bad characters that are not suitable for utf8 encoding in MySQL?
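For reference, here is a minimal Java sketch in the same spirit as those snippets (the class name is made up): it decodes the bytes with a lenient UTF-8 decoder so every illegal sequence is replaced with U+FFFD, guaranteeing the result is valid UTF-8 (you could also drop the replacement characters afterwards):

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class Utf8Cleaner {
    // Decodes bytes as UTF-8, substituting U+FFFD for every illegal sequence.
    static String toValidUtf8(byte[] raw) throws CharacterCodingException {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPLACE)
                .onUnmappableCharacter(CodingErrorAction.REPLACE);
        return decoder.decode(ByteBuffer.wrap(raw)).toString();
    }
}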
I have a simple xml file on my hard drive.
When I open it with notepad++ this is what I see:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<content>
... more stuff here ...
</content>
But when I read it using a FileInputStream I get:
?<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<content>...
I'm using JAXB to parse XML files, and it throws a "content is not allowed in prolog" exception because of that "?" sign.
What is this extra "?" sign? why is it there and how do I get rid of it?
That extra character is a byte order mark, a special Unicode character code which lets the XML parser know what the byte order (little endian or big endian) of the bytes in the file is.
Normally, your XML parser should be able to understand this. (If it doesn't, I would regard that a bug in the XML parser).
As a workaround, make sure that the program that produces this XML leaves off the BOM.
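If you cannot change the producer, a common alternative is to skip the BOM yourself before handing the stream to JAXB. A minimal sketch (0xEF 0xBB 0xBF is the UTF-8 form of the BOM; the helper name is made up):

import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

public class BomSkipper {
    // Wraps a stream and, if it starts with the UTF-8 BOM (EF BB BF), consumes it.
    static InputStream skipUtf8Bom(InputStream in) throws IOException {
        PushbackInputStream pushback = new PushbackInputStream(in, 3);
        byte[] head = new byte[3];
        int total = 0;
        // read() may return fewer bytes than requested, so loop until we have 3 or hit EOF
        while (total < 3) {
            int n = pushback.read(head, total, 3 - total);
            if (n < 0) {
                break;
            }
            total += n;
        }
        boolean isBom = total == 3
                && (head[0] & 0xFF) == 0xEF
                && (head[1] & 0xFF) == 0xBB
                && (head[2] & 0xFF) == 0xBF;
        if (!isBom && total > 0) {
            pushback.unread(head, 0, total);   // not a BOM: push the bytes back
        }
        return pushback;
    }
}

You could then wrap the stream, e.g. skipUtf8Bom(new FileInputStream("foo.xml")), before passing it to the unmarshaller. Apache Commons IO also ships a BOMInputStream that does the same job if adding a dependency is acceptable.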
Check the encoding of the file. I've seen a similar thing: opening the file in most editors it looked fine, but it turned out to be encoded as UTF-8 without BOM (or with, I can't recall off the top of my head). Notepad++ should be able to switch between the two.
You can use Notepad++ to show all symbols via the View > Show Symbol > Show All Characters menu. It will show you the extra bytes present at the beginning. There is a possibility that they are the byte order mark; if the extra bytes are indeed a byte order mark, this approach will not help. In that case, you will need to download a hex editor or, if you have Cygwin installed, follow the steps in the last paragraph of this response. Once you can see the file in terms of hex codes, look at the first few bytes: do they match one of the codes listed at http://en.wikipedia.org/wiki/Byte_order_mark#Representations_of_byte_order_marks_by_encoding ?
If they indeed are a byte order mark, or if you are unable to determine the cause of the error, just try this:
From the menu, select Encoding > Encode in UTF-8 without BOM, and then save the file.
(On Linux, one can use command-line tools to check what's at the beginning of the file, e.g. xxd -g1 filename | head or od -t cx1 filename | head.)
You might have a newline before the XML declaration. Delete it.
Select View > Show Symbol > Show All Characters in Notepad++ to see what's happening.
This is not a JAXB problem; the problem is in the way you read the XML ... try using an InputStream:
...
Unmarshaller u = jaxbContext.createUnmarshaller();
XmlDataObject xmlDataObject = (XmlDataObject) u.unmarshal(new FileInputStream("foo.xml"));
...
Besides the FileInputStream, a ByteArrayInputStream also worked for me:
JAXB.unmarshal(new ByteArrayInputStream(string.getBytes("UTF-8")), Delivery.class);
=> No unmarshaling error anymore.
I create an XML String on the fly (NOT reading from a file). Then I use Cocoon 3 to transform it via FOP to a PDF. Somewhere in the middle Xerces runs. When I use the hardcoded stuff, everything works. As soon as I put a German umlaut into the database and enrich my XML with that data, I get:
Caused by: org.apache.cocoon.pipeline.ProcessingException: Can't parse the XML string.
at org.apache.cocoon.sax.component.XMLGenerator$StringGenerator.execute(XMLGenerator.java:326)
at org.apache.cocoon.sax.component.XMLGenerator.execute(XMLGenerator.java:104)
at org.apache.cocoon.pipeline.AbstractPipeline.invokeStarter(AbstractPipeline.java:146)
at org.apache.cocoon.pipeline.AbstractPipeline.execute(AbstractPipeline.java:76)
at de.grobmeier.tab.webapp.modules.documents.InvoicePipeline.generateInvoice(InvoicePipeline.java:74)
... 87 more
Caused by: com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: Invalid byte 1 of 1-byte UTF-8 sequence.
at com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.invalidByte(UTF8Reader.java:684)
at com.sun.org.apache.xerces.internal.impl.io.UTF8Reader.read(UTF8Reader.java:554)
I then debugged my app and found out that my "Ä" (which comes from the database) has the byte value of 196, which is C4 in hex. This is what I expected according to this: http://www.utf8-zeichentabelle.de/
I do not know why my code fails.
I then tried to add a BOM manually, like this:
byte[] bom = new byte[3];
bom[0] = (byte) 0xEF;
bom[1] = (byte) 0xBB;
bom[2] = (byte) 0xBF;
String myString = new String(bom) + inputString;
I know this is not exactly good, but I tried it - of course it failed. I then tried to add an XML header in front:
<?xml version="1.0" encoding="UTF-8"?>
Which failed too. Then I combined it. Failed.
Finally I tried something like this:
xmlInput = new String(xmlInput.getBytes("UTF8"), "UTF8");
Which is doing nothing in fact, because it is already UTF-8. Still it fails.
So... any ideas what I am doing wrong and what Xerces is expecting from me?
Thanks
Christian
If your database contains only a single byte (with value 0xC4) then you aren't using UTF-8 encoding.
The character "LATIN CAPITAL LETTER A WITH DIAERESIS" has a code-point value U+00C4, but UTF-8 can't encode that in a single byte. If you check the third column "UTF-8 (hex.)" on UTF8-zeichentabelle.de you'll see that UTF-8 encodes that as 0xC3 84 (two bytes).
Please read Joel's article "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)" for more info.
EDIT: Christian found the answer himself; it turned out to be a problem in the Cocoon 3 SAX component (I guess in the alpha 3 version). If you pass XML as a String into the XMLGenerator class, something goes wrong during SAX parsing, causing this mess.
I looked up the code to find the actual problem in Cocoon-stax:
if (XMLGenerator.this.logger.isDebugEnabled()) {
XMLGenerator.this.logger.debug("Using a string to produce SAX events.");
}
XMLUtils.toSax(new ByteArrayInputStream(this.xmlString.getBytes()), XMLGenerator.this.getSAXConsumer());
As you can see, the call to getBytes() creates a byte array using the JRE's default encoding, which then fails to parse: the XML declares itself to be UTF-8, whereas the bytes are now likely in your Windows codepage.
As a workaround, one can use the following:
new org.apache.cocoon.sax.component.XMLGenerator(xmlInput.getBytes("UTF-8"),
"UTF-8");
This will trigger the right internal actions (as Christian found out by experimenting with the API).
I've opened an issue in Apache's bug tracker.
EDIT 2: The issue is fixed and will be included in an upcoming release.
The C4 you see on that page refers to the Unicode code point, U+00C4. The byte sequence used to represent such a code point in UTF-8 is NOT "\xC4". What you want is what's in the UTF-8 (hex.) column, namely "\xC3\x84".
Therefore, your data is not in UTF-8.
You can read about how data is encoded in UTF-8 here.
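A quick way to convince yourself is to encode the character both ways; only the two-byte form is what a UTF-8 parser will accept (a small illustration, not part of the original question):

import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        String a = "Ä";  // U+00C4, LATIN CAPITAL LETTER A WITH DIAERESIS
        byte[] utf8 = a.getBytes(StandardCharsets.UTF_8);        // two bytes: C3 84
        byte[] latin1 = a.getBytes(StandardCharsets.ISO_8859_1); // one byte:  C4
        System.out.printf("UTF-8:   %02X %02X%n", utf8[0] & 0xFF, utf8[1] & 0xFF);
        System.out.printf("Latin-1: %02X%n", latin1[0] & 0xFF);
    }
}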
I'm running Windows 7 with TextPad as a text editor for manually building the xml data file. I was getting the MalformedByteSequenceException. My spec in the xml file was UTF-8. After poking around, I found that my editor had a tool "Tools ... Convert to DOS". I did that, re-saved the file, and the exception went away and my code ran fine.
I then looked at the default encoding for that file type in my editor. It was ASCII, though when I changed the xml encoding parameter to ASCII, I got another different MalformedByteSequenceException.
So on Windows systems, you might try keeping the XML encoding as UTF-8 but saving the file with DOS encoding. I did not dig any further into why this works.
I have a problem with my Java program: how do I read an XML file that has "UTF-8" encoding? The program works correctly in Kubuntu but doesn't work in Windows. Both OSes write the XML file correctly, but parsing throws an exception in Windows.
String XMLFile = "ÄÄKKÖSET.xml";
Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(new File (XMLFile));
Here is the XML file I need to parse:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<deck created="04/04/2011">
<title>ääkköset</title>
<code>ÄÄKKÖSET</code>
<description>ääkköset</description>
<author>ääkköset</author>
<cards nextCardID="1">
<card color="#1364F9" id="0">
<question>ÄÄKKÖSET</question>
<answer>ÄÄKKÖSET</answer>
</card>
</cards>
</deck>
How do I read the XML file with Java on Windows without getting the "IOException: Invalid byte 2 of 2-byte UTF-8 sequence." error?
Thanks in advance!
Invalid byte 2 of 2-byte UTF-8 sequence.
Your XML document has not been saved as UTF-8, the parser detects this (because not all byte sequences are legal UTF-8) and throws an error.
The solution is to save the file as UTF-8. It is not enough to declare the document as UTF-8 - the bytes the data is encoded to must match this declaration. Many text editors on Windows default to saving data as ANSI.
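If the file is written by your own Java code, the safest route is to write it through a writer with an explicit charset so the bytes on disk really match the encoding="UTF-8" declaration; FileWriter, by contrast, uses the platform default (ANSI on most Windows setups). A minimal sketch of the writing side, assuming the XML is available as a string (the class and method names are made up for illustration):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class WriteXmlUtf8 {
    static void write(String xml, String fileName) throws IOException {
        // An explicit charset keeps the file's bytes in sync with encoding="UTF-8"
        // in the XML declaration, regardless of the OS default encoding.
        try (Writer w = new OutputStreamWriter(new FileOutputStream(fileName), StandardCharsets.UTF_8)) {
            w.write(xml);
        }
    }
}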