How to parse invalid (bad / not well-formed) XML? - java

Currently, I'm working on a feature that involves parsing XML that we receive from another product. I decided to run some tests against some actual customer data, and it looks like the other product is allowing input from users that should be considered invalid. Anyways, I still have to try and figure out a way to parse it. We're using javax.xml.parsers.DocumentBuilder and I'm getting an error on input that looks like the following.
<xml>
...
<description>Example:Description:<THIS-IS-PART-OF-DESCRIPTION></description>
...
</xml>
As you can tell, the description has what appears to be an invalid tag inside of it (<THIS-IS-PART-OF-DESCRIPTION>). Now, this description tag is known to be a leaf tag and shouldn't have any nested tags inside of it. Regardless, this is still an issue and yields an exception on DocumentBuilder.parse(...)
I know this is invalid XML, but it's predictably invalid. Any ideas on a way to parse such input?

That "XML" is worse than invalid – it's not well-formed; see Well Formed vs Valid XML.
An informal assessment of the predictability of the transgressions does not help. That textual data is not XML. No conformant XML tools or libraries can help you process it.
Options, most desirable first:
Have the provider fix the problem on their end. Demand well-formed XML. (Technically the phrase well-formed XML is redundant but may be useful for emphasis.)
Use a tolerant markup parser to clean up the problem ahead of parsing as XML:
Standalone: xmlstarlet has robust recovery and repair capabilities (credit: RomanPerekhrest):
xmlstarlet fo -o -R -H -D bad.xml 2>/dev/null
Standalone and C/C++: HTML Tidy works with XML too. Taggle is a port of TagSoup to C++.
Python: Beautiful Soup is Python-based. See notes in the Differences between parsers section. See also answers to this question for more suggestions for dealing with not-well-formed markup in Python, including especially lxml's recover=True option. See also this answer for how to use codecs.EncodedFile() to clean up illegal characters.
Java: TagSoup and JSoup focus on HTML. FilterInputStream can be used for preprocessing cleanup.
.NET:
XmlReaderSettings.CheckCharacters can be disabled to get past illegal XML character problems.
@jdweng notes that XmlReaderSettings.ConformanceLevel can be set to ConformanceLevel.Fragment so that XmlReader can read XML Well-Formed Parsed Entities lacking a root element.
@jdweng also reports that XmlReader.ReadToFollowing() can sometimes be used to work around XML syntactical issues, but note the rule-breaking warning in #3 below.
Microsoft.Language.Xml.XMLParser is said to be “error-tolerant”.
Go: Set Decoder.Strict to false as shown in this example by @chuckx.
PHP: See DOMDocument::$recover and libxml_use_internal_errors(true). See a nice example here.
Ruby: Nokogiri supports “Gentle Well-Formedness”.
R: See htmlTreeParse() for fault-tolerant markup parsing in R.
Perl: See XML::Liberal, a "super liberal XML parser that parses broken XML."
Process the data as text manually using a text editor or programmatically using character/string functions. Doing this programmatically can range from tricky to impossible, as what appears to be predictable often is not -- rule breaking is rarely bound by rules.
For invalid character errors, use regex to remove/replace invalid characters:
PHP: preg_replace('/[^\x{0009}\x{000a}\x{000d}\x{0020}-\x{D7FF}\x{E000}-\x{FFFD}]+/u', ' ', $s);
Ruby: string.tr("^\u{0009}\u{000a}\u{000d}\u{0020}-\u{D7FF}\u{E000}-\u{FFFD}", ' ')
JavaScript: inputStr.replace(/[^\x09\x0A\x0D\x20-\xFF\x85\xA0-\uD7FF\uE000-\uFDCF\uFDE0-\uFFFD]/gm, '')
For stray ampersands, use regex to replace matches with &amp; (credit: blhsin, demo):
&(?!(?:#\d+|#x[0-9a-f]+|\w+);)
Note that the above regular expressions won't take comments or CDATA sections into account. A Java sketch applying the ampersand cleanup appears after this list.
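As a minimal Java sketch of that ampersand cleanup, under the assumption that the whole file fits in memory (the file name bad.xml is reused from the xmlstarlet example above; everything else is illustrative), one might escape bare ampersands before handing the text to DocumentBuilder:

import java.io.StringReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class AmpersandCleanup {
    public static void main(String[] args) throws Exception {
        // Assumes UTF-8 input; adjust the charset to match the real data.
        String raw = new String(Files.readAllBytes(Paths.get("bad.xml")),
                StandardCharsets.UTF_8);

        // Escape ampersands that are not already part of an entity reference,
        // using the regex from the answer above.
        String cleaned = raw.replaceAll("&(?!(?:#\\d+|#x[0-9a-f]+|\\w+);)", "&amp;");

        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new InputSource(new StringReader(cleaned)));
        System.out.println(doc.getDocumentElement().getNodeName());
    }
}

As noted above, this kind of regex cleanup does not account for comments or CDATA sections.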

A standard XML parser will NEVER accept invalid XML, by design.
Your only option is to pre-process the input to remove the "predictably invalid" content, or wrap it in CDATA, prior to parsing it.
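For the specific input in the question, a sketch of that pre-processing could simply escape the known bogus tag before parsing. The tag name comes from the question; the file name and the rest of the pipeline are illustrative:

import java.io.StringReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class EscapeKnownBadTag {
    public static void main(String[] args) throws Exception {
        String raw = new String(Files.readAllBytes(Paths.get("input.xml")),
                StandardCharsets.UTF_8);

        // The bogus tag is really part of the description text, so turn it
        // back into character data before the parser ever sees it.
        String cleaned = raw.replace("<THIS-IS-PART-OF-DESCRIPTION>",
                "&lt;THIS-IS-PART-OF-DESCRIPTION&gt;");

        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new InputSource(new StringReader(cleaned)));
        System.out.println(doc.getElementsByTagName("description")
                .item(0).getTextContent());
    }
}

Wrapping the description contents in a CDATA section would work just as well; the point is only that the cleanup happens before DocumentBuilder.parse(...) sees the data.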

The accepted answer is good advice, and contains very useful links.
I'd like to add that this, and many other cases of not-well-formed and/or DTD-invalid XML, can be repaired using SGML, the ISO-standardized superset of HTML and XML. In your case, what works is to declare the bogus THIS-IS-PART-OF-DESCRIPTION element as an SGML empty element and then use e.g. the osx program (part of the OpenSP/OpenJade SGML package) to convert it to XML. For example, if you supply the following to osx
<!DOCTYPE xml [
<!ELEMENT xml - - ANY>
<!ELEMENT description - - ANY>
<!ELEMENT THIS-IS-PART-OF-DESCRIPTION - - EMPTY>
]>
<xml>
<description>blah blah
<THIS-IS-PART-OF-DESCRIPTION>
</description>
</xml>
it will output well-formed XML for further processing with the XML tools of your choice.
Note, however, that your example snippet has another problem in that element names starting with the letters xml or XML or Xml etc. are reserved in XML, and won't be accepted by conforming XML parsers.

IMO these cases should be solved by using JSoup.
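A minimal sketch of that JSoup suggestion, using jsoup's XML parser mode on the snippet from the question; the stray tag is treated as a nested element rather than a fatal error:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.parser.Parser;

public class JsoupLenientParse {
    public static void main(String[] args) {
        String badXml = "<xml><description>Example:Description:"
                + "<THIS-IS-PART-OF-DESCRIPTION></description></xml>";

        // jsoup's XML mode recovers from the not-well-formed input instead of throwing.
        Document doc = Jsoup.parse(badXml, "", Parser.xmlParser());
        System.out.println(doc.select("description").text());
    }
}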
Below is not really an answer for this specific case, but I found it on the web (thanks to inuyasha82 on Coderwall). This bit of code inspired me for another similar problem while dealing with malformed XML, so I share it here.
Please do not edit what is below; it is reproduced as it appears on the original website.
To be valid, the XML format requires a single root element declared in the document.
So, for example, a valid XML document is:
<root>
<element>...</element>
<element>...</element>
</root>
But if you have a document like:
<element>...</element>
<element>...</element>
<element>...</element>
<element>...</element>
This is considered malformed XML, so many XML parsers just throw an exception complaining that there is no root element.
Here is a solution for that problem, which lets you successfully parse the malformed XML above.
Basically, what we will do is programmatically add a root element.
So first of all, open the resource that contains your "malformed" XML (e.g. a file):
File file = new File(pathtofile);
Then open a FileInputStream:
FileInputStream fis = new FileInputStream(file);
If we tried to parse this stream with any XML library at this point, we would get the malformed-document exception.
Now we create a list of InputStream objects with three elements:
A ByteArrayInputStream containing the string <root>
Our FileInputStream
A ByteArrayInputStream containing the string </root>
So the code is:
List<InputStream> streams =
    Arrays.asList(
        new ByteArrayInputStream("<root>".getBytes()),
        fis,
        new ByteArrayInputStream("</root>".getBytes()));
Now using a SequenceInputStream, we create a container for the List created above:
InputStream cntr =
    new SequenceInputStream(Collections.enumeration(streams));
Now we can use any XML parser library on cntr, and it will parse without any problem. (Checked with the StAX library.)

Related

how to efficiently parse XML with multiple roots in java? [duplicate]


What is meant by 'parsed data' in the xml 1.1 spec?

I am re-wording my question because the 'parsed entity' thing has nothing to do with the problem at hand.
XML 1.1 versus 1.0
Is an XML 1.1 library supposed to escape illegal characters before serializing/deserializing them, or is it supposed to forbid them outright? Which is the correct way to set text on an XML element?
if Element e = new Element("foo")
Should I do this:
e.setText(sanitized_text_illegal_characters_removed_or_escaped) ?
or
e.setText(any_text)
A parsed entity is something you don't really need to worry about unless you're writing an XML parser. It's things like &lt; and &amp;. You can define your own in the document DTD, but it's a rarely used feature. An external parsed entity is one whose contents reside in another file or network resource or somewhere like that.
As to your main question:
Which is the correct way to set Text on an xml element?
if Element e = new Element("foo")
Should I do this:
e.setText(string_of_sanitized_data_with_illegal_characters_escaped) ?
or
e.setText(any_text)
You should set the text as you would like it to come out the other end, when the document is deserialized. This normally means you should not escape the data, and the XML library will do this for you.
e.g.:
You insert the text "bed & breakfast".
The XML library converts this to "bed &amp; breakfast" or "<![CDATA[bed & breakfast]]>" or some other representation; it doesn't really matter.
You send the document somewhere else.
The other parser reads the document and converts the text back.
The end software retrieves the string "bed & breakfast".
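A small sketch of that round trip with the JDK's DOM and Transformer APIs; the element name item is made up for illustration:

import java.io.StringReader;
import java.io.StringWriter;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class EscapingRoundTrip {
    public static void main(String[] args) throws Exception {
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();

        Document doc = db.newDocument();
        Element item = doc.createElement("item");   // hypothetical element name
        item.setTextContent("bed & breakfast");     // raw text; we do not escape it
        doc.appendChild(item);

        // Serialize: the library escapes the ampersand for us.
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);   // contains bed &amp; breakfast

        // Parse it back: the original, unescaped string comes out the other end.
        Document parsed = db.parse(new InputSource(new StringReader(out.toString())));
        System.out.println(parsed.getDocumentElement().getTextContent());   // bed & breakfast
    }
}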
If you're writing XML programmatically, then you almost certainly don't want to use parsed entities.
There are two kinds of parsed entities: internal and external. An internal parsed entity is defined by a DTD declaration like this:
<!ENTITY me "Mike">
or
<!ENTITY me "<name>Mike</name>">
An external parsed entity is defined by a DTD declaration like this:
<!ENTITY me SYSTEM "me.xml">
Whether the entity is internal or external, it can be referenced by an entity reference like this:
&me;
which can appear within the content of an element or attribute.

Using org.apache.commons.digester.Digester in XML with attributes

I'm going to extract values from this XML/RDF:
<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:j.0="urn:turismoculturale.itdc.filas-1.0.0-RC1#">
<j.0:Chiesa rdf:ID="turismoCulturale_POI_880">
<j.0:title xml:lang="en">Church of S. Giuda Taddeo or S. Onofrio - Gaeta</j.0:title>
<j.0:title xml:lang="it">Chiesa S. Giuda Taddeo o S. Onofrio - Gaeta</j.0:title>
</j.0:Chiesa>
</rdf:RDF>
I would like to get the "en" title when I am in the "en" language and the "it" title otherwise. I am able to set the title value in the Poi bean by using:
Digester digester = new Digester();
digester.setNamespaceAware( true );
digester.setRuleNamespaceURI( "urn:turismoculturale.itdc.filas-1.0.0-RC1#" );
digester.addObjectCreate( "*/Chiesa", Poi.class);
digester.addBeanPropertySetter("*/title", "title");
...
but I don't know whether it is the English title or the Italian one.
Ok - first and foremost, don't try to parse RDF/XML with an XML parser. It's never going to work because the semantics of the XML document are irrelevant with respect to RDF/XML and it is a bad idea (if you know how RDF/XML works), especially in your case where the RDF/XML is being generated dynamically (you can tell by the namespaces). You need to use an RDF parser to parse RDF.
So that means don't use an XML to Java object mapping tool, use an RDF to Java Object mapping tool.
Here is a great link explaining how to do this:
http://answers.semanticweb.com/questions/859/best-way-to-convert-rdfxml-file-to-pojos
And another:
http://answers.semanticweb.com/questions/3251/experience-using-java-based-frameworks-for-rdf-to-pojo-and-vice-versa-mapping
Along with links to all the tools in the aforementioned resource:
Jenabean
Empire
AliBaba
RDFReactor
For an out and out RDF parser, look at Jena:
http://incubator.apache.org/jena
It's an Apache project that is also nicely Maven'ed up.
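A hedged sketch of that RDF-level approach using Jena's model API. The namespace comes from the question; the file name and language choice are illustrative, and recent Jena releases use the org.apache.jena packages (older ones used com.hp.hpl.jena):

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.jena.rdf.model.*;

public class TitleByLanguage {
    // Namespace taken from the question's RDF/XML.
    static final String NS = "urn:turismoculturale.itdc.filas-1.0.0-RC1#";

    public static void main(String[] args) throws Exception {
        Model model = ModelFactory.createDefaultModel();
        try (InputStream in = new FileInputStream("poi.rdf")) {   // hypothetical file name
            model.read(in, null);                                  // defaults to RDF/XML
        }

        Property title = model.createProperty(NS, "title");
        String wanted = "en";                                      // or "it"

        StmtIterator it = model.listStatements(null, title, (RDFNode) null);
        while (it.hasNext()) {
            Literal lit = it.nextStatement().getLiteral();
            if (wanted.equals(lit.getLanguage())) {
                System.out.println(lit.getString());
            }
        }
    }
}

The point is that the language tags survive as part of the literals, so picking the "en" or "it" title becomes a simple filter rather than an XML-structure question.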
The Commons Digester FAQ says:
Occasionally, people ask how they can fire a rule for an element based on the value of an attribute
There is no simple way to do this with Digester; the built-in rule-matching engines only provide the ability to match on element name. There is no support available for XPath expressions
It might be possible to create a custom "filtering" rule that has a child rule, and fires that child rule only when the appropriate conditions are set. There are no examples of such a solution, however.
Digester isn't a very good tool. It's too simplistic. Consider using a more comprehensive event-based API such as StAX.
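If you do stay at the plain-XML level as this answer suggests (keeping in mind the earlier caveat about treating RDF/XML as ordinary XML), a StAX sketch that filters titles by their xml:lang attribute might look like this; the file name is illustrative:

import java.io.FileInputStream;

import javax.xml.XMLConstants;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxTitleByLanguage {
    public static void main(String[] args) throws Exception {
        String wanted = "en";   // or "it"
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader =
                factory.createXMLStreamReader(new FileInputStream("poi.rdf"));

        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "title".equals(reader.getLocalName())) {
                // xml:lang lives in the reserved XML namespace.
                String lang = reader.getAttributeValue(XMLConstants.XML_NS_URI, "lang");
                if (wanted.equals(lang)) {
                    System.out.println(reader.getElementText());
                }
            }
        }
        reader.close();
    }
}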

Is it possible to display "<" or ">" in generated XML source using XStreamMarshaller

I have been trying to use XStreamMarshaller to generate XML output in my Java Spring project. The XML I am generating has CDATA values in the element text. I am manually creating this CDATA text in the command object like this:
f.setText("<![CDATA[cdata-text]]>");
The XStreamMarshaller generated the element(text-data below is an alias) as:
<text-data><![CDATA[cdata-text]]></text-data>
The above XML display is as expected. But when I do a View Source on the generated XML output, I see this for the element: <text-data>&lt;![CDATA[cdata-text]]&gt;</text-data>.
Issue:
As you can see, the less-than and greater-than characters of the CDATA markers have been replaced by &lt; and &gt; in the View Source. I need my client to read the source and identify the CDATA section in the XML output, which it will not be able to do in the above scenario.
Is there a way I can stop the XStreamMarshaller from escaping the special characters in the text I provide?
I have set the encoding of the Marshaller to ISO-8859-1 but that does not work either. If the above cannot be done by XStreamMarshaller can you please suggest alternate marshallers/unmarshallers that can do this for me?
// Displaying my XML and View Source as suggested by Paŭlo Ebermann below:
XML View (as displayed in IE):
An invalid character was found in text content. Error processing resource 'http://localhost:8080/file-service-framework/fil...
Los t
View Source:
<service id="file-text"><text-data><![CDATA[
Los túneles a través de las montañas hacen más fácil viajar por carretera.
]]></text-data></service>
Thanks you very much.
Generating CDATA sections is the task of your XML-generating library, not of its client. So you should simply have to write
f.setText("cdata-text");
and then the library can decide whether to use <![CDATA[...]]> or &lt;-style escaping for its contents. It should make no difference for the receiver.
Edit:
Looking at your output, it looks right (apart from the CDATA) - here you must work on your input, as said.
If IE throws an error here, most probably you haven't declared the right encoding.
I don't really know much about the Spring framework, but the encoding used by the Marshaller should be the same encoding as the encoding sent in either the HTTP header (Content-Type: ... ;charset=...) or the <?xml version="1.0" encoding="..." ?> XML prologue (these two should not differ, too).
I would recommend UTF-8 as encoding everywhere, as this can represent all characters, not only the Latin-1 ones.

Parse file containing XML Fragments in Java

I inherited an "XML" license file containing no root element, but rather two XML fragments (<XmlCreated> and <Product>) so when I try to parse the file, I (expectantly) get an error about a document that is not-well-formed.
I need to get both the XmlCreated and Product tags.
Sample XML file:
<?xml version="1.0"?>
<XmlCreated>May 11 2009</XmlCreated>
<!-- License Key file Attributes -->
<Product image ="LicenseKeyFile">
<!-- MyCompany -->
<Manufacturer ID="7f">
<SerialNumber>21072832521007</SerialNumber>
<ChassisId>72060034465DE1C3</ChassisId>
<RtspMaxUsers>500</RtspMaxUsers>
<MaxChannels>8</MaxChannels>
</Manufacturer>
</Product>
Here is the current code that I use to attempt to load the XML. It does not work, but I've used it before as a starting point for well-formed XML.
public static void main(String[] args) {
    try {
        File file = new File("C:\\path\\LicenseFile.xml");
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        Document doc = db.parse(file);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
At the db.parse(file) line, I get the following Exception:
[Fatal Error] LicenseFile.xml:6:2: The markup in the document following the root element must be well-formed.
org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown Source)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(Unknown Source)
at com.mycompany.licensesigning.LicenseSigner.main(LicenseSigner.java:20)
How would I go about parsing this frustrating file?
If you know this document is always going to be non-well-formed like this, then make it well-formed yourself: add a new dummy <root> tag after the <?xml ...?> declaration and </root> after the last of the data.
You're going to need to create two separate Document objects by breaking the file up into smaller pieces and parsing those pieces individually (or alternatively reconstructing them into a larger document by adding a tag which encloses both of them).
If you can rely on the structure of the file it should be easy to read the file into a string and then search for substrings like <Product and </Product> and then use those markers to create a string you can pass into a document builder.
How about implementing a simple wrapper around InputStream that wraps the input from the file with a root-level tag, and using that as the input to DocumentBuilder.parse()?
If the expected input is small enough to load into memory, read into a string, wrap it with a dummy start/end tag and then use:
DocumentBuilder.parse(new InputSource(new StringReader(string)))
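A minimal sketch of that suggestion, using the path from the question. The dummy root name is arbitrary; the XML declaration is stripped first because a declaration may only appear at the very start of a document:

import java.io.StringReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class WrapAndParse {
    public static void main(String[] args) throws Exception {
        String content = new String(
                Files.readAllBytes(Paths.get("C:\\path\\LicenseFile.xml")),
                StandardCharsets.UTF_8);

        // Remove the <?xml ...?> declaration, then sandwich the fragments
        // inside a dummy root element.
        String body = content.replaceFirst("<\\?xml[^?]*\\?>", "");
        String wrapped = "<root>" + body + "</root>";

        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = db.parse(new InputSource(new StringReader(wrapped)));

        System.out.println(doc.getElementsByTagName("XmlCreated").item(0).getTextContent());
        System.out.println(doc.getElementsByTagName("SerialNumber").item(0).getTextContent());
    }
}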
I'd probably create a SequenceInputStream where you sandwich the real stream between two ByteArrayInputStreams that return a dummy root start tag and end tag.
Then I'd use the parse method that takes a stream rather than a file name.
I agree with Jim Garrison to some extent: use an InputStream or StreamReader and wrap the input in the required tags; it's a simple and easy method. The main problem I can foresee is that you'll need some checks for valid versus invalid formatting (if you want to use the method for both valid and invalid data): if the formatting is invalid because root-level tags are missing, wrap the input with the tags; if it's valid, don't wrap it. If the input is invalid for some other reason, you can also alter the input to correct those formatting issues.
Also, it's probably better to store the input in a collection of strings (of some sort) rather than a single string; this means you won't be as limited by input size. Make each string one line from the file. You should end up with a logical, easy-to-follow structure which will make it easier to correct other formatting issues in the future.
The hardest part is figuring out what caused the invalid formatting. In your case, just check for root-level tags: if they exist and are formatted correctly, don't wrap; if not, wrap.
