The classical way to handle XML in Java is really lengthy and scary.
For this purpose I made my own class which could return results without requiring much detail, like:
myXML mx=new myXML("filename");
:
mx.getAll("node name");
mx.getFirst("node name");
:
I had completed about 80% of it, but unfortunately I lost it in a PC crash.
Is there any jar under the GPL or Apache license which provides a facility to read and write XML in the simplest way?
JDOM is a simple API for parsing, creating, manipulating, and serializing XML documents in Java. The APIs you mention in your question are supported by JDOM (along with many more useful ones).
Check out the JDOM documentation and a book chapter here for more reading:
http://www.jdom.org/downloads/docs.html
http://www.cafeconleche.org/books/xmljava/chapters/ch14.html
The following lines are from http://www.jdom.org/docs/oracle/jdom-part1.pdf:
So what’s the point of JDOM (Java Document Object Model), and why do developers need it? JDOM is an open source library for Java-optimized XML data manipulations. Although it’s similar to the World Wide Web Consortium’s (W3C) DOM, it’s an alternative document object model that was not built on DOM or modeled after DOM. The main difference is that while DOM was created to be language-neutral and initially used for JavaScript manipulation of HTML pages, JDOM was created to be Java-specific and thereby take advantage of Java’s features, including method overloading, collections, reflection, and familiar programming idioms. For Java programmers, JDOM tends to feel more natural and “right.”
Try Apache Digester. Using Digester will really simplify your XML parsing. You can refer to this link for an example.
For your use case you may be interested in the javax.xml.xpath APIs available in the JDK. For an example see one of my answers to another question (below):
Remove XML Node using java parser
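As a rough illustration (not part of the linked answer), a minimal sketch of the javax.xml.xpath APIs; the file name and the XPath expression are placeholders:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XPathExample {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new File("input.xml"));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Select every <node> element anywhere in the document.
        NodeList nodes = (NodeList) xpath.evaluate("//node", doc, XPathConstants.NODESET);
        for (int i = 0; i < nodes.getLength(); i++) {
            System.out.println(nodes.item(i).getTextContent());
        }
    }
}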
You may also want to look at Service Data Objects (SDO), a generic data structure for representing XML data. For more information see:
http://www.eclipse.org/eclipselink/sdo.php
http://bdoughan.blogspot.com/2010/09/processing-atom-feeds-with-sdo.html
When parsing XML I recommend using the standard technologies: StAX, SAX, DOM, and JAXB. An implementation of each is included in the JDK, and alternate open source implementations are available offering improved performance and extended features, such as MOXy JAXB's XPath-based mapping:
http://bdoughan.blogspot.com/2010/09/xpath-based-mapping-geocode-example.html
http://bdoughan.blogspot.com/2010/10/how-does-jaxb-compare-to-xstream.html
The advantage of the standard libraries is that they all work together:
StAX, SAX, and DOM are all valid inputs/outputs for JAXB
StAX, SAX, DOM, and JAXB are all compatible with javax.xml.transform libraries
StAX, SAX, DOM, and JAXB are all compatible with javax.xml.xpath libraries
StAX, SAX, DOM, and JAXB are all compatible with javax.xml.validation libraries
JAXB is the binding layer for two Web Service standards: JAX-WS and JAX-RS
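As an illustrative sketch of that interoperability (the Customer class below is a made-up annotated POJO, and the file name is a placeholder), JAXB can unmarshal straight from a StAX XMLStreamReader or from a DOM Document:

import java.io.FileReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;
import org.w3c.dom.Document;

@XmlRootElement
class Customer {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class InteropExample {
    public static void main(String[] args) throws Exception {
        Unmarshaller unmarshaller = JAXBContext.newInstance(Customer.class).createUnmarshaller();

        // StAX as input to JAXB.
        XMLStreamReader xsr = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileReader("customer.xml"));
        Customer fromStax = unmarshaller.unmarshal(xsr, Customer.class).getValue();

        // DOM as input to JAXB.
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse("customer.xml");
        Customer fromDom = (Customer) unmarshaller.unmarshal(doc);

        System.out.println(fromStax.getName() + " / " + fromDom.getName());
    }
}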
I have an existing proprietary class that can marshall data into XML using a SAX ContentHandler to capture the events. I'm looking for a class or a technique to subclass that event handler to create a DOM document with the smallest amount of coding. In particular I would prefer not to use any external libraries.
There are a number of old code examples available on various sites around the web, but I'm hoping something like this is now managed in recent standard libraries or Java itself.
Thanks!
Being new to XML parsing, I'm trying to understand the different technologies. There is a confusing number of different technologies for different needs:
W3C-DOM
XOM
jDom
JAXP
JAXB
DOM
SAX
StAX
TrAX
Woodstox
dom4j
Crimson
VTD-XML
Xerces-J
Castor
XStream
...
Just to name a few.
DOM and SAX seem to be low-level ways of parsing and working with XML, so I decided to focus on the ones that are mentioned the most in different sources and are low-level:
DOM, SAX, JAXP.
I've read about parsers in general here on Stack Overflow, in the JAXP tutorial from Oracle, about XML parsing in general, and so on.
I've also tried some tutorials, like this German one and others.
I'm grasping a little bit about DOM and SAX now, but the reason to use JAXP is still beyond me. It seems to be more of an interface to use DOM, SAX, ... internally, but why not use DOM or SAX directly?
What is the advantage of using JAXP in layman's-terms?
(Although you haven't said so explicitly, your question seems to relate exclusively to the Java world, and this answer reflects that.)
JAXP is a set of interfaces covering XML parsing, XSLT transformation, and XML schema validation. If we just focus on the XML parsing side, its main contribution is to provide a mechanism for locating an XML parser implementation, so your source code isn't locked into a particular product. Frankly that's of limited value these days; the only two SAX/DOM parsers in common use are the one embedded in the JDK, and Apache Xerces. Apache Xerces is better in every respect except that you need to download it separately.
As for the other parsing interfaces, they break down into two categories: event-based APIs and tree-based APIs. Tree-based APIs are much easier to work with, but can use a lot of memory when handling large documents.
The two dominant event-based APIs are SAX (push) and StAX (pull). Pull parsing is something many programmers find easier because you can use the program stack to maintain state information; unfortunately though the StAX API is a bit buggy - different implementations have fixed its gaps in different ways. The most complete and reliable implementation of StAX is the Woodstox parser; the most complete and reliable implementation of SAX is Apache Xerces. But don't attempt to use an event-based parsing approach unless your application really needs that level of performance (and unless you have the level of experience needed to avoid losing all the performance gains at the application level.)
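As a rough sketch of the pull style described above (not from the original answer; the file and element names are placeholders), a StAX loop looks like this:

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxPullExample {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader = factory.createXMLStreamReader(new FileInputStream("input.xml"));
        try {
            // The application drives the parser: it asks for the next event when it wants one.
            while (reader.hasNext()) {
                int event = reader.next();
                if (event == XMLStreamConstants.START_ELEMENT
                        && "node".equals(reader.getLocalName())) {
                    // getElementText() reads the text content and advances to the end tag.
                    System.out.println(reader.getElementText());
                }
            }
        } finally {
            reader.close();
        }
    }
}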
For tree-based APIs, the DOM remains dominant solely because it was defined by W3C and is implemented in the JDK, and is therefore perceived as "standard"; also it's the one mentioned in all the books on the subject. However, of all the tree models, it is unquestionably the worst designed (mainly because it predates the introduction of namespaces). Alternatives include JDOM2, DOM4J, XOM, and AXIOM. I tend to recommend JDOM2 or XOM.
JAXP is just Sun's (now Oracle's) name for a collection of SAX and DOM classes they bundle with the JDK. If you're using JAXP, you're also using SAX and/or DOM. It's not a different thing.
JAXP also adds a few helper classes in the javax.xml.parsers package that fill gaps in SAX 1 and DOM 1, i.e. old versions of these libraries from 15+ years ago. However, these are not necessary with the SAX2/DOM3 versions used today. Worse yet, javax.xml.parsers classes such as DocumentBuilderFactory and SAXParserFactory are designed in a confusing way (they're not namespace-aware by default), so they are almost always used incorrectly. Then developers come here to ask why their program doesn't do what they think it should. Just ignore these classes and use XMLReaderFactory (SAX 2) or DOMImplementationLS (DOM 3) instead.
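A minimal sketch of that suggestion, with a placeholder file name: XMLReaderFactory for SAX 2 and DOMImplementationLS for DOM 3:

import org.w3c.dom.Document;
import org.w3c.dom.bootstrap.DOMImplementationRegistry;
import org.w3c.dom.ls.DOMImplementationLS;
import org.w3c.dom.ls.LSParser;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;
import org.xml.sax.helpers.XMLReaderFactory;

public class Sax2Dom3Example {
    public static void main(String[] args) throws Exception {
        // SAX 2: XMLReaderFactory gives a namespace-aware reader by default.
        XMLReader reader = XMLReaderFactory.createXMLReader();
        reader.setContentHandler(new DefaultHandler());
        reader.parse(new InputSource("input.xml"));

        // DOM 3: load a document through the Load and Save (LS) API.
        DOMImplementationRegistry registry = DOMImplementationRegistry.newInstance();
        DOMImplementationLS ls = (DOMImplementationLS) registry.getDOMImplementation("LS");
        LSParser parser = ls.createLSParser(DOMImplementationLS.MODE_SYNCHRONOUS, null);
        Document doc = parser.parseURI("input.xml");
        System.out.println(doc.getDocumentElement().getNodeName());
    }
}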
I am looking at a set of parsers generated for Atom, XAL, KML, etc., seemingly using an automated technique with an XML pull-based parser. The clue towards the automation is the presence of "package.html" in all of the XML-to-Java mapped class folders. I would like to produce a similar one for the rather large Collada 1.4 spec. My first attempt with Altova ran into small problems due to the "enum" keyword. I am sure I can fix it in the next run with appropriate renaming. Khronos admits that the 1.4 spec was not designed to be friendly to automated parser generation.
The actual parsers, i.e. the XAL parser, Atom parser, etc., implement the XMLEventParser interface. I would like to know if anybody has encountered/used this pattern. If so, which tool can be used to map the XSD to a class set that simply gives access to the data components of the nodes using getters and setters?
I'm not sure I understand your question, but it appears that you want to process XML formats like Atom and represent it in objects with getters/setters. This can easily be done with JAXB.
For an example see:
http://bdoughan.blogspot.com/2010/09/processing-atom-feeds-with-jaxb.html
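As a rough sketch of what that looks like (the Entry class and element names below are made up; such classes can also be generated from an XSD with the xjc tool), JAXB populates getters and setters from the XML:

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "entry")
class Entry {
    private String title;

    @XmlElement
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
}

public class JaxbExample {
    public static void main(String[] args) throws Exception {
        JAXBContext context = JAXBContext.newInstance(Entry.class);
        // Unmarshal the XML file into an Entry object and read it through its getters.
        Entry entry = (Entry) context.createUnmarshaller().unmarshal(new File("entry.xml"));
        System.out.println(entry.getTitle());
    }
}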
I'm a newbie to XML in Java. I have to write a method that sends large XML data containing lots of nodes through a socket to a client application.
What is a suitable method to generate the XML?
What is the best method to send large XML through sockets?
Since you are using sockets you just need to deal with Java InputStream/OutputStream. This gives you a lot of flexibility in your XML handling, as almost all XML technologies accept streams as input/output.
You could represent your data as plain old Java objects (POJOs), and then bind them to XML using JAXB. An implementation of JAXB is included in Java SE 6. There are other implementations such as MOXy (I'm the tech lead) and JaxMe.
For an example see:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/GettingStarted
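A rough sketch of marshalling a POJO straight onto a socket's OutputStream with JAXB; the Message class, host, and port below are placeholders:

import java.net.Socket;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
class Message {
    private String body;
    public String getBody() { return body; }
    public void setBody(String body) { this.body = body; }
}

public class SocketXmlExample {
    public static void main(String[] args) throws Exception {
        Message message = new Message();
        message.setBody("hello");

        Marshaller marshaller = JAXBContext.newInstance(Message.class).createMarshaller();
        // Write the XML directly to the socket's OutputStream; no intermediate String is built.
        try (Socket socket = new Socket("localhost", 9000)) {
            marshaller.marshal(message, socket.getOutputStream());
        }
    }
}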
To generate XML you can use the DOM implementation provided by any XML DOM parser and generator.
Here is a nice tutorial. But for generation only, try to use some small and lightweight parsers, e.g. [tinyxml][2] or [qdparcer][3], because Xerces and the others are going to be heavyweight for that. But if parsing is also involved, libxml or Xerces will be a good choice because they provide a nice SAX implementation for parsing, though you need to have a schema defined for your data. Also, try to serialize the data before sending so you can avoid other problems.
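As a rough sketch of DOM-based generation with the parser in the JDK (element names are placeholders; the output stream could just as well be a socket's OutputStream):

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DomGenerateExample {
    public static void main(String[] args) throws Exception {
        // Build the document in memory.
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element root = doc.createElement("data");
        doc.appendChild(root);
        Element node = doc.createElement("node");
        node.setTextContent("value");
        root.appendChild(node);

        // Serialize it to any OutputStream (System.out here).
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        transformer.transform(new DOMSource(doc), new StreamResult(System.out));
    }
}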
Out of all the libraries for inputting and outputting XML with Java, in which circumstances is commons-digester the tool of choice?
From the Digester wiki:
Why use Digester?
Digester is a layer on top of the SAX xml parser API to make it easier to process xml input. In particular, digester makes it easy to create and initialise a tree of objects based on an xml input file.
The most common use for Digester is to process xml-format configuration files, building a tree of objects based on that information.
Note that digester can create and initialise true objects, ie things that relate to the business goals of the application and have real behaviours. Many other tools have a different goal: to build a model of the data in the input XML document, like a W3C DOM does but a little more friendly.
and
And unlike tools that generate classes, you can write your application's classes first, then later decide to use Digester to build them from an xml input file. The result is that your classes are real classes with real behaviours, that happen to be initialised from an xml file, rather than simple "structs" that just hold data.
As an example of what it's NOT used for:
If, however, you are looking for a direct representation of the input xml document, as data rather than true objects, then digester is not for you; DOM, jDOM or other more direct binding tools will be more appropriate.
So, Digester will map XML directly into Java objects. In some cases that's more useful than having to read through the tree and pull out options yourself.
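A rough sketch of that mapping, assuming Commons Digester 3; the Server class and the element/attribute names are made up for illustration:

import java.io.File;
import org.apache.commons.digester3.Digester;

// A plain application class that Digester will create and populate.
class Server {
    private String name;
    private int port;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getPort() { return port; }
    public void setPort(int port) { this.port = port; }
}

public class DigesterExample {
    public static void main(String[] args) throws Exception {
        // For XML like: <server name="web1"><port>8080</port></server>
        Digester digester = new Digester();
        digester.addObjectCreate("server", Server.class);        // create a Server for each <server>
        digester.addSetProperties("server");                     // map attributes to bean properties
        digester.addBeanPropertySetter("server/port", "port");   // map child element text to a property
        Server server = digester.parse(new File("server.xml"));
        System.out.println(server.getName() + ":" + server.getPort());
    }
}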
My first take would be "never"... but perhaps it has its place. I agree with eljenso that it has been surpassed by competition.
So for good, efficient, and simple object binding/mapping, JAXB is much better, and so is XStream. Both are more convenient and even faster.
EDIT 2019: also consider Jackson XML, which is similar to JAXB in approach but uses Jackson annotations.
If you want to create and initialize "true" objects from XML, use a decent bean container, like the one provided by Spring.
Also, reading in the XML and processing it yourself using XPath, or using Java/XML binding tools like Castor, are good and maybe more standard alternatives.
I have worked with the Digester when using Struts, but it seems that it has been surpassed by other tools and frameworks for the possible uses it has.