org.w3c.dom.NodeList doesn't extend Iterable - java

Is there any reason why the authors of the Java org.w3c.dom library chose not to support the Iterable interface? For example, the interface NodeList seems like a perfect fit for extending Iterable.

The World Wide Web consortium has defined the Document Object Model (DOM) as follows:
The Document Object Model is a platform- and language-neutral
interface that will allow programs and scripts to dynamically access
and update the content, structure and style of documents.
Its implementations for a number of languages look very much like each other, which the smart people who designed it, many years ago, thought to be a good idea.
As a result, it doesn't look like anything familiar in any language.
If you want to use an alternative to the w3c DOM that does look like a Java library, use JDOM. Or map your XML to Java objects using a mapping/binding solution, such as JAXB.
But if you need to interface with existing libraries that already use w3c DOM (like the built-in XSLT and XSD processors), then you're stuck with it. Unfortunately.
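If the JAXB route fits your problem, the binding can be as small as the following sketch. The Book class and the sample XML are invented for this example; on Java 8, JAXB ships with the JDK, while newer JDKs need javax.xml.bind (or jakarta.xml.bind) as a separate dependency.
import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;

// Illustrative JAXB binding: the XML structure maps straight onto a Java class,
// so there is no NodeList or Element handling at all.
@XmlRootElement
public class Book {
    public String title;   // bound to the <title> child element by default
    public String author;  // bound to the <author> child element by default

    public static void main(String[] args) throws Exception {
        String xml = "<book><title>Dune</title><author>Frank Herbert</author></book>";
        Book book = (Book) JAXBContext.newInstance(Book.class)
                                      .createUnmarshaller()
                                      .unmarshal(new StringReader(xml));
        System.out.println(book.title + " by " + book.author);
    }
}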
To @eis:
Yes, there is a reason you can't add an interface such as Iterable to NodeList, and that reason is that the Java binding of the Document Object Model is defined in the standard. Take NodeList: it is 100% defined in the standard, with no room for any extra interfaces.
org/w3c/dom/NodeList.java:
package org.w3c.dom;

public interface NodeList {
    public Node item(int index);
    public int getLength();
}
There is no binding in the standard for C#, but there is one for EcmaScript. I believe the IXMLDocument interfaces that you mention are also used for their EcmaScript implementation (but I could be wrong), in which case they still need to stick to the standard in terms of what methods they support and what the type hierarchy is.
The difference is that the EcmaScript binding only describes which methods should exist, while the Java binding describes the exact methods in the interface.
There is no reason, though, that in Java the class implementing NodeList can't also implement Iterable. However, if your code depended on that, it would not work with the DOM standard but only with a particular implementation.
Microsoft has never really bothered with this fine distinction, since they generally don't cater for multiple standards-compliant implementations. If you use any of the methods that Microsoft has labelled with "* Denotes an extension to the World Wide Web Consortium (W3C) DOM." in Microsoft's implementation, then you're not using the DOM standard.
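If you want for-each syntax without tying your code to a particular DOM implementation, you can wrap the NodeList from the outside instead of extending its type hierarchy. A minimal sketch (the NodeLists helper below is an illustration, not part of any standard or library):
import java.util.Iterator;
import java.util.NoSuchElementException;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

// Illustrative helper: exposes any NodeList as an Iterable<Node> using only
// the two methods the DOM standard defines (item and getLength).
public final class NodeLists {

    private NodeLists() {}

    public static Iterable<Node> iterable(final NodeList list) {
        return new Iterable<Node>() {
            public Iterator<Node> iterator() {
                return new Iterator<Node>() {
                    private int index = 0;

                    public boolean hasNext() {
                        return index < list.getLength();
                    }

                    public Node next() {
                        if (!hasNext()) {
                            throw new NoSuchElementException();
                        }
                        return list.item(index++);
                    }

                    public void remove() {
                        throw new UnsupportedOperationException();
                    }
                };
            }
        };
    }
}
Usage would then be, for example, for (Node child : NodeLists.iterable(element.getChildNodes())) { ... }, which works with any standards-compliant implementation.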

Related

Import Java Custom Method in XQuery

I am using the WebLogic Integration framework. While transforming one XML format to another using an .xq file, I want to apply some logic written in a custom Java class.
For example, XML1 has the tag: <UnitCode>XYZ</UnitCode>
Custom Java Class:
public class unitcodemapper {
    public static String getMappedUnitCode(String unitCode) {
        if ("XYZ".equals(unitCode))        // use equals(), not ==, to compare Strings
            return <<value from DB table>>;
        else
            return unitCode;
    }
}
XML2 will have a tag: <UnitCode>unitcodemapper.getMappedUnitCode(XML1/UnitCode)</UnitCode>
I cannot find any documentation or example to do this. Can someone please help in understanding how this can be done?
This is known as an "extension function". The documentation for your XQuery implementation should have a section telling you how to write such functions and plug them into the processor. (The details may differ from one XQuery processor to another, which is why I'm referring you to the manual.)
Whilst @keshlam mentions extension functions, which are indeed supported by many implementations (each with its own API), I think perhaps what you are looking for instead is Java binding from XQuery. Many implementations also support this, and they tend to use the same approach. I do not know whether WebLogic supports this or not! If it does, the trick is to start your namespace URI declaration with java: followed by the fully qualified name of a Java class; each static method of that class can then be called directly from that namespace.
You can find two examples of implementations that offer the same Java binding from XQuery functionality here:
http://exist-db.org/exist/apps/doc/xquery.xml#calling-java
http://docs.basex.org/wiki/Java_Bindings
These could serve as examples for you to try on WebLogic, to see if it is supported in the same way. However, I strongly suggest you check the WebLogic documentation, as it may take a different approach.

When is a reference to the object class required?

What is the function of the class Object in Java? All the "objects" of any user-defined class have the same function as the aforementioned class. So why did the creators of Java create this class?
In which situations should one use the class 'Object'?
Since all classes in Java are obligated to derive (directly or indirectly) from Object, it allows for a default implementation for a number of behaviours that are needed or useful for all objects (e.g. conversion to a string, or a hash generation function).
Furthermore, having all objects in the system with a common lineage allows one to work with objects in a general sense. This is very useful for developing all sorts of general applications and utilities. For example, you can build a general purpose cache utility that works with any possible object, without requiring users to implement a special interface.
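As a rough illustration of that point (the SimpleCache class below is invented for this example, not an existing utility), such a cache can store any value at all precisely because everything ultimately extends Object:
import java.util.HashMap;
import java.util.Map;

// Invented example: a minimal cache that accepts any object whatsoever,
// with no special interface required of the cached values.
public class SimpleCache {
    private final Map<String, Object> entries = new HashMap<String, Object>();

    public void put(String key, Object value) {
        entries.put(key, value);
    }

    public Object get(String key) {
        return entries.get(key);  // the caller casts the result back to the expected type
    }
}
With generics you would normally parameterise this, but the common Object root is still what makes the completely general case possible.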
Pretty much the only time that Object is used raw is when it's used as a lock object (as in Object foo = new Object(); synchronized(foo) { ... }). The ability to use an object as the subject of a synchronized block is built into Object, and there's no point in using anything more heavyweight there.
Object provides an interface with functionality that the Java language designers felt all Java objects should provide. You can use Object when you don't know the subtype of a class, and just want to treat it in a generic manner. This was especially important before the Java language had generics support.
There's an interesting post on programmers.stackexchange.com about why this choice was made for .NET, and those decisions most likely hold relevance for the Java language.
What Java implements is sometimes called a "cosmic hierarchy". It means that all classes in Java share a common root.
This has merit by itself, for use in "generic" containers. Without templates or language supported generics these would be harder to implement.
It also provides some basic behaviour that all classes automatically share, like the toString method.
Back in 1996, having this common superclass was seen as a bit of a novelty and a cool thing that helped Java become popular (although the cosmic hierarchy had its critics as well).

What's the correct way to extend the functionality of DOM elements?

After taking quite a long break from active coding I am just starting to get accustomed to Java again, so this might be considered a "newbie question". Any help is appreciated.
Consider the following scenario. I am parsing an XML document as DOM. I am using javax.xml.parsers.DocumentBuilder to obtain an org.w3c.dom.Document node and scan through its org.w3c.dom.Element nodes, and I am fine with that.
However, I would like to extend the functionality of my org.w3c.dom.Element objects. Say, I would like to have a convenient way to extract some information from the nodes by giving them some public FancyObject toFancyObject() method. What's the right way of doing this?
Considering that org.w3c.dom.Element is an interface, inheritance does not seem to be an option. Composition, on the other hand, seems quite cumbersome in this case, since it would amount to 5% new functionality and 95% delegation of the existing methods.
Also, I am aware that I could always write a static utility method to obtain my FancyObject, but I would like to avoid this solution.
You have a couple of options:
Use the user data field of the Node interface. You can attach arbitrary objects to it and build something that resembles your static variant (see the sketch after this list).
Use JDOM or DOM4J instead. These APIs are better suited for your requirements w.r.t. extending base implementation classes. For example, with JDOM you can define a custom NodeFactory that can create the customized Element implementations.
Use JAXB to unmarshal the XML into an object graph. In this case, you have almost complete freedom to implement custom behavior.
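To illustrate the first option, here is a minimal sketch using the DOM Level 3 user data methods on Node (FancyObject stands in for the questioner's own class, and the "fancy" key is arbitrary):
import org.w3c.dom.Element;

// Stand-in for the questioner's own class.
class FancyObject {
}

// Sketch of option 1: cache a computed FancyObject on the Element itself
// via the user data API (available since DOM Level 3 / Java 5).
public class FancyObjects {

    public static FancyObject getFancyObject(Element element) {
        FancyObject cached = (FancyObject) element.getUserData("fancy");
        if (cached == null) {
            cached = buildFancyObject(element);
            element.setUserData("fancy", cached, null);  // no UserDataHandler needed here
        }
        return cached;
    }

    private static FancyObject buildFancyObject(Element element) {
        // ... extract whatever information you need from the element ...
        return new FancyObject();
    }
}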

How could an idiomatic design of Serializable/Cloneable/... look like in Scala?

I wonder how different this functionality would look (and how different the implementation would be) if Scala didn't (have to) follow Java's java.io.Serializable/java.lang.Cloneable (mostly to stay compatible with Java and the tools/ecosystem around it).
Because Scala is simpler in its language design, yet enables more powerful implementation and abstraction possibilities, it is conceivable that Scala might take a different path from Java if it didn't have to shoulder the Java-compatibility burden.
I could imagine that an idiomatic implementation would use type classes, or traits with (possibly) private fields/methods (not possible in Java interfaces?), maybe carrying some standard implementation?
Or are marker interfaces still the right choice in Scala?
Serialization and cloning are both kind of special because of mutability:
Serialization, because it has to deal with cycles in the object graph; and
Cloning because... Well, the only reason to clone an object is to prevent the accidental spread of mutable state.
So, if you're willing to commit to a completely immutable domain model, you don't have object graphs as such anymore, you have object trees instead.
For a functionally-oriented approach to serialization, SBinary is what I'd probably try first. For cloning, Just Don't Do It. :)
Or are marker interfaces still the right choice in Scala?
Nope. They aren't even the right choice in Java. They should be annotations, not interfaces.
The best way to do this in idiomatic Scala is to use implicits with the effect of a type class.
This is, for example, how the Ordered trait is used:
def max[A <% Ordered[A]](a: A, b: A): A
means the same as:
def max[A](a: A, b: A)(implicit orderer: A => Ordered[A]): A
It says you can use every type A as long as it can be treated as an Ordered[A].
This has several benefits you don't have with the interface/inheritance approach of Java:
You can add an implicit Ordered definition to an existing type. You can't do that with inheritance.
You can have several implementations of Ordered for one type! This is even more flexible than type classes in Haskell, which allow only one instance per type.
In conclusion, Scala's implicits used together with generics enable a very flexible approach to defining constraints on types.
It is the same with cloneable/serializable.
You may also want to look at the Scalaz library, which adds Haskell-like type classes to Scala, such as Functor, Applicative, and Monad, and offers a rich set of implicits so that these concepts can also enrich the standard library.

Why can we call methods of interface org.w3c.dom.Document?

I don't see any class implementing the methods of the interface org.w3c.dom.Document. Then why can we (usually) call the getDocumentElement method of this interface to get the root element?
org.w3c.dom.Document is part of the XML specifications and can be implemented by many different libraries. If you want to know which exact implementation is used, try
org.w3c.dom.Document doc = <your instance>;
System.out.println(doc.getClass().getName());
at the same place where you call methods on it. That will tell you the name of the implementing class that has those methods (or whose superclass does).
The org.w3c.dom package and its classes are part of the Java API for XML Processing (JAXP). They are present to provide the Java language binding for the DOM Level 2 Core API.
The language binding merely exists to provide an interface that can be implemented by various DOM parsers. After all, different parsers will have different techniques to maintain the internal data structures that represent the DOM. Multiple JAXP parsers that comply with the DOM Core API can co-exist in the libraries available to the JVM. At runtime, only one of these will be utilized for parsing XML documents.
You can call the method once a suitable DOM parser that implements JAXP has read the contents of an XML document and has populated its internal structures to make an instance of the Document class available to you. In other words, the DOM parser is responsible for providing you with an instance of the Document object after parsing an XML document.
A few of the known implementations are Xerces and JDOM.
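To make this concrete, here is a minimal, self-contained sketch (assuming a stock JAXP setup with no third-party parser on the classpath) that lets JAXP choose the DOM implementation, parses a tiny document, and prints both the implementing class and the root element:
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// The interface is org.w3c.dom.Document; the object you actually receive is
// whatever concrete class the chosen DOM parser provides.
public class WhichDomImpl {
    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(
                new InputSource(new StringReader("<root><child/></root>")));

        System.out.println(doc.getClass().getName());              // the implementing class
        System.out.println(doc.getDocumentElement().getTagName()); // prints "root"
    }
}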
