I need to decide on which one to use. My case is pretty simple. I need to convert a simple POJO/Bean to XML, and then back. Nothing special.
One thing I am looking for is it should include the parent properties as well. Best would be if it can work on super type, which can be just a marker interface.
If anyone can compare these two with pros and cons, and note which things are missing in which one. I know that XStream supports JSON too, that's a plus. But Simple looked simpler at a glance, if we set JSON aside. What's the future of Simple in terms of development and community? XStream is quite popular, I believe; even the word "XStream" hits many threads on SO.
Thanks.
Just from reading the documentation (I'm facing down the same problem you are, but haven't tried either way yet; take this with a grain of salt):
XSTREAM
Very, very easy to Google. Examples, forum posts, and blog posts about it are trivial to find.
Works out of the box. (May need more tweaking, of course, but it'll give you something immediately.)
Converting a variable to an attribute requires creating a separate converter class, and registering that with XStream. (It's not hard for simple values, but it is a little extra work.)
Doesn't handle versioning at all, unless you add in XMT (another library); if the XML generated by your class changes, it won't deserialize at all. (Once you add XMT, you can alter your classes however you like, and have XStream handle it fine, as long as you create an increasing line of incremental versioning functions.)
All adjustments require you to write code, either to implement your own (de)serialization functions, or calling XStream functions to alter the (de)serialization techniques used.
Trivial syntax note: you need to cast the output of the deserializer to your class (a minimal round-trip sketch follows below).
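To make that concrete, here is a minimal XStream round trip. Take it as a sketch: the Person bean is made up, and newer XStream versions may additionally require you to whitelist the class before deserializing.
import com.thoughtworks.xstream.XStream;

public class XStreamRoundTrip {

    // Hypothetical bean; no annotations are required on it.
    public static class Person {
        String name;
        int age;
    }

    public static void main(String[] args) {
        XStream xstream = new XStream();
        xstream.alias("person", Person.class);            // optional: nicer element name
        // xstream.allowTypes(new Class[]{Person.class}); // may be needed on recent XStream versions

        Person p = new Person();
        p.name = "Alice";
        p.age = 30;

        String xml = xstream.toXML(p);                    // object -> XML
        Person copy = (Person) xstream.fromXML(xml);      // XML -> object, note the cast
        System.out.println(xml);
        System.out.println(copy.name);
    }
}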
SIMPLE
The home page is the only reliable source of information; it lists about a half-dozen external articles, and there's a mailing list, but you won't find much about it out in the wild Internet.
Requires annotating your code before it works.
It's easy to make a more compact XML file using attributes instead of XML nodes for every property.
Handles versioning by being non-strict in parsing whenever the class is right but the version is different (i.e., if you added two fields and removed one since the last version, it'll ignore the removed field and not throw an exception, but it won't set the added fields). Like XStream, it doesn't seem to have a way to migrate data from one version to the next, but unlike XStream, there's no external library to step in and handle it. Presumably, the way to handle this is with some external function (and maybe a "version" variable in your class?), so you do
Stuff myRestoredStuff = serializer.read(Stuff.class, file);
myRestoredStuff.sanityCheck();
Commonly-used (de)serializing adjustments are made by adding/editing annotations, but there's support for writing your own (de)serialization functions to override the standard methods if you need to do something woolly.
Trivial syntax note: you need to pass the restored object's class into the deserializer, but you don't need to cast the result (see the sketch below).
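For comparison, here is roughly what the Simple equivalent looks like. Again, the Person bean is invented; the annotations and the Persister are the real API.
import java.io.File;
import org.simpleframework.xml.Attribute;
import org.simpleframework.xml.Element;
import org.simpleframework.xml.Root;
import org.simpleframework.xml.Serializer;
import org.simpleframework.xml.core.Persister;

public class SimpleRoundTrip {

    @Root(name = "person")
    public static class Person {                   // hypothetical bean
        @Attribute private int id;                 // stored as an XML attribute
        @Element private String name;              // stored as a child element

        Person() { }                               // used when deserializing
        Person(int id, String name) { this.id = id; this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        Serializer serializer = new Persister();
        File file = new File("person.xml");

        serializer.write(new Person(1, "Alice"), file);          // object -> XML file
        Person restored = serializer.read(Person.class, file);   // class passed in, no cast
        System.out.println(restored.name);
    }
}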
Why not use JAXB instead?
100% schema coverage
Huge user base
Multiple implementations (in case you hit a bug in one)
Included in Java SE 6, compatible with JDK 1.5
Binding layer for JAX-WS (Web Services)
Binding layer for JAX-RS (Rest)
Compatible with JSON (when used with libraries such as Jettison)
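If it helps, here is roughly what a minimal JAXB round trip looks like (the Person bean is made up; JAXB maps public fields and getter/setter pairs by default):
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;

public class JaxbRoundTrip {

    @XmlRootElement
    public static class Person {     // hypothetical bean; JAXB uses the no-arg constructor
        public String name;
        public int age;
    }

    public static void main(String[] args) throws Exception {
        JAXBContext context = JAXBContext.newInstance(Person.class);

        Person p = new Person();
        p.name = "Alice";
        p.age = 30;

        StringWriter xml = new StringWriter();
        Marshaller marshaller = context.createMarshaller();
        marshaller.marshal(p, xml);                              // object -> XML

        Unmarshaller unmarshaller = context.createUnmarshaller();
        Person copy = (Person) unmarshaller.unmarshal(new StringReader(xml.toString()));
        System.out.println(copy.name);
    }
}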
Useful resources:
Comparison, JAXB & XStream
Comparison, JAXB & Simple
I'd recommend that you take a look at Simple
I would also suggest Simple; take a look at the tutorial there and decide for yourself. The mailing list is very responsive and you will always get a prompt answer to any queries.
So far I have never used the Simple framework.
Based on my experience with XStream: it worked well for XML. However, for JSON the result was not as precise as expected when I attempted to serialize a bean that contained a List of Hashtables.
Thought I'd share this here.
To get XStream to ignore missing fields (when you have removed a property):
import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.mapper.MapperWrapper;

XStream xstream = new XStream() {
    @Override
    protected MapperWrapper wrapMapper(MapperWrapper next) {
        return new MapperWrapper(next) {
            @Override
            public boolean shouldSerializeMember(Class definedIn, String fieldName) {
                // Fields XStream cannot match to the class end up "defined in" Object;
                // returning false here makes XStream silently skip them.
                if (definedIn == Object.class) {
                    return false;
                }
                return super.shouldSerializeMember(definedIn, fieldName);
            }
        };
    }
};
This can also be extended to handle versions and property renames.
Credit to Peter Voss: https://pvoss.wordpress.com/2009/01/08/xstream
One "simple" (pun intended) disadvantage of Simple and Jaxb is that they require annotating your objects before they can be serialized to XML. What happens the day you quickly want to serialize someone else's code with objects that are not annotated? If you can see that happening one day, XStream is a better fit. (Sometimes it really just boils down to simple requirements like this to drive your decisions).
I was taking a quick look at Simple while reading Stack Overflow; as an amendment to Paul Marshall's helpful post, I thought I'd mention that Simple does seem to support versioning through annotations:
http://simple.sourceforge.net/download/stream/doc/tutorial/tutorial.php#version
Simple is much slower than XStream (in serializing objects to XML).
http://pronicles.blogspot.com/2011/03/xstream-vs-simple.html
Let's say I have a simple class Person
public class Person {
    final List<String> names = Lists.newArrayList();

    public List<String> getNames() {
        return names;
    }
}
If I try to deserialise that with Jackson (2.2)
Person l = mapper.readValue(js,Person.class);
I get "Disabling Afterburner deserialization ... due to access error (type java.lang.IllegalAccessError ...".
This is because of the final names list. To solve it I set MapperFeature.ALLOW_FINAL_FIELDS_AS_MUTATORS to false.
Is this the right solution or better just to make the list non-final?
Is there a Jackson method to use collection.add methods for initialising collections?
Or maybe there is a better way. What can be suggested here?
EDIT: I now found this setting:
USE_GETTERS_AS_SETTERS (default: true) Controls whether "getters" that
return Collection or Map types can be used for "setting" values (same
as how JAXB API works with XML), so that separate "setter" method is
not needed. Even if enabled, explicit "setter" method will have
precedence over implicit getter-as-setter, if one exists.
Seems like exactly what I was looking for, and it is on by default. So why was it ignored then?
Working with immutable Objects in your application is a best practice, but on the boundaries to the (non-Java) outside world, you usually have to refrain from using them.
In most Serialization technologies, everything works fine when your Objects are "well-behaved" (mutable, with getters and setters according to the JavaBeans-standard). There's usually some way around that but in my experience it's easiest to just make the damn thing mutable, as long as you're not going to reference it from other Java Code. (if worst comes to worst, create a dedicated serialization DTO)
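If you do end up with the DTO option, it can stay very small. A sketch, assuming Jackson and the Person class from the question (the DTO name and conversion method are made up):
import java.util.ArrayList;
import java.util.List;

// Hypothetical mutable DTO used only at the serialization boundary.
public class PersonDto {

    private List<String> names = new ArrayList<String>();

    public List<String> getNames() { return names; }
    public void setNames(List<String> names) { this.names = names; }

    // Keeps the conversion to the immutable domain object in one place.
    public Person toPerson() {
        Person person = new Person();
        person.getNames().addAll(names);
        return person;
    }
}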
I suspect that this is due to a combination of things. Specifically, can you try your approach without enabling Afterburner first? If it works, then this is an issue with the Afterburner module's handling of processing -- which is possible, since the code path involved here differs from the default one.
Make sure to use latest version; 2.3.0 was just released.
I'm trying to make an application that makes use of the Hermit OWL reasoner to reason on user input data. I've already made the mapping from OWL classes to Java classes and the other way by using the various OWLAPI methods.
The only thing left to do now is to make some kind of mapping that allows a Java program to automatically convert a lot of OWL individuals, extracted from the ontology, to the associated Java classes.
Currently I have the following in mind: a hashmap that contains the names of the OWL classes as keys and the name of the corresponding Java class as the value. When looking up a key, the class can then be instantiated through the use of Java reflection. The only downside to this approach is that it will probably be very slow?
Does anybody have a better idea to do the above?
Thanks in advance!
Tom DC
EDIT:
An example of an OWL class that I converted into a Java class (the class was too big to post here): http://pastebin.com/aEsjvDN7
As you can see in the example, I already tried to make it easier for a mapping by creating a function that looks at the OWL IRI and then decides what object it has to choose to make. This function is probably obsolete and useless when using JAXB or the hashmap.
If you need to instantiate a particular Java class starting from a match in the map, I would put builders for these classes as the values, rather than class names to be instantiated through reflection, since this gives you better flexibility and possibly better performance.
An example of such builders:
public interface BuilderClass<O, P> {
    O build(P parameter);
}

public class BuilderSpecificClass implements BuilderClass<SpecificClass, Object> {
    @Override
    public SpecificClass build(Object parameter) {
        return new SpecificClass(parameter);
    }
}
Then the map would look something like:
Map<String, BuilderClass<SpecificClass, Object>> map =
        new HashMap<String, BuilderClass<SpecificClass, Object>>();
map.put("<class_iri>", new BuilderSpecificClass());
That said, I'm not clear how your specific classes work, so there might be a better way. Can you add an example of how you built them?
Edited after Tom's extra details:
Ok, if I understand what your class is doing, you have half the approach I described already implemented.
Your class is basically wrapping sets of OWL assertion axioms, either asserted or inferred - i.e., values for your fields come either from the ontology or from a reasoner, and relate individuals with individuals or with values.
You also have methods to populate a class instance from an ontology and a reasoner; these correspond to what I proposed above as build() method, where the parameters would be the ontology and the reasoner. You can skip passing the ontology manager since an instance of OWLOntologyManager is already accessible through the ontology: ontology.getOWLOntologyManager()
What I would do here is create builders pretty much like I described and have them call your methods to populate the objects.
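A sketch of what I mean, with invented names (IndividualBuilder, PersonBuilder and the populateFrom() method all stand in for the populate logic you already have):
import org.semanticweb.owlapi.model.OWLNamedIndividual;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

// Builder whose parameters are the ontology, the reasoner and the individual to convert.
public interface IndividualBuilder<O> {
    O build(OWLOntology ontology, OWLReasoner reasoner, OWLNamedIndividual individual);
}

// Hypothetical builder for your Person class; populateFrom() stands for the method
// you already have that reads the asserted/inferred axioms into the fields.
class PersonBuilder implements IndividualBuilder<Person> {
    @Override
    public Person build(OWLOntology ontology, OWLReasoner reasoner, OWLNamedIndividual individual) {
        Person person = new Person();
        person.populateFrom(ontology, reasoner, individual);
        return person;
    }
}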
In terms of performance, it's hard to tell whether there are any serious hot spots - at a guess, there shouldn't be anything particularly expensive in this code. The ontology is the place where such problems usually arise.
What I can suggest in this class is the following:
private final String personURI = ThesisOntologyTools.PERSON_URI + "Person";
You have a few member variables which look like this one. I believe these are constants, so rather than having a copy in each of your instances, you might save memory by making them static final.
OWLDataProperty isLocationConfirmed = dataFactory.getOWLDataProperty(IRI.create(isLocationConfirmedURI));
You are creating a number of objects in a way similar to this. Notice that IRI.create() will return an immutable object, as well as dataFactory.getOWLDataProperty(), so rather than accessing the data factory each time you can reuse such objects.
Objects produced by a data factory are not linked to a specific ontology and are immutable, so you can reuse them freely across your classes to reduce the number of new objects created. A data factory might cache some of them, but others might be recreated from scratch on each call, so reducing the number of calls should improve speed and memory requirements.
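A sketch of both suggestions together (ThesisOntologyTools.PERSON_URI and the property name are taken from your class; the holder class and the exact property IRI are made up):
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLDataProperty;

// Constants and data-factory objects created once and shared by all instances.
public class OntologyConstants {

    private static final OWLDataFactory FACTORY =
            OWLManager.createOWLOntologyManager().getOWLDataFactory();

    static final String PERSON_URI = ThesisOntologyTools.PERSON_URI + "Person";

    // Assumes the property lives under the same base URI as in your class.
    static final OWLDataProperty IS_LOCATION_CONFIRMED =
            FACTORY.getOWLDataProperty(IRI.create(ThesisOntologyTools.PERSON_URI + "isLocationConfirmed"));
}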
Other than this, the approach looks fine. If memory use or speed turn out to be unacceptable, you may want to start using a profiler to pinpoint the issues, and if the hotspots are in the OWL API, please raise an issue in the OWLAPI issue tracker :-) we don't get enough performance reports from real users.
I would recommend strongly against Reflection, as powerful as it is, and recommend instead JAXB. It allows you to derive Java classes based on XML schema, and OWL would just be a specific instance of that.
Otherwise, take a look on the web. I feel like you aren't the first to want such a thing.
This question has almost certainly been asked before, but I ask it anyway because I couldn't find an answer.
Generally, is there a utility class of some sort that assists in common String manipulations associated with URL/URIs?
I'm thinking something like Java SE's URL Class, but maybe a little beefier. I'm looking for something that will let you do simple things, like:
Get a List of query string parameters
An "addParameter" method to add a query string parameter, which takes care of adding "&", "?", and "=" where necessary
Also, encoding parameter values would be ideal...
Let me know, thanks!
There isn't really (oddly enough) any standard library that does it all. There are some bits and pieces, usually buried in various util packages:
I've used http://java.net/projects/urlencodedquerystring/pages/Home to decent effect (for extraction of parameters).
Atlassian's JIRA has http://docs.atlassian.com/jira/4.2/index.html?com/atlassian/jira/util/UrlBuilder.html, which I've actually extracted from the jar and used.
On Android, http://developer.android.com/reference/android/net/Uri.Builder.html is a Uri builder that works pretty well as far as building a url with ease.
And finally, in a classic case of history repeating itself: A good library to do URL Query String manipulation in Java.
I'd really just rip out the android.net.Uri.Builder class and pair that with the urlencodedquerystring class and then carry those around with you, but this does seem like a good candidate for an Apache commons package.
I personally like UriBuilder from JAX-RS.
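A minimal sketch of the sort of thing it does (the URL and parameters are made up; you need a JAX-RS implementation on the classpath for UriBuilder to work):
import java.net.URI;
import javax.ws.rs.core.UriBuilder;

public class UriBuilderExample {
    public static void main(String[] args) {
        // queryParam handles the '?', '&' and '=' separators and encodes the values.
        URI uri = UriBuilder.fromUri("http://example.com/search")
                .queryParam("q", "fish & chips")
                .queryParam("page", 2)
                .build();
        System.out.println(uri);
    }
}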
This does not answer OP's question directly (i.e. it's not a generic, all-around library for URL manipulation), but: if you're going to be using Spring anyway, you might as well consider the ServletUriComponentsBuilder and UriComponentsBuilder classes (see here and here for javadocs).
I believe they are bundled with the spring-web dependency. IMHO, these offer quite a few convenient utility methods for working with URIs, URLs and query parameters.
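For instance, a sketch with UriComponentsBuilder (made-up URL and parameters):
import java.net.URI;
import org.springframework.web.util.UriComponentsBuilder;

public class SpringUriExample {
    public static void main(String[] args) {
        URI uri = UriComponentsBuilder
                .fromHttpUrl("http://example.com/search")
                .queryParam("q", "fish & chips")
                .queryParam("page", 2)
                .build()
                .encode()              // percent-encodes the query values
                .toUri();
        System.out.println(uri);
    }
}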
Java's Properties object hasn't changed much since pre-Java 5: it hasn't got generics support or very useful helper methods (a defined pattern for plugging in classes that process properties, or help for loading all the properties files in a directory, for example).
Has development of Properties stopped? If so, what's the current best practice for this kind of properties saving/loading?
Or have I completely missed something?
A lot of the concepts around Properties are definitely ancient and questionable. It has very poor internationalization; it adds methods that today would be handled by a generic type; it extends Hashtable, which is itself generally out of use since its synchronization is of limited value and its methods are not in harmony with the Collections classes introduced in 1.2; and many of the methods added to the Properties class essentially provide the kind of type safety that is now provided by generics.
If implemented today it would probably be a special implementation of a Map<String, String>, and certainly support better encoding in the properties file.
That being said, there isn't really a replacement that doesn't add complexity. Sure, the java.util.prefs.Preferences API is the "new and improved" option, but it adds a layer of complexity that is well beyond what is needed for many use cases. Just using XML is also an option (which at least fixes the internationalization issues), but a Properties object often fits the needs just fine, at which point use it.
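For reference, a minimal Preferences sketch (the keys and values are made up):
import java.util.prefs.Preferences;

public class PreferencesExample {
    public static void main(String[] args) {
        // Stored in an OS-specific backing store (registry, ~/.java, ...), not in a file you manage.
        Preferences prefs = Preferences.userNodeForPackage(PreferencesExample.class);

        prefs.put("theme", "dark");
        prefs.putInt("fontSize", 12);

        String theme = prefs.get("theme", "light");   // second argument is the default
        int fontSize = prefs.getInt("fontSize", 10);
        System.out.println(theme + " / " + fontSize);
    }
}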
It's still a viable solution for simple configuration requirements. It doesn't need generics support because property keys and values are inherently Strings, stored in flat text files. If you need (un)marshalling or serialization of objects, Properties isn't the right approach. The preferred method is now java.util.prefs.Preferences for anything beyond even moderately sophisticated configuration needs.
It does what it needs to do. It's not that hard to write support for reading in all the properties files in a directory. I would say that's not a common use-case, so I don't see that as something that needs to be in the JDK.
Also, it has changed slightly since pre-Java 5: the Javadoc says it now extends Hashtable<Object, Object> and implements Map<Object, Object>.
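To back that up, a sketch of the "load everything in a directory" helper (the class and method names are invented):
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class DirectoryProperties {

    // Loads every *.properties file in a directory into a single Properties object.
    public static Properties loadAll(File dir) throws IOException {
        Properties merged = new Properties();
        File[] files = dir.listFiles();
        if (files == null) {
            return merged;                       // not a directory or not readable
        }
        for (File file : files) {
            if (file.getName().endsWith(".properties")) {
                InputStream in = new FileInputStream(file);
                try {
                    merged.load(in);             // later files override earlier keys
                } finally {
                    in.close();
                }
            }
        }
        return merged;
    }
}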
"it hasn't got Generics support,"
Why does it need generics support? It deals with String keys and String values.
I would not consider Java properties deprecated. It is a mature library - that's all
The dictionary structure is one of the oldest and most used structures in most programming languages (http://en.wikipedia.org/wiki/Associative_array), so I doubt it would be deprecated.
Even if were to be removed there would soon be new implementations outside of the core.
There already are external extensions; Apache Commons is a great resource that I think has helped shape Java over the years -- see http://commons.apache.org/configuration/howto_properties.html.
What is the purpose of annotations in Java? I have this fuzzy idea of them as somewhere in between a comment and actual code. Do they affect the program at run time?
What are their typical usages?
Are they unique to Java? Is there a C++ equivalent?
Annotations are primarily used by code that is inspecting other code. They are often used for modifying (i.e. decorating or wrapping) existing classes at run-time to change their behavior. Frameworks such as JUnit and Hibernate use annotations to minimize the amount of code you need to write yourself to use the frameworks.
Oracle has a good explanation of the concept and its meaning in Java on their site.
Also, are they unique to Java, is there a C++ equivalent?
No, but VB and C# have attributes which are the same thing.
Their use is quite diverse. One typical Java example, @Override, has no effect on the code, but it can be used by the compiler to generate a warning (or error) if the decorated method doesn't actually override another method. Similarly, methods can be marked deprecated.
Then there's reflection. When you reflect on a type in your code, you can access its attributes and act according to the information found there. I don't know any examples in Java, but in .NET this is used by the compiler to generate (de)serialization information for classes, determine the memory layout of structures and declare function imports from legacy libraries (among others). They also control how the IDE form designer works.
/EDIT: Attributes on classes are comparable to tag interfaces (like Serializable in Java). However, the .NET coding guidelines say not to use tag interfaces. Also, they only work on class level, not on method level.
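To illustrate the @Override point with a tiny (made-up) Java example:
class Base {
    void close() { }
}

class Derived extends Base {
    @Override
    void close() { }        // fine: really does override Base.close()

    // @Override
    // void cloze() { }     // uncommenting this turns the typo into a compile error
}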
Anders gives a good summary, and here's an example of a JUnit annotation
@Test(expected = IOException.class)
public void flatfileMissing() throws IOException {
    readFlatFile("testfiles" + separator + "flatfile_doesnotexist.dat");
}
Here the @Test annotation is telling JUnit that the flatfileMissing method is a test that should be executed, and that the expected result is a thrown IOException. Thus, when you run your tests, this method will be called and the test will pass or fail based on whether an IOException is thrown.
Java also has the Annotation Processing Tool (apt), where you not only create annotations but also decide how these annotations work on the source code.
Here is an introduction.
To see some cool stuff you can do with Annotations, check out my JavaBean annotations and annotation processor.
They're great for generating code, adding extra validations during your build, and I've also been using them for an error message framework (not yet published -- need to clear with the bosses...).
The first thing a newcomer to annotations will ask about annotations is: "What is an annotation?" It turns out that there is no answer to this question, in the sense that there is no common behavior present in all of the various kinds of Java annotations. There is, in other words, nothing that binds them together into an abstract conceptual group other than the fact that they all start with an "@" symbol.
For example, there is the @Override annotation, which tells the compiler to check that this member function overrides one in the parent class. There is the @Target annotation, which is used to specify what kinds of elements a user-defined annotation (a third type of construct with nothing in common with other kinds of annotation) can be attached to. These have nothing to do with one another except for starting with an @ symbol.
Basically, what appears to have happened is that some committee responsible for maintaining the Java language definition is gatekeeping the addition of new keywords to the Java language, so other developers are doing an end run around that by calling new keywords "annotations". And that's why it is hard to understand, in general, what an annotation is: because there is no common feature linking all annotations that could be used to put them in a conceptual group. In other words, annotations as a concept do not exist.
Therefore I would recommend studying the behavior of every different kind of annotation individually, and do not expect understanding one kind of annotation to tell you anything about the others.
Many of the other answers to this question assume the user is asking about user-defined annotations specifically, which are one kind of annotation: they define a set of integers or strings or other data, static to the class or method or variable they are attached to, that can be queried at compile time or run time. Sadly, there is no marker that distinguishes this kind of annotation from other kinds, like @interface, that do different things.
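For what it's worth, here is what a user-defined annotation of that kind looks like, read back via reflection (all the names are invented):
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// A user-defined annotation: @Target restricts where it may appear,
// @Retention(RUNTIME) keeps it available to reflection.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface Audited {
    String value() default "";
}

public class AnnotationDemo {

    @Audited("payments")
    public void transfer() { }

    public static void main(String[] args) throws Exception {
        for (Method m : AnnotationDemo.class.getDeclaredMethods()) {
            Audited audited = m.getAnnotation(Audited.class);
            if (audited != null) {
                System.out.println(m.getName() + " is audited as: " + audited.value());
            }
        }
    }
}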
By literal definition, an annotation adds notes to an element. Likewise, Java annotations are tags that we insert into source code to provide more information about the code; they associate information with the annotated program element. Besides annotations, Java programs have copious amounts of informal documentation, typically contained within comments in the source code file. But Java annotations are different from comments: they annotate the program elements directly, using annotation types to describe the form of the annotations. Java annotations present the information in a standard and structured way so that it can be used readily by processing tools.
When do you use Java's @Override annotation and why?
The link refers to a question on when one should use the override annotation (@Override).
This might help you understand the concept of annotations better. Check it out.
When it comes to EJB, using annotations is known as choosing the implicit middleware approach over the explicit middleware approach; when you use annotations you're customizing exactly what you need from the API.
For example, you need a transactional method for a bank transfer.
Without using annotations, the code will be:
transfer(Account account1, Account account2, long amount)
{
// 1: Call middleware API to perform a security check
// 2: Call middleware API to start a transaction
// 3: Call middleware API to load rows from the database
// 4: Subtract the balance from one account, add to the other
// 5: Call middleware API to store rows in the database
// 6: Call middleware API to end the transaction
}
While using annotations, your code contains no cumbersome API calls to use the middleware services. The code is clean and focused on the business logic:
transfer(Account account1, Account account2, long amount)
{
// 1: Subtract the balance from one account, add to the other
}