I'm trying to serialize objects from a database that have been retrieved with Hibernate, and I'm only interested in the objects' actual data in its entirety (cycles included).
Now I've been working with XStream, which seems powerful. The problem with XStream is that it inspects objects too blindly: it recognizes Hibernate's PersistentCollections for what they are and serializes them with all of the Hibernate metadata included. I don't want to serialize those.
So, is there a reasonable way to extract the original Collection from within a PersistentCollection, and also to initialize all referenced data the objects might be pointing to? Or can you recommend a better approach?
(The results from Simple seem perfect, but it can't cope with such basic util classes as Calendar, and it only accepts one annotated object at a time.)
The solution described here worked well for me: http://jira.codehaus.org/browse/XSTR-226
The idea is to have a custom XStream converter/mapper for Hibernate collections, which extracts the actual collection from the Hibernate one and calls the corresponding standard converter (for ArrayList, HashMap, etc.).
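A minimal sketch of that idea, assuming XStream 1.x and Hibernate 3.x (where the lazy list class is org.hibernate.collection.PersistentList; other collection types need analogous converters):

import java.util.ArrayList;
import java.util.List;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.converters.MarshallingContext;
import com.thoughtworks.xstream.converters.collections.CollectionConverter;
import com.thoughtworks.xstream.io.HierarchicalStreamWriter;
import org.hibernate.Hibernate;
import org.hibernate.collection.PersistentList;

// Forces initialization of the lazy collection, copies its contents into a
// plain ArrayList and delegates to the standard CollectionConverter, so only
// the actual data (and none of the Hibernate metadata) ends up in the XML.
public class HibernateListConverter extends CollectionConverter {

    public HibernateListConverter(XStream xstream) {
        super(xstream.getMapper());
    }

    @Override
    public boolean canConvert(Class type) {
        return PersistentList.class == type;
    }

    @Override
    public void marshal(Object source, HierarchicalStreamWriter writer,
                        MarshallingContext context) {
        Hibernate.initialize(source);   // trigger the lazy load first
        super.marshal(new ArrayList<Object>((List<?>) source), writer, context);
    }
}

Register it with xstream.registerConverter(new HibernateListConverter(xstream)); the linked issue also shows a companion Mapper that renames the Hibernate collection classes to their java.util equivalents in the output.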
I recommend a simpler approach: use Dozer: http://dozer.sf.net. Dozer is a bean mapper; you can use it to convert, say, a PersonEJB to an object of the same class. Dozer will recursively trigger all proxy fetches through getter() calls, and will also convert source types to destination types (say, java.sql.Date to java.util.Date).
Here's a snippet:
MapperIF mapper = DozerBeanMapperSingletonWrapper.getInstance();
PersonEJB serializablePerson = mapper.map(myPersonInstance, PersonEJB.class);
Bear in mind that as Dozer walks your object tree it will trigger the proxy loading one by one, so if your object graph has many proxies you will see many queries, which can be expensive.
What generally seems to be the best way to do it, and the way I am currently doing it, is to have another layer of DTO objects. This way you can exclude data that you don't want to go over the channel, as well as limit the depth to which the graph is serialized. I currently use Dozer to map from Hibernate objects to the DTOs (Data Transfer Objects) sent to the Flex client.
It works great, with a few caveats:
It's not fast; in fact, it's downright slow. If you send a lot of data, Dozer will not perform very well. This is mostly because of the reflection involved in performing its magic.
In a few cases you'll have to write custom converters for special behavior. These work very well, but out of the box they are bi-directional. I personally had to hack the Dozer source to allow uni-directional custom converters, as sketched below.
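For reference, a sketch of such a custom converter, assuming the Dozer 5.x org.dozer.CustomConverter API (the date format and class name are made up). Dozer calls the same convert() method for both directions, so one way to make it effectively uni-directional is to hand back the existing destination value on the reverse pass:

import java.text.SimpleDateFormat;
import java.util.Date;

import org.dozer.CustomConverter;

public class DateToStringConverter implements CustomConverter {

    public Object convert(Object existingDestination, Object source,
                          Class<?> destinationClass, Class<?> sourceClass) {
        if (source == null) {
            return null;
        }
        if (source instanceof Date) {
            // forward pass: entity Date -> DTO String
            return new SimpleDateFormat("yyyy-MM-dd").format((Date) source);
        }
        // reverse pass: keep whatever is already in the destination field
        // instead of mapping the String back onto the entity
        return existingDestination;
    }
}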
Sometimes I use jsonschema2pojo to convert some JSON into a Java object, but according to these definitions I'm always confused about whether it is a VO or a DTO. I'm sure it isn't an entity, but I don't know how to classify it correctly.
The purpose is just to get the JSON into an object; after that, I manipulate that data throughout the app.
Technically, it's a DTO until you add additional business logic to the class beyond the JSON serialization annotations.
The reason I say so is that it's responsible for both the transfer and the deserialization of a JSON object.
I would say that a DTO is a POJO set up for the exclusive purpose of being transmitted to and from a data source. So if you plan on using the POJO just for transmitting to and from a data source, I would call it a DTO; that tells me what its purpose is. If the POJO is going to be used for other things beyond transmission, I would simply call it a POJO.
Typically I do not see these terms used much anymore. Now I just see POJOs, and they typically go into a package named "model" or "domain". If I see those packages in a project, I know they contain POJOs that can be used for business logic or for transmission.
Why it's probably not a VO: VOs are small, immutable objects without many fields, like coordinates or money. So not really something with multiple fields for which you would need jsonschema2pojo. Though when parsing a large JSON document, jsonschema2pojo might create many little classes that fit this definition.
EDIT: This is all subjective; I'm only offering an opinion here.
I'm trying to make some auto-magic happen in Java, using proxies to track objects and save them when a set* method is called. I started off using Java's built-in Proxy, and everything works just fine, but as far as I can tell I need an interface for every model, which is something I'm trying to avoid.
This is where CGLIB comes in: it allows me to create proxies of my models without the use of interfaces. BUT, how can I now retrieve the original object, the one I am trying to save?
The optimal solution to me would be something like the EntityManager interface used by Hibernate, where you keep your original object, but it is still tracked.
The MethodInterceptor interface specifies only one method, which takes a MethodInvocation as an argument. MethodInvocation has several methods to retrieve the objects that are being acted upon: getStaticPart() and getThis(). Have you tried calling them?
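For the CGLIB case specifically, here is a minimal sketch (class and method names are made up). Note that with CGLIB the proxy is a generated subclass, so there is no separate wrapped object to retrieve: the proxy instance itself carries the state you want to save, and MethodProxy.invokeSuper() runs the real method on it:

import java.lang.reflect.Method;

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;
import net.sf.cglib.proxy.MethodProxy;

// Flags the instance as dirty whenever a set* method is called.
public class DirtyTracker implements MethodInterceptor {

    private boolean dirty;

    public Object intercept(Object obj, Method method, Object[] args,
                            MethodProxy proxy) throws Throwable {
        Object result = proxy.invokeSuper(obj, args); // run the real method
        if (method.getName().startsWith("set")) {
            dirty = true;                             // mark for saving
        }
        return result;
    }

    public boolean isDirty() {
        return dirty;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(Class<T> type, DirtyTracker tracker) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(type);   // requires a non-final class
        enhancer.setCallback(tracker);
        return (T) enhancer.create();   // and a no-arg constructor
    }
}

Usage would be something like Person p = DirtyTracker.wrap(Person.class, tracker); every setter call on p runs the real setter and marks the tracker dirty, and p itself is the object you persist.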
As a shameless plug, you could actually use Hibernate together with XStream. Here's my blog post about an XStream persister. In that case, though, XStream is used to save fields in XML format in a database.
I am developing an application in Flex, using Blaze DS to communicate with a Java back-end, which provides persistence via JPA (Eclipse Link).
I am encountering issues when passing JPA entities to Flex via Blaze DS. Blaze DS uses reflection to convert the JPA entity into an ObjectProxy (effectively a HashMap) by calling all getter methods on the entity. This includes any lazy-initialised one/many-to-many relationships.
You can probably see where I am going. If I pass back a single JPA entity, this will call all one/many-to-many getters on that object; for each returned object that has one/many-to-many relationships of its own, those will be called too. As such, by passing back a single JPA entity I actually end up making multiple database calls and returning all related entities inside a single ObjectProxy instance!
My solution to date is to create a translator to convert each entity to an ObjectProxy and vice-versa. This is clearly cumbersome and there must be a better way.
Thoughts please?
As an alternative, you could consider using GraniteDS instead of BlazeDS: GraniteDS has a much more powerful data management stack than BlazeDS (it competes more with LCDS) and fully supports lazy-loading for all major JPA engines: Hibernate, EclipseLink, OpenJPA, etc.
Moreover, GraniteDS has a great client-side transparent lazy loading feature and even a so-called reverse lazy-loading mechanism.
And you don't need any kind of intermediate DTOs: it serializes JPA entities as is and uses code-generated ActionScript beans on the client-side to keep their initialization states.
Unfortunately, lazy-loading is not easy to accomplish with Flash clients. There are some working solutions, like dpHibernate, but so far all the different solutions I have tested fall short of what you would expect in terms of performance and ease of use.
So in my experience, the best and most reliable solution is to always use DTOs, which adds the benefit of cleanly separating the database and view layers. This necessitates, though, that you implement either eager loading or a second server round trip to resolve your many-to-many relations, as well as a good deal more boilerplate code to copy the field values between entity and DTO.
Which one to choose depends on your use case: Sometimes getting only the main object's fields might be enough, then you could simply omit the List of related objects from your DTO (transfer only those values you need for your query). Sometimes you may actually need the entire list of related entities, and then you could get it via eager loading, or by setting up a second remote object to find only the list.
EclipseLink also provides a copyObject() API that lets you define a copy group containing exactly the attributes you want. You can then use this copy to avoid including the relationships that you do not want.
If you have a detached object, you could just null out the fields that you do not want as well, or use a DTO.
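A sketch of what that can look like; note this assumes the newer EclipseLink API, where the operation is exposed as Session.copy(Object, AttributeGroup), and the entity and attribute names are hypothetical:

import org.eclipse.persistence.jpa.JpaHelper;
import org.eclipse.persistence.sessions.CopyGroup;
import org.eclipse.persistence.sessions.Session;

// Copy only the listed attributes; lazy relationships left out of the
// group are never traversed, so no extra queries are triggered.
CopyGroup group = new CopyGroup();
group.addAttribute("id");
group.addAttribute("name");
group.setShouldResetPrimaryKey(false);

Session session = JpaHelper.getEntityManager(em).getServerSession();
Person detachedCopy = (Person) session.copy(managedPerson, group);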
I have had similar questions and concerns as to how to convert between Hibernate entities and data transfer objects to be returned by a web service as are discussed in this question:
Is using data transfer objects in ejb3 considered best practice
One of the factors mentioned here is that if the domain model changes, a set of DTOs will protect consumers in the case of a web service.
Even though it seems like it will add a substantial amount of code to my project, this reasoning seems sound.
Is there a good design pattern that I can use to convert a Hibernate entity (which implements an interface) to a DTO that implements the same interface?
So, assuming both of the following implement Book, I would need to convert a BookEntity.class to a BookDTO.class so that I can let JAXB serialize and return it.
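For concreteness, one simple pattern is a copy constructor on the DTO that reads only through the shared interface, so any Book implementation (including the Hibernate entity) can be converted without reflection. All names here are hypothetical:

// Book.java
public interface Book {
    String getTitle();
    String getIsbn();
}

// BookDTO.java
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class BookDTO implements Book {

    private String title;
    private String isbn;

    public BookDTO() {              // no-arg constructor required by JAXB
    }

    public BookDTO(Book source) {   // conversion: works for BookEntity too
        this.title = source.getTitle();
        this.isbn = source.getIsbn();
    }

    public String getTitle() { return title; }
    public String getIsbn() { return isbn; }
    public void setTitle(String title) { this.title = title; }
    public void setIsbn(String isbn) { this.isbn = isbn; }
}

Returning the serializable object is then just return new BookDTO(bookEntity);.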
Again, this whole prospect seems dubious to me, but if there are good patterns out there for helping to deal with this conversion, I would love to get some insight.
Is there perhaps some interesting way to convert via reflection? Or a 'builder' pattern that I'm not thinking of?
Should I just ignore the DTO pattern and pass entities around?
Should I just ignore the DTO pattern and pass entities around?
My preference is usually "yes". I don't like the idea of parallel hierarchies created just for the sake of architectural or layer purity.
The original reason for the DTO pattern was excessive chattiness in EJB 1.0 and 2.0 apps when passing entity EJBs to the view tier. The solution was to put the entity bean state into a DTO.
Another reason that's usually given for creating DTOs is to prohibit modification by the view layer. DTOs are immutable objects in that case, with no behavior. They do nothing but ferry data to the view layer.
I would argue that DTO is a Core J2EE pattern that's become an anti-pattern.
I realize that some people would disagree. I'm simply offering my opinion. It's not the only way to do it, nor necessarily the "right" way. It's my preference.
There needs to be a contrarian view amongst all the jolly kicking of the DTO.
tl;dr - It is sometimes still useful.
The advantage of the DTO is that you don't have to add a zillion annotations to your domain classes.
You start with @Entity. Not so bad. But then you need JAXB, so you add @XmlElement etc.; and then you need JSON, so you add things like @JsonManagedReference for Jackson to do the right thing with relationships; then you add etc. etc. etc. ad infinitum.
Pretty soon your POJO ain't so plain any more. Read about "domain driven design" sometime.
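To illustrate the pile-up on a made-up entity (Pet and the field names are invented for the example):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import com.fasterxml.jackson.annotation.JsonManagedReference;

// One "plain" domain class, now wearing JPA, JAXB and Jackson annotations.
@Entity
@XmlRootElement
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(mappedBy = "owner", fetch = FetchType.LAZY)
    @XmlElement
    @JsonManagedReference   // paired with @JsonBackReference on Pet.owner
    private List<Pet> pets;

    // getters/setters omitted
}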
In addition you can "filter" some properties that you don't want the view to know about.
We should not forget that entity objects are not easy to handle while they are in a managed state, which makes passing them to GUI forms problematic. To be more precise, child objects have to be fetched eagerly, and this cannot be done once you are out of the session, causing exceptions. So they either have to be evicted (detached) from the entity manager, or they have to be converted to appropriate DTOs. Unless of course there is a pattern I am not aware of, which I would be very glad to learn about.
To quickly create a "look-alike" DTO without a bunch of duplicate get/set code, you can use BeanUtils.copyProperties. That function helps you quickly copy the data from your DAO to your DTO class. Just remember that more than one common library provides a BeanUtils.copyProperties method, and their signatures are not the same.
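For example, the two most common implementations take their arguments in opposite orders (BookDTO/bookEntity are placeholder names):

// Apache Commons BeanUtils: copyProperties(dest, orig)
// (declares checked exceptions, so call it inside a try/catch)
org.apache.commons.beanutils.BeanUtils.copyProperties(dto, bookEntity);

// Spring: copyProperties(source, target) -- arguments swapped
org.springframework.beans.BeanUtils.copyProperties(bookEntity, dto);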
I know this is an old question, but thought I would add an answer offering a framework to help in case someone else is tackling this problem.
Our project has JAXB annotated POJOs that are separate from the JPA annotated POJOs. Our team was debating how best to move data between the two objects (actually data structures).
Here is an option for people to consider:
We found and are experimenting with Dozer, which handles (1) same-name mapping, (2) XML-configured mapping, and (3) custom conversions as ways to copy data between two POJOs.
It has been very easy to use so far.
I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput):
An input stream that contains a stream of values representing an instance of an SQL structured type or an SQL distinct type. This interface, used only for custom mapping, is used by the driver behind the scenes, and a programmer never directly invokes SQLInput methods.
This is quite the opposite of what I am used to doing (which is also used and stable in production systems with the Oracle JDBC driver): implement SQLData and provide this implementation in a custom mapping to
ResultSet.getObject(int index, Map mapping)
The JDBC driver will then call-back on my custom type using the
SQLData.readSQL(SQLInput stream, String typeName)
method. I implement this method and read each field from the SQLInput stream. In the end, getObject() will return a correctly initialised instance of my SQLData implementation holding all data from the UDT.
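A minimal sketch of that pattern for a hypothetical SQL type ADDRESS_TYPE with two attributes; the fields must be read from the stream in the order the attributes are declared in the SQL type definition:

import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;

public class Address implements SQLData {

    private String sqlTypeName;
    public String street;
    public String city;

    public String getSQLTypeName() {
        return sqlTypeName;
    }

    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        this.sqlTypeName = typeName;
        this.street = stream.readString();  // attribute order matters
        this.city = stream.readString();
    }

    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeString(street);
        stream.writeString(city);
    }
}

// Explicit mapping at the call site, as described above:
// Map<String, Class<?>> mapping = new HashMap<>();
// mapping.put("ADDRESS_TYPE", Address.class);
// Address a = (Address) resultSet.getObject(1, mapping);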
To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way:
I can use the standard API, instead of using vendor-specific classes such as oracle.sql.STRUCT, etc.
I can generate source code from my UDTs, with appropriate getters/setters and other properties
My questions:
What do you think about my approach, implementing SQLData? Is it viable, even if the Javadoc states otherwise?
What other ways of reading UDT's in Java do you know of? E.g. what does Spring do? what does Hibernate do? What does JPA do? What do you do?
Addendum:
UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have similar questions like the above, jOOQ might provide an answer to you.
The advantage of configuring the driver so that it works behind the scenes is that the programmer does not need to pass the type map into ResultSet.getObject(...) and therefore has one less detail to remember (most of the time). The driver can also be configured at runtime using properties to define the mappings, so the application code can be kept independent of the details of the SQL type to object mappings. If the application could support several different databases, this allows different mappings to be supported for each database.
Your method is viable; its main characteristic is that the application code uses explicit type mappings.
In the behind the scenes approach the ResultSet.getObject(int) method will use the type mappings defined on the connection rather than those passed by the application code in ResultSet.getObject(int index, Map mapping). Otherwise the approaches are the same.
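A sketch of that behind-the-scenes configuration, reusing the hypothetical ADDRESS_TYPE/Address mapping from above:

// Register the mapping once on the Connection...
java.util.Map<String, Class<?>> typeMap =
        new java.util.HashMap<String, Class<?>>(connection.getTypeMap());
typeMap.put("ADDRESS_TYPE", Address.class);
connection.setTypeMap(typeMap);

// ...after which a plain getObject(int) uses it implicitly:
Address a = (Address) resultSet.getObject(1);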
Other Approaches
I have seen another approach used with JBoss 4 based on these classes:
org.jboss.ejb.plugins.cmp.jdbc.JDBCParameterSetter
org.jboss.ejb.plugins.cmp.jdbc.JDBCResultSetReader.AbstractResultSetReader
The idea is the same but the implementation is non-standard (it probably pre-dates the version of the JDBC standard defining SQLData/SQLInput).
What other ways of reading UDT's in Java do you know of? E.g. what does Spring do? what does Hibernate do? What does JPA do? What do you do?
An example of how something similar to this can be done in Hibernate/JPA is shown in this answer to another question:
Java Enums, JPA and Postgres enums - How do I make them work together?
I know what Spring does: you write implementations of their RowMapper interface. I've never used SQLData with Spring. Your post was the first time I'd ever heard of or thought about that interface.
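For comparison, the Spring idiom looks roughly like this (reusing the hypothetical Address class from above); the object is built from plain column reads, with no driver-level type mapping involved:

import java.sql.ResultSet;
import java.sql.SQLException;

import org.springframework.jdbc.core.RowMapper;

public class AddressRowMapper implements RowMapper<Address> {

    public Address mapRow(ResultSet rs, int rowNum) throws SQLException {
        Address address = new Address();
        address.street = rs.getString("street");
        address.city = rs.getString("city");
        return address;
    }
}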