I am using the EclipseLink JPA implementation (Entity) with the GWT 2.0 framework on the presentation layer.
Everything works properly. But when I change my JPA implementation to Hibernate, I get a serialization/deserialization exception on the GWT layer when I pass entity beans, whereas everything is fine with the EclipseLink JPA.
What is really happening? Both Hibernate and EclipseLink are JPA implementations, so why do they act differently?
What should I do to solve this exception with Hibernate? Use Hibernate4gwt?
Which JPA implementation is better for GWT?
Regards
I recommend reading the whole Using GWT with Hibernate paper; it explains very nicely why enhanced classes (whether you're using proxies or weaving) are "problematic" for GWT:
Why Hibernate objects can't be understood when they reach the browser world
...
When you take an object and turn it into a Hibernate object, the object is now enhanced to be persistent. That persistence does not come without some type of instrumentation of the object. In the case of Hibernate, the Javassist library actually replaces and rewrites the bytecode for these objects by persistent entities to make the Hibernate magic work. What this means for GWT RPC is that by the time the object is ready to be transferred over the wire, it actually isn't the same object that the compiler thought was going to be transferred, so when trying to deserialize, the GWT RPC mechanism no longer knows what the type is and refuses to deserialize it.

In fact, if you were to look deeper to the earlier call to loadAccounts(), and step into the RPC.invokeAndEncodeResponse() method, you would see that the object we're trying to deserialize has now become an ArrayList of Account types with their java.util.Set of records replaced by the org.hibernate.collection.PersistentSet type.

Similar problems arise with other persistence frameworks, such as JDO or JPA, used on Google App Engine.
...
So my understanding is that this isn't a Hibernate-specific problem: you might also run into trouble with alternative JPA implementations, including EclipseLink, if you use static or dynamic weaving (you're not forced to use weaving, but then you miss features like lazy loading or fetch groups).
The paper suggests several integration strategies for working around the issues:
Using Data Transfer Objects (argh!)
Using Dozer for Hibernate integration (an improved version of the previous approach)
Using Gilead (formerly known as Hibernate4Gwt) for Hibernate Integration
It also discusses their pros and cons, just check it out.
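To give a flavor of the Dozer approach, here is a minimal sketch; AccountDTO is a hypothetical DTO class mirroring the entity's fields:

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

// Dozer copies the Hibernate entity graph into plain DTOs, so
// PersistentSet/PersistentBag end up as regular collections
Mapper mapper = new DozerBeanMapper();
AccountDTO dto = mapper.map(hibernateAccount, AccountDTO.class);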
To sum up...
First, I don't think there is a "best" JPA implementation for GWT; they all face the same issue. If you can live without lazy loading, EclipseLink without weaving might be simpler. But you'd be somewhat burying your head in the sand: the issue would still be there, and you wouldn't be able to switch to another implementation later.
Second, while the first two "integration strategies" will work with any JPA provider, Hibernate is the only JPA implementation currently supported by Gilead (but OpenJPA and EclipseLink support is planned).
Pick your poison :)
See also
Gilead Presentation
GWT Developer Forum
Another thought: Custom Field Serializers.
Example: MyClass has a member mapped in a one-to-many relationship with YourClass:
import java.io.Serializable;
import java.util.List;
import javax.persistence.OneToMany;

public class MyClass implements Serializable {
    private List<YourClass> yourClassList;

    @OneToMany(mappedBy = "myClass")
    public List<YourClass> getYourClassList() {
        return yourClassList;
    }
}
The concrete implementation Hibernate will use is probably PersistentBag, which GWT cannot serialize, for the reasons Pascal mentioned. But GWT provides Custom Field Serializers to control the serialization. It would look something like this:
import java.util.ArrayList;
import com.google.gwt.user.client.rpc.SerializationException;
import com.google.gwt.user.client.rpc.SerializationStreamWriter;

public class MyClass_CustomFieldSerializer {
    public static void serialize(SerializationStreamWriter writer, MyClass instance) throws SerializationException {
        // copy the Hibernate-backed list into a plain ArrayList before writing
        writer.writeObject(new ArrayList<YourClass>(instance.getYourClassList()));
    }
}
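GWT also expects a matching static deserialize method in the same serializer class (which must live in the same package as MyClass). A minimal counterpart, assuming a hypothetical setYourClassList() setter on MyClass:

// inside MyClass_CustomFieldSerializer
@SuppressWarnings("unchecked")
public static void deserialize(SerializationStreamReader reader, MyClass instance) throws SerializationException {
    // read back the plain ArrayList written in serialize()
    instance.setYourClassList((List<YourClass>) reader.readObject());
}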
The advantage here is not having to mess with Gilead, Dozer, or other external libraries.
Related
I work on a big project using the Hibernate, Spring and ZK frameworks, and I want to upgrade to Hibernate 5. There are several ZK tables with DB-layer paging/filtering/sorting in the GUI. For these tables we use the approach described in https://www.zkoss.org/wiki/Small_Talks/2009/May/Paging_Sorting_with_a_filter_object, that is, the model of the table has a SearchObject (a wrapper for the query), a SearchResult (a wrapper for the result set) and a reference to the DAO. On paging/sorting/filtering the SearchObject is changed and then processed automatically by the DAO.
The problem is that the hibernate-generic-dao project (https://code.google.com/archive/p/hibernate-generic-dao/) is dead and needs to be upgraded to Hibernate 5. I am considering upgrading it myself (or at least the search and search-hibernate modules), but I am interested in whether there is a similar, living project. (Although it would be painful to use a different API in each case.)
It sounds a bit similar to Spring Data repositories, which greatly reduce boilerplate code and provide a common interface for regular CRUD/paging/sorting repository methods. The implementation is generated automatically based on naming conventions and metadata.
At the same time it is extensible down to native queries in case none of the naming conventions match.
Also the query by example functionality might be a candidate to replace your dynamic SearchObject.
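For illustration, here is a minimal sketch of such a repository; the Account entity, its fields, and the Spring Data 2.x PageRequest.of factory are assumptions for the example:

import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.repository.PagingAndSortingRepository;

// the implementation is generated from the method name: WHERE owner = ?1
public interface AccountRepository extends PagingAndSortingRepository<Account, Long> {
    Page<Account> findByOwner(String owner, Pageable pageable);
}

Paging and sorting then travel in the Pageable argument, much like your SearchObject does today:

Page<Account> page = accountRepository.findByOwner("alice",
        PageRequest.of(0, 20, Sort.by("createdDate").descending()));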
Maybe that's something for you.
The topic already states one of the key rules regarding ORMs:
Don't roll your own ORM implementation
But, I have a situation here where I'm not sure how to get our Requirements implemented properly.
To give you a bit of background, currently we are using Spring Data JPA with Hibernate as JPA Implementation and all is fine so far.
But we have separate fields which we want to "manage" automatically, a bit similar to the auditing annotations from Spring Data (@CreatedBy, @ModifiedBy, ...).
In our case this is e.g. a specific "instance" the entity belongs to.
Our application is more of a framework than an app, so other developers frequently add entities, and we want to keep it simple and intuitive.
But we do not only want to set it automatically on storage; we also want to add it as a condition to most "simple and frequent" queries (see my related question: Inject further query conditions / inject automatic entity field values when using Spring Data JPA Repositories).
Thus, I thought about building a simple layer on top of the EntityManager and its Criteria API to support at least simple queries like those below (sketched after the list):
findById(xx)
findByStringAttribute(String attribute, String value)
findByIntegerAttribute(String attribute, int value)
...
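A minimal sketch of what such a layer could look like; the instanceId attribute, its source, and the class name are made up for the example:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

// thin query layer that appends the "instance" condition to every query
public class ScopedQueries {
    private final EntityManager em;
    private final String currentInstanceId; // hypothetical tenant/instance key

    public ScopedQueries(EntityManager em, String currentInstanceId) {
        this.em = em;
        this.currentInstanceId = currentInstanceId;
    }

    public <T> List<T> findByStringAttribute(Class<T> type, String attribute, String value) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<T> query = cb.createQuery(type);
        Root<T> root = query.from(type);
        // the caller's condition plus the automatic "instance" condition
        query.select(root).where(
                cb.equal(root.get(attribute), value),
                cb.equal(root.get("instanceId"), currentInstanceId));
        return em.createQuery(query).getResultList();
    }
}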
I'm not sure if this is too broad a question, but what are your thoughts on this? Is it a reasonable idea, or should I drop it?
I am developing an application in Flex, using BlazeDS to communicate with a Java back-end, which provides persistence via JPA (EclipseLink).
I am encountering issues when passing JPA entities to Flex via BlazeDS. BlazeDS uses reflection to convert the JPA entity into an ObjectProxy (effectively a HashMap) by calling all getter methods on the entity. This includes any lazily initialised one-to-many/many-to-many relationships.
You can probably see where I am going. If I pass back a single entity, BlazeDS will call all of its one-to-many/many-to-many getters, and for each returned object with such relationships their getters will be called too. So by passing back a single JPA entity I actually end up making multiple database calls and transferring all related entries back in a single ObjectProxy instance!
My solution to date is to create a translator to convert each entity to an ObjectProxy and vice-versa. This is clearly cumbersome and there must be a better way.
Thoughts please?
As an alternative, you could consider using GraniteDS instead of BlazeDS: GraniteDS has a much more powerful data management stack than BlazeDS (it competes more with LCDS) and fully supports lazy loading for all major JPA engines: Hibernate, EclipseLink, OpenJPA, etc.
Moreover, GraniteDS has a great client-side transparent lazy loading feature and even a so-called reverse lazy-loading mechanism.
And you don't need any kind of intermediate DTOs: it serializes JPA entities as is and uses code-generated ActionScript beans on the client-side to keep their initialization states.
Unfortunately, lazy-loading is not easy to accomplish with Flash clients. There are some working solutions, like dpHibernate, but so far all the different solutions I have tested fall short of what you would expect in terms of performance and ease of use.
So in my experience, the best and most reliable solution is to always use DTOs, which has the added benefit of cleanly separating the database and view layers. This necessitates, though, that you implement either eager loading or a second server round trip to resolve your many-to-many relations, as well as a good deal of boilerplate code to copy field values between entities and DTOs.
Which one to choose depends on your use case: sometimes getting only the main object's fields might be enough; then you could simply omit the list of related objects from your DTO (transfer only the values you need for your query). Sometimes you may actually need the entire list of related entities, and then you could get it via eager loading, or by setting up a second remote object to fetch only the list.
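To illustrate the boilerplate involved, here is a trivial sketch of the copying code; the Customer entity and its fields are hypothetical:

import java.io.Serializable;

// carries only the fields the Flex view needs; the list of related
// entities is deliberately omitted (or filled only when eagerly loaded)
public class CustomerDTO implements Serializable {
    public Long id;
    public String name;

    public static CustomerDTO fromEntity(Customer entity) {
        CustomerDTO dto = new CustomerDTO();
        dto.id = entity.getId();
        dto.name = entity.getName();
        return dto;
    }
}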
EclipseLink also provides a copyObject() API that lets you specify, via a copy group, exactly which attributes you want. You could then use this copy to avoid carrying the relationships that you do not want.
If you have a detached object, you could just null out the fields that you do not want as well, or use a DTO.
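A sketch of what the copy-group approach can look like, assuming the CopyGroup API of EclipseLink 2.1+ (entity and attribute names are made up; check the API of your EclipseLink version):

import org.eclipse.persistence.jpa.JpaEntityManager;
import org.eclipse.persistence.sessions.CopyGroup;

// copy only the attributes to be sent over the wire; relationships
// that are not listed never make it into the copy
CopyGroup group = new CopyGroup();
group.addAttribute("id");
group.addAttribute("name");

JpaEntityManager jpaEm = em.unwrap(JpaEntityManager.class);
Customer copy = (Customer) jpaEm.copy(managedCustomer, group);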
I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput):
An input stream that contains a stream of values representing an instance of an SQL structured type or an SQL distinct type. This interface, used only for custom mapping, is used by the driver behind the scenes, and a programmer never directly invokes SQLInput methods.
This is quite the opposite of what I am used to doing (an approach that is also used and stable in production systems with the Oracle JDBC driver): implement SQLData and provide this implementation in a custom mapping to
ResultSet.getObject(int index, Map<String, Class<?>> mapping)
The JDBC driver will then call back into my custom type using the
SQLData.readSQL(SQLInput stream, String typeName)
method. I implement this method and read each field from the SQLInput stream. In the end, getObject() returns a correctly initialised instance of my SQLData implementation, holding all the data from the UDT.
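For illustration, a minimal sketch of this pattern; the SQL type ADDRESS_T and its attributes are hypothetical:

import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;

// maps the SQL object type ADDRESS_T to a Java class
public class Address implements SQLData {
    private String street;
    private String city;
    private String typeName;

    public String getSQLTypeName() throws SQLException {
        return typeName;
    }

    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        this.typeName = typeName;
        // attributes must be read in the order declared in the SQL type
        this.street = stream.readString();
        this.city = stream.readString();
    }

    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeString(street);
        stream.writeString(city);
    }
}

An instance is then obtained via resultSet.getObject(1, mapping), where the mapping maps "ADDRESS_T" to Address.class.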
To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way:
I can use the standard API, instead of using vendor-specific classes such as oracle.sql.STRUCT, etc.
I can generate source code from my UDTs, with appropriate getters/setters and other properties
My questions:
What do you think about my approach, implementing SQLData? Is it viable, even if the Javadoc states otherwise?
What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?
Addendum:
UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have questions similar to the above, jOOQ might provide an answer for you.
The advantage of configuring the driver so that it works behind the scenes is that the programmer does not need to pass the type map into ResultSet.getObject(...) and therefore has one less detail to remember (most of the time). The driver can also be configured at runtime using properties to define the mappings, so the application code can be kept independent of the details of the SQL-type-to-object mappings. If the application needs to support several different databases, this also allows a different mapping to be used for each database.
Your method is viable; its main characteristic is that the application code uses explicit type mappings.
In the behind-the-scenes approach, the ResultSet.getObject(int) method will use the type mappings defined on the connection rather than those passed by the application code in ResultSet.getObject(int index, Map mapping). Otherwise the approaches are the same.
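For instance, the connection-level mapping can be registered with the standard Connection.setTypeMap() method (the type and class names are hypothetical):

import java.sql.Connection;
import java.util.HashMap;
import java.util.Map;

// register the mapping once; afterwards a plain getObject(int) call
// applies it without an explicit map argument
Map<String, Class<?>> typeMap = new HashMap<>(connection.getTypeMap());
typeMap.put("ADDRESS_T", Address.class);
connection.setTypeMap(typeMap);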
Other Approaches
I have seen another approach used with JBoss 4 based on these classes:
org.jboss.ejb.plugins.cmp.jdbc.JDBCParameterSetter
org.jboss.ejb.plugins.cmp.jdbc.JDBCResultSetReader.AbstractResultSetReader
The idea is the same but the implementation is non-standard (it probably pre-dates the version of the JDBC standard defining SQLData/SQLInput).
What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?
An example of how something similar to this can be done in Hibernate/JPA is shown in this answer to another question:
Java Enums, JPA and Postgres enums - How do I make them work together?
I know what Spring does: you write implementations of its RowMapper interface. I've never used SQLData with Spring. Your post was the first time I'd ever heard of or thought about that interface.
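For completeness, a minimal sketch of such a mapper; the Person class and the column names are hypothetical:

import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;

// JdbcTemplate calls mapRow() once per row of the result set
public class PersonRowMapper implements RowMapper<Person> {
    public Person mapRow(ResultSet rs, int rowNum) throws SQLException {
        Person p = new Person();
        p.setFirstName(rs.getString("first_name"));
        p.setLastName(rs.getString("last_name"));
        return p;
    }
}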
I'm hesitating between two designs of a database project using Hibernate.
Design #1.
(1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation. A data provider implementation could access data in a database, an XML file, a service, or something else. The user of a data provider does not need to know about it.
(2) Create a database library with Hibernate. This library implements the data provider interface in (1).
The bad thing about Design #1 is that in order to hide the implementation details, I need to create two sets of data container classes. One set lives in the general data provider interface - let's call them DPI-Objects - and the other set is used in the database library, exclusively for entity/attribute mapping in Hibernate - let's call them H-Objects. In the DAO implementation, I need to read data from the database into H-Objects (via Hibernate) and then convert the H-Objects into DPI-Objects.
Design #2.
Do not create a general data provider interface. Expose H-Objects directly to components that use the database lib. So the user of the database library needs to be aware of Hibernate.
I like design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider?
Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2?
What do you think about this? Thanks for your time!
Hibernate domain objects are plain POJOs, so you won't have to create separate DPI-Objects; the H-Objects themselves can be used directly. In the DAO you can control whether they come from Hibernate or from anywhere else.
I highly recommend reading Chapter 4 "Hitting the database" of Spring in Action, 3rd edition, even if you aren't using Spring in your application. Although my second recommendation would be to use Spring :-)
The DAO pattern is a great way to keep database and ORM logic isolated in the DAO implementation, and you only need one set of entity objects. You can make that happen without Spring, it just takes more work managing your sessions and transactions.
If I understand your post, this is sort of a middle ground between Design #1 and Design #2. The H-Objects (the entities that Hibernate loads and persists) don't need any Hibernate-specific code in them at all. That makes them perfectly acceptable to use as your DPI-Objects.
I've had arguments with folks in the past who complain that the use of JPA or Hibernate Annotations exposes Hibernate specifics through the DAO interface. I personally take a more pragmatic view, since annotations are just metadata, and don't directly affect the operation of your entity classes.
If you do feel that the annotations expose too much, then you can go old school and use Hibernate Mappings instead. Then your H-Objects are 100% Hibernate free :-)
I recommend design #2. Simply construct domain objects, and let hibernate look after them. Don't write separate classes that are persisted.
Hibernate tries to hide most of the persistence business from you. You may need to add a few small annotations to your entities to help it along. But certainly don't make separate classes.
You may need some very small DAO classes. For example, if you have a Person entity, it would be fairly common practice to have a PersonDAO object that saves a person. Having said that, the code inside the DAO will be very simple, so for a really small project, it may not be worth it. For a large project, it's probably worth keeping your persistence code separate from your business logic, in case you want to use a different persistence technology later.
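A minimal sketch of such a DAO, assuming a hypothetical Person entity and a plain Hibernate SessionFactory:

import org.hibernate.SessionFactory;

// thin persistence layer: no business logic, just session plumbing
public class PersonDAO {
    private final SessionFactory sessionFactory;

    public PersonDAO(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void save(Person person) {
        sessionFactory.getCurrentSession().saveOrUpdate(person);
    }

    public Person findById(Long id) {
        return (Person) sessionFactory.getCurrentSession().get(Person.class, id);
    }
}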