I work on a big project using the Hibernate, Spring and ZK frameworks, and I want to upgrade to Hibernate 5. There are several ZK tables with DB-layer paging/filtering/sorting in the GUI. For these tables we use the approach described in https://www.zkoss.org/wiki/Small_Talks/2009/May/Paging_Sorting_with_a_filter_object, that is, the model of the table has a SearchObject (wrapper of the query), a SearchResult (wrapper of the resultset) and a reference to the DAO. When paging/sorting/filtering in the GUI, the SearchObject is updated and then processed by the DAO automatically.
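For illustration, the interaction in our table models looks roughly like this (a minimal sketch only; the Order entity and orderDao are made-up names, and the Search/SearchResult calls follow the hibernate-generic-dao API as I understand it):

// Sketch: Search and SearchResult come from hibernate-generic-dao's "search" module.
Search search = new Search(Order.class);
search.setFirstResult(pageIndex * pageSize);            // paging driven by the ZK grid
search.setMaxResults(pageSize);
search.addFilterLike("customerName", filterText + "%"); // filtering
search.addSort("orderDate", true);                      // sorting (descending)

SearchResult<Order> result = orderDao.searchAndCount(search);
List<Order> rows = result.getResult();   // rows for the current page
int total = result.getTotalCount();      // total count for the paginator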
The problem is that the hibernate-generic-dao project (https://code.google.com/archive/p/hibernate-generic-dao/) is dead and would need to be upgraded to Hibernate 5. I am considering upgrading it myself (or at least the search and search-hibernate modules), but I am interested in whether there is a similar, actively maintained project. (Although it would be painful to use a different API in each case.)
This sounds a bit similar to Spring Data repositories, which greatly reduce boilerplate code and provide a common interface for regular CRUD/paging/sorting repository methods. The implementation is generated automatically based on naming conventions and metadata.
At the same time it is extensible down to native queries in case none of the naming conventions match.
The Query by Example functionality might also be a candidate to replace your dynamic SearchObject.
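For illustration, a repository along these lines covers derived queries, native queries, and Query by Example (the Customer entity and its properties are made up, and this assumes Spring Data JPA 2.x):

import java.util.List;
import org.springframework.data.domain.Example;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.data.domain.Sort;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface CustomerRepository extends JpaRepository<Customer, Long> {

    // Implementation generated from the method name; paging and sorting via Pageable
    Page<Customer> findByLastNameContaining(String lastName, Pageable pageable);

    // Escape hatch when no naming convention fits
    @Query(value = "SELECT * FROM customer WHERE last_name LIKE :pattern", nativeQuery = true)
    List<Customer> findByLastNamePattern(@Param("pattern") String pattern);
}

// Query by Example, roughly what could replace a dynamic SearchObject:
Customer probe = new Customer();
probe.setLastName("Smith");
Page<Customer> page = customerRepository.findAll(
        Example.of(probe), PageRequest.of(0, 20, Sort.by("lastName")));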
Maybe that's something for you.
Related
We have a (possibly large) custom data structure implemented in Java (8+). It has a simple and optimal API for querying pieces of data. The logical structure is roughly similar to an RDBMS (it has e.g. relations, columns, primary keys, and foreign keys), but there is no SQL driver.
The main goal is to access the data via ORM (mapping logical entities to JPA annotated beans). It would be nice if we could use JPQL. Hibernate is preferred but other alternatives are welcome too.
What is the simplest way to achieve this? Which are the key parts of such an implementation?
(P.S. Directly implementing SessionImplementor, EntityManagerImplementor, etc. seems too complicated.)
You have two possibilities.
Implement a JDBC-compliant driver for your system, so you can use a JPA implementation such as Hibernate "directly" (although you may need to create a custom dialect for your system).
Implement the JPA specification directly, like ObjectDB does, which bypasses the need to go through SQL and JDBC completely.
The latter one is probably easier, but you'd still need to implement the full JPA API. If it's a custom in-house-only system, there's very little sense in doing either one.
One idea I thought up just now, that I feel may work is this:
Use an existing database implementation like H2 and use the JPA integration with that. H2 already has JPA integration libraries, so it should be easy.
In this database, create a Java stored procedure or function and call it from your current application through JPA. See this H2 documentation on how to create a Java stored procedure or function. (You may want to explore the section "Using a Function as a Table" also.)
Define a protocol for the service methods and encapsulate it in a model class. An instance of this model class may be passed to the function/SP and responses retrieved.
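A rough sketch of what the Java function side could look like (the class, method, and alias names are invented; SimpleResultSet and CREATE ALIAS are standard H2 facilities):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;
import org.h2.tools.SimpleResultSet;

// Java side: a public static method H2 can expose as a function (or "table").
public class DataStructureBridge {

    // The leading Connection parameter is optional in H2; the rows are built
    // from the custom in-memory data structure rather than from real tables.
    public static ResultSet findEntities(Connection conn, String filter) throws SQLException {
        SimpleResultSet rs = new SimpleResultSet();
        rs.addColumn("ID", Types.BIGINT, 19, 0);
        rs.addColumn("NAME", Types.VARCHAR, 255, 0);
        // ... iterate the custom data structure and call rs.addRow(id, name) ...
        return rs;
    }
}

// Registering and calling it through JPA (illustrative):
//   em.createNativeQuery("CREATE ALIAS FIND_ENTITIES FOR \"com.example.DataStructureBridge.findEntities\"").executeUpdate();
//   List<Object[]> rows = em.createNativeQuery("SELECT * FROM FIND_ENTITIES('some-filter')").getResultList();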
Caveat: I have never done this myself but I think it will work.
Edit: Here is a diagram representing the idea. Though the diagram shows H2 separately, it will most probably live in the same JVM as "Your Java/JEE application". However, since it is not necessary to use H2, I have shown it as a separate entity.
The title already states one of the key rules regarding ORM:
Don't roll your own ORM implementation.
But I have a situation here where I'm not sure how to implement our requirements properly.
To give you a bit of background, currently we are using Spring Data JPA with Hibernate as JPA Implementation and all is fine so far.
But we have separate fields which we want to "manage" automatically, a bit similar to the auditing annotations from Spring Data (@CreatedBy, @ModifiedBy, ...).
In our case this is e.g. a specific "instance" the entity belongs to.
Our application is more of a framework than an app, so other developers frequently add entities, and we want to keep it simple and intuitive.
But we not only want to set it automatically when storing, but also add it as a condition to most "simple and frequent" queries (see my related question: Inject further query conditions / inject automatic entity field values when using Spring Data JPA Repositories).
Thus, I thought about building a simple layer on top of the EntityManager and its Criteria API to support at least simple queries like the following (a rough sketch follows the list):
findById(xx)
findByStringAttribute(String attribute, String value)
findByIntegerAttribute(String attribute, int value)
...
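A very rough sketch of what I mean (the instanceId field and the way the extra condition gets injected are assumptions for the example, not a finished design):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

// Thin helper over the JPA Criteria API that always appends the
// framework-managed "instance" condition to simple queries.
public class ScopedQueries {

    private final EntityManager em;
    private final long instanceId;   // the instance the entities belong to

    public ScopedQueries(EntityManager em, long instanceId) {
        this.em = em;
        this.instanceId = instanceId;
    }

    public <T> List<T> findByAttribute(Class<T> type, String attribute, Object value) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<T> query = cb.createQuery(type);
        Root<T> root = query.from(type);
        query.where(cb.and(
                cb.equal(root.get(attribute), value),
                cb.equal(root.get("instanceId"), instanceId)));   // injected condition
        return em.createQuery(query).getResultList();
    }
}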
I'm not sure if this is too broad a question, but what are your thoughts on that? Is this a reasonable idea, or should I skip it?
I am using the EclipseLink JPA implementation (entities) with the GWT 2.0 framework on the presentation layer.
Everything works properly. But when I change my JPA implementation to Hibernate, I get a serialization/deserialization exception in the GWT layer when I pass entity beans; it is fine with EclipseLink JPA.
What really happens? Hibernate and EclipseLink are both JPA implementations, so why do they act differently?
What should I do to solve this exception with Hibernate? Use Hibernate4Gwt?
Which JPA implementation is better for GWT?
I recommend reading the whole Using GWT with Hibernate paper; it explains very nicely why enhanced classes (whether you're using proxies or weaving) are "problematic" for GWT:
Why Hibernate objects can't be understood when they reach the browser world
...
When you take an object and turn it into a Hibernate object, the object is now enhanced to be persistent. That persistence does not come without some type of instrumentation of the object. In the case of Hibernate, the Javassist library actually replaces and rewrites the bytecode for these objects by persistent entities to make the Hibernate magic work. What this means for GWT RPC is that by the time the object is ready to be transferred over the wire, it actually isn't the same object that the compiler thought was going to be transferred, so when trying to deserialize, the GWT RPC mechanism no longer knows what the type is and refuses to deserialize it.

In fact, if you were to look deeper to the earlier call to loadAccounts(), and step into the RPC.invokeAndEncodeResponse() method, you would see that the object we're trying to deserialize has now become an ArrayList of Account types with their java.util.Set of records replaced by the org.hibernate.collection.PersistentSet type.

Similar problems arise with other persistence frameworks, such as JDO or JPA, used on Google App Engine.
...
So my understanding is that this isn't a Hibernate-specific problem, and you might also run into trouble with alternative JPA implementations, including EclipseLink if you use static or dynamic weaving (you're not forced to use weaving, but then you miss features like lazy loading or fetch groups).
The paper suggests several integration strategies to work around the issues:
Using Data Transfer Objects (argh!)
Using Dozer for Hibernate integration (an improved version of the previous approach)
Using Gilead (formerly known as Hibernate4Gwt) for Hibernate Integration
It also discusses their pros and cons, just check it out.
To sum up...
First, I don't think there is a "best" JPA implementation for GWT; they all face the same issue. If you can live without lazy loading, EclipseLink without weaving might be simpler. But you'd somewhat be burying your head in the sand: the issue is still there, and you won't be able to switch to another implementation.
Second, while the first two "integration strategies" will work with any JPA provider, Hibernate is the only JPA implementation currently supported by Gilead (but OpenJPA and EclipseLink support is planned).
Pick your poison :)
See also
Gilead Presentation
GWT Developer Forum
Another thought: Custom Field Serializers.
Example: MyClass has a member mapped in a One To Many relationship with YourClass:
public class MyClass implements Serializable {

    private List<YourClass> yourClassList;

    @OneToMany(mappedBy = "myClass")
    public List<YourClass> getYourClassList() {
        return yourClassList;
    }
}
The precise implementation Hibernate will use is probably PersistentBag, which is not serializable, for the reasons Pascal mentioned. But GWT provides Custom Field Serializers to control the serialization. It would look something like this.
public class MyClass_CustomFieldSerializer {
    public static void serialize(SerializationStreamWriter writer, MyClass instance) throws SerializationException {
        // Copy the Hibernate-backed collection into a plain, serializable ArrayList
        writer.writeObject(new ArrayList<YourClass>(instance.getYourClassList()));
    }
    public static void deserialize(SerializationStreamReader reader, MyClass instance) throws SerializationException {
        // Nothing to do; the standard GWT mechanism restores the remaining fields
    }
}
The advantage here is not having to mess with Gilead/Dozer/other external libraries.
I just started working on upgrading a small component in a distributed Java application. The main application is a rather complicated applet/servlet combo running on JBoss, and it extensively uses Hibernate for its data access. The component I am working on, however, is a very straightforward data-importing service.
Basically the workflow is:
1. Listen for a network event
2. Parse the data packet, extract a set of identifiers
3. Map the identifier set to a primary key in our database
4. Parse the rest of the packet and insert items in a related table using the foreign key found in step 3
5. Repeat
The previous version of this component used a Hibernate-based DAL that is no longer usable for a variety of reasons (in particular, it is EOL), so I am in charge of replacing the data access layer for this component.
So on the one hand I think I should use Hibernate because that's what the rest of the application does, but on the other I think I should just use regular java.sql.* classes because my requirements are really straightforward and aren't expected to change any time soon.
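To give a sense of scale, steps 3 and 4 in plain JDBC would be roughly this (all table, column, and type names below are placeholders, not our real schema):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;

// Sketch of steps 3 and 4 only; Item stands in for the parsed packet entries.
void importPacket(DataSource ds, String identifier, List<Item> items) throws SQLException {
    try (Connection con = ds.getConnection()) {
        con.setAutoCommit(false);

        // Step 3: map the identifier to a primary key
        long parentId;
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id FROM parent_table WHERE external_id = ?")) {
            ps.setString(1, identifier);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) throw new SQLException("Unknown identifier: " + identifier);
                parentId = rs.getLong(1);
            }
        }

        // Step 4: insert the remaining items using the foreign key from step 3
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO child_table (parent_id, payload) VALUES (?, ?)")) {
            for (Item item : items) {
                ps.setLong(1, parentId);
                ps.setString(2, item.getPayload());
                ps.addBatch();
            }
            ps.executeBatch();
        }

        con.commit();
    }
}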
So my question is (and I understand it is subjective): at what point do you think the added complexity of using an ORM tool (in terms of configuration, dependencies...) is worth it?
UPDATE
Due to the way the data access layer for the main application was written (weird dependencies), I cannot easily reuse it; I would have to implement it myself.
Let's look at why the Spring-Hibernate combination is used: with plain JDBC we have to do a lot of work for a simple operation, like getting a connection, creating a statement, and handling the result set, and each of these steps needs a lot of exception handling. But with Spring and Hibernate you just have to write this:
public PostProfiles findPostProfilesById(long id) {
    List<?> list = getHibernateTemplate().find("from PostProfiles where id = ?", id);
    return (PostProfiles) list.get(0);
}
Everything else is taken care of by the framework. I hope this resolves your dilemma.
I think the answer really depends on your skill set. It would probably take a similar amount of time to craft a simple solution involving a handful of tables either way (Hibernate or raw JDBC) if you are comfortable with both techniques.
As I am pretty comfortable with Hibernate, I'd just choose it, as I prefer working at a higher level and not worrying about things that Hibernate handles for me. Yes, it has its own glitches, but especially for simple data models it does the job, and does it well.
The only reasons I would choose plain JDBC would be:
uber-complicated maximum-optimized SQL that is performance critical;
Hibernate being stupid and not capable of expressing what I want;
And especially since you say you are already managing other entities with Hibernate, why not keep your code in the same style everywhere?
I think you are better off using the JDBC API. From what you describe, the two operations (select foreign key from table, insert into table_2) can easily be executed with a simple stored procedure call.
The advantage of using this technique is that you can manage transactions/exceptions within your stored procedure call.
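For example, the whole lookup-and-insert could collapse to a single call like this (the procedure name and parameters are invented for illustration):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Sketch: a hypothetical stored procedure does the key lookup and the inserts,
// and owns the transaction/exception handling on the database side.
void importViaProcedure(DataSource ds, String identifier, String packetPayload) throws SQLException {
    try (Connection con = ds.getConnection();
         CallableStatement cs = con.prepareCall("{call import_packet(?, ?)}")) {
        cs.setString(1, identifier);
        cs.setString(2, packetPayload);   // e.g. the parsed items serialized as CSV/JSON
        cs.execute();
    }
}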
I'm hesitating between two designs of a database project using Hibernate.
Design #1.
(1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation. A data provider implementation could access data in a database, an XML file, a service, or something else. The user of a data provider does not need to know about it.
(2) Create a database library with Hibernate. This library implements the data provider interface in (1).
The bad thing about Design #1 is that in order to hide the implementation details, I need to create two sets of data container classes: one in the general data provider interface - let's call them DPI-Objects - and another set used in the database library, exclusively for entity/attribute mapping in Hibernate - let's call them H-Objects. In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert the H-Objects into DPI-Objects.
Design #2.
Do not create a general data provider interface. Expose H-Objects directly to components that use the database lib. So the user of the database library needs to be aware of Hibernate.
I like design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider?
Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2?
What do you think about this? Thanks for your time!
Hibernate domain objects are simple POJOs, so you won't have to create separate DPI-Objects; the H-Objects themselves can be used directly. In the DAO you can control whether they come from Hibernate or anything else.
I highly recommend reading Chapter 4 "Hitting the database" of Spring in Action, 3rd edition, even if you aren't using Spring in your application. Although my second recommendation would be to use Spring :-)
The DAO pattern is a great way to keep database and ORM logic isolated in the DAO implementation, and you only need one set of entity objects. You can make that happen without Spring, it just takes more work managing your sessions and transactions.
If I understand your post, this is sort of a middle ground between Design 1 and Design 2. The H-Objects (the entities that Hibernate loads and persists) don't need any Hibernate-specific code in them at all. That makes them perfectly acceptable to be used as your DPI-Objects.
I've had arguments with folks in the past who complain that the use of JPA or Hibernate Annotations exposes Hibernate specifics through the DAO interface. I personally take a more pragmatic view, since annotations are just metadata, and don't directly affect the operation of your entity classes.
If you do feel that the annotations expose too much, then you can go old school and use Hibernate Mappings instead. Then your H-Objects are 100% Hibernate free :-)
I recommend Design #2. Simply construct domain objects, and let Hibernate look after them. Don't write separate classes that are persisted.
Hibernate tries to hide most of the persistence business from you. You may need to add a few small annotations to your entities to help it along. But certainly don't make separate classes.
You may need some very small DAO classes. For example, if you have a Person entity, it would be fairly common practice to have a PersonDAO object that saves a person. Having said that, the code inside the DAO will be very simple, so for a really small project, it may not be worth it. For a large project, it's probably worth keeping your persistence code separate from your business logic, in case you want to use a different persistence technology later.
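For what it's worth, such a DAO can stay tiny; a sketch (assuming a plain Person entity, an injected SessionFactory, and Hibernate 5's typed Session.get):

import org.hibernate.SessionFactory;

// Minimal DAO: the Person entity itself is a plain domain object that
// Hibernate persists; no separate persistence classes are needed.
public class PersonDao {

    private final SessionFactory sessionFactory;

    public PersonDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void save(Person person) {
        sessionFactory.getCurrentSession().saveOrUpdate(person);
    }

    public Person findById(long id) {
        return sessionFactory.getCurrentSession().get(Person.class, id);
    }
}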