When a method is declared following the specific naming rules of Spring Data JPA, an implementation that runs the corresponding query is generated for you.
For example,
public interface CustomerJpaRepository extends JpaRepository<Customer, Long> {
    List<Customer> findByName(String name);
}
findByName() generates a query similar to the one below.
select * from Customer where name = ?;
I am curious about this principle. To be precise, I'm curious about the code that parses this method and turns it into a query.
I looked at the code of the SimpleJpaRepository class that implements JpaRepository, but could not find a clue. (Of course, it is possible that I simply missed it.)
In summary, when a method made up of specific keywords is declared in a JpaRepository, I am curious about the code that actually executes that method internally. More specifically, I'd like to see the code that makes this method work.
If there is no code that does this internally (which I personally doubt...), I want to know how it is implemented in detail. If there is a link or material that explains the principle or internal process, please share related references.
The parsing logic for creating queries from Spring Data repository method names currently lives mainly in the package org.springframework.data.repository.query.parser.
Basically, a repository method name string is parsed into a PartTree, which contains Parts representing defined abstract query criteria.
The PartTree can then be used to create a more specific query object, e.g. with a JpaQueryCreator, or a RedisQueryCreator, depending on the type of repository.
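To see this mechanism in action, here is a minimal sketch (assuming spring-data-commons on the classpath, and a throwaway Customer class with name and age fields) that parses a method name into a PartTree and prints the resulting parts:

import org.springframework.data.repository.query.parser.Part;
import org.springframework.data.repository.query.parser.PartTree;

public class PartTreeDemo {

    // Minimal domain class, just for the example
    static class Customer {
        String name;
        int age;
    }

    public static void main(String[] args) {
        // Parse the method name against the domain type, much like
        // Spring Data does when it bootstraps a repository proxy
        PartTree tree = new PartTree("findByNameAndAgeGreaterThan", Customer.class);

        // A PartTree iterates over OR groups, each of which iterates
        // over the AND-ed Parts (the individual criteria)
        for (PartTree.OrPart orPart : tree) {
            for (Part part : orPart) {
                System.out.println(part.getProperty().toDotPath() + " -> " + part.getType());
            }
        }
        // Prints:
        // name -> SIMPLE_PROPERTY
        // age -> GREATER_THAN
    }
}

A JpaQueryCreator would then walk that same tree to produce a CriteriaQuery for the JPA provider.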
I recommend you check the Query Creation section of the Spring docs.
It explains the rules for how a method name is converted into a query.
AppEngine only supports "TABLE_PER_CLASS" and "MAPPED_SUPERCLASS" for JPA inheritance.
Unfortunately "JOINED" and especially "SINGLE_TABLE" are not supported.
I'm wondering what the best way is to emulate SINGLE_TABLE?
My only requirements are:
1) Have separate classes like AbstractEmployee, InternalEmployee, ExternalEmployee.
2) Being able to run a query over all employees, thus resulting in both InternalEmployee and ExternalEmployee instances.
The only thing I'm thinking of is using a 'big' Employee object containing all fields?
Any other ideas?
PS: vote for proper "SINGLE_TABLE" support via http://code.google.com/p/googleappengine/issues/detail?id=8366
You could in theory use @Embedded and @Embeddable to group related fields into an object. So you would have a class that looks something like this:
@Entity
public class Employee {
    // all the common employee fields go here

    // the discriminator column on the Employee class lets you be
    // specific in your queries
    private Integer type;

    @Embedded
    private Internal internal; // has the fields that are internal

    @Embedded
    private External external; // has the fields that are external

    // equals & hashCode that compare based on the discriminator type
    // and other fields
}
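With that layout, querying across all employees stays a single query, because everything lives in one table/Kind. A sketch (the discriminator value convention is hypothetical):

// Fetch every employee, internal and external alike
List<Employee> all = em.createQuery("select e from Employee e", Employee.class)
        .getResultList();

// Or restrict via the discriminator column (assuming type == 1 means internal)
List<Employee> internal = em.createQuery(
        "select e from Employee e where e.type = :type", Employee.class)
        .setParameter("type", 1)
        .getResultList();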
The statement of what AppEngine supports and doesn't support is misleading there. AppEngine uses a property store, so any Kind can have any properties. Consequently, in principle, a Kind can contain InternalEmployee and ExternalEmployee "instances". The only thing that AppEngine JPA actually does is store all fields of a class in a single Kind object. That doesn't preclude having subtypes stored in the same Kind (with extra properties for the subtype-specific fields), which is the equivalent of "single-table".
PS: raising an issue on "AppEngine" as a whole won't get any response (look at the rest of the issues in there ;-) ), bearing in mind that the code affected here is in its own project at http://code.google.com/p/datanucleus-appengine and has its own issue tracker.
How is it possible to keep clean layers with Hibernate/ORM (or other ORMs...)?
What I mean by clean layer separation is, for example, to keep all of the Hibernate stuff in the DAO layer.
For example, when creating a big CSV export stream, we often have to do some Hibernate operations like evict to avoid OutOfMemoryError... The filling of the output stream belongs to the view, but the evict belongs to the DAO.
What I mean is that we are not supposed to put evict operations in the frontend / service, nor are we supposed to put business logic in the DAO... So what can we do in such situations?
There are many cases where you have to do some stuff like evict, flush, clear, refresh, particularly when you play a bit with transactions, large data or things like that...
So how do you do to keep clear layers separation with an ORM tool like Hibernate?
Edit: something I don't like either at work is that we have a custom abstract DAO that permits a service to pass a Hibernate criterion as an argument. This is practical, but in theory a service that calls this DAO shouldn't be aware of a criterion. I mean, we shouldn't have to import Hibernate classes into the business / view logic in any way.
Is there an answer, simple or otherwise?
If by "clean" you mean that upper layers don't know about implementations of the lower layers, you can usually apply the
Tell, don't ask principle. For your CSV streaming example, it would be something like, say:
// This is a "global" API (meaning it is visible to all layers). This is ok as
// it is a specification and not an implementation.
public interface FooWriter {
void write(Foo foo);
}
// DAO layer
public class FooDaoImpl {
...
public void streamBigQueryTo(FooWriter fooWriter, ...) {
...
for (Foo foo: executeQueryThatReturnsLotsOfFoos(...)) {
fooWriter.write(foo);
evict(foo);
}
}
...
}
// UI layer
public class FooUI {
...
public void dumpCsv(...) {
...
fooBusiness.streamBigQueryTo(new CsvFooWriter(request.getOutputStream()), ...);
...
}
}
// Business layer
public class FooBusinessImpl {
...
public void streamBigQueryTo(FooWriter fooWriter, ...) {
...
if (user.canQueryFoos()) {
beginTransaction();
fooDao.streamBigQueryTo(fooWriter, ...);
auditAccess(...);
endTransaction();
}
...
}
}
In this way you can deal with your specific ORM with freedom. The downside of this "callback" approach: if your layers are on different JVMs then it might not be very workable (in the example you would need to be able to serialize CsvFooWriter).
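For completeness, the CsvFooWriter used above might look something like this (a sketch; the column choice and the Foo accessors are hypothetical):

import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

// Writes each Foo as one CSV line; lives in the UI layer, but only
// depends on the globally visible FooWriter interface
public class CsvFooWriter implements FooWriter {

    private final Writer out;

    public CsvFooWriter(OutputStream out) {
        this.out = new OutputStreamWriter(out, StandardCharsets.UTF_8);
    }

    @Override
    public void write(Foo foo) {
        try {
            // hypothetical columns; do proper CSV escaping in real code
            out.write(foo.getId() + "," + foo.getName() + "\n");
            out.flush(); // keep memory flat while streaming
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}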
About generic DAOs: I have never felt the need; most object access patterns I have found are different enough to make a specific implementation desirable. But certainly doing layer separation and forcing the business layer to create Hibernate criteria are contradictory paths. I would specify a different query method in the DAO layer for each different query, and then I would let the DAO implementation get the results in whatever way it might choose (criteria, query language, raw SQL, ...). So instead of:
public class FooDaoImpl extends AbstractDao<Foo> {
...
public Collection<Foo> getByCriteria(Criteria criteria) {
...
}
}
public class FooBusinessImpl {
...
public void doSomethingWithFoosBetween(Date from, Date to) {
...
Criteria criteria = ...;
// Build your criteria to get only foos between from and to
Collection<Foo> foos = fooDaoImpl.getByCriteria(criteria);
...
}
public void doSomethingWithActiveFoos() {
...
Criteria criteria = ...;
// Build your criteria to filter out passive foos
Collection<Foo> foos = fooDaoImpl.getByCriteria(criteria);
...
}
...
}
I would do:
public class FooDaoImpl {
...
public Collection<Foo> getFoosBetween(Date from, Date to) {
// build and execute query according to from and to
}
public Collection<Foo> getActiveFoos() {
// build and execute query to get active foos
}
}
public class FooBusinessImpl {
...
public void doSomethingWithFoosBetween(Date from, Date to) {
...
Collection<Foo> foos = fooDaoImpl.getFoosBetween(from, to);
...
}
public void doSomethingWithActiveFoos() {
...
Collection<Foo> foos = fooDaoImpl.getActiveFoos();
...
}
...
}
Though someone could think that I'm pushing some business logic down to the DAO layer, it seems a better approach to me: changing the ORM implementation to an alternative one would be easier this way. Imagine, for example, that for performance reasons you need to read Foos using raw JDBC to access some vendor-specific extension: with the generic DAO approach you would need to change both the business and DAO layers; with this approach you would just reimplement the DAO layer.
Well, you can always tell your DAO layer to do what it needs to do when you want to. Having a method like cleanUpDatasourceCache in your DAO layer, or something similar (or even a set of these methods for different objects), is not bad practice to me.
And your service layer is then able to call that method without any assumption on what is done by the DAO under the hood. A specific implementation which uses direct JDBC calls would do nothing in that method.
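As a sketch of that idea (the interface and method names are just illustrative): the Hibernate implementation delegates to the session, the plain-JDBC one is a no-op, and the service layer never knows the difference.

// DAO contract visible to the service layer
public interface FooDao {
    // ... query methods ...

    // Hint that the DAO may use to release cached state; implementations
    // that hold no cache simply do nothing
    void cleanUpDatasourceCache();
}

// Hibernate implementation: detach everything from the session cache
public class HibernateFooDao implements FooDao {

    private final org.hibernate.Session session;

    public HibernateFooDao(org.hibernate.Session session) {
        this.session = session;
    }

    @Override
    public void cleanUpDatasourceCache() {
        session.flush(); // push pending changes to the database first
        session.clear(); // then evict all entities from the session
    }
}

// Direct-JDBC implementation: there is no first-level cache to clean
public class JdbcFooDao implements FooDao {

    @Override
    public void cleanUpDatasourceCache() {
        // intentionally a no-op
    }
}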
Usually a DAO layer to wrap the data access logic is necessary. Other times the EntityManager is all you need for CRUD operations; in those cases I wouldn't use a DAO, as it would add unnecessary complexity to the code.
How should EntityManager be used in a nicely decoupled service layer and data access layer?
If you don't want to tie your code to Hibernate you can use Hibernate through JPA instead and not bother too much about abstracting everything within your DAOs. You are less likely to switch from JPA to something else than replacing Hibernate.
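For those CRUD-only cases, using the EntityManager directly might look like this (a sketch, assuming container-managed persistence and a Customer entity):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CustomerService {

    @PersistenceContext
    private EntityManager em;

    public void register(Customer customer) {
        em.persist(customer); // plain CRUD: no DAO indirection needed
    }

    public Customer find(Long id) {
        return em.find(Customer.class, id);
    }

    public void remove(Long id) {
        Customer c = em.find(Customer.class, id);
        if (c != null) {
            em.remove(c);
        }
    }
}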
My 2 cents: I think the layer separation pattern is great as a starting point for most cases, but there is a point where we have to analyze each specific application case by case and design a more flexible solution. What I mean is, ask yourself, for example:
Is your DAO expected to be reused in another context other than exporting CSV data?
Does it make sense to have another implementation of the same DAO interface without Hibernate?
If both answers are no, maybe a little bit of coupling between persistence and data presentation is OK. I like the callback solution proposed above.
IMHO, strict implementation of a pattern sometimes has a higher cost in readability, maintainability, etc., which are the very issues we were trying to fix by adopting a pattern in the first place.
You can achieve layer separation by implementing the DAO pattern and doing all Hibernate/JDBC/JPA-related work in the DAO itself.
For example, you can specify a generic DAO interface as:
import java.io.Serializable;

public interface GenericDao<T, PK extends Serializable> {

    /** Persist the newInstance object into the database */
    PK create(T newInstance);

    /**
     * Retrieve an object that was previously persisted to the database
     * using the indicated id as primary key
     */
    T read(PK id);

    /** Save changes made to a persistent object */
    void update(T transientObject);

    /** Remove an object from persistent storage in the database */
    void delete(T persistentObject);
}
and its implementation as:
import java.io.Serializable;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

// (The article linked below also adds a FinderExecutor interface for
// dynamic finder methods; it is omitted here for brevity.)
public class GenericDaoHibernateImpl<T, PK extends Serializable>
        implements GenericDao<T, PK> {

    private final SessionFactory sessionFactory;
    private final Class<T> type;

    public GenericDaoHibernateImpl(SessionFactory sessionFactory, Class<T> type) {
        this.sessionFactory = sessionFactory;
        this.type = type;
    }

    // The current Hibernate session, typically bound to the active transaction
    private Session getSession() {
        return sessionFactory.getCurrentSession();
    }

    @SuppressWarnings("unchecked")
    public PK create(T o) {
        return (PK) getSession().save(o);
    }

    @SuppressWarnings("unchecked")
    public T read(PK id) {
        return (T) getSession().get(type, id);
    }

    public void update(T o) {
        getSession().update(o);
    }

    public void delete(T o) {
        getSession().delete(o);
    }
}
The service classes then call methods on the DAO without making any assumptions about the internal implementation of those methods.
Have a look at the GenericDao link.
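A minimal sketch of how a service might use it (the wiring is usually done by a DI container; the names here are hypothetical):

public class CustomerService {

    // Typed DAO instance, typically injected by Spring or a similar container
    private final GenericDao<Customer, Long> customerDao;

    public CustomerService(GenericDao<Customer, Long> customerDao) {
        this.customerDao = customerDao;
    }

    public Long register(Customer customer) {
        // The service neither knows nor cares that Hibernate is underneath
        return customerDao.create(customer);
    }
}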
Hibernate (whether through its Session API or a JPA EntityManager) is the DAO. The Repository pattern is, as far as I have seen, the best starting place. There is a great image over at the DDD Sample Website which I think speaks volumes about how you keep things separate.
My application layer has interfaces that are explicit business actions or values. The business rules are in the domain model and things like Hibernate live in the infrastructure. Services are defined at the domain layer as interfaces, and implemented in the infrastructure in my case. This means that for a given Foo domain object (an aggregate root in the DDD terminology) I usually get the Foo from a FooService and the FooService talks to a FooRepository which allows one to find a Foo based on some criteria. That criteria is expressed via method parameters (possibly complex object types) which at the implementation side, for example in a HibernateFooRepository, would be translated in to HQL or Hibernate criterion.
If you need batch processing, it should exist at the application level and use domain services to facilitate this. StartBatchTransaction/EndBatchTransaction. Hibernate may listen to start/end events in order to coordinate purging, loading, whatever.
In the specific case of serializing domain entities, though, I see nothing wrong with taking a set of criteria and iterating over them one at a time (from root entities).
I find that often, in the pursuit of separation, we try to make things completely general. They are not one and the same - your application has to do something, and that something can and should be expressed rather explicitly.
If you can substitute an InMemoryFooRepository where a HibernateFooRepository was previously being used, you're on the right path. The natural flow through unit and integration testing your objects encourages this when you adhere or at least try to respect the layering outlined in the image I linked above.
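As a sketch of that substitution (the interfaces and names are hypothetical), the in-memory variant is trivially swappable for the Hibernate one in tests:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Repository contract, defined in the domain layer
interface FooRepository {
    Foo find(Long id);
    void store(Foo foo);
}

// Infrastructure implementation backed by a plain map; a drop-in
// replacement for a HibernateFooRepository in unit tests
class InMemoryFooRepository implements FooRepository {

    private final Map<Long, Foo> store = new ConcurrentHashMap<>();

    @Override
    public Foo find(Long id) {
        return store.get(id);
    }

    @Override
    public void store(Foo foo) {
        store.put(foo.getId(), foo);
    }
}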
You got some good answers here; I would like to add my thoughts (by the way, this is something we need to take care of in our code as well). I would also like to focus on the issue of having Hibernate/JPA annotations on entities that you might need to use outside of your DAL (i.e., in the business logic, or even sent to the client side):
A. If you use the GenericDAO pattern for a given entity, you may find your entity annotated with Hibernate (or maybe JPA) annotations such as @Table, @ManyToOne and so on. This means your client code may contain Hibernate/JPA annotations and you would require an appropriate JAR to get it compiled, or need some other support in your client code. This is the case, for example, if you use GWT as your client (which can have support for JPA annotations in order to get entities compiled) and share the entities between the server and the client code, or if you write a Java client that performs a bean lookup using InitialContext against a Java application server (in this case you will need a JAR with the annotations on the client side as well).
B. Another approach is to work with Hibernate/JPA-annotated code at the server side and expose web services (say, a RESTful web service or SOAP). This way, the client works with an "interface" that does not expose knowledge of Hibernate/JPA (for example, in the case of SOAP, a WSDL defines the contract between the client of the service and the service itself). By breaking the architecture into a service-oriented one, you get all kinds of benefits such as loose coupling and ease of replacing pieces of code, and you can concentrate all the DAL logic in one service that serves the rest of your services, and later on replace the DAL if needed by another service.
C. You can use an "object to object" mapping framework such as Dozer to map objects of classes with Hibernate/JPA annotations to what I call "true" POJOs - i.e., Java beans with no annotations whatsoever on them.
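A sketch of that Dozer approach (the class names are hypothetical; by default Dozer copies fields by matching names):

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

public class EmployeeAssembler {

    // DozerBeanMapper is thread-safe and expensive to create, so reuse it
    private static final Mapper MAPPER = new DozerBeanMapper();

    // Maps the Hibernate/JPA-annotated entity onto a plain DTO with the
    // same field names, so the client never sees persistence annotations
    public EmployeeDto toDto(EmployeeEntity entity) {
        return MAPPER.map(entity, EmployeeDto.class);
    }
}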
D. Finally, regarding annotations - why use annotations at all? Hibernate offers hbm.xml files as an alternative way of doing the "ORM magic" - this way your classes can remain free of annotations.
E. One last point - I would like to suggest you look at the stuff we did at oVirt - you can download the code by git-cloning our repo. Under engine/backend/manager/modules/bll you will find a maven project holding our bll logic, and under engine/backend/manager/modules/dal, our DAL layer (although currently implemented with Spring-JDBC and some Hibernate experiments, you will get some good ideas on how to use it in your code). I would like to add that if you go for a similar solution, I suggest you inject the DAOs into your code, and not hold them in a singleton like we did with the getXXXDao methods (this is legacy code we should strive to remove and move to injection).
I would recommend you let the database handle the export-to-CSV operation rather than building it yourself in Java; it isn't as efficient. An ORM shouldn't really be used for those large-scale batch operations, because an ORM should only be used to manipulate transactional data.
Large scale Java batch operations should really be done by JDBC directly with transactional support turned off.
However, if you do this regularly, I recommend setting up a reporting database which is a delayed replica of the database that is not used by the application and utilizes database specific replication tools that may come with your database.
Your solution architect should be able to work with the other groups to help set this up for you.
If you really have to do it in the application tier, then using raw JDBC calls may be the better option. With raw JDBC you can perform a query to assemble the data that you require on the database side and fetch the data one row at a time then write to your output stream.
To answer your layers question - though I don't like using the word "layers" because it usually implies one thing on top of another, so I would rather use the word "components" - I have the following component groups:
application
- domain - just annotated JPA classes with no persistence logic, usually a plain JAR file, but I recommend just plopping it in as a package in the EJB rather than having to deal with class path issues
- contracts - WSDL and XSD files that define an interface between different components, be it web services or just the UI
- transaction scripts - stateless EJBs that have a transaction and persistence units injected into them and do the manipulation and persistence of the domain objects; these may implement the interfaces generated by the contracts
- UI - a separate WAR project with EJBs injected into it
database
- O/R diagram - this is the contract agreed upon by the application and data teams to ensure THE MINIMUM that the database will provide; it does not have to show everything
- DDLs - this is the database-side implementation of the O/R diagram, which will contain everything, but generally no one should care because those are implementation details
- batch - batch operations such as export or replicate
- reporting - provides queries to get business-value reports from the system
legacy
- messaging contracts - these are contracts used by messaging systems such as JMS or WS-Notifications or standard web services
- their implementation
- transformation scripts - used to transform one contract to another
It seems to me we need to take another look at the layers.
(I hope someone corrects me if I get this wrong.)
Front End/UI
Business
Service/DAO
So for the case of generating a report, the layers break down like so:
Front End/UI
- Will have a UI with a button "Get Some Report".
- The button will then call the business layer, which knows what the report is about.
- The data returned by the report generator is given any final formatting before being returned to the user.
Business
- MyReportGenerator.generateReportData() or similar will be called.
Service/DAO
- Inside the report generator, DAOs will be used. DaoLocator.getDao(Entity.class) or similar factory-type methods would be used to get the DAOs; see the sketch below. The returned DAOs will extend a common DAO interface.
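For illustration, such a locator might look like this (a sketch; a DI container would normally play this role):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Common contract all DAOs returned by the locator extend
interface Dao<T> {
    T findById(Object id);
}

// Simple factory/registry keyed by entity class; implementations are
// registered once at startup
final class DaoLocator {

    private static final Map<Class<?>, Dao<?>> REGISTRY = new ConcurrentHashMap<>();

    static <T> void register(Class<T> entityType, Dao<T> dao) {
        REGISTRY.put(entityType, dao);
    }

    @SuppressWarnings("unchecked")
    static <T> Dao<T> getDao(Class<T> entityType) {
        return (Dao<T>) REGISTRY.get(entityType);
    }
}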
Well, to get a clean separation of concerns (or, you could say, clean layer separation) you can add a service layer to your application, which lies between your front end and DAO layer.
You can put your business logic in the service layer and database-related things in the DAO layer using Hibernate.
So if you need to change something in your business logic, you can edit your service layer without changing the DAO, and if you want to change the DAO layer, you can do so without changing the actual business logic, i.e. the service layer.
I have been brushing up on my design patterns and came across a thought that I could not find a good answer for anywhere. So maybe someone with more experience can help me out.
Is the DAO pattern only meant to be used to access data in a database?
Most the answers I found imply yes; in fact most that talk or write on the DAO pattern tend to automatically assume that you are working with some kind of database.
I disagree though. I could have a DAO like follows:
public interface CountryData {
public List<Country> getByCriteria(Criteria criteria);
}
public final class SQLCountryData implements CountryData {
public List<Country> getByCriteria(Criteria criteria) {
// Get From SQL Database.
}
}
public final class GraphCountryData implements CountryData {
public List<Country> getByCriteria(Criteria criteria) {
// Get From an Injected In-Memory Graph Data Structure.
}
}
Here I have a DAO interface and 2 implementations, one that works with an SQL database and one that works with say an in-memory graph data structure. Is this correct? Or is the graph implementation meant to be created in some other kind of layer?
And if it is correct, what is the best way to abstract implementation specific details that are required by each DAO implementation?
For example, take the Criteria Class I reference above. Suppose it is like this:
public final class Criteria {
private String countryName;
public String getCountryName() {
return this.countryName;
}
public void setCountryName(String countryName) {
this.countryName = countryName;
}
}
For the SQLCountryData, it needs to somehow map the countryName property to an SQL identifier so that it can generate the proper SQL. For the GraphCountryData, perhaps some sort of Predicate Object against the countryName property needs to be created to filter out vertices from the graph that fail.
What's the best way to abstract details like this without coupling client code working against the abstract CountryData with implementation specific details like this?
Any thoughts?
EDIT:
The example I included of the Criteria class is simple enough, but consider if I want to allow the client to construct complex criteria, where they not only specify the property to filter on, but also the equality operator, logical operators for compound criteria, and the value.
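One common way to model that (a sketch of a tiny specification/composite approach; all names are illustrative) is to make the criteria a small expression tree that each DAO implementation interprets for itself:

import java.util.Arrays;
import java.util.List;

// A criterion is either a leaf comparison or a logical combination
interface Criterion {
}

enum Operator { EQ, NE, GT, LT, LIKE }

// Leaf node: property <operator> value
final class Comparison implements Criterion {
    final String property;
    final Operator operator;
    final Object value;

    Comparison(String property, Operator operator, Object value) {
        this.property = property;
        this.operator = operator;
        this.value = value;
    }
}

// Compound node: all parts must match
final class And implements Criterion {
    final List<Criterion> parts;

    And(Criterion... parts) {
        this.parts = Arrays.asList(parts);
    }
}

// Client code stays implementation-agnostic:
// Criterion c = new And(
//         new Comparison("countryName", Operator.EQ, "France"),
//         new Comparison("population", Operator.GT, 1_000_000));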
DAOs are part of the DAL (Data Access Layer) and you can have data backed by any kind of implementation (XML, RDBMS etc.). You just need to ensure that the proper instance is injected/used at runtime. DI frameworks like Spring/Guice shine in this case. Also, your Criteria interface/implementation should be generic enough so that only business details are captured (i.e., country name criteria) and the actual mapping is again handled by the implementation class.
For SQL, in your case, you can either hand-generate the SQL, generate it using a helper library like Spring, or use a full-fledged framework like MyBatis. In our project, Spring XML configuration files were used to decouple the client and the implementation; it might vary in your case.
EDIT: I see that you have raised a similar concern in the previous question. The answer still remains the same. You can add as much flexibility as you want in your interface; you just need to ensure that the implementation is smart enough to make sense of all the arguments it receives and maps them appropriately to the underlying source. In our case, we retrieved the value object from the business layer and converted it to a map in the SQL implementation layer which can be used by MyBatis. Again, this process was pretty much transparent and the only way for the service layer to communicate with DAO was via the interface defined value objects.
No, I don't believe it's tied to only databases. The acronym is for Data Access Object, not "Database Access Object" so it can be usable with any type of data source.
The whole point of it is to separate the application from the backing data store so that the store can be modified at will, provided it still follows the same rules.
That doesn't just mean turfing Oracle and putting in DB2. It could also mean switching to a totally non-DBMS-based solution.
OK, this is a bit of a philosophical question, so I'll tell you what I think about it.
DAO usually stands for Data Access Object. The source of data is not always a database, although in the real world implementations usually come down to that.
It can be XML, a text file, some remote system, or, like you stated, an in-memory graph of objects.
From what I've seen in real-world projects: yes, you're right, you should provide different DAO implementations for accessing the data in different ways.
In this case one DAO goes to the DB, and another DAO implementation goes to the object graph.
The interface of the DAO has to be designed very carefully. Your 'Criteria' has to be generic enough to encapsulate the different ways you might get the data.
How to achieve this level of decoupling? The answer can vary depending on your system, but in general, I would say the answer would be "as usual, by adding another level of indirection" :)
You can also think about your criteria object as a data object where you supply only the data needed for the query. In this case you won't even need to support different Criteria.
Each particular implementation of DAO will take this data and treat it in its own different way: one will construct query for the graph, another will bind this to your SQL.
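To make that concrete, here is a sketch of the graph-backed implementation interpreting such a data-only Criteria (the Country accessor is hypothetical); the SQL implementation would instead map the same property to a column and bind it as a parameter, e.g. "SELECT * FROM country WHERE name = ?":

import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public final class GraphCountryData implements CountryData {

    private final List<Country> graph; // injected in-memory data

    public GraphCountryData(List<Country> graph) {
        this.graph = graph;
    }

    @Override
    public List<Country> getByCriteria(Criteria criteria) {
        // Interpret the criteria as a predicate over the object graph
        Predicate<Country> matches =
                c -> criteria.getCountryName() == null
                        || criteria.getCountryName().equals(c.getName());
        return graph.stream().filter(matches).collect(Collectors.toList());
    }
}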
To minimize the maintenance hassle I would suggest using a dependency injection framework (like Spring, for example). Usually these frameworks are well suited to instantiating your DAO objects and play well together.
Good Luck!
No, DAO for databases only is a common misconception.
DAO is a "Data Access Object", not a "Database Access Object". Hence anywhere you need to CRUD data to/from ( e.g. file, memory, database, etc.. ), you can use DAO.
In Domain Driven Design there is a Repository pattern. While Repository as a word is far better than three random letters (DAO), the concept is the same.
The purpose of the DAO/Repository pattern is to abstract a backing data store, which can be anything that can hold a state.
I can't believe I'm asking this, but...
Is there any way, in Java, to execute a SQL statement (not JPQL) and map the results to a List of Plain Old Java Objects?
I want to be able to create small lightweight POJO objects and then have them populated by raw SQL queries. I'm expressly NOT looking to create complex objects: just primitives, with no relationships.
Everything seems to be centered around JPA/JPQL, but the problem with that is that I do not want to bind my objects to a specific table.
I feel like I'm either:
(a) on crazy pills, or
(b) missing something fundamental
A lightweight mapper is not available as part of the JDK itself. You could either roll your own simple mapper using Java's standard JDBC API (in fact, JPA implementations are built on top of that) or have a look at external libraries that provide simple SQL-to-object mappers. One I know of is MyBatis (formerly known as iBATIS).
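Rolling your own on plain JDBC can be surprisingly small. A reflection-based sketch (no relationships, no type conversion; column labels must match field names):

import java.lang.reflect.Field;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public final class TinyMapper {

    // Runs the given SQL and maps each row onto a new instance of type,
    // matching columns to fields by name
    public static <T> List<T> query(Connection con, String sql, Class<T> type)
            throws Exception {
        List<T> result = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                T instance = type.getDeclaredConstructor().newInstance();
                for (Field field : type.getDeclaredFields()) {
                    field.setAccessible(true);
                    field.set(instance, rs.getObject(field.getName()));
                }
                result.add(instance);
            }
        }
        return result;
    }
}

// Usage:
// List<Book> books = TinyMapper.query(con, "SELECT id, title FROM book", Book.class);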
A) No, I think you're not on crazy pills and B) is it possible that you just missed JDBC?
Sormula may be able to do what you want. You would need to extend Table and override getTableName() and/or getQualifiedTableName() to supply the desired table name, since Sormula normally associates one POJO with one table. See example 2a and example 3a.
jOOQ has a couple of Record -> POJO mapping capabilities that will probably do the job for you (although jOOQ can do much more). Here's an example:
// A "mutable" POJO class
public class MyBook1 {
public int id;
public String title;
}
// The various "into()" methods allow for fetching records into your POJOs:
List<MyBook1> myBooks = create.select().from(BOOK).fetchInto(MyBook1.class);
Taken from the manual here:
http://www.jooq.org/doc/latest/manual/sql-execution/fetching/pojos/
The mapping algorithm is described in the Javadoc:
http://www.jooq.org/javadoc/latest/org/jooq/impl/DefaultRecordMapper.html
While the above example makes use of jOOQ's DSL API, you can do with plain SQL as well:
List<MyBook1> myBooks = create.resultQuery("SELECT * FROM BOOK")
.fetchInto(MyBook1.class);
You can even operate on a JDBC ResultSet, using jOOQ only for mapping:
ResultSet rs = stmt.executeQuery();
List<MyBook1> myBooks = create.fetch(rs).into(MyBook1.class);