How do you keep clean layer separation with Hibernate/ORM? - java

How is it possible to keep clean layers with Hibernate/ORM (or other ORMs...)?
What I mean by clean layer separation is, for example, keeping all of the Hibernate stuff in the DAO layer.
For example, when creating a big CSV export stream, we often have to do some Hibernate operations like evict to avoid OutOfMemory errors... The filling of the output stream belongs to the view, but the evict belongs to the DAO.
What I mean is that we are not supposed to put evict operations in the frontend / service, and neither are we supposed to put business logic in the DAO... So what can we do in such situations?
There are many cases where you have to do things like evict, flush, clear or refresh, particularly when you play a bit with transactions, large data sets or things like that...
So how do you keep clear layer separation with an ORM tool like Hibernate?
Edit: something I don't like either at work is that we have a custom abstract DAO that permits a service to pass a Hibernate criterion as an argument. This is practical, but to me, in theory, a service that calls this DAO shouldn't be aware of a criterion. I mean, we shouldn't have to import Hibernate stuff into the business / view logic in any way.
Is there an answer, simple or otherwise?

If by "clean" you mean that upper layers don't know about implementations of the lower layers, you can usually apply the
Tell, don't ask principle. For your CSV streaming example, it would be something like, say:
// This is a "global" API (meaning it is visible to all layers). This is ok as
// it is a specification and not an implementation.
public interface FooWriter {
void write(Foo foo);
}
// DAO layer
public class FooDaoImpl {
...
public void streamBigQueryTo(FooWriter fooWriter, ...) {
...
for (Foo foo: executeQueryThatReturnsLotsOfFoos(...)) {
fooWriter.write(foo);
evict(foo);
}
}
...
}
// UI layer
public class FooUI {
...
public void dumpCsv(...) {
...
fooBusiness.streamBigQueryTo(new CsvFooWriter(request.getOutputStream()), ...);
...
}
}
// Business layer
public class FooBusinessImpl {
...
public void streamBigQueryTo(FooWriter fooWriter, ...) {
...
if (user.canQueryFoos()) {
beginTransaction();
fooDao.streamBigQueryTo(fooWriter, ...);
auditAccess(...);
endTransaction();
}
...
}
}
In this way you can deal with your specific ORM with freedom. The downside of this "callback" approach: if your layers are on different JVMs then it might not be very workable (in the example you would need to be able to serialize CsvFooWriter).
About generic DAOs: I have never felt the need; most object access patterns I have found are different enough to make a specific implementation desirable. But certainly doing layer separation and forcing the business layer to create Hibernate criteria are contradictory paths. I would specify a different query method in the DAO layer for each different query, and then I would let the DAO implementation get the results in whatever way it might choose (criteria, query language, raw SQL, ...). So instead of:
public class FooDaoImpl extends AbstractDao<Foo> {
    ...
    public Collection<Foo> getByCriteria(Criteria criteria) {
        ...
    }
}

public class FooBusinessImpl {
    ...
    public void doSomethingWithFoosBetween(Date from, Date to) {
        ...
        Criteria criteria = ...;
        // Build your criteria to get only foos between from and to
        Collection<Foo> foos = fooDaoImpl.getByCriteria(criteria);
        ...
    }

    public void doSomethingWithActiveFoos() {
        ...
        Criteria criteria = ...;
        // Build your criteria to filter out passive foos
        Collection<Foo> foos = fooDaoImpl.getByCriteria(criteria);
        ...
    }
    ...
}
I would do:
public class FooDaoImpl {
    ...
    public Collection<Foo> getFoosBetween(Date from, Date to) {
        // build and execute query according to from and to
    }

    public Collection<Foo> getActiveFoos() {
        // build and execute query to get active foos
    }
}

public class FooBusinessImpl {
    ...
    public void doSomethingWithFoosBetween(Date from, Date to) {
        ...
        Collection<Foo> foos = fooDaoImpl.getFoosBetween(from, to);
        ...
    }

    public void doSomethingWithActiveFoos() {
        ...
        Collection<Foo> foos = fooDaoImpl.getActiveFoos();
        ...
    }
    ...
}
Though someone could think that I'm pushing some business logic down to the DAO layer, it seems a better approach to me: changing the ORM implementation to an alternative one would be easier this way. Imagine, for example, that for performance reasons you need to read Foos using raw JDBC to access some vendor-specific extension: with the generic DAO approach you would need to change both the business and DAO layers. With this approach you would just reimplement the DAO layer.

Well, you can always tell your DAO layer to do what it needs to do when you want to. Having a method like cleanUpDatasourceCache in your DAO layer, or something similar (or even a set of these methods for different objects), is not bad practice to me.
And your service layer is then able to call that method without any assumptions about what is done by the DAO under the hood. A specific implementation which uses direct JDBC calls would do nothing in that method.
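For instance, here is a minimal sketch of that idea (the FooDao and cleanUpDatasourceCache names are made up for illustration, and Foo is assumed to be an existing entity):

import java.util.Collections;
import java.util.List;
import org.hibernate.SessionFactory;

// Service-facing contract: it says nothing about Hibernate.
public interface FooDao {
    List<Foo> findAll();
    /** Hint that the DAO may release whatever first-level cache it holds. */
    void cleanUpDatasourceCache();
}

// The Hibernate implementation honours the hint by clearing the Session.
class HibernateFooDao implements FooDao {
    private final SessionFactory sessionFactory;

    HibernateFooDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public List<Foo> findAll() {
        return sessionFactory.getCurrentSession()
                .createQuery("from Foo", Foo.class)
                .list();
    }

    @Override
    public void cleanUpDatasourceCache() {
        sessionFactory.getCurrentSession().clear();
    }
}

// A plain-JDBC implementation has no session cache, so the hint is a no-op.
class JdbcFooDao implements FooDao {
    @Override
    public List<Foo> findAll() {
        return Collections.emptyList(); // real JDBC query omitted in this sketch
    }

    @Override
    public void cleanUpDatasourceCache() {
        // nothing to evict
    }
}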

Usually a DAO layer that wraps the data access logic is necessary. Other times the EntityManager is all you want to use for CRUD operations; in those cases I wouldn't use a DAO, as it would add unnecessary complexity to the code.
How should EntityManager be used in a nicely decoupled service layer and data access layer?

If you don't want to tie your code to Hibernate, you can use Hibernate through JPA instead and not bother too much about abstracting everything within your DAOs. You are less likely to switch from JPA to something else than to replace Hibernate.
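As a rough sketch of what that looks like (class and entity names here are invented, and a container-managed EntityManager is assumed), coding against the standard JPA API keeps Hibernate-specific types out of your source:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class JpaFooDao {

    // Hibernate is only the provider behind this standard JPA interface.
    @PersistenceContext
    private EntityManager entityManager;

    public Foo findById(Long id) {
        return entityManager.find(Foo.class, id);
    }

    public List<Foo> findActive() {
        return entityManager
                .createQuery("select f from Foo f where f.active = true", Foo.class)
                .getResultList();
    }

    public void detach(Foo foo) {
        // JPA's equivalent of Hibernate's evict()
        entityManager.detach(foo);
    }
}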

My 2 cents: I think the layer separation pattern is great as a starting point for most cases, but there is a point where we have to analyze each specific application case by case and design a more flexible solution. What I mean is, ask yourself, for example:
Is your DAO expected to be reused in another context other than exporting CSV data?
Does it make sense to have another implementation of the same DAO interface without Hibernate?
If both answers were no, maybe a little bit of coupling between persistence and data presentation is OK. I like the callback solution proposed above.
IMHO, sometimes strict implementation of a pattern has a higher cost in readability, maintainability, etc., which are the very issues we were trying to fix by adopting a pattern in the first place.

You can achieve layer separation by implementing the DAO pattern and doing all the Hibernate/JDBC/JPA-related work in the DAO itself.
For example, you can specify a generic DAO interface as:
public interface GenericDao<T, PK extends Serializable> {

    /** Persist the newInstance object into database */
    PK create(T newInstance);

    /** Retrieve an object that was previously persisted to the database using
     *  the indicated id as primary key
     */
    T read(PK id);

    /** Save changes made to a persistent object. */
    void update(T transientObject);

    /** Remove an object from persistent storage in the database */
    void delete(T persistentObject);
}
and its implementation as:
public class GenericDaoHibernateImpl<T, PK extends Serializable>
        implements GenericDao<T, PK>, FinderExecutor {

    private Class<T> type;

    public GenericDaoHibernateImpl(Class<T> type) {
        this.type = type;
    }

    public PK create(T o) {
        return (PK) getSession().save(o);
    }

    public T read(PK id) {
        return (T) getSession().get(type, id);
    }

    public void update(T o) {
        getSession().update(o);
    }

    public void delete(T o) {
        getSession().delete(o);
    }
}
This way, service classes can call any method on any DAO without making any assumption about the internal implementation of that method.
Have a look at the GenericDao link.

Hibernate (either as a SessionManager or a JPA EntityManager) is the DAO. The Repository pattern is, as far as I have seen, the best starting place. There is a great image over at the DDD Sample Website which I think speaks volumes about how you keep things separate.
My application layer has interfaces that are explicit business actions or values. The business rules are in the domain model and things like Hibernate live in the infrastructure. Services are defined at the domain layer as interfaces, and implemented in the infrastructure in my case. This means that for a given Foo domain object (an aggregate root in the DDD terminology) I usually get the Foo from a FooService, and the FooService talks to a FooRepository which allows one to find a Foo based on some criteria. That criteria is expressed via method parameters (possibly complex object types) which, on the implementation side, for example in a HibernateFooRepository, would be translated into HQL or Hibernate criterion.
If you need batch processing, it should exist at the application level and use domain services to facilitate this. StartBatchTransaction/EndBatchTransaction. Hibernate may listen to start/end events in order to coordinate purging, loading, whatever.
In the specific case of serializing domain entities, though, I see nothing wrong with taking a set of criteria and iterating over them one at a time (from root entities).
I find that often, in the pursuit of separation, we try to make things completely general. They are not one and the same - your application has to do something, and that something can and should be expressed rather explicitly.
If you can substitute an InMemoryFooRepository where a HibernateFooRepository was previously being used, you're on the right path. The natural flow through unit and integration testing your objects encourages this when you adhere or at least try to respect the layering outlined in the image I linked above.
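A minimal sketch of that shape (the Foo, FooId and FooRepository names are invented for illustration): the repository interface lives in the domain, and the Hibernate translation lives in the infrastructure:

import java.util.Date;
import java.util.List;
import org.hibernate.SessionFactory;

// Domain layer: expresses queries in domain terms only.
public interface FooRepository {
    Foo findById(FooId id);
    List<Foo> findCreatedBetween(Date from, Date to);
}

// Infrastructure layer: the only place that knows about Hibernate.
class HibernateFooRepository implements FooRepository {

    private final SessionFactory sessionFactory;

    HibernateFooRepository(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public Foo findById(FooId id) {
        // FooId is assumed to be a small value object wrapping the primary key
        return sessionFactory.getCurrentSession().get(Foo.class, id.value());
    }

    @Override
    public List<Foo> findCreatedBetween(Date from, Date to) {
        return sessionFactory.getCurrentSession()
                .createQuery("from Foo f where f.createdAt between :from and :to", Foo.class)
                .setParameter("from", from)
                .setParameter("to", to)
                .list();
    }
}

An InMemoryFooRepository for tests then just implements the same interface over a Map, with no change to the domain or application code.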

You got some good answers here; I would like to add my thoughts on this (by the way, this is something to take care of in our code as well). I would also like to focus on the issue of having Hibernate/JPA annotations on entities that you might need to use outside of your DAL (i.e. in business logic, or even sent to your client side):
A. If you use the GenericDAO pattern for a given entity, you may find your entity being annotated with Hibernate (or JPA) annotations such as @Table, @ManyToOne and so on. This means that your client code may contain Hibernate/JPA annotations, and you would require an appropriate JAR to get it compiled, or have some other support in your client code. This is the case, for example, if you use GWT as your client (which can have support for JPA annotations in order to get entities compiled) and share the entities between the server and the client code, or if you write a Java client that performs a bean lookup using InitialContext against a Java application server (in this case you will need such a JAR on the client side as well).
B. Another approach you can take is to work with Hibernate/JPA-annotated code on the server side and expose web services (let's say a RESTful web service or SOAP). This way, the client works with an "interface" that does not expose knowledge of Hibernate/JPA (for example, a WSDL in the case of SOAP defines the contract between the client of the service and the service itself). By breaking the architecture into a service-oriented one, you get all kinds of benefits such as loose coupling and ease of replacement of pieces of code, and you can concentrate all the DAL logic in one service that serves the rest of your services, and later on replace the DAL if needed by another service.
C. You can use an "object to object" mapping framework such as Dozer to map objects of classes with Hibernate/JPA annotations to what I call "true" POJOs - i.e. Java beans with no annotations whatsoever on them (see the sketch below).
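For example, with Dozer's classic API (a sketch; the UserEntity and UserDto classes are hypothetical and assumed to have matching property names):

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

public class UserAssembler {

    // Dozer copies same-named properties by reflection;
    // the DTO class carries no JPA/Hibernate annotations at all.
    private final Mapper mapper = new DozerBeanMapper();

    public UserDto toDto(UserEntity entity) {
        return mapper.map(entity, UserDto.class);
    }

    public UserEntity toEntity(UserDto dto) {
        return mapper.map(dto, UserEntity.class);
    }
}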
D. Finally, regarding annotations - why use annotations at all? Hibernate can use hbm.xml files as an alternative way of doing the "ORM magic" - this way your classes can remain without annotations.
E. One last point - I would like to suggest you look at the stuff we did at Ovirt - you can download the code by git cloning our repo. You will find there, under engine/backend/manager/modules/bll, a Maven project holding our bll logic, and under engine/backend/manager/modules/dal, our DAL layer (although currently implemented with Spring-JDBC, and some Hibernate experiments, you will get some good ideas on how to use it in your code). I would like to add that if you go for a similar solution, I suggest that you inject the DAOs in your code, and not hold them in a singleton like we did with the getXXXDao methods (this is legacy code we should strive to remove and move to injection).

I would recommend you let the database handle the export-to-CSV operation rather than building it yourself in Java; it isn't as efficient. ORM shouldn't really be used for those large-scale batch operations, because ORM should only be used to manipulate transactional data.
Large-scale Java batch operations should really be done with JDBC directly, with transactional support turned off.
However, if you do this regularly, I recommend setting up a reporting database which is a delayed replica of the database that is not used by the application and utilizes database specific replication tools that may come with your database.
Your solution architect should be able to work with the other groups to help set this up for you.
If you really have to do it in the application tier, then using raw JDBC calls may be the better option. With raw JDBC you can perform a query to assemble the data that you require on the database side and fetch the data one row at a time then write to your output stream.
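A rough sketch of that row-at-a-time approach with plain JDBC (the table and column names are invented; a real implementation would also escape CSV values and pick a fetch size suited to the driver):

import java.io.IOException;
import java.io.Writer;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CsvExporter {

    public void exportFoos(Connection connection, Writer out) throws SQLException, IOException {
        String sql = "select id, name, created_at from foo order by id";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            // Hint to the driver to stream rows instead of buffering them all in memory.
            ps.setFetchSize(500);
            try (ResultSet rs = ps.executeQuery()) {
                out.write("id,name,created_at\n");
                while (rs.next()) {
                    out.write(rs.getLong("id") + "," + rs.getString("name")
                            + "," + rs.getTimestamp("created_at") + "\n");
                }
            }
        }
    }
}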
To answer your layers question: I don't like using the word "layers" because it usually implies one thing on top of another; I would rather use the word "components", and I have the following component groups.
application
    domain - just annotated JPA classes, no persistence logic, usually a plain JAR file, but I recommend just plopping it in as a package in the EJB rather than having to deal with classpath issues
    contracts - WSDL and XSD files that define an interface between different components, be it web services or just the UI
    transaction scripts - stateless EJBs that have a transaction and persistence units injected into them and do the manipulation and persistence of the domain objects. These may implement the interfaces generated by the contracts
    UI - a separate WAR project with EJBs injected into it
database
    O/R diagram - this is the contract agreed upon by the application and data teams to ensure THE MINIMUM that the database will provide. It does not have to show everything.
    DDLs - this is the database-side implementation of the O/R diagram, which will contain everything, but generally no one should care because these are implementation details
    batch - batch operations such as export or replicate
    reporting - provides queries to get business-value reports from the system
legacy
    messaging contracts - these are contracts used by messaging systems such as JMS or WS-Notifications or standard web services
    their implementation
    transformation scripts - used to transform one contract to another

It seems to me we need to take another look at the layers.
(I hope someone corrects me if I get this wrong.)
Front End/UI
Business
Service/DAO
So, for the case of generating a report, the layers break down like so.
Front End/UI
    will have a UI with a button "Get Some Report"
    the button will then call the Business layer that knows what the report is about
    the data returned by the report generator is given any final formatting before being returned to the user
Business
    MyReportGenerator.GenerateReportData() or similar will be called
Service/DAO
    inside the report generator, DAOs will be used. DAOLocator.GetDAO(Entity.class) or similar factory-type methods would be used to get the DAOs; the returned DAOs will extend a common DAO interface

Well, to get a clean separation of concerns (or, as you say, clean layer separation) you can add a service layer to your application, which lies between your front end and your DAO layer.
You can put your business logic in the service layer and database-related things in the DAO layer using Hibernate.
So if you need to change something in your business logic, you can edit your service layer without changing the DAO, and if you want to change the DAO layer, you can do so without changing the actual business logic, i.e. the service layer.

Related

What's the point of having DTO object when you have the same object as POJO (Entity)?

I would like to understand what the benefits are of creating DTO objects when you already have a POJO object (as an Entity).
In my project I have both :
DTO classes are used to communicate between Web Service and the application
POJO entity classes (JPA) are used for communication between database and the application
If I look at a DTO object class (let's call it MyObjDTO) and the same class on the POJO side (let's call it MyObjPOJO), there is no difference at all except that MyObjPOJO has annotations, due to the fact that it's an @Entity.
So in fact, I have in my project 2 classes that look the same (same attributes, same methods) but serve different purposes.
IMO, in this case the DTO class is useless and increases application complexity, because everything I do with the DTO class I can do with my POJO class; moreover, for a single type of object I have to maintain at least 2 classes (the DTO and the POJO). For instance, if I add an attribute, I have to add it in both classes.
I'm not an expert and I'm questioning my thoughts; what do you think about it?
This answer is a replica of what can be found on stack exchange. IMHO the OP should be closed for being posted in the wrong forum. It's currently also attracting opinionated answers, though not necessarily so, and isn't tied to java in any particular way.
DTO is a pattern and it is implementation (POJO/POCO) independent. DTO says, since each call to any remote interface is expensive, response to each call should bring as much data as possible. So, if multiple requests are required to bring data for a particular task, data to be brought can be combined in a DTO so that only one request can bring all the required data. Catalog of Patterns of Enterprise Application Architecture has more details.
DTO's are a fundamental concept, not outdated.
What is somewhat outdated is the notion of having DTOs that contain no logic at all, are used only for transmitting data and "mapped" from domain objects before transmission to the client, and there mapped to view models before passing them to the display layer. In simple applications, the domain objects can often be directly reused as DTOs and passed through directly to the display layer, so that there is only one unified data model. For more complex applications you don't want to expose the entire domain model to the client, so a mapping from domain models to DTOs is necessary. Having a separate view model that duplicates the data from the DTOs almost never makes sense.
However, the reason why this notion is outdated rather than just plain wrong is that some (mainly older) frameworks/technologies require it, as their domain and view models are not POJOS and instead tied directly to the framework.
Most notably, Entity Beans in J2EE prior to the EJB 3 standard were not POJOs and instead were proxy objects constructed by the app server - it was simply not possible to send them to the client, so you had no choice about having a separate DTO layer - it was mandatory.
Although DTO is not an outdated pattern, it is often applied needlessly, which might make it appear outdated.
From Java guru Adam Bien:
The most misused pattern in the Java Enterprise community is the DTO. DTO was clearly defined as a solution for a distribution problem. DTO was meant to be a coarse-grained data container which efficiently transports data between processes (tiers). ~ Adam Bien
From Martin Fowler:
DTOs are called Data Transfer Objects because their whole purpose is to shift data in expensive remote calls. They are part of implementing a coarse grained interface which a remote interface needs for performance. Not just do you not need them in a local context, they are actually harmful both because a coarse-grained API is more difficult to use and because you have to do all the work moving data from your domain or data source layer into the DTOs. ~ Martin Fowler
Here is a Java EE specific example of a common but incorrect use of the DTO pattern. If you're unfamiliar with Java EE, you just need to know the MVC pattern: a "JSF ManagedBean" is a class used by the View, and a "JPA Entity" is the Model in the MVC pattern.
So, for example, say you have a JSF ManagedBean. A common question is whether the bean should hold a reference to a JPA Entity directly, or should it maintain a reference to some intermediary object which is later converted to an Entity. I have heard this intermediary object referred to as a DTO, but if your ManagedBeans and Entities are operating within the same JVM, then there is little benefit to using the DTO pattern.
Furthermore, consider Bean Validation annotations (again, if you're unfamiliar with Java EE, know that Bean Validation is an API for validating data). Your JPA Entities are likely annotated with @NotNull and @Size validations. If you're using a DTO, you'll want to repeat these validations in your DTO so that clients using your remote interface don't need to send a message to find out they've failed basic validation. Imagine all that extra work of copying Bean Validation annotations between your DTO and Entity; but if your View and Entities are operating within the same JVM, there is no need to take on this extra work: just use the Entities.
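To make that duplication concrete, here is a small illustrative pair (hypothetical Customer classes) where the same constraints end up copied by hand:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

// Customer.java - the JPA entity
@Entity
public class Customer {
    @Id
    private Long id;

    @NotNull
    @Size(max = 100)
    private String name;
    // getters and setters omitted
}

// CustomerDto.java - the same constraints repeated so remote clients fail fast
public class CustomerDto {
    private Long id;

    @NotNull
    @Size(max = 100)
    private String name;
    // getters and setters omitted
}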
The Catalog of Patterns of Enterprise Application Architecture provides a concise explanation of DTOs, and here are more references I found illuminating:
HOW TO DEAL WITH J2EE AND DESIGN PATTERNS
How to use DTO in JSF + Spring + Hibernate
Pros and Cons of Data Transfer Objects
Martin Fowler's description of DTO - Martin Fowler explains the problem with DTOs. Apparently they were being misused as early as 2004.
Most of this comes down to Clean Architecture and a focus on separation of concerns.
My biggest use case for the entities is so I don't litter the DTOs with runtime variables or methods that I've added in for convenience (such as display names/values or post-calculated values).
If it's a very simple entity then it isn't such a big deal, but if you're being extremely strict with Clean then there end up being a lot of redundant models (DTO, DBO, Entity).
It's really a preference in how much you want to dedicate to strict Clean Architecture.
https://medium.com/android-dev-hacks/detailed-guide-on-android-clean-architecture-9eab262a9011
There is an advantage, even if very small, to having a separation of layers in your architecture, and having objects "morph" as they travel through the layers. This decoupling allows you to replace any layer in your software with minimal change: just update the mapping of fields between 2 objects and you're all set.
If the 2 objects have the same members...well, then that's what Apache Commons BeanUtils.copyProperties() is for ;)
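For instance, a small sketch reusing the MyObjDTO/MyObjPOJO names from the question (note that Commons BeanUtils takes the destination first):

import org.apache.commons.beanutils.BeanUtils;

public class MyObjAssembler {

    // Copies all readable same-named properties from the entity onto the DTO.
    public MyObjDTO toDto(MyObjPOJO entity) throws ReflectiveOperationException {
        MyObjDTO dto = new MyObjDTO();
        BeanUtils.copyProperties(dto, entity); // destination first, source second
        return dto;
    }
}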
Other people have already informed you of the benefits of DTOs; here I will talk about how to solve the trouble of maintaining one more DTO version of each object.
I developed a library, beanKnife, to automatically generate DTOs. It creates a new class based on the original POJO. You can filter the inherited properties, modify existing properties or add new properties. All you need to do is write a configuration class, and the library will do the rest for you. The configuration supports inheritance, so you can extract the common part to simplify the configuration even more.
Here is the example
@Entity
class Pojo1 {
    private int a;

    @OneToMany(mappedBy = "b")
    private List<Pojo2> b;
}

@Entity
class Pojo2 {
    private String a;

    @ManyToOne()
    private Pojo1 b;
}
// Include all properties. By default, nothing is included.
// To change this behaviour, here we use a base configuration that all other final configurations will inherit.
@PropertiesIncludePattern(".*")
// By default, the generated class name is the original class name appended with "View".
// This annotation changes that behaviour: now the class Pojo1 will generate the class Pojo1Dto.
@ViewGenNameMapper("${name}Dto")
class BaseConfiguration {
}

// generates Pojo1Dto, which has a Pojo2Info list b instead of a Pojo2 list
@ViewOf(value = Pojo1.class)
class Pojo1DtoConfiguration extends BaseConfiguration {
    private List<Pojo2Info> b;
}

// generates Pojo1Info, which excludes b
@ViewOf(value = Pojo1.class, genName = "Pojo1Info", excludes = "b")
class Pojo1InfoConfiguration extends BaseConfiguration {}

// generates Pojo2Dto, which has a Pojo1Info b instead of a Pojo1
@ViewOf(value = Pojo2.class)
class Pojo2DtoConfiguration extends BaseConfiguration {
    private Pojo1Info b;
}

// generates Pojo2Info, which excludes b
@ViewOf(value = Pojo2.class, genName = "Pojo2Info", excludes = "b")
class Pojo2InfoConfiguration extends BaseConfiguration {}
will generate
class Pojo1Dto {
    private int a;
    private List<Pojo2Info> b;
}

class Pojo1Info {
    private int a;
}

class Pojo2Dto {
    private String a;
    private Pojo1Info b;
}

class Pojo2Info {
    private String a;
}
Then use it like this
Pojo1 pojo1 = ...
Pojo1Dto pojo1Dto = Pojo1Dto.read(pojo1);
Pojo2 pojo2 = ...
Pojo2Dto pojo2Dto = Pojo2Dto.read(pojo2);

Difference between Repository and DAO design patterns [duplicate]

What is the difference between Data Access Objects (DAO) and Repository patterns? I am developing an application using Enterprise Java Beans (EJB3), Hibernate ORM as infrastructure, and Domain-Driven Design (DDD) and Test-Driven Development (TDD) as design techniques.
DAO is an abstraction of data persistence.
Repository is an abstraction of a collection of objects.
DAO would be considered closer to the database, often table-centric.
Repository would be considered closer to the Domain, dealing only in Aggregate Roots.
Repository could be implemented using DAO's, but you wouldn't do the opposite.
Also, a Repository is generally a narrower interface. It should be simply a collection of objects, with a Get(id), Find(ISpecification), Add(Entity).
A method like Update is appropriate on a DAO, but not a Repository - when using a Repository, changes to entities would usually be tracked by separate UnitOfWork.
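Sketched as Java interfaces with hypothetical names, the difference in surface area might look like this:

import java.util.List;

// OrderDao.java - persistence-centric, explicit CRUD
interface OrderDao {
    Order findById(long id);
    List<Order> findByCustomer(long customerId);
    void insert(Order order);
    void update(Order order);
    void delete(Order order);
}

// OrderRepository.java - collection-like and domain-centric; there is no update(),
// because changes to loaded entities are tracked by the surrounding Unit of Work
interface OrderRepository {
    Order get(OrderId id);
    List<Order> find(OrderSpecification specification);
    void add(Order order);
}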
It does seem common to see implementations called a Repository that are really more of a DAO, and hence I think there is some confusion about the difference between them.
OK, think I can explain better what I've put in comments :).
So, basically, you can see both of those as the same, though DAO is a more flexible pattern than Repository. If you want to use both, you would use the Repository in your DAOs. I'll explain each of them below:
REPOSITORY:
It's a repository of a specific type of objects - it allows you to search for a specific type of objects as well as store them. Usually it will ONLY handle one type of objects. E.g. AppleRepository would allow you to do AppleRepository.findAll(criteria) or AppleRepository.save(juicyApple).
Note that the Repository is using Domain Model terms (not DB terms - nothing related to how data is persisted anywhere).
A repository will most likely store all data in the same table, whereas the pattern doesn't require that. The fact that it only handles one type of data though, makes it logically connected to one main table (if used for DB persistence).
DAO - data access object (in other words - object used to access data)
A DAO is a class that locates data for you (it is mostly a finder, but it's commonly used to also store the data). The pattern doesn't restrict you to store data of the same type, thus you can easily have a DAO that locates/stores related objects.
E.g. you can easily have UserDao that exposes methods like
Collection<Permission> findPermissionsForUser(String userId)
User findUser(String userId)
Collection<User> findUsersForPermission(Permission permission)
All those are related to the User (and security) and can be specified under the same DAO. This is not the case for Repository.
Finally
Note that both patterns really mean the same (they store data and they abstract the access to it and they are both expressed closer to the domain model and hardly contain any DB reference), but the way they are used can be slightly different, DAO being a bit more flexible/generic, while Repository is a bit more specific and restrictive to a type only.
DAO and Repository pattern are ways of implementing Data Access Layer (DAL). So, let's start with DAL, first.
Object-oriented applications that access a database, must have some logic to handle database access. In order to keep the code clean and modular, it is recommended that database access logic should be isolated into a separate module. In layered architecture, this module is DAL.
So far, we haven't talked about any particular implementation: only a general principle that putting database access logic in a separate module.
Now, how can we implement this principle? Well, one known way of implementing this, in particular with frameworks like Hibernate, is the DAO pattern.
DAO pattern is a way of generating DAL, where typically, each domain entity has its own DAO. For example, User and UserDao, Appointment and AppointmentDao, etc. An example of DAO with Hibernate: http://gochev.blogspot.ca/2009/08/hibernate-generic-dao.html.
Then what is Repository pattern? Like DAO, Repository pattern is also a way achieving DAL. The main point in Repository pattern is that, from the client/user perspective, it should look or behave as a collection. What is meant by behaving like a collection is not that it has to be instantiated like Collection collection = new SomeCollection(). Instead, it means that it should support operations such as add, remove, contains, etc. This is the essence of Repository pattern.
In practice, for example in the case of using Hibernate, the Repository pattern is realized with DAOs. That is, an instance of the DAL can be, at the same time, an instance of both the DAO pattern and the Repository pattern.
Repository pattern is not necessarily something that one builds on top of DAO (as some may suggest). If DAOs are designed with an interface that supports the above-mentioned operations, then it is an instance of Repository pattern. Think about it, If DAOs already provide a collection-like set of operations, then what is the need for an extra layer on top of it?
Frankly, this looks like a semantic distinction, not a technical distinction. The phrase Data Access Object doesn't refer to a "database" at all. And, although you could design it to be database-centric, I think most people would consider doing so a design flaw.
The purpose of the DAO is to hide the implementation details of the data access mechanism. How is the Repository pattern different? As far as I can tell, it isn't. Saying a Repository is different to a DAO because you're dealing with/return a collection of objects can't be right; DAOs can also return collections of objects.
Everything I've read about the repository pattern seems to rely on this distinction: bad DAO design vs good DAO design (aka the repository design pattern).
Repository is a more abstract, domain-oriented term that is part of Domain-Driven Design; it is part of your domain design and a common language. DAO is a technical abstraction for data access technology; a repository is concerned only with managing existing data, while factories handle the creation of data.
check these links:
http://warren.mayocchi.com/2006/07/27/repository-or-dao/
http://fabiomaulo.blogspot.com/2009/09/repository-or-dao-repository.html
A DAO allows for a simpler way to get data from storage, hiding the ugly queries.
Repository deals with data too and hides queries and all that but, a repository deals with business/domain objects.
A repository will use a DAO to get the data from the storage and uses that data to restore a business object.
For example, a DAO can contain some methods like this -
public abstract class MangoDAO {
    abstract List<Mango> getAllMangoes();
    abstract Mango getMangoByID(long mangoID);
}
And a Repository can contain a method like this -
public abstract class MangoRepository {
    MangoDAO mangoDao = new MangoDAOImpl(); // some concrete DAO implementation

    Mango getExportQualityMango() {
        for (Mango mango : mangoDao.getAllMangoes()) {
            /* Here some business logic is applied. */
            if (mango.isSkinFresh() && mango.isLarge()) {
                mango.setDetails("It is an export quality mango");
                return mango;
            }
        }
        return null;
    }
}
This tutorial helped me to get the main concept easily.
The key difference is that a repository handles access to the aggregate roots of an aggregate, while a DAO handles access to entities. Therefore, it's common that a repository delegates the actual persistence of the aggregate roots to a DAO. Additionally, as the aggregate root must handle access to the other entities, it may need to delegate this access to other DAOs.
Repositories are nothing but well-designed DAOs.
ORMs are table-centric, but DAOs are not.
There's no need to use several DAOs in a repository, since a DAO itself can do exactly the same with ORM repositories/entities or any DAL provider - no matter where and how a car is persisted: 1 table, 2 tables, n tables, half a table, a web service, a table and a web service, etc.
Services use several DAOs/repositories.
My own DAO, let's say CarDao, only deals with Car DTOs; I mean, it only takes Car DTOs as input and only returns Car DTOs or Car DTO collections as output.
So just like a Repository, a DAO is actually an inversion of control for the business logic, allowing persistence interfaces not to be intimidated by persistence strategies or legacies.
A DAO both encapsulates the persistence strategy and provides the domain-related persistence interface.
Repository is just another word for those who had not understood what a well-defined DAO actually was.
DAO provides abstraction over the database/data files or any other persistence mechanism, so that the persistence layer can be manipulated without knowing its implementation details.
Whereas in Repository classes, multiple DAO classes can be used inside a single Repository method to get an operation done from the "app perspective". So, instead of using multiple DAOs at the domain layer, use a repository to get it done.
Repository is a layer which may contain some application logic like: If data is available in in-memory cache then fetch it from cache otherwise, fetch data from network and store it in in-memory cache for next time retrieval.
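A small sketch of that kind of repository logic (the Article and ArticleDao types are invented, and the cache policy is deliberately naive):

import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ArticleRepository {

    private final ArticleDao networkDao; // assumed DAO that fetches from the network or DB
    private final ConcurrentMap<Long, Article> cache = new ConcurrentHashMap<>();

    public ArticleRepository(ArticleDao networkDao) {
        this.networkDao = networkDao;
    }

    public Optional<Article> findById(long id) {
        // Serve from the in-memory cache when possible,
        // otherwise fetch and remember the result for next time.
        Article cached = cache.get(id);
        if (cached != null) {
            return Optional.of(cached);
        }
        Article fetched = networkDao.findById(id);
        if (fetched != null) {
            cache.put(id, fetched);
        }
        return Optional.ofNullable(fetched);
    }
}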
Try to find out if DAO or the Repository pattern is most applicable to the following situation :
Imagine you would like to provide a uniform data access API for a persistent mechanism to various types of data sources such as RDBMS, LDAP, OODB, XML repositories and flat files.
Also refer to the following links as well, if interested:
http://www.codeinsanity.com/2008/08/repository-pattern.html
http://blog.fedecarg.com/2009/03/15/domain-driven-design-the-repository/
http://devlicio.us/blogs/casey/archive/2009/02/20/ddd-the-repository-pattern.aspx
http://en.wikipedia.org/wiki/Domain-driven_design
http://msdn.microsoft.com/en-us/magazine/dd419654.aspx
Per Spring documentation there is no clear difference:
The @Repository annotation is a marker for any class that fulfills the role or stereotype of a repository (also known as Data Access Object or DAO).
DAOs might not always be related only to a database.
A DAO can simply be an interface to access data; the data in this context might come from a DB, a cache, or even REST (not so common these days, since we can easily separate those into their respective REST/IPC clients).
The repository in this approach can be implemented by any of the ORM solutions; if the underlying cache/repository changes, the change is not propagated to, and does not impact, the service/business layers.
DAOs can accept/return domain types. Consider a Student domain; the associated DAO class will be StudentDao:
class StudentDao {
    StudentRepository studentRepository;
    StudentCache studentCache;

    Optional<Student> getStudent(Id id) {
        // Use StudentRepository/StudentCache to talk to the DB & cache.
        // The cache type can be the same as the domain type; the DB type (entity) may be the same or a different type.
    }

    Student updateStudent(Student student) {
        // Use StudentRepository/StudentCache to talk to the DB & cache.
        // The cache type can be the same as the domain type; the DB type (entity) may be the same or a different type.
    }
}
DAOs can also accept/return sub-domain types. Consider a Student domain that has sub-domains, say Attendance and Subject; it will still have the single DAO class StudentDao:
class StudentDao {
    StudentRepository studentRepository;
    SubjectRepository subjectRepository;
    AttendanceRepository attendanceRepository;
    StudentCache studentCache;
    SubjectCache subjectCache;
    AttendanceCache attendanceCache;

    Set<Subject> getStudentSubject(Id id) {
        // Use SubjectRepository/SubjectCache to talk to the DB & cache.
    }

    Student addNewSubjectToStudent(Id id, Subject subject) {
        // Use SubjectRepository/SubjectCache to talk to the DB & cache.
    }
}
If we consider the original definitions of both design patterns DAO and Repository appear very similar. The main difference is the dictionary and the source from which they came (Oracle vs. Fowler).
Quote:
DAO - "separates a data resource's client interface from its data access mechanisms" and "The DAO implements the access mechanism required to work with the data source. The data source could be a persistent store like an RDBMS, an external service like a B2B exchange, a repository like an LDAP database, or a business service accessed via CORBA Internet Inter-ORB Protocol (IIOP) or low-level sockets."
Repository - "Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects." and "Conceptually, a Repository encapsulates the set of objects persisted in a data store and the operations performed over them, providing a more object-oriented view of the persistence layer. Repository also supports the objective of achieving a clean separation and one-way dependency between the domain and data mapping layers."
Based on these citations, both design patterns mediate communication between the domain layer and the data layer. Moreover, Repository is associated with ORM and on the contrary DAO is a more generic interface for accessing data from anywhere.
In a very simple sentence: the significant difference is that Repositories represent collections, whilst DAOs are closer to the database, often being far more table-centric.

DAO pattern in 3 level Architecture. How to deal with complex queries

I'm currently developing an application using 3 layers: ui-service-dao. At the DAO level I am using Spring's JdbcTemplate. So far so good, but I encountered a situation on which I'd like to have some more insight.
My DAOs had, at the beginning, only simple CRUD methods. At the service level I'm checking input values, delegating to the DAOs and also dealing with transactions.
Now I need things more like this one below:
List<Book> getAllBooksByAuthorName(String name)
My question is where to put this one: in the DAO layer using SQL, or in the service by using the core CRUD methods and simply computing it in Java?
I rather tend to use SQL as much as possible instead of computing at the service layer. But now it seems like for every new method I also need to change the interface of the DAO and add the corresponding method to the interface of the service. Then the service becomes nothing more than a delegator and parameter checker. It doesn't feel right.
Your opinions are quite valid, but I didn't quite get why you are in doubt. Generally, the DAO pattern reduces coupling between business logic and persistence logic.
public interface BooksDAO {
    public boolean save(Book book);
    public boolean update(Book book);
    public Book findByBookIsbn(int isbn);
    public boolean delete(Book book);

    // here is what you want
    public List<Book> getAllBooksByAuthorName(String name);
}
Now you can have different implementations of BooksDAO, like HibernateBooksDAOImpl or JdbcBooksDAOImpl. The DAO pattern also makes it easy to write isolated JUnit tests that execute faster.
If you have complex queries you can still use the DAO pattern. Basically, there is a way to write complex queries on the implementation side, whether it is plain JDBC (SQL can be used), Spring JdbcTemplate (still SQL) or Hibernate (use Criteria).
see:
http://docs.jboss.org/hibernate/core/3.6/javadocs/org/hibernate/Criteria.html
For more information look:
http://javarevisited.blogspot.com/2013/01/data-access-object-dao-design-pattern-java-tutorial-example.html
http://www.oracle.com/technetwork/articles/entarch/spring-jdbc-dao-101284.html
That's, however, how it should be. If the business logic is reduced to nothing except calling a DAO method, then you are lucky to have simple business logic.
It would obviously be extremely inefficient and completely unrealistic to have the service call BookDAO.findAll() and filter the giant list of books returned by the DAO. SQL is the right tool for the job.
Note that the days where mocking was only possible with interfaces are past. Using an interface to define your DAO methods isn't really necessary anymore.
For example you could use the Entity-Control-Boundary-Pattern.
Your package structure will look like the following:
Under the namespace of your application you could introduce a package called "business"; in that package there can be packages named by business responsibility, and these packages are separated into "entity", "control" and "boundary".
com.example.myapplication.business.project.entity -> If you are using JPA, all your entities can be stored in this package; it also contains DTOs
com.example.myapplication.business.project.control -> In this package refactored services can be stored; for example, if the DAO code is needed in more than just one boundary, the code could be refactored into this package
com.example.myapplication.business.project.boundary -> This package contains all services that can be seen by the client (for example your web page)
In the package "presentation" your ui controllers can be stored and the ui controllers should only access the services stored in the boundary package.
com.example.myapplication.presentation.project
By using this pattern you avoid the use of delegators, because the services stored in the boundary package can also contain SQL-specific stuff, and all services and entities are in the package they belong to.
The pattern can also be used outside of JEE. Adam Bien has revolutionised this pattern in the JEE architecture and I'm using it in my own projects as well. Here is an example -> http://www.youtube.com/watch?v=JWcoiXNoKxk#t=2380
The methods of your boundary could look like the following:
public interface ProjectService {
    public Project createProject(Project project);
    public Project getProjectById(String projectId);
    // ListConfig is a class containing information about how the list should be sorted,
    // optional pagination information, etc., so that the interface need not be changed
    // every time you need a new parameter
    public List<Project> getProjectList(ListConfig config);
    public Project updateProject(Project project);
    public void deleteProject(String projectId);
    public Project addFeature(Project project, Feature feature);
}
@ayan ahmedov: Sorry, the first time I tried to answer your question I unfortunately edited your question, and my answer ended up in the content area of your question. I've reverted the accidental changes.

How can I break down some methods in the spring service layer

I'm doing some maintenance/evolution on a multi-layered Spring 3.0 project. The client is a heavy RCP application invoking some Spring bean methods from the service layer (Managers) on an RMI-based server.
I have several huge methods in the Managers, some of them running to more than 250 lines. Here is an example (I've omitted code for clarity):
@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public Declaration saveOrUpdateOrDelete(Declaration decla, List<Declaration> toDeleteList ...) {
    if (decla.isNew()) {
        // create from scratch and apply business rules for a creation
        manager1.compute(decla);
        dao1.save(decla);
        ...
    } else if (decla.isCopy()) {
        // Copy from an other Declaration and apply business rules for a copy
        ...
    } else {
        // update Declaration
        ...
    }
    if (toDeleteList != null) {
        // Delete declarations and apply business rules for a mass delete
        ...
    }
}
The first 3 branches are mutually exclusive and represent a unit of work. The last branch (delete) can happen simultaneously with other branches.
Isn't it better to divide this method into something more 'CRUDy' for the sake of clarity and maintainability? I've been thinking of dividing this behemoth into other manager methods like:
public Declaration create(Declaration decla ...){...
public Declaration update(Declaration decla ...){...
public Declaration copyFrom(Declaration decla ...){...
public void delete(List<Declaration> declaList ...){...
But my colleagues say it will transfer complexity and business rules to the client, and that I will lose the benefit of atomicity, etc. Who is right here?
The decision about what the updateOrCreateOrWhatever method really does is made in the client anyway, as it has to set the corresponding field in the Declaration object.
The client could equally well just call the appropriate method.
That way the code is definitely more manageable and testable (fewer branches to care about).
The only argument for maintaining it as is is the network round-trips mentioned by @Pangea. I think this could be handled by a custom dispatcher class. IMO it doesn't form a part of the business logic, and as such shouldn't be taken care of in the service layer.
Another thing to take into consideration is transaction logic. Do create/update and deletes have to happen in the same transaction? Can both decla and toDelete be not null at the same time?
One of the basic principles to keep in mind when designing remote services is to make it coarse-grained in order to reduce network latency/round-trips. Also, after going through your code, it seems like the method encapsulates a logical unit of work as it is transactional. In this case, I suggest to keep it as it is.
However, you can still refactor it into multiple methods, as long as they are not exposed for remote invocation, which would force the client to manage the transactions from the client layer. So make them private.
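In other words, something along these lines (a sketch based on the code in the question) keeps the remote, transactional entry point coarse-grained while splitting the branches into private, non-exposed methods:

@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public Declaration saveOrUpdateOrDelete(Declaration decla, List<Declaration> toDeleteList) {
    Declaration result;
    if (decla.isNew()) {
        result = create(decla);
    } else if (decla.isCopy()) {
        result = copyFrom(decla);
    } else {
        result = update(decla);
    }
    if (toDeleteList != null) {
        delete(toDeleteList);
    }
    return result;
}

// Private: not remotely exposed, so the client cannot split the transaction.
private Declaration create(Declaration decla) { /* creation rules */ return decla; }
private Declaration copyFrom(Declaration decla) { /* copy rules */ return decla; }
private Declaration update(Declaration decla) { /* update rules */ return decla; }
private void delete(List<Declaration> declaList) { /* mass-delete rules */ }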
Bad design. If you really have to make the transaction atomic or complete in one trip, create a more specific method instead of having this.
What's the difference compared to writing:
public Object doIt(Object... obj){
...
}

Is a DAO Only Meant to Access Databases?

I have been brushing up on my design patterns and came across a thought that I could not find a good answer for anywhere. So maybe someone with more experience can help me out.
Is the DAO pattern only meant to be used to access data in a database?
Most the answers I found imply yes; in fact most that talk or write on the DAO pattern tend to automatically assume that you are working with some kind of database.
I disagree though. I could have a DAO like follows:
public interface CountryData {
    public List<Country> getByCriteria(Criteria criteria);
}

public final class SQLCountryData implements CountryData {
    public List<Country> getByCriteria(Criteria criteria) {
        // Get From SQL Database.
    }
}

public final class GraphCountryData implements CountryData {
    public List<Country> getByCriteria(Criteria criteria) {
        // Get From an Injected In-Memory Graph Data Structure.
    }
}
Here I have a DAO interface and 2 implementations, one that works with an SQL database and one that works with say an in-memory graph data structure. Is this correct? Or is the graph implementation meant to be created in some other kind of layer?
And if it is correct, what is the best way to abstract implementation specific details that are required by each DAO implementation?
For example, take the Criteria Class I reference above. Suppose it is like this:
public final class Criteria {

    private String countryName;

    public String getCountryName() {
        return this.countryName;
    }

    public void setCountryName(String countryName) {
        this.countryName = countryName;
    }
}
For the SQLCountryData, it needs to somehow map the countryName property to an SQL identifier so that it can generate the proper SQL. For the GraphCountryData, perhaps some sort of Predicate Object against the countryName property needs to be created to filter out vertices from the graph that fail.
What's the best way to abstract details like this without coupling client code working against the abstract CountryData with implementation specific details like this?
Any thoughts?
EDIT:
The example I included of the Criteria class is simple enough, but consider if I want to allow the client to construct complex criteria, where they should not only specify the property to filter on, but also the equality operator, logical operators for compound criteria, and the value.
DAOs are part of the DAL (Data Access Layer) and you can have data backed by any kind of implementation (XML, RDBMS, etc.). You just need to ensure that the proper instance is injected/used at runtime. DI frameworks like Spring/Guice shine in this case. Also, your Criteria interface/implementation should be generic enough so that only business details are captured (i.e. the country name criterion) and the actual mapping is again handled by the implementation class.
For SQL, in your case, either you can hand generate SQL, generate it using a helper library like Spring or use a full fledged framework like MyBatis. In our project, Spring XML configuration files were used to decouple the client and the implementation; it might vary in your case.
EDIT: I see that you have raised a similar concern in the previous question. The answer still remains the same. You can add as much flexibility as you want in your interface; you just need to ensure that the implementation is smart enough to make sense of all the arguments it receives and maps them appropriately to the underlying source. In our case, we retrieved the value object from the business layer and converted it to a map in the SQL implementation layer which can be used by MyBatis. Again, this process was pretty much transparent and the only way for the service layer to communicate with DAO was via the interface defined value objects.
No, I don't believe it's tied to only databases. The acronym is for Data Access Object, not "Database Access Object" so it can be usable with any type of data source.
The whole point of it is to separate the application from the backing data store so that the store can be modified at will, provided it still follows the same rules.
That doesn't just mean turfing Oracle and putting in DB2. It could also mean switching to a totally non-DBMS-based solution.
OK, this is a bit of a philosophical question, so I'll tell you what I think about it.
DAO usually stands for Data Access Object. Here the source of data is not always a database, although in real-world projects implementations usually come down to that.
It can be XML, a text file, some remote system, or, as you stated, an in-memory graph of objects.
From what I've seen in real-world projects: yes, you're right, you should provide different DAO implementations for accessing the data in different ways.
In this case one DAO goes to the DB, and another DAO implementation goes to the object graph.
The interface of the DAO has to be designed very carefully. Your 'Criteria' has to be generic enough to encapsulate the different ways you're going to get the data.
How do you achieve this level of decoupling? The answer can vary depending on your system, but in general, I would say, the answer would be "as usual, by adding another level of indirection" :)
You can also think about your criteria object as a data object where you supply only the data needed for the query. In this case you won't even need to support different Criteria classes.
Each particular implementation of the DAO will take this data and treat it in its own way: one will construct a query for the graph, another will bind it to your SQL.
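A minimal sketch of that idea, reusing the CountryData and Criteria classes from the question (the SQL, the Country constructor and getName() are assumptions made for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;
import javax.sql.DataSource;

// SQLCountryData.java - binds the criteria data to a SQL query
public final class SQLCountryData implements CountryData {

    private final DataSource dataSource;

    public SQLCountryData(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public List<Country> getByCriteria(Criteria criteria) {
        String sql = "select name from country where name = ?";
        List<Country> result = new ArrayList<>();
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, criteria.getCountryName());
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.add(new Country(rs.getString("name")));
                }
            }
        } catch (SQLException e) {
            throw new IllegalStateException("Country query failed", e);
        }
        return result;
    }
}

// GraphCountryData.java - turns the same criteria into an in-memory predicate
public final class GraphCountryData implements CountryData {

    private final Collection<Country> vertices;

    public GraphCountryData(Collection<Country> vertices) {
        this.vertices = vertices;
    }

    public List<Country> getByCriteria(Criteria criteria) {
        return vertices.stream()
                .filter(country -> country.getName().equals(criteria.getCountryName()))
                .collect(Collectors.toList());
    }
}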
To minimize the maintenance hassle, I would suggest you use a dependency injection framework (like Spring, for example). Usually these frameworks are well suited to instantiating your DAO objects, and they play well together.
Good Luck!
No, DAO for databases only is a common misconception.
DAO is a "Data Access Object", not a "Database Access Object". Hence anywhere you need to CRUD data to/from ( e.g. file, memory, database, etc.. ), you can use DAO.
In Domain Driven Design there is a Repository pattern. While Repository as a word is far better than three random letters (DAO), the concept is the same.
The purpose of the DAO/Repository pattern is to abstract a backing data store, which can be anything that can hold a state.
