Need for separate interface and impl for DAO - java

We have a typical n-tier Java application, and I noticed that our data access layer has DAOs that come in pairs: FooDAO and FooDAOImpl. I was looking to justify the need for the two, and here is my analysis.
1. If you had multiple implementations of the same interface, the abstraction would be helpful. But given that we have already chosen the framework to be used for the DAOImpl (say iBATIS), is it really required?
2. Help in proxying via Spring. From what I gather, classes that have interfaces can be proxied easily (via the JDK dynamic proxy route), whereas classes without interfaces go the CGLIB route, where the class to be proxied has to be subclassed. Subclassing has its problems when the class to be proxied is final or has no default constructor - both of which are highly unlikely at the data access layer. Performance used to be a factor, but from what I hear it is no longer a cause for concern.
3. Help in mocking. Classes with interfaces are said to be better suited for mocking by mocking frameworks. I have only heard this, but not seen it in practice - so I can't really count on it; it may come down to the same factors as mentioned in #2 above.
With these points, I don't see a real need for a separate FooDAO and FooDAOImpl; a simple FooDAO should suffice. Feel free to correct any of the points that I have mentioned.
Thanks in advance!

I tried #3 with Mockito and it was able to mock POJOs without interfaces just fine. Given the arguments against #1 and #2, I am inclined not to go with separate DAO and DAOImpl for now. Feel free to add other points of comparison.
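For reference, here is roughly what such a Mockito experiment could look like - a minimal sketch in which FooDAO is a plain concrete class with no interface, and Foo, FooService and their methods are hypothetical names invented for the example:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Assert;
import org.junit.Test;

public class FooServiceTest {

    @Test
    public void returnsNameLoadedThroughTheDao() {
        // Mockito happily mocks a concrete class - no FooDAO interface needed
        FooDAO dao = mock(FooDAO.class);
        when(dao.findById(42L)).thenReturn(new Foo(42L, "bar"));

        FooService service = new FooService(dao);
        Assert.assertEquals("bar", service.nameOf(42L));
    }
}

Under the hood Mockito subclasses FooDAO at runtime (a bytecode-level subclassing trick, much like the CGLIB route mentioned in #2), which is why no interface is required as long as the mocked methods are not final.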

I see a fourth reason:
Hide implementation details
Depending for instance on the team, the expected lifetime of the software or the amount of change anticipated in the future, it pays to hide as much as possible even if there is only one implementation.

I would say that having a FooDAO interface with a single implementation FooDAOImpl is generally an anti-pattern. The simpler solution (no separate interfaces for DAOs) is much preferable, unless you have a solid reason otherwise - which doesn't seem to be the case here.
Personally, I would go even further and say that having DAOs at all is not the best choice. I prefer having a single persistence facade class implemented on top of an ORM API such as Hibernate or JPA. Maybe iBATIS is too low-level for that, though.
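To make the "single persistence facade" idea concrete, here is a minimal sketch over plain JPA (the class name and query are invented for the example; it assumes a container-managed EntityManager):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class PersistenceFacade {

    @PersistenceContext
    private EntityManager em;

    public void save(Object entity) {
        em.persist(entity);
    }

    public <T> T find(Class<T> type, Object id) {
        return em.find(type, id);
    }

    public <T> List<T> findAll(Class<T> type) {
        // simple "select all" query built from the entity name
        return em.createQuery("select e from " + type.getSimpleName() + " e", type)
                 .getResultList();
    }
}

One such facade can replace a whole family of near-identical DAO classes, at the cost of pushing entity-specific queries elsewhere.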

Related

Service and DAO always implement interfaces

In all the MVC projects I've seen, "service" and "DAO" classes always implemented their own interfaces. But almost all the time, I haven't seen a situation in which having this interface has been useful.
Is there any reason to use interfaces in these cases? What may be the consequence of not using interfaces in "service" and "DAO" classes? I can't imagine any consequences.
Spring is an Inversion of Control container. This, in one sense, means that the choice of implementation classes doesn't fall on the application code but on its configuration. If you have a class that needs a UserRepository to store User instances, it'd be something like
class UserService {
    @Autowired
    private UserRepository userRepository;
}

interface UserRepository {
    List<User> getUsers();
    User findUserBySIN(String SIN);
    List<User> getUsersInCountry(Long countryId);
}
And you would have a bean declared for it
<bean class="com.myapp.UserRepositoryHibernateImpl">
...
</bean>
Notice this bean is UserRepositoryHibernateImpl which would implement UserRepository.
At some point in the future, the Hibernate project stops being supported and you really need a feature that is only available in MyBatis, so you need to change implementations. Because your UserService class uses a UserRepository declared with the interface type, only the methods visible on the interface are visible to the class. So changing the actual polymorphic type of userRepository doesn't affect the rest of the client code. All you need to change (apart from creating the new class) is
<bean class="com.myapp.future.UserRepositoryMyBatisImpl">
...
</bean>
and your application still works.
There are lots of arguments in favour of interfaces, see Google.
I can add to the points other people mentioned:
Imagine you change your DAO implementations from Hibernate to iBATIS. Depending on the interface rather than the implementation would be a great help for the service layer.
If you do AOP or proxying via JDK dynamic proxies, then your classes must implement interfaces. This is not the case with CGLIB, which works by subclassing.
In the service layer, if you want to release your methods for other clients to call, giving them an "interface as a contract" makes more sense than handing out implementations.
If you ever want to separate services.jar from daos.jar, then having interfaces on your DAOs would save services.jar from recompiling whenever daos.jar changes.
In short, it is just good to have interfaces!
The interface-based approach helps with mocking in the test suite. In our project, while testing the service layer, we mock the DAOs and provide hard-coded data instead of really connecting to the DB. The same argument applies to the service layer as well.
Using interfaces early on makes your application ready to scale; the consequence of not using them is sacrificing your application's scalability.
I've been asking myself the exact same question recently, feeling that creating an interface even when I know there's only ever going to be a single implementing class is silly and adds to the bloat (every Java programmer who has tried a more pragmatic language will know the feeling). That's yet another compilation unit, often created only to satisfy one's inner dogmatist.
Spring seems to have evolved into a module/component-oriented framework where the programmer only creates the blocks and the framework assembles it all together. This is why having more than one bean matching the criteria is a problem and complicates things (you end up using qualifiers, which kind of kills the purpose of DI). Programmers will naturally try to avoid type ambiguities to minimise the amount of required configuration, ideally making any given block fit into only one "slot".
In my opinion, DI's biggest advantage is not that it makes it easy to change implementations (by simply changing the declared class in the config XML), but that it allows easier separation of dependencies, thus making it easier to test each component in isolation. You don't need one-child interfaces for that.
Since reverse-engineering a class to extract its interface would be a purely mechanical task, I wouldn't worry about "what if I need to add another implementation?" argument.
Disclaimer: opinion of a small-to-mid applications developer; I'm sure the situation changes with large projects.

How to handle internal calls on Spring/EJB/Mockito... proxies?

As many of you know, when you proxy an object - for example when you create a bean with transactional attributes for Spring/EJB, or even when you create a partial mock with some frameworks - the proxied object doesn't know about the proxy, so internal calls are not redirected through it and therefore not intercepted either...
That's why if you do something like that in Spring:
@Transactional
public void doSomething() {
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void doSomethingInNewTransaction() {
    ...
}
When you call doSomething, you expect to have 3 new transactions in addition to the main one, but actually, due to this problem you only get one...
So I wonder how you handle this kind of problem...
I'm actually in a situation where I must handle a complex transactional system, and I don't see any better way than splitting my service into many small services, so that I'm sure to pass through all the proxies...
That bothers me a lot because all the code belongs to the same functional domain and should not be split...
I've found this related question with interesting answers:
Spring - #Transactional - What happens in background?
Rob H says that we can inject the Spring proxy inside the service, and call proxy.doSomethingInNewTransaction(); instead.
It's quite easy to do and it works, but I don't really like it...
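For completeness, a minimal sketch of that workaround, assuming the bean looks up its own proxy from the ApplicationContext (the class name is invented; what getBean returns is the transactional proxy, not the raw instance):

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeService {

    @Autowired
    private ApplicationContext applicationContext;

    private SomeService proxy;

    @PostConstruct
    public void obtainProxy() {
        // what comes back from the context is the proxied bean
        proxy = applicationContext.getBean(SomeService.class);
    }

    @Transactional
    public void doSomething() {
        // going through the proxy, so REQUIRES_NEW is honoured
        proxy.doSomethingInNewTransaction();
        proxy.doSomethingInNewTransaction();
        proxy.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // ...
    }
}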
Yunfeng Hou says this:
So I wrote my own version of CglibSubclassingInstantiationStrategy and proxy creator so that it will use CGLIB to generate a real subclass which delegates calls to its super rather than to another instance, which is what Spring is doing now. So I can freely annotate any methods (as long as they are not private), and from wherever I call these methods, they will be taken care of. Well, I still have a price to pay: 1. I must list all annotations that I want to enable the new CGLIB subclass creation for. 2. I cannot annotate a final method, since I am now generating a subclass, so a final method cannot be intercepted.
What does he mean by "which is what Spring is doing now"? Does this mean internal transactional calls are now intercepted?
What do you think is better?
Do you split your classes when you need some transactional granularity?
Or do you use some workaround like above? (please share it)
I'll talk about Spring and @Transactional, but the advice applies to many other frameworks as well.
This is an inherent problem with proxy-based aspects. It is discussed in the Spring documentation here:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
There are a number of possible solutions.
Refactor your classes to avoid the self-invocation calls that bypass the proxy.
The Spring documentation describes this as "The best approach (the term best is used loosely here)".
Advantages of this approach are its simplicity and that there are no ties to any framework. However, it may not be appropriate for a very transaction-heavy code base, as you'd end up with many trivially small classes.
Internally in the class get a reference to the proxy.
This can be done by injecting the proxy or with a hard-coded AopContext.currentProxy() call (see the Spring docs above).
This method allows you to avoid splitting the classes but in many ways negates the advantages of using the transactional annotation. My personal opinion is that this is one of those things that is a little ugly but the ugliness is self contained and might be the pragmatic approach if lots of transactions are used.
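A minimal sketch of that second variant (this assumes the proxy is exposed, e.g. via @EnableAspectJAutoProxy(exposeProxy = true); the class name is invented):

import org.springframework.aop.framework.AopContext;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class OrderService {

    @Transactional
    public void doSomething() {
        // fetch our own proxy instead of calling this.doSomethingInNewTransaction()
        OrderService self = (OrderService) AopContext.currentProxy();
        self.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // ...
    }
}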
Switch to using AspectJ
As AspectJ does not use proxies, self-invocation is not a problem.
This is a very clean method, though it comes at the expense of introducing another framework. I've worked on a large project where AspectJ was introduced for this very reason.
Don't use #Transactional at all
Refactor your code to use manual transaction demarcation - possibly using the decorator pattern.
An option - but one that requires moderate refactoring, introduces additional framework ties and increases complexity - so probably not a preferred option.
My Advice
Usually splitting up the code is the best answer and can also be a good thing for separation of concerns. However, if I had a framework/application that relied heavily on nested transactions, I would consider using AspectJ to allow self-invocation.
As always when modelling and designing complex use cases - focus on understandable and maintainable design and code. If you prefer a certain pattern or design but it clashes with the underlying framework, consider if it's worth a complex workaround to shoehorn your design into the framework, or if you should compromise and conform your design to the framework where necessary. Don't fight the framework unless you absolutely have to.
My advice - if you can accomplish your goal with such an easy compromise as to split out into a few extra service classes - do it. It sounds a lot cheaper in terms of time, testing and agony than the alternative. And it sure sounds a lot easier to maintain and less of a headache for the next guy to take over.
I usually make it simple, so I split the code into two objects.
The alternative is to demarcate the new transaction yourself, if you need to keep everything in the same file, using a TransactionTemplate. A few more lines of code, but not more than defining a new bean. And it sometimes makes the point more obvious.
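A minimal sketch of that TransactionTemplate approach (the names are invented; the template is configured for REQUIRES_NEW so each execute() call runs in its own new transaction):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class ReportService {

    private final TransactionTemplate requiresNewTx;

    public ReportService(PlatformTransactionManager txManager) {
        this.requiresNewTx = new TransactionTemplate(txManager);
        this.requiresNewTx.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    public void doSomething() {
        // each execute() call runs in its own new transaction,
        // regardless of any transaction already active on this thread
        requiresNewTx.execute(status -> {
            // ... unit of work 1
            return null;
        });
        requiresNewTx.execute(status -> {
            // ... unit of work 2
            return null;
        });
    }
}

The transaction boundaries are now explicit in the code instead of hidden behind annotations, which is sometimes exactly the point being made above.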

Obvious flaws in my EJB3 design

I have a domain object called VehicleRecord which is a Hibernate entity. CRUD operations on these VehicleRecords are handled through an entity access object, exposed as a local interface and implemented as a stateless session bean:
@Local
interface VehicleRecordEao {
    void add(VehicleRecord record);
    List<VehicleRecord> findAll();
    ...
}

@Stateless
class HibernateVehicleRecordEaoBean implements VehicleRecordEao { ... }
From a business layer perspective, removing and adding these records is more than just a CRUD operation. There may be logging and security requirements, for example. To provide these operations to clients, session beans are created in a business layer.
@Local
interface VehicleRecordManager {
    void createVehicleRecord(VehicleRecord record);
    List<VehicleRecord> findAll(String make, String model);
    ...
}

@Stateless
class VehicleRecordManagerBean implements VehicleRecordManager {
    public void createVehicleRecord(VehicleRecord record) {
        // business rules such as logging, security,
        // perhaps a required web service interaction
        // add the new record with the Eao bean
    }
    ...
}
Controllers work between the presentation layer and the above business layer to get work done, converting between presentation objects (like forms) and entities as necessary.
Something here is not right (smells). I have a class called Manager, which has got to be a red flag, but several examples in EJB books actually suggest this kind of high-level class and tend to use the name Manager themselves. I am new to EJB design, but in OO design, creating high-level classes called Manager or Handler or Utility smacks of procedural code and should make one rethink the design.
Are these procedural, utility-class session beans the normal pattern, or is it bad to organize your session beans as a bunch of methods related only to the entity they operate on? Are there any naming conventions for session beans? Should the EAOs and the business session beans work in the same layer?
If there is a less smelly alternative to this pattern I would love to know, thanks.
Your approach is more or less standard. Yes, at a fundamental level this is a procedural approach and goes hand in hand with what has been dubbed the Anemic Domain Model "anti-pattern". The alternative is to incorporate your business logic into your domain model so as to create a more OO design where your operations are coupled with your domain objects. If you were to go down this route you should be aware of the inherent pros and cons. If you feel that your approach makes the most sense and is easy to understand, test, scale, etc., then roll with it. I have worked on several n-tier projects where this exact "procedural" style is used - rest assured it is quite standard in EE applications to do things this way.
It's an age-old discussion. Should bread bake itself, or does an oven bake it? Does a message send itself, or does the post office send it? Do I send a message to my friend Pete by asking him to send a message to himself?
According to those who came up with the term ADM, it's more "natural" and more "OO" to let each object do all those things itself. Taken to the extreme (which no one advocates of course, but just as an example) class User would contain all logic for the entire application, since ultimately the user does everything.
If you use slim entities (entities containing only data) and service or DAO objects to work with them, then you're not necessarily not doing OO. There is still a lot of OO going around. Services implement interfaces, inherit from base classes, have encapsulated state (e.g. the EntityManager in JPA), have transparent interceptors via dynamic proxies (e.g. to check security and start/commit a transaction), etc etc.
The term "Anemic" has a definite negative association, but calling the same thing "Slim" actually sounds good. This isn't just a case of using different names to hide code smells, but represents a different thinking among different groups of people.
I think it's up to your taste, and depending on the complexity of your application these things matter more or less. I think in design, you simply need to think along the lines of:
"If someone new came in without any prior knowledge of the system in question - is it intuitive enough for that person to follow and trace the code, and straightforward to find where things are?"
In your case, naming the EJB after the entity object makes it straightforward and simple. What I don't get is why you separated it into two classes, EAO and Manager. Why don't you just combine them into one, so that if the EJB/bean class deals with the VehicleRecord entity, it will be the "VehicleRecordEAO" or "VehicleRecordManager" or "VehicleRecordAccess" or anything really.
I think EAO / DAO / Access sounds more like getters / setters - or other simple operations. I don't see anything wrong with "Manager"; just make it consistent across the board that everything in the business layer is called "Manager".
Or if you feel better, think of it as the Facade Pattern - so you can call your business layer (the Manager) as VehicleRecordFacade and VehicleRecordFacadeBean.
That way you basically follow the name and concept of Facade pattern, where it becomes intermediary between application layer and the data layer.
Something here is not right (smells). I have a class called Manager which has got to be a red flag
My answer here is towards this concern of yours.
Yes. It is a red flag. Naming a class like "VehicleRecordManager" would be a code smell suggesting that Single responsibility principle would be violated sooner or later.
To elaborate, let me take a few use cases to deal with VehicleRecord
Buy a vehicle
Rent a vehicle
Search for a vehicle
Sell a vehicle
Find Dealers
In most Java applications, when we write a "VehicleService" (or "VehicleManager"), all the above operations end up in this one class! Well, that is easy to do, but hard to maintain. And certainly this class has a lot of responsibilities, hence many reasons to change (violating the Single Responsibility Principle).
Would calling it VehicleDao eliminate some of the smelliness? A simple change, but it indicates clearly that the class is concerned with data access.
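As a hedged illustration only (these service names and method signatures are invented, reusing the VehicleRecord entity and EJB annotations from the question), the use cases above could be split along responsibility lines roughly like this:

import java.util.List;
import javax.ejb.Stateless;

@Stateless
class VehicleSalesService {
    public void buy(VehicleRecord record) { /* pricing, contract, payment ... */ }
    public void sell(VehicleRecord record) { /* ... */ }
}

@Stateless
class VehicleRentalService {
    public void rent(VehicleRecord record) { /* availability, rental agreement ... */ }
}

@Stateless
class VehicleSearchService {
    public List<VehicleRecord> search(String make, String model) { /* delegate to the EAO ... */ return null; }
    public List<String> findDealers(String region) { /* ... */ return null; }
}

Each class now has one reason to change, instead of one "Manager" accumulating them all.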

Can Class Interface be used for Separation of Concerns instead of AOP?

I asked a question about interfaces here: How to organize class interfaces hierarchy? and someone answered "separation of concerns".
Is there a link between this separation of concerns via class interfaces and AOP?
AOP is just a different programming paradigm, which has OOP as a pillar beneath it.
Class interfaces are something more specific, and should be used when you want to define a property that can be common to different classes.
The separation of concerns that was mentioned is probably related to those different properties: once spotted and distinguished, each one should lead to the creation of a new interface, consequently allowing other classes to implement it and making what they have in common visible and explicit.
AOP shouldn't be used for that purpose, because it involves its own paradigm, and that is a task you can already achieve with interfaces. AOP changes things at another level, allowing you to change the behaviour of a whole program by defining pointcuts to be advised.
Using an interface you can group related methods together and then encapsulate the details in the implementation. This generally makes your application more portable. For instance, if you have multiple DAO implementations for different database vendors, you can create an interface and implement it for each DB. You can swap out the implementation while keeping the structure of the interface the same.
Using AOP you can decouple the cross-cutting concerns in an application. For instance, if all your DAO methods require transaction management, that's a common concern and you can use AOP for it.
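As a hedged illustration of such a cross-cutting concern, here is a minimal Spring AOP aspect that times every DAO call (the package name is invented; declarative transaction management is wired up with the same kind of around advice underneath):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class DaoTimingAspect {

    // matches every method of every class in the (hypothetical) com.myapp.dao package
    @Around("execution(* com.myapp.dao..*.*(..))")
    public Object timeCall(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return pjp.proceed(); // invoke the actual DAO method
        } finally {
            System.out.println(pjp.getSignature() + " took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }
}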
Separation of concerns is a generic term and a common programming principle. You want to decouple as much as you can. Using both interfaces and AOP, you can promote decoupling.

Java Interfaces Methodology: Should every class implement an interface?

I've been programming in Java for a few courses in the University and I have the following question:
Is it methodologically accepted that every class should implement an interface? Is it considered bad practice not to do so? Can you describe a situation where it's not a good idea to use interfaces?
Edit: Personally, I like the notion of using interfaces for everything as a methodology and habit, even if it's not clearly beneficial. Eclipse automatically creates a class file with all the methods, so it doesn't waste any time anyway.
You don't need to create an interface if you are not going to use it.
Typically you need an interface when:
Your program will provide several implementations for your component. For example, a default implementation which is part of your code, and a mock implementation which is used in a JUnit test. Some tools automate creating a mock implementation, for instance EasyMock.
You want to use dependency injection for this class, with a framework such as Spring or the JBoss Micro-Container. In this case it is a good idea to specify the dependencies from one class with other classes using an interface.
Following the YAGNI principle, a class should implement an interface only if you really need it. Otherwise, what do you gain from it?
Edit: Interfaces provide a sort of abstraction. They are particularly useful if you want to interchange different implementations (many classes implementing the same interface). If it is just a single class, then there is no gain.
No, it's not necessary for every class to implement an interface. Use interfaces only if they make your code cleaner and easier to write.
If your program has no current need to have more than one implementation for a given class, then you don't need an interface. For example, in a simple chess program I wrote, I only need one type of Board object. A chess board is a chess board is a chess board. Making a Board interface and implementing it would have just required more code to write and maintain.
It's so easy to switch to an interface if you eventually need it.
Every class does implement an interface (i.e. contract) insofar as it provides a non-private API. Whether you should choose to represent the interface separately as a Java interface depends on whether the implementation is "a concept that varies".
If you are absolutely certain that there is only one reasonable implementation then there is no need for an interface. Otherwise an interface will allow you to change the implementation without changing client code.
Some people will shout "YAGNI", assuming that you have complete control over changing the code should you discover a new requirement later on. Other people will be justly afraid that they will need to change the unchangeable - a published API.
If you don't implement an interface (and use some kind of factory for object creation) then certain kinds of changes will force you to break the Open-Closed Principle. In some situations this is commercially acceptable, in others it isn't.
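A small hedged sketch of the interface-plus-factory combination alluded to here (all names are invented): new implementations can be added behind the factory without touching the calling code, keeping it open for extension but closed for modification.

interface PaymentGateway {
    void charge(long accountId, long amountInCents);
}

class StripeGateway implements PaymentGateway {
    public void charge(long accountId, long amountInCents) { /* call the provider's API ... */ }
}

class PaymentGatewayFactory {
    // callers depend only on the interface; adding or swapping an
    // implementation is a change here, not in every call site
    static PaymentGateway create(String provider) {
        if ("stripe".equals(provider)) {
            return new StripeGateway();
        }
        throw new IllegalArgumentException("Unknown provider: " + provider);
    }
}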
Can you describe a situation where it's not a good idea to use interfaces?
In some languages (e.g. C++, C#, but not Java) you can get a performance benefit if your class contains no virtual methods.
In small programs, or applications without published APIs, you might see a small cost to maintaining separate interfaces.
If you see a significant increase in complexity due to separating interface and implementation then you are probably not using interfaces as contracts. Interfaces reduce complexity. From the consumer's perspective, components become commodities that fulfil the terms of a contract instead of entities that have sophisticated implementation details in their own right.
Creating an interface for every class is unnecessary. Some commonly cited reasons include mocking (not needed with modern mocking frameworks like Mockito) and dependency injection (e.g. Spring; also not needed with modern implementations).
Create an interface if you need one, especially to formally document public interfaces. There are a couple of nifty edge cases (e.g. marker interfaces).
For what it's worth, on a recent project we used interfaces for everything (both DI and mocking were cited as reasons) and it turned out to be a complete waste that added a lot of complexity - it was just as easy to add an interface in the rare cases where we actually needed to mock something out. In the end, I'm sure someone will wind up going in and deleting all of the extraneous interfaces some weekend.
I do notice that C programmers first moving to Java tend to like lots of interfaces ("it's like headers"). The current version of Eclipse supports this, by allowing control-click navigation to generate a pop-up asking for interface or implementation.
To answer the OP's question in a very blunt way: no, not all classes need to implement an interface. Like all design questions, this boils down to one's best judgment. Here are a few rules of thumb I normally follow:
Purely functional objects probably don't need to (e.g. Pattern, CharMatcher - even though the latter does implement Predicate, it is secondary to its core function).
Pure data holders probably don't need to (e.g. LogRecord, Locale).
If you can envision a different implementation of a given functionality (say, an in-memory Cache vs. a disk-based Cache), try to isolate the functionality into an interface. But don't go too far trying to predict the future either.
For testing purposes, it's very convenient when classes that do I/O or start threads are easily mockable, so that users don't pay a penalty when running their tests.
There's nothing worse than an interface that leaks its underlying implementation. Pay attention to where you draw the line and make sure your interface's Javadoc is neutral in that way. If it's not, you probably don't need an interface.
Generally speaking, it is preferable for classes meant for public consumption outside your package/project to implement interfaces so that your users are less coupled to your implementation du jour.
Note that you can probably find counter-examples for each of the bullets in that list. Interfaces are very powerful, so they need to be used and created with care, especially if you're providing external APIs (watch this video to convince yourself). If you're too quick in putting an interface in front of everything, you'll probably end up leaking your single implementation, and you are only making things more complicated for the people following you. If you don't use them enough, you might end up with a codebase that is equally hard to maintain because everything is statically bound and very hard to change. The non-exhaustive list above is where I try to draw the line.
I've found that it is beneficial to define the public methods of a class in a corresponding interface and, when defining references to other classes, to strictly use interface references. This allows for easy inversion of control, and it also facilitates unit testing with mocking and stubbing. It also gives you the liberty of replacing the implementation with some other class that implements the same interface, so if you are into TDD it may make things easier (or more contrived if you are a critic of TDD).
Interfaces are the way to get polymorphism. So if you have only one implementation, one class of a particular type, you don't need an interface.
A good way of learning what are considered good methodologies, especially when it comes to code structure design, is to look at freely available code. With Java, the obvious example is to take a look at the JDK system libraries.
You will find many examples of classes that do not implement any interfaces, or that are meant to be used directly, such as java.util.StringTokenizer.
If you use the Service Provider Interface pattern in your application, interfaces are harder to extend than abstract classes. If you add a method to an interface, all service providers must be rewritten. But if you add a non-abstract method to an abstract class, none of the service providers need to be rewritten.
Interfaces also make programming harder if only a small part of the interface methods usually have a meaningful implementation.
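A hedged sketch of that difference (the class names are invented): adding a concrete method to the abstract base class leaves existing providers compiling unchanged, and non-abstract default bodies also soften the problem of interface methods that rarely have a meaningful implementation.

// Existing providers extend this base class.
abstract class StorageProvider {

    abstract void store(String key, byte[] data);

    // Added later: because it has a body, existing subclasses
    // still compile without any change.
    void storeAsync(String key, byte[] data) {
        new Thread(() -> store(key, data)).start();
    }
}

// Had StorageProvider been an interface, adding storeAsync(...)
// would have forced every provider to implement it.
class FileStorageProvider extends StorageProvider {
    @Override
    void store(String key, byte[] data) {
        // write to disk ...
    }
}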
When I design a new system from scratch I use a component-oriented approach: each component (10 or more classes) provides an interface, which (sometimes) allows me to reuse them.
When designing a tool (or a simple system) that does not necessarily have to be an extensible framework, I introduce interfaces only when I need a second implementation as an option.
I have seen products which exposed nearly every piece of functionality through an interface; it simply took too much time to understand the unnecessary complexity.
An interface is like a contract between a service provider (server) and the user of such a service (client).
If we are developing a web service and exposing REST routes via controller classes, the controller classes can implement interfaces, and those interfaces act as the agreement between the web service and the other applications which use it.
Java interfaces like Serializable, Cloneable and Remote are used to indicate something to the compiler or the JVM. When the JVM sees a class that implements these interfaces, it performs some operations on it to support serialization, cloning or Remote Method Invocation. If your class needs these features, then you will have to implement these interfaces.
Using interfaces is about making your application framework resilient to change. Since, as I mentioned here (Multiple Inheritance Debates II: according to Stroustrup), multiple inheritance was dropped in Java and C#, which I regret, one should always use interfaces, because you never know what the future will hold.
