I was asked to create documentation for the classes in the business logic module of a project. I noticed that there was a pattern in how the classes were created. The pattern looks like this:
public class AModel {
    // fields
    // getters and setters
}

public class AService {
    public void processA(AModel model) {
        // creates an instance of AModel, assigns values to its fields
        // calls ADaoService methods
    }
}

public class ADaoService {
    // has methods which call ADao methods
    // sample:
    public AModel retrieveById(long id) {
        log.debug(...);
        return aDao.retrieveById(id);
    }
}

public class ADao {
    // has an entityManager and some queries
    public AModel retrieveById(long id) {
        return (AModel) entityManager.find(AModel.class, id);
    }
}
What I don't understand is why AService calls ADaoService methods instead of calling ADao methods directly, since the ADaoService methods just delegate to ADao. It seems to me that ADaoService is just a waste of code. They are using Hibernate and a JBoss server. I'm new to this type of architecture. I hope someone can help me understand. Thanks.
Well, if ADaoService is doing nothing but delegating calls to ADao, then you're right - at the moment it has no justification for existing.
Regarding future justifications, well, AFAIK the typical layering does not include an ADaoService layer. Where I work we don't have it. I've never seen it in the Hibernate docs...
Either your architects were generous with layers or they had some non-typical scenario in mind.
If there are no current uses of the layer and no clear future ones, you're better off without it.
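For illustration, collapsing the redundant layer might look roughly like this (a sketch only; the injection style and the getId() accessor are assumptions, not taken from your code):

public class AService {

    @Inject
    private ADao aDao; // use the DAO directly instead of going through ADaoService

    public void processA(AModel model) {
        // business logic on the model
        AModel existing = aDao.retrieveById(model.getId()); // getId() is a hypothetical accessor
        // ... further processing and persistence calls
    }
}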
I want to add unit tests for the method ClassToBeTested.execute(). ClassToBeTested is a business model class received from a REST API. To call that method I have to:
create an AAAclass (which requires mocking 2 inner classes and stubbing 7 methods just to call the method I want to test)
put that mocked AAAclass into ClassToBeTested; ClassToBeTested depends on AAAclass
The AAAclass looks like:
public class AAAclass {
    @SerializedName("BBBclass")
    private BBBclass BBBclass;

    public class BBBclass {
        @SerializedName("CCCclass")
        private CCCclass ccc;

        public DDDclass getDDD() {
            if (ccc != null) {
                return ccc.getDDD();
            }
            return null;
        }
    }

    private class CCCclass {
        @SerializedName("DDDclass")
        private DDDclass ddd;

        public DDDclass getDDD() {
            return ddd;
        }
    }

    public class DDDclass {
    }
}
I get the feeling that I'm doing something wrong, and it seems to be over-mocking:
Don't mock your model: it is easier to read, and you may be able to add convenient constructor/factory methods to your production or test codebase.
So should I really add a special constructor just to use it in unit testing?
As was already mentioned, it is hard to identify what the objects are and what the context is.
But it looks like the classes you mentioned are DTOs that, at the same time, have some business logic in their getters.
So first of all I would recommend extracting the business logic to some other place (for instance a service). It should not live in a DTO.
Second: why are BBBclass, CCCclass and DDDclass inner classes of AAAclass? Can't you make them static? Or, even better, can you extract them into separate top-level classes? It is very important to decrease system complexity.
I think if you solve these issues you won't need to mock such a complex object anymore.
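For illustration, that extraction might look roughly like this (a sketch based on the snippet above; the service name is made up):

// plain DTO, no logic, promoted to a top-level class
public class CCCclass {
    @SerializedName("DDDclass")
    private DDDclass ddd;

    public DDDclass getDDD() {
        return ddd;
    }
}

// the null-handling logic moves into a small service that is trivial to test without mocks
public class DddLookupService {
    public DDDclass extractDDD(CCCclass ccc) {
        return ccc != null ? ccc.getDDD() : null;
    }
}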
At the same time, remember that if you're thinking of adding a constructor/method just for testability, it is already a bad sign. It means that your system is becoming complex and the abstractions don't work well. Try to rethink your abstractions.
Let's say I use JPA with @Transaction annotations.
So to have any method run under a transaction I add a @Transaction annotation and BINGO, my method runs under a transaction.
To achieve this we need to have an interface for the class, and the instance is managed by some container.
Also, I should always call the method through the interface reference so that the proxy object can start the transaction.
So My code will look like:
class Bar {
    @Inject
    private FooI foo;
    ...
    void doWork() {
        foo.methodThatRunUnderTx();
    }
}

class FooImpl implements FooI {
    @Override
    @Transaction
    public void methodThatRunUnderTx() {
        // code runs with JPA context and transaction open
    }
}

interface FooI {
    void methodThatRunUnderTx();
}
Well and Good
Now let's say methodThatRunUnderTx does two logical operations:
[1] call some service (long request/response cycle, let's say 5 sec) and fetch the results
[2] perform some JPA entity modifications
Since this method call is long and we don't want to hold the transaction open for a long time, we change the code so that [2] happens in a separate transaction and methodThatRunUnderTx doesn't run in a transaction.
So we remove the @Transaction from methodThatRunUnderTx and add another @Transaction method to the class, let's say methodThatRunUnderTx2. To call this method from methodThatRunUnderTx we have to inject the bean into itself and add the method to the interface so that the call goes through the proxy object.
So now our code will look like:
class Bar {
    @Inject
    private FooI foo;
    ...
    void doWork() {
        foo.methodThatRunUnderTx();
    }
}

class FooImpl implements FooI {
    @Inject
    private FooI self;

    @Override
    // @Transaction -- remove transaction from here
    public void methodThatRunUnderTx() {
        ...
        self.methodThatRunUnderTx2(); // call through proxy object
    }

    @Override
    @Transaction // add transaction here
    public void methodThatRunUnderTx2() {
        // code runs with JPA context and transaction open
    }
}

interface FooI {
    void methodThatRunUnderTx();
    void methodThatRunUnderTx2();
}
NOW the problem:
We have made methodThatRunUnderTx2() public through the interface.
But it is not something we want to expose as part of FooI's API, and it is not meant to be called from outside.
Any suggestions on how to solve this?
That's why modern containers don't require any interface to be implemented - proxies are created by dynamic subclassing, or bytecode instrumentation is used.
So, the solution to your design issue is simple: implement a helper class containing the transactional method and inject it into the class implementing the interface (and into any other class that can benefit from it).
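A minimal sketch of that helper-class approach (the class names are assumptions, not from the question):

// the helper owns the transactional part and is injected wherever it is useful
class FooTransactionalHelper {
    @Transaction
    public void methodThatRunUnderTx2() {
        // code runs with JPA context and transaction open
    }
}

class FooImpl implements FooI {
    @Inject
    private FooTransactionalHelper helper;

    @Override
    public void methodThatRunUnderTx() {
        // long-running, non-transactional work here
        helper.methodThatRunUnderTx2(); // the call goes through the helper's proxy
    }
}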
Following the Interface Segregation Principle, separate the two logic operations into two interfaces: a fetcher and a modifier. Inject both into class Bar. This allows the two logic implementations to change independently of each other, for example allowing one to be transactional while the other is not. The second interface need not be a public class.
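Roughly, that separation could look like this (a sketch; the interface names and the Request/Result types are placeholders):

// fetcher: the long-running service call, not transactional
interface FooFetcher {
    Result fetchResults(Request request);
}

// modifier: the JPA work; its implementation carries the transactional annotation
interface FooModifier {
    void applyChanges(Result result);
}

class Bar {
    @Inject private FooFetcher fetcher;
    @Inject private FooModifier modifier;

    void doWork() {
        Result result = fetcher.fetchResults(new Request());
        modifier.applyChanges(result);
    }
}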
The question is a very valid one on handling the transaction part. However, if you are trying to hide one piece of functionality behind another, you need to consider these options:
OPTION 1:
Considering - you need to expose the method that performs the whole functionality required by the caller.
In this case of transaction handling, I would suggest keeping the transaction open until the operation completes.
OPTION 2:
Considering - you need to manage transactions efficiently.
Split the interface's methods by functionality into IModifyFoo and ISelectFoo, which modify and select respectively, implement the methods, and annotate the required methods with @Transactional.
Interfaces are designed to be public, which means you need to be aware of what you expose to the external world. In this scenario you have to weigh the principle against the technical challenge.
These are the options I can think of; the technical challenge here comes down to Java basics. A good one to think about.
As you said, if you call a method on the same bean it won't be proxied, so no transaction management happens. To solve this you can use Bean Managed Transactions, where you manually start and commit the transaction:
class FooImpl implements FooI {

    @Resource
    private UserTransaction userTransaction;

    @Override
    // @Transaction -- remove transaction from here
    public void methodThatRunUnderTx() {
        ...
        methodThatRunUnderTx2(); // a plain local call is fine now; the transaction is managed manually
    }

    // no @Transaction here either, because now you manage the transaction yourself
    public void methodThatRunUnderTx2() throws Exception {
        userTransaction.begin(); // start the transaction (the JTA method is begin())
        // code runs with JPA context and transaction open
        userTransaction.commit(); // commit or rollback with proper handling; omitted because it's just an example
    }
}
That way you are not exposing anything extra in the public API, but you'll have a little extra code to manage the transaction.
If you don't want methodThatRunUnderTx2 to be public, make it a private method, remove the @Override annotation, and remove it from the interface.
You have to accept that transaction-based annotations won't work on private methods. So you simply cannot hide (make private) a method that is supposed to be the subject of that kind of annotation.
You can get rid of interfaces (e.g. @LocalBean in the EJB world), but still, you cannot use a private method...
One sure solution to this problem is aspects. They would allow you to get rid of the self.methodThatRunUnderTx2() call from the body of methodThatRunUnderTx(). Most probably the answer to this question could help you: AspectJ and catching private or inner methods.
I'm not sure, however, whether aspects aren't too big a gun for this problem, as they increase complexity and hurt the readability of the code. I would rather think about changing the architecture of your code in such a way that this problem no longer matters.
My project manager wants me to use DAO/DTO objects to access and retrieve data from the database. The project is written in Java SE without any framework or ORM. His argument is that it makes the code more testable and improves the design. Does it make sense?
How about initializing the DAO object? Should it be initialized when the instance of the class holding the DAO field is created:
private PersonDao personDao = new PersonDaoImpl();
or rather initialized when it is necessary?
public class A {
    private PersonDao person;

    public List<Person> findAll() {
        person = new PersonDaoImpl();
        return person.getAll();
    }
}
It makes it easy to mock this interface, but is that the right way to use the DAO pattern?
The Data Access Object is basically an object or an interface that provides access to an underlying database or any other persistence storage.
That definition is from: http://en.wikipedia.org/wiki/Data_access_object
Maybe a simple example can help you understand the concept:
Let's say we have an entity to represent an employee:
public class Employee {
    private int id;
    private String name;

    public int getId() {
        return id;
    }
    public void setId(int id) {
        this.id = id;
    }
    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
}
The employee entities will be persisted into a corresponding Employee table in a database. A simple DAO interface to handle the database operation required to manipulate an employee entity will be like:
interface EmployeeDAO {
    List<Employee> findAll();
    Employee findById(int id);
    List<Employee> findByName(String name);
    boolean insertEmployee(Employee employee);
    boolean updateEmployee(Employee employee);
    boolean deleteEmployee(Employee employee);
}
Next we have to provide a concrete implementation for that interface to deal with SQL server, and another to deal with flat files, etc...
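For example, a JDBC-backed implementation might start out like this (a minimal sketch; the employee table and column names are assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

class JdbcEmployeeDAO implements EmployeeDAO {

    private final Connection connection;

    JdbcEmployeeDAO(Connection connection) {
        this.connection = connection;
    }

    @Override
    public Employee findById(int id) {
        // assumes an "employee" table with "id" and "name" columns
        try (PreparedStatement ps = connection.prepareStatement(
                "SELECT id, name FROM employee WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null;
                }
                Employee e = new Employee();
                e.setId(rs.getInt("id"));
                e.setName(rs.getString("name"));
                return e;
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    // the remaining methods follow the same pattern; stubbed here to keep the sketch short
    @Override public List<Employee> findAll() { throw new UnsupportedOperationException(); }
    @Override public List<Employee> findByName(String name) { throw new UnsupportedOperationException(); }
    @Override public boolean insertEmployee(Employee employee) { throw new UnsupportedOperationException(); }
    @Override public boolean updateEmployee(Employee employee) { throw new UnsupportedOperationException(); }
    @Override public boolean deleteEmployee(Employee employee) { throw new UnsupportedOperationException(); }
}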
Hope that helps
To maximize the benefits of testability and separation of concerns you should introduce the concept of Inversion of Control (IoC). When IoC is applied to the management of object lifecycles, the term Dependency Injection is used. What this means is that your class A should be completely agnostic of which implementation is instantiated and when.
In order to achieve this you need an extra component to bootstrap your application and inject all classes with the correct implementations.
You could set up your dependency-receiving class like this (setter injection; you can also use constructors):
public class PersonServiceImpl implements PersonService {

    private PersonDao personDao;

    public List<Person> findAll() {
        return personDao.getAll();
    }

    public void setPersonDao(PersonDao personDao) {
        this.personDao = personDao;
    }
}
And a component to do the dependency injection:
public class ApplicationContext {

    private PersonService personService;
    private PersonDao personDao;

    public PersonService getPersonService() {
        if (personService == null) {
            PersonServiceImpl impl = new PersonServiceImpl();
            impl.setPersonDao(getPersonDao());
            personService = impl;
        }
        return personService;
    }

    public PersonDao getPersonDao() {
        if (personDao == null) {
            personDao = new PersonDaoImpl();
        }
        return personDao;
    }
}
Then application startup would involve this:
public class Main {
    public static void main(String[] args) {
        ApplicationContext ctx = new ApplicationContext();
        PersonService personService = ctx.getPersonService();
        personService.findAll();
    }
}
As you can see, the ApplicationContext encapsulates knowledge about:
which implementations to use
in which order to set a chain of dependencies
which dependencies are already instantiated or not
The PersonServiceImpl class is now completely testable and all concerns regarding object lifecycle management have been extracted from it.
In real life this is often done using a framework like Spring or CDI (which has been becoming more and more popular recently). But in your situation, starting off with an approach like the above might be a good first step. It will reap the immediate benefits mentioned by your project manager without incurring the overhead of introducing Spring, possibly changing your build, and having to learn how that works (e.g. with an XML context, source code context and/or annotations).
Introducing Spring at a later stage will be easy because all classes are already prepared for Dependency Injection. Just keep in mind that your factory (ApplicationContext in my example) should not take on any extra responsibilities like configuration management.
Also keep in mind that the above example of ApplicationContext is not a singleton. You yourself should make sure only one instance of it is created when your application starts, and all injections are handled by it. Creating duplicate instances could cause confusing bugs.
The DAO pattern is not an "enterprise" pattern. It's mostly seen in "enterprise" applications, but you can absolutely use it in an application written in SE only.
Writing an SE application doesn't exempt you from testing, so indeed, your code will be more testable using the DAO pattern and IoC rather than straight JDBC.
The way you're implementing your class using the DAO is problematic: the class cannot be tested properly because of the tight coupling between your class A and your DAO implementation. You're better off also using IoC, with a framework like Guice or Dagger (both designed with SE in mind).
For a code example, look at slnowak's answer.
The way you are using it, it is still tightly coupled with your class A.
You should provide your DAO as a dependency, using a constructor or setter. Probably the most preferable way is to use some kind of Inversion of Control (for example, a dependency injection framework).
Your class A should be something like:
public class A {

    private PersonDao personDao;

    // possibly an @Inject annotation
    public A(PersonDao personDao) {
        this.personDao = personDao;
    }

    public List<Person> findAll() {
        return personDao.getAll();
    }
}
And actually, whether this is an antipattern depends on how you are going to use your class A.
If it contains different business logic - fine.
If it just dispatches calls to the DAO (I don't like this name; maybe use Repository instead ;)) then it is just an unnecessary layer of abstraction.
Another thing - you mentioned DTO. So the Person class is just a DTO in this case? Here we could have another antipattern. A DTO is fine, for example, if you need to transform your business object(s) into something that is visible on the screen, or to separate the persistence model from the business model.
What I'm trying to say is: don't make a Person class just a data structure. Give it some behaviour.
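For instance (a made-up example of keeping behaviour with the data instead of exposing bare fields):

public class Person {

    private final String firstName;
    private final String lastName;

    public Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // behaviour lives with the data instead of in some external helper
    public String fullName() {
        return firstName + " " + lastName;
    }
}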
I am working on a project where I am using MyBatis annotations as the persistence framework. Therefore, I have to create an interface for the 'mapper' and compose the mapper in the service like:
class XYZServiceImpl {
    public XYZMapper getXYZMapper() {
        return SessionUtil.getSqlSession().getMapper(XYZMapper.class);
    }
}
Now, while unit testing the service with Mockito, I am trying to inject a mock for the mapper. But since I am injecting the mock into an instance of XYZService, how can I mock a method of the service itself? In this case getXYZMapper() is what I am trying to stub. I do have a solution: create the XYZMapper instance in the service up front instead of on demand as the above code does, something like:
class XYZServiceImpl {
    XYZMapper mapper;

    public void useXYZMapper() {
        mapper = SessionUtil.getSqlSession().getMapper(XYZMapper.class);
    }
}
But that would require a lot of code changes (of course I can refactor); is there a way to achieve this without making code changes?
Also, what would be the 'purist' way to hold a mapper instance in the class - is method 1 better than method 2 in terms of performance?
EDIT : Here XYZMapper is an interface. Something like :
public interface XYZMapper {
    @Select("SELECT * FROM someclass WHERE id = #{id}")
    public SomeClass getSomeClass(int id);
}
EDIT: I am facing a similar situation, with the variation that I have a service I do want to test, like XYZServiceImpl. It has a method getXYZDetails() with a lot of business logic handled within the service. getXYZDetails looks like the following:
public XYZDetails getXYZDetails(int id) {
    XYZDetails details = new XYZDetails();
    details.set1Details(fetchSet1Details(id));
    // Perform some business logic
    details.set2Details(fetchSet2Details(id));
    if (details.getSet2Details() != null) {
        for (int i = 0; i < details.getSet2Details().size(); i++) {
            flushTheseDetails(i);
        }
    }
    .
    .
}
Kindly note that fetchSet1Details(), fetchSet2Details() and flushTheseDetails() are public, public and private service methods respectively.
I want to know of a way to mock/stub these methods while testing getXYZDetails(), thus enabling me to
There are several options you can use.
Inject dependency
This works only for simple methods like getXYZMapper, where the method only returns an external dependency of your object. This may require creating new XYZServiceImpl instances if, for example, the mapper is bound to a connection which is opened per request.
Encapsulate method behavior in object
Another way to achieve a similar result is to use a factory or service locator, like this:
public class XYZServiceImpl {

    private final XYZMapperFactory mapperFactory;

    public XYZServiceImpl(XYZMapperFactory mapperFactory) {
        this.mapperFactory = mapperFactory;
    }

    public XYZMapper getXYZMapper() {
        return mapperFactory.getMapper();
    }
}
This will allow you to easily substitute the factory in tests with an implementation that returns a mock mapper.
A similar approach can be used for the other methods fetchSet1Details, fetchSet2Details and flushTheseDetails, that is, moving them to another class or classes. If a method contains complex (and maybe loosely related) logic, it is a good candidate to be moved to a separate class. Think about what these methods do: usually you can move some essential but loosely related part of them to another class or classes, and this makes mocking them much easier. A rough sketch of this follows below.
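A rough sketch of that extraction (the DetailsGateway name and the Set1Details/Set2Details types are assumptions, not part of the original code):

// the detail-fetching logic moves behind its own seam
public interface DetailsGateway {
    Set1Details fetchSet1Details(int id);
    Set2Details fetchSet2Details(int id);
}

public class XYZServiceImpl {

    private final DetailsGateway detailsGateway;

    public XYZServiceImpl(DetailsGateway detailsGateway) {
        this.detailsGateway = detailsGateway;
    }

    public XYZDetails getXYZDetails(int id) {
        XYZDetails details = new XYZDetails();
        details.set1Details(detailsGateway.fetchSet1Details(id));
        details.set2Details(detailsGateway.fetchSet2Details(id));
        // the business logic stays here and is tested with a mocked DetailsGateway
        return details;
    }
}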
Subclass
This is not recommended but in legacy code sometimes is very helpful as a temporary solution.
In your test, subclass the class under test and override the methods you need:
@Test
public void someTest() {
    XYZServiceImpl sut = new XYZServiceImpl() {
        public XYZMapper getXYZMapper() {
            return mapperMock;
        }
        public Whatever fetchSet1Details() {
            return whateverYouNeedInTest;
        }
    };
    sut.invokeMethodUnderTest();
}
The only thing you may need to do is change the access modifier of the private methods to package-private or protected so that you can override them.
Spying
This approach is also discouraged, but you can use Mockito spies:
XYZServiceImpl realService = new XYZServiceImpl();
XYZServiceImpl spy = Mockito.spy(realService);
// for spies, prefer doReturn(...).when(spy) so the real methods are not invoked during stubbing
doReturn(whateverYouNeed).when(spy).fetchSet1Details();
doReturn(mockMapper).when(spy).getXYZMapper();
spy.methodUnderTest();
I would suggest the "purist" way of doing this is to accept an XYZMapper instance in your constructor and store it in a local field.
In production, you can pass in e.g. an SQLXYZMapper, which interacts with your database. In tests, you can pass in a mocked object whose interactions you can verify.
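A minimal sketch of that constructor-based approach, including a Mockito-style test (the loadSomeClass method is a made-up example):

public class XYZServiceImpl {

    private final XYZMapper mapper;

    public XYZServiceImpl(XYZMapper mapper) {
        this.mapper = mapper;
    }

    public SomeClass loadSomeClass(int id) {
        return mapper.getSomeClass(id);
    }
}

// in a test:
XYZMapper mapperMock = Mockito.mock(XYZMapper.class);
Mockito.when(mapperMock.getSomeClass(42)).thenReturn(new SomeClass());
XYZServiceImpl service = new XYZServiceImpl(mapperMock);
service.loadSomeClass(42);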
I can best explain the question with an example.
I have an Interface Model which can be used to access data.
There can be different implementations of Model which represent the data in various formats, say XML, txt, etc. Model is not concerned with the formats.
Let's say one such implementation is myxmlModel.
Now I want to force myxmlModel and every other implementation of Model to follow the Singleton pattern. The usual way is to make myxmlModel's constructor private and provide a static factory method to return an instance of the class. But the problem is that an interface cannot have static method definitions, and as a result I cannot enforce a particular factory method definition on all implementations of Model. So one implementation may end up providing getObject() and another may have getNewModel().
One workaround is to allow package access to myxmlModel's constructor and create a Factory class which creates the myxmlModel object and caches it for further use.
I was wondering if there is a better way to achieve the same functionality.
Make a factory that returns instances of your interface, Model.
Make all concrete implementations of the model package-private classes in the same package as your factory.
If your model is to be a singleton, and you are using Java 5+, use an enum instead of a traditional singleton, as it is safer.
public enum MyXMLModel {
    INSTANCE;
    // rest of class
}
EDIT:
Another possibility is to create delegate classes that do all the work and then use an enum to provide all of the Model Options.
for instance:
class MyXMLModelDelegate implements Model {
    public void foo() { /* does foo */ }
    ...
}

class MyJSONModelDelegate implements Model {
    public void foo() { /* does foo */ }
    ...
}

public enum Models {
    XML(new MyXMLModelDelegate()),
    JSON(new MyJSONModelDelegate());

    private final Model delegate;

    // enum constructors cannot be public
    Models(Model delegate) { this.delegate = delegate; }

    public void foo() { delegate.foo(); }
}
You can use reflection. Something like this:
public interface Model {
    class Singleton {
        public static Model instance(Class<? extends Model> modelClass) {
            try {
                // requires each implementation to expose a public static field named "instance"
                return (Model) modelClass.getField("instance").get(null);
            } catch (blah-blah) {
                blah-blah
            }
        }
    }
}

public class XmlModel implements Model {
    public static final Model instance = new XmlModel();

    private XmlModel() {
    }
}
usage:
Model.Singleton.instance(XmlModel.class)
Actually, I don't like this code much :). First, it uses reflection, which is slow; second, there is the possibility of runtime errors if classes are defined incorrectly.
Can you refactor the interface to be an abstract class? This will allow you to force a particular factory method down to all implementing classes.
I used to ask myself the same question. And I proposed the same answer ;-)
Now I normally drop the "forcing" behaviour and rely on documentation.
I found no case where the Singleton aspect was so compelling that it needed to be enforced by all means.
It is just a "best-practice" for the project.
I usually use Spring to instantiate such an object, and it is the Spring configuration that makes it a singleton. Safe, and so easy... plus additional Spring advantages (such as proxying, substituting a different object to make some tests, etc.).
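For example, with Java-based Spring configuration (a sketch; Spring beans are singleton-scoped by default, so the class itself needs no singleton boilerplate, and MyXmlModel stands in for the XML-backed implementation):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ModelConfig {

    // one shared instance per Spring container
    @Bean
    public Model model() {
        return new MyXmlModel(); // stands in for the myxmlModel implementation
    }
}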
This is more an answer to your comment/clarification on kts's answer. Is it the case that the real problem is not using the Singleton pattern, but rather defining an Eclipse (Equinox) extension point schema that allows contributing a singleton?
I think this can't be done, because every time you call IConfigurationElement.createExecutableExtension you create a new instance. This is quite incompatible with your singleton requirement. And that is why you need the public default constructor, so that everybody can create instances.
Unless you can change the extension point definition so that plugins contribute a ModelFactory rather than a Model, like:
public interface ModelFactory {
    public Model getModelInstance();
}
So the extension user will instantiate a ModelFactory and use it to obtain the singleton.
If I guessed wrong, leave a comment and I'll delete the answer ;)