Our application uses Spring Cache, and we need to know whether a response was returned from the cache or was actually calculated. We are looking to add a flag to the result HashMap that will indicate this. However, whatever the method returns is cached, so we are not sure we can do it inside the calculate method implementation.
Is there any way to know whether the calculate method was executed or the return value came from the cache when calling calculate?
Code we are using for the calculate method:
@Cacheable(
cacheNames = "request",
key = "#cacheMapKey",
unless = "#result['ErrorMessage'] != null")
public Map<String, Object> calculate(Map<String, Object> cacheMapKey, Map<String, Object> message) {
//method implementation
return result;
}
With a little extra work, it is rather simple to add a bit of state to your @Cacheable component service methods.
I use this technique when answering SO questions like this one, to show whether the value came from the cache or from the service method actually computing it. For example:
You will notice the @Cacheable @Service class extends an abstract base class (CacheableService) to help manage the "cacheable" state. That way, multiple @Cacheable @Service classes can utilize this functionality if need be.
The CacheableService class contains methods to query the state of the cache operation, like isCacheMiss() and isCacheHit(). Inside the @Cacheable methods, which only run on a "cache miss", is where you set this bit by calling setCacheMiss(). Again, the setCacheMiss() method is called like so, inside your @Cacheable service method.
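For reference, here is a minimal sketch of what that could look like. CacheableService, isCacheMiss()/isCacheHit()/setCacheMiss() and the AtomicBoolean are the names described above; the CalculationService class, the resetCacheMiss() helper and the wiring are my own assumptions, not the exact code from the original example:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

abstract class CacheableService {

    private final AtomicBoolean cacheMiss = new AtomicBoolean(false);

    public boolean isCacheHit() {
        return !isCacheMiss();
    }

    public boolean isCacheMiss() {
        return cacheMiss.get();
    }

    protected void setCacheMiss() {
        cacheMiss.set(true);
    }

    // assumed helper: reset the bit before each operation you want to observe
    public void resetCacheMiss() {
        cacheMiss.set(false);
    }
}

@Service
class CalculationService extends CacheableService {

    @Cacheable(cacheNames = "request", key = "#cacheMapKey",
               unless = "#result['ErrorMessage'] != null")
    public Map<String, Object> calculate(Map<String, Object> cacheMapKey,
                                         Map<String, Object> message) {
        setCacheMiss(); // only executes when the value was NOT found in the cache
        Map<String, Object> result = new HashMap<>();
        // ... actual computation ...
        return result;
    }
}

A caller would call resetCacheMiss() before invoking calculate(...) and then query isCacheMiss()/isCacheHit() afterwards.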
However, a few words of caution!
First, while the abstract CacheableService class manages the state of the cacheMiss bit with a thread-safe class (i.e. AtomicBoolean), the CacheableService class itself is not thread-safe when used in a highly concurrent environment where multiple @Cacheable service methods set the cacheMiss bit.
That is, if you have a component class with multiple @Cacheable service methods all setting the cacheMiss bit using setCacheMiss() in a multi-threaded environment (which is especially true in a web application), then it is possible to read stale state of cacheMiss when querying the bit. Meaning, the cacheMiss bit could be true or false depending on the state of the cache, the operation called, and the interleaving of threads. Therefore more work is needed in that case, so be careful if you are relying on the state of the cacheMiss bit for critical decisions.
Second, this approach, using an abstract CacheableService class, does not work for Spring Data (CRUD) Repositories based on an interface. As others have mentioned in the comments, you could encapsulate this caching logic in an AOP Advice and intercept the appropriate calls, in this case. Personally, I prefer that caching, security, transactions, etc, all be managed in the Service layer of the application rather than the Data Access layer.
Finally, there are undoubtedly other limitations you might run into, as the example code I have provided above was never meant for production, only demonstration purposes. I leave it to you as an exercise to figure out how to mold these bits for your needs.
A system handles two types of resources. There are write and delete APIs for managing the resources. A client (user) will use a library API to manage these resources. Each resource write (or create) will result in updating a store or a database.
The API would look like:
1) Create Library client. The user will use the returned client to operate on the resources.
MyClient createClient(); //to create the client
2) MyClient interface. Providing operations on a resource
writeResourceType1(id);
deleteResourceType1(id);
writeResourceType2(id);
deleteResourceType2(id);
Some resources are dependent on the other. The user may write them out-of-order (might write a resource before writing its dependent). In order to prevent the system from having an inconsistent state, all changes (resource updates) will be written to a staging location. The changes will be written to the actual store only when the user indicates he/she has written everything.
This means I would need a commit kind of method in the above MyClient interface. So, access pattern will look like
Client client = provider.createClient();
..
client.writeResourceType1(..)
client.writeResourceType1(..)
client.deleteResourceType2(..)
client.commit(); //<----
I'm not comfortable having the commit API in the MyClient interface. I feel it is polluting it and is the wrong level of abstraction.
Is there a better way to handle this?
Another option I thought of is getting all the updates as part of a single call. This API would act as a Batch API
writeOrDelete(List<Operations> writeAndDeleteOpsForAllResources)
The downside of this is that the user has to combine all the operations on their end before making the call. It also stuffs too much into a single call. So I'm not inclined toward this approach.
While both ways that you've presented can be viable options, the thing is that at some point in time the user must somehow say: "Ok, these are my changes, take them all or leave them". This is exactly what commit is, IMO.
And this alone makes some kind of call necessary that must be present in the API.
In the first approach that you've presented it's obviously explicit, and is done with the commit method.
In the second approach it's rather implicit and is determined by the content of the list that you pass into the writeOrDelete method.
So in my understanding, commit must exist somehow, but the question is how do you make it less "annoying" :)
Here are a couple of tricks:
Trick 1: Builder / DSL
interface MyBuilder {
    MyBuilder addResourceType1(id);
    MyBuilder addResourceType2(id);
    MyBuilder deleteResourceType1(id);
    MyBuilder deleteResourceType2(id);
    BatchRequest build();
}
interface MyClient {
BatchExecutionResult executeBatchRequest(BatchRequest req);
}
This approach is more or less like the second method, however it has a clear way of "adding resources" and a single point of creation (pretty much like MyClient, but I believe MyClient will eventually grow more methods, so it is probably a good idea to keep them separate. As you stated: "I'm not comfortable having the commit API in the MyClient interface. I feel it is polluting it and is the wrong level of abstraction.")
An additional argument for this approach is that the builder becomes the "abstraction to go to" in the code that uses it: you don't have to think about passing a reference to a list, or about what happens if someone calls something like clear() on that list, and so on. The builder has a precisely defined API of what can be done.
In terms of creating the builder:
You can go with something like a static utility class, or even add a factory method to MyClient:
// option 1
public class MyClientDSL {
    private MyClientDSL() {}
    public static MyBuilder createBuilder();
}
// option 2
public interface MyClient {
MyBuilder newBuilder();
}
References for this approach: jOOQ (it has a DSL like this), and OkHttp, which has builders for HTTP requests, bodies and so forth (decoupled from the OkHttpClient itself).
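A quick, hedged usage sketch based on the interfaces above (the provider, the ids, and the assumption that executeBatchRequest performs the commit internally are illustrative, not prescribed by the answer):

MyClient client = provider.createClient();
BatchRequest request = MyClientDSL.createBuilder()
        .addResourceType1(id1)
        .addResourceType1(id2)
        .deleteResourceType2(id3)
        .build();
// the "commit" happens inside this single call
BatchExecutionResult result = client.executeBatchRequest(request);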
Trick 2: Providing an execution code block
Now this can be tricky to implement depending on what kind of environment you run in, but the basic idea is borrowed from Spring:
In order to guarantee a transaction while working with databases, Spring provides a special annotation, @Transactional, which, when placed on a method, basically says: "everything inside the method runs in a transaction; I'll commit it myself so that the user won't deal with transactions/commits at all. I'll also roll back upon exception."
So in code it looks like:
class MyBusinessService {
private MyClient myClient; // injected
@Transactional
public void doSomething() {
myClient.addResourceType1();
...
myClient.addResourceType2();
...
}
}
Under the hood they maintain ThreadLocals to make this possible in a multithreaded environment, but the point is that the API stays clean. A commit method might still exist, but it probably won't be used in most cases, leaving aside the really sophisticated scenarios where the user genuinely needs that fine-grained control.
If you use Spring or any other container that manages your code, you can integrate with it (the technical way of doing this is out of scope for this question, but you get the idea).
If not, you can provide a simplistic version of it yourself:
public class MyClientCommitableBlock {

    // CodeBlock<T> is assumed to be a functional interface taking a MyBuilder and returning a T
    public static <T> T executeInTransaction(CodeBlock<T> someBlock) {
        MyBuilder builder = createBuilder();
        T result = someBlock.execute(builder);
        // build the request, execute and commit
        return result;
    }
}
Here is how it looks:
import static MyClientCommitableBlock.*;
public static void main(String[] args) {
Integer result = executeInTransaction(builder -> {
builder.addResourceType1();
...
return 42;
});
}
// or using method reference:
class Bar {
Integer foo() {
return executeInTransaction(this::bar);
}
private Integer bar(MyBuilder builder) {
....
}
}
In this approach the builder, while still defining a precise set of APIs, might not expose an explicit commit method to the end user. Instead it can have a package-private method that is used from within the MyClientCommitableBlock class.
See if this suits you:
Let us have a flag in the staging table, in a column named status.
Status Column values
New : Record inserted by user
ReadyForProcessing : Records ready for processing
Completed : Records processed and updated in Actual Store
Add the method below instead of commit(). Once the user invokes this method/service, pick up the records that belong to this user and are in status New, and post them into the actual store from the staging location:
client.userUpdateCompleted();
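A minimal sketch of what that method might do on the server side. The status values come from the table above; StagingStore, ActualStore, StagedRecord and the method names are hypothetical, just to show the flow:

import java.util.List;

enum Status { NEW, READY_FOR_PROCESSING, COMPLETED }

class StagedRecord { /* id, resource type, payload, status ... */ }

interface StagingStore {
    List<StagedRecord> findByUserAndStatus(String userId, Status status);
    void updateStatus(String userId, Status from, Status to);
}

interface ActualStore {
    void writeAll(List<StagedRecord> records);
}

class CommitService {

    private final StagingStore staging;
    private final ActualStore actualStore;

    CommitService(StagingStore staging, ActualStore actualStore) {
        this.staging = staging;
        this.actualStore = actualStore;
    }

    // invoked when the user calls client.userUpdateCompleted()
    public void userUpdateCompleted(String userId) {
        // 1. mark this user's New records as ready for processing
        staging.updateStatus(userId, Status.NEW, Status.READY_FOR_PROCESSING);
        // 2. copy the ready records into the actual store
        List<StagedRecord> ready = staging.findByUserAndStatus(userId, Status.READY_FOR_PROCESSING);
        actualStore.writeAll(ready);
        // 3. mark them as completed
        staging.updateStatus(userId, Status.READY_FOR_PROCESSING, Status.COMPLETED);
    }
}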
There is another option as well: take the client intervention out entirely (no client.commit() or client.userUpdateCompleted()) and instead run a batch process on a scheduler at specific intervals that scans the staging table and moves the meaningful, user-completed records into the actual store.
I'm currently developing an application using three layers: UI, service, and DAO. At the DAO level I am using Spring's JdbcTemplate. So far so good, but I've encountered a situation I'd like some more insight into.
My DAOs had, at the beginning, only simple CRUD methods. At the service level I'm checking input values, delegating to the DAOs, and also dealing with transactions.
Now I need things more like this one below:
List<Book> getAllBooksByAuthorName(String name)
My question is where to put this one: in the DAO layer using SQL, or in the service by using the core CRUD methods and simply computing in Java?
I tend to use SQL as much as possible instead of computing in the service layer. But now it seems like for every new method I also need to change the interface of the DAO and add a corresponding method to the interface of the service. The service then becomes nothing more than a delegator and parameter checker. It doesn't feel right.
Your points are quite valid, but I don't quite see why you are in doubt. Generally, the DAO pattern reduces coupling between business logic and persistence logic.
public interface BooksDAO{
public boolean save(Book book);
public boolean update(Book book);
public Book findByBookIsbn(int isbn);
public boolean delete(Book book);
//here is what you want
public List<Book> getAllBooksByAuthorName(String name);
}
Now you can have different implementations of BooksDAO, like HibernateBooksDaoImpl or JdbcBooksDAOImpl. The DAO pattern also makes it easy to write isolated JUnit tests that execute fast.
If you have complex queries you can still use the DAO pattern. The query lives on the implementation side, whether that is plain JDBC (SQL), Spring's JdbcTemplate (still SQL), or Hibernate Criteria.
see:
http://docs.jboss.org/hibernate/core/3.6/javadocs/org/hibernate/Criteria.html
For more information look:
http://javarevisited.blogspot.com/2013/01/data-access-object-dao-design-pattern-java-tutorial-example.html
http://www.oracle.com/technetwork/articles/entarch/spring-jdbc-dao-101284.html
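Since the question mentions Spring's JdbcTemplate, here is a rough sketch of how getAllBooksByAuthorName could live on the implementation side. The table and column names, and the Book setters, are assumptions; the remaining BooksDAO methods are omitted:

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

public class JdbcBooksDAOImpl {

    private final JdbcTemplate jdbcTemplate;

    public JdbcBooksDAOImpl(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<Book> getAllBooksByAuthorName(String name) {
        // the filtering happens in SQL, not in the service layer
        String sql = "SELECT isbn, title, author FROM books WHERE author = ?";
        return jdbcTemplate.query(sql, (rs, rowNum) -> {
            Book book = new Book();              // Book getters/setters assumed
            book.setIsbn(rs.getInt("isbn"));
            book.setTitle(rs.getString("title"));
            book.setAuthor(rs.getString("author"));
            return book;
        }, name);
    }

    // save, update, findByBookIsbn, delete would be implemented similarly
}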
That's however how it should be. If the business logic is reduced to nothing except calling a DAO method, then you are lucky to have simple business logic.
It would obviously be extremely inefficient and completely unrealistic to have the service call BookDAO.findAll() and filter the giant list of books returned by the DAO. SQL is the right tool for the job.
Note that the days where mocking was only possible with interfaces are past. Using an interface to define your DAO methods isn't really necessary anymore.
For example you could use the Entity-Control-Boundary-Pattern.
Your package structure will look like the following:
Under the namespace of your application you could introduce a package called "business"; in that package there can be packages named by business responsibility, and these packages are separated into "entity", "control" and "boundary".
com.example.myapplication.business.project.entity -> If you are using JPA all your entities can be stored in this package, contains DTOs
com.example.myapplication.business.project.control -> In this package refactored services can be stored, for example if the DAO-Code is needed in more than just one boundary, the code could be refactored in this package
com.example.myapplication.business.project.boundary -> This package contains all services that can be seen by the client (for example your web page)
In the package "presentation" your ui controllers can be stored and the ui controllers should only access the services stored in the boundary package.
com.example.myapplication.presentation.project
By using this pattern you avoid the use of delegators, because the services stored in the boundary package can also contain SQL-specific stuff, and all services and entities are in the package they belong to.
The pattern can also be used outside of JEE. Adam Bien has revolutionised this pattern in the JEE architecture and I'm using it in my own projects as well. Here is an example -> http://www.youtube.com/watch?v=JWcoiXNoKxk#t=2380
The methods of your boundary could look like the following:
public interface ProjectService {
public Project createProject(Project project);
public Project getProjectById(String projectId);
public List<Project> getProjectList(ListConfig config); // where ListConfig is a class containing information of how the list should be sorted, optional pagination information, etc, so that the interface must not be changed every time you need a new parameter
public Project updateProject(Project project);
public void deleteProject(String projectId);
public Project addFeature(Project project, Feature feature);
}
How is it possible to keep clean layers with Hibernate/ORM (or other ORMs...)?
What I mean by clean layer separation is for exemple to keep all of the Hibernate stuff in the DAO layer.
For example, when creating a big CSV export stream, we often have to do some Hibernate operations like evict to avoid OutOfMemoryError... Filling the output stream belongs to the view, but evict belongs to the DAO.
What I mean is that we are not supposed to put evict operations in the frontend / service, nor are we supposed to put business logic in the DAO... So what can we do in such situations?
There are many cases where you have to do some stuff like evict, flush, clear, refresh, particularly when you play a bit with transactions, large data or things like that...
So how do you do to keep clear layers separation with an ORM tool like Hibernate?
Edit: something I don't like either, at work, is that we have a custom abstract DAO that permits a service to pass a Hibernate criterion as an argument. This is practical, but in theory a service that calls this DAO shouldn't be aware of a criterion. I mean, we shouldn't have to import Hibernate stuff into the business / view logic in any way.
Is there an answer, simple or otherwise?
If by "clean" you mean that upper layers don't know about implementations of the lower layers, you can usually apply the
Tell, don't ask principle. For your CSV streaming example, it would be something like, say:
// This is a "global" API (meaning it is visible to all layers). This is ok as
// it is a specification and not an implementation.
public interface FooWriter {
void write(Foo foo);
}
// DAO layer
public class FooDaoImpl {
...
public void streamBigQueryTo(FooWriter fooWriter, ...) {
...
for (Foo foo: executeQueryThatReturnsLotsOfFoos(...)) {
fooWriter.write(foo);
evict(foo);
}
}
...
}
// UI layer
public class FooUI {
...
public void dumpCsv(...) {
...
fooBusiness.streamBigQueryTo(new CsvFooWriter(request.getOutputStream()), ...);
...
}
}
// Business layer
public class FooBusinessImpl {
...
public void streamBigQueryTo(FooWriter fooWriter, ...) {
...
if (user.canQueryFoos()) {
beginTransaction();
fooDao.streamBigQueryTo(fooWriter, ...);
auditAccess(...);
endTransaction();
}
...
}
}
In this way you can deal with your specific ORM with freedom. The downside of this "callback" approach: if your layers are on different JVMs then it might not be very workable (in the example you would need to be able to serialize CsvFooWriter).
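For completeness, the CsvFooWriter used in the UI layer above could be a simple implementation of the FooWriter interface, something along these lines (the Foo getters and the CSV format are assumptions, just to show the shape):

import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class CsvFooWriter implements FooWriter {

    private final Writer out;

    public CsvFooWriter(OutputStream outputStream) {
        this.out = new OutputStreamWriter(outputStream, StandardCharsets.UTF_8);
    }

    @Override
    public void write(Foo foo) {
        try {
            // one CSV line per Foo; getId()/getName() are assumed, escaping omitted for brevity
            out.write(foo.getId() + "," + foo.getName() + "\n");
            out.flush(); // stream row by row instead of buffering everything
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}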
About generic DAOs: I have never felt the need; most object access patterns I have found are different enough to make a specific implementation desirable. But certainly doing layer separation and forcing the business layer to create Hibernate criteria are contradictory paths. I would specify a different query method in the DAO layer for each different query, and then I would let the DAO implementation get the results in whatever way it might choose (criteria, query language, raw SQL, ...). So instead of:
public class FooDaoImpl extends AbstractDao<Foo> {
...
public Collection<Foo> getByCriteria(Criteria criteria) {
...
}
}
public class FooBusinessImpl {
...
public void doSomethingWithFoosBetween(Date from, Date to) {
...
Criteria criteria = ...;
// Build your criteria to get only foos between from and to
Collection<Foo> foos = fooDaoImpl.getByCriteria(criteria);
...
}
public void doSomethingWithActiveFoos() {
...
Criteria criteria = ...;
// Build your criteria to filter out passive foos
Collection<Foo> foos = fooDaoImpl.getByCriteria(criteria);
...
}
...
}
I would do:
public class FooDaoImpl {
...
public Collection<Foo> getFoosBetween(Date from ,Date to) {
// build and execute query according to from and to
}
public Collection<Foo> getActiveFoos() {
// build and execute query to get active foos
}
}
public class FooBusinessImpl {
...
public void doSomethingWithFoosBetween(Date from, Date to) {
...
Collection<Foo> foos = fooDaoImpl.getFoosBetween(from, to);
...
}
public void doSomethingWithActiveFoos() {
...
Collection<Foo> foos = fooDaoImpl.getActiveFoos();
...
}
...
}
Though someone could think that I'm pushing some business logic down to the DAO layer, it seems a better approach to me: changing the ORM implementation to an alternative one would be easier this way. Imagine, for example that for performance reasons you need to read Foos using raw JDBC to access some vendor-specific extension: with the generic DAO approach you would need to change both the business and DAO layers. With this approach you would just reimplement the DAO layer.
Well, you can always tell your DAO layer to do what it needs to do when you want to. Having a method like cleanUpDatasourceCache in your DAO layer, or something similar (or even a set of these methods for different objects), is not bad practice to me.
And your service layer is then able to call that method without any assumption on what is done by the DAO under the hood. A specific implementation which uses direct JDBC calls would do nothing in that method.
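As a rough illustration only: the method name cleanUpDatasourceCache comes from the answer above, but the bodies below are assumptions. A Hibernate-backed DAO might flush and clear the session, while a plain-JDBC DAO would leave it as a no-op:

import org.hibernate.Session;
import org.hibernate.SessionFactory;

class HibernateFooDaoImpl {

    private final SessionFactory sessionFactory;

    HibernateFooDaoImpl(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void cleanUpDatasourceCache() {
        Session session = sessionFactory.getCurrentSession();
        session.flush();  // push pending changes to the database
        session.clear();  // detach everything from the first-level cache
    }
}

class JdbcFooDaoImpl {

    // a JDBC-based implementation has no session cache, so there is nothing to do
    public void cleanUpDatasourceCache() {
    }
}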
Usually a DAO layer to wrap the data access logic is necessary. Other times the EntityManager is all you want to use for CRUD operations; in those cases I wouldn't use a DAO, as it would add unnecessary complexity to the code.
How should EntityManager be used in a nicely decoupled service layer and data access layer?
If you don't want to tie your code to Hibernate you can use Hibernate through JPA instead and not bother too much about abstracting everything within your DAOs. You are less likely to switch from JPA to something else than replacing Hibernate.
My 2 cents: I think the layer separation pattern is great as a starting point for most cases, but there is a point where we have to analyze each specific application case by case and design a more flexible solution. What I mean is, ask yourself, for example:
Is your DAO expected to be reused in another context other than exporting CSV data?
Does it make sense to have another implementation of the same DAO interface without Hibernate?
If both answers are no, maybe a little bit of coupling between persistence and data presentation is OK. I like the callback solution proposed above.
IMHO, a strict implementation of a pattern sometimes has a higher cost in readability, maintainability, etc., which are the very issues we were trying to fix by adopting a pattern in the first place.
You can achieve layer separation by implementing the DAO pattern and doing all the Hibernate/JDBC/JPA-related stuff in the DAO itself.
For example, you can specify a generic DAO interface as:
public interface GenericDao <T, PK extends Serializable> {
/** Persist the newInstance object into database */
PK create(T newInstance);
/** Retrieve an object that was previously persisted to the database using
* the indicated id as primary key
*/
T read(PK id);
/** Save changes made to a persistent object. */
void update(T transientObject);
/** Remove an object from persistent storage in the database */
void delete(T persistentObject);
}
and its implementation as:
public class GenericDaoHibernateImpl <T, PK extends Serializable>
implements GenericDao<T, PK>, FinderExecutor {
private Class<T> type;
public GenericDaoHibernateImpl(Class<T> type) {
this.type = type;
}
public PK create(T o) {
return (PK) getSession().save(o);
}
public T read(PK id) {
return (T) getSession().get(type, id);
}
public void update(T o) {
getSession().update(o);
}
public void delete(T o) {
getSession().delete(o);
}
}
So service classes can call any method on any DAO without any assumption about the internal implementation of that method.
Have a look at the GenericDao link.
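To illustrate the point, a hedged usage sketch: BookService and the Book entity are assumed examples, and how GenericDaoHibernateImpl obtains its getSession() is omitted above, so treat the wiring as illustrative only:

// in a service class: only the GenericDao interface is visible
public class BookService {

    private final GenericDao<Book, Long> bookDao;

    public BookService(GenericDao<Book, Long> bookDao) {
        this.bookDao = bookDao;
    }

    public Book loadBook(Long id) {
        // the service neither knows nor cares whether Hibernate or plain JDBC sits behind this call
        return bookDao.read(id);
    }
}

// wiring (e.g. in a Spring configuration class or plain factory code):
// BookService service = new BookService(new GenericDaoHibernateImpl<>(Book.class));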
Hibernate (whether via a Session or a JPA EntityManager) is the DAO. The Repository pattern is, as far as I have seen, the best starting place. There is a great image over at the DDD Sample website which I think speaks volumes about how you keep things separate.
My application layer has interfaces that are explicit business actions or values. The business rules are in the domain model and things like Hibernate live in the infrastructure. Services are defined at the domain layer as interfaces, and implemented in the infrastructure in my case. This means that for a given Foo domain object (an aggregate root in the DDD terminology) I usually get the Foo from a FooService and the FooService talks to a FooRepository which allows one to find a Foo based on some criteria. That criteria is expressed via method parameters (possibly complex object types) which at the implementation side, for example in a HibernateFooRepository, would be translated in to HQL or Hibernate criterion.
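To make that concrete, here is a small illustrative sketch of what such a repository boundary might look like. FooRepository, the finder names, the Foo properties (active, createdAt) and the HQL are all assumptions, not taken from the answer:

import java.time.Instant;
import java.util.List;
import org.hibernate.SessionFactory;

// domain layer: no Hibernate imports here
public interface FooRepository {
    Foo findById(long id);
    List<Foo> findActiveCreatedAfter(Instant createdAfter);
}

// infrastructure layer: the only place that knows about Hibernate
public class HibernateFooRepository implements FooRepository {

    private final SessionFactory sessionFactory;

    public HibernateFooRepository(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public Foo findById(long id) {
        return sessionFactory.getCurrentSession().get(Foo.class, id);
    }

    @Override
    public List<Foo> findActiveCreatedAfter(Instant createdAfter) {
        // the criteria expressed as method parameters are translated into HQL here
        return sessionFactory.getCurrentSession()
                .createQuery("from Foo f where f.active = true and f.createdAt > :after", Foo.class)
                .setParameter("after", createdAfter)
                .list();
    }
}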
If you need batch processing, it should exist at the application level and use domain services to facilitate this. StartBatchTransaction/EndBatchTransaction. Hibernate may listen to start/end events in order to coordinate purging, loading, whatever.
In the specific case of serializing domain entities, though, I see nothing wrong with taking a set of criteria and iterating over them one at a time (from root entities).
I find that often, in the pursuit of separation, we try to make things completely general. They are not one and the same: your application has to do something, and that something can and should be expressed rather explicitly.
If you can substitute an InMemoryFooRepository where a HibernateFooRepository was previously being used, you're on the right path. The natural flow through unit and integration testing your objects encourages this when you adhere or at least try to respect the layering outlined in the image I linked above.
You got some good answers here; I would like to add my thoughts (by the way, this is something to take care of in our own code as well). I would also like to focus on the issue of having Hibernate/JPA annotations on entities that you might need to use outside of your DAL (i.e. in business logic, or even sent to your client side).
A. If you use the GenericDAO pattern for a given entity, you may find your entity being annotated with Hibernate (or maybe JPA) annotations such as @Table, @ManyToOne and so on. This means your client code may contain Hibernate/JPA annotations and you would require an appropriate JAR to get it compiled, or some other support in your client code. This is the case, for example, if you use GWT as your client (which can have support for JPA annotations in order to get entities compiled) and share the entities between the server and the client code, or if you write a Java client that performs a bean lookup using InitialContext against a Java application server (in this case you will need a JAR with the annotated entities on the client side as well).
B. Another approach is to work with Hibernate/JPA-annotated code on the server side and expose web services (say, RESTful web services or SOAP). This way the client works with an "interface" that does not expose knowledge of Hibernate/JPA (for example, a WSDL in the case of SOAP defines the contract between the client of the service and the service itself). By breaking the architecture into a service-oriented one, you get all kinds of benefits such as loose coupling and ease of replacing pieces of code, and you can concentrate all the DAL logic in one service that serves the rest of your services, and later on replace the DAL if needed by another service.
C. You can use an "object to object" mapping framework such as dozer to map objects of classes with Hibernate/JPA annotations to what I call "true" POJOs - i.e - java beans with no annotations whatsoever on them.
D. Finally regarding annotations - why use annotations at all? Hibernate uses hbm xml files an alternative for doing the "ORM magic" - this way your classes can remain without annotations.
E. One last point - I would like to suggest you look at the stuff we did at Ovirt - you can dowload the code by git clone our repo. You will find there under engine/backend/manager/modules/bll - a maven project holding our bll logic, and under engine/backend/manager/moduled/dal - our DAL layer (although currently implemented with Spring-JDBC, and some hibernate experiments, you will get some good ideas on how to use it in your code. I would like to add that if you go for a similar solution, I suggest that you inject the DAOs in your code, and not hold them in a Singletone like we did with getXXXDao methods (this is legacy code we should strive to remove and move to injections).
I would recommend you let the database handle the CSV export rather than building it yourself in Java; it won't be as efficient. An ORM shouldn't really be used for those large-scale batch operations, because an ORM should only be used to manipulate transactional data.
Large scale Java batch operations should really be done by JDBC directly with transactional support turned off.
However, if you do this regularly, I recommend setting up a reporting database which is a delayed replica of the database that is not used by the application and utilizes database specific replication tools that may come with your database.
Your solution architect should be able to work with the other groups to help set this up for you.
If you really have to do it in the application tier, then using raw JDBC calls may be the better option. With raw JDBC you can perform a query to assemble the data that you require on the database side and fetch the data one row at a time then write to your output stream.
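A sketch of that row-by-row approach with plain JDBC. The table and column names, the CSV format and the fetch size are assumptions (fetch-size behaviour also varies per driver):

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CsvExporter {

    private final DataSource dataSource;

    public CsvExporter(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void exportFoos(OutputStream out) throws SQLException, IOException {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, name, created_at FROM foo ORDER BY id");
             BufferedWriter writer = new BufferedWriter(
                     new OutputStreamWriter(out, StandardCharsets.UTF_8))) {

            ps.setFetchSize(1000); // hint the driver to stream instead of loading everything
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // write each row straight to the output stream; no entities are kept in memory
                    writer.write(rs.getLong("id") + "," + rs.getString("name") + ","
                            + rs.getTimestamp("created_at"));
                    writer.newLine();
                }
            }
            writer.flush();
        }
    }
}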
To answer your layers question: I don't like using the word "layers" because it usually implies one thing on top of another; I would rather use the word "components". I have the following component groups.
application
domain - just annotated JPA classes, no persistence logic; usually a plain JAR file, but I recommend just plopping it in as a package in the EJB rather than having to deal with classpath issues
contracts - WSDL and XSD files that define an interface between different components be it web services or just UI.
transaction scripts - Stateless EJBs that would have a transaction and persistence units injected into them and do the manipulation and persistence of the domain objects. These may implement the interfaces generated by the contracts.
UI - a separate WAR project with EJBs injected into them.
database
O/R diagram - this is the contract that is agreed upon by application and data team to ensure THE MINIMUM that the database will provide. It does not have to show everything.
DDLs - this is the database-side implementation of the O/R diagram, which will contain everything; but generally no one else should care, because it is implementation detail.
batch - batch operations such as export or replicate
reporting - provides queries to get business value reports from the system.
legacy
messaging contracts - these are contracts used by messaging systems such as JMS or WS-Notifications or standard web services.
their implementation
transformation scripts - used to transform one contract to another.
It seems to me we need to take another look at the layers.
(I hope someone corrects me if I get this wrong.)
Front End/UI
Business
Service/DAO
So for the case of generating a report, the layers break down like so.
Front End/UI
will have a UI with a button "Get Some Report"
the button will then call the Business layer that knows what the report is about.
The data returned by the report generator is given any final formatting before being returned to the user.
Business
MyReportGenerator.GenerateReportData() or similar will be called
Service/DAO
Inside of the report generator, DAOs will be used. DAOLocator.getDAO(Entity.class) or similar factory-type methods would be used to get the DAOs. The returned DAOs will extend a common DAO interface.
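A very rough sketch of that factory idea (DaoLocator, Dao<T> and the registration mechanism are purely illustrative assumptions):

import java.util.HashMap;
import java.util.Map;

interface Dao<T> {
    T findById(long id);
    // ... other common operations shared by all DAOs
}

final class DaoLocator {

    private static final Map<Class<?>, Dao<?>> REGISTRY = new HashMap<>();

    private DaoLocator() {}

    static <T> void register(Class<T> entityType, Dao<T> dao) {
        REGISTRY.put(entityType, dao);
    }

    @SuppressWarnings("unchecked")
    static <T> Dao<T> getDAO(Class<T> entityType) {
        return (Dao<T>) REGISTRY.get(entityType);
    }
}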
Well, to get a clean separation of concerns, or what you could call clean layer separation, you can add a service layer to your application that lies between your front end and your DAO layer.
You can put your business logic in the service layer, and the database-related things in the DAO layer using Hibernate.
So if you need to change something in your business logic, you can edit your service layer without touching the DAO, and if you want to change the DAO layer, you can do so without changing the actual business logic, i.e. the service layer.
I have two data access objects that are reverse generated and jar'ed up for use by my application. They represent tables that are very similar. One table has a few additional columns than the other. This is out of my control due to business oriented database ownership concerns.
The application currently has two implementations of a repository that operates on these DAOs. The implementations are very similar. One has a few extra operations that correspond to the extra columns on the second DAO. However with only a few exceptions, one implementation is a copy and paste of the other. The implementations are hundreds of lines long.
So I wanted to remove the copy/paste job. Ideally I could just stick an interface in front of the DAOs, and then maybe use an abstract class to hold the shared code (nearly all of it). However, I cannot put an interface in front of the DAOs. Remember they are reverse generated, and without upgrading our ORM software I don't think this is a reasonable choice (Kodo 3.x I believe, changing this is not in scope).
The only thing I can think of that would even work is some nastiness with reflection but that results in something much worse than I have now.
Any clever solutions?
Edit: here is a very watered-down code example
package one.dao
//reverse generated
class UserDao {
getFirstName(..);
setFirstName(..);
getLastName(..);
.... 50 more just like this
}
package two.dao
//reverse generated
class UserDao {
getFirstName(..);
setFirstName(..);
getLastName(..);
.... the same 50 more as above
getSomethingElse(..); //doesn't exist in one.dao.UserDao
setSomethingElse(..); //doesn't exist in one.dao.UserDao
}
class RepositoryOne(one.dao.UserDao userDao) {
//insert code here. perform operations on nearly all methods, lots of code
}
class RepositoryTwo(two.dao.UserDao userDao) {
//insert code here. same as Repository one
//some extra code that isn't above, maybe 10 lines
}
I am assuming you have some control over the duplicated code. If your code generator is producing all of it, you'll need to search for solutions within its API & configuration, I suspect.
When inheritance doesn't work, try composition. Make a third class to hold the shared code (SharedCode). Give each of the two existing classes a private member instance of the SharedCode class, and turn the routines now implemented in SharedCode into pass-through methods that delegate to that member instance.
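A hedged sketch of that composition idea, using the watered-down example above. The names and the String return types of the DAO getters are assumptions; the key point is that each repository keeps its own generated DAO type and delegates the duplicated logic to the shared class, handing it only the values it needs:

// holds the logic that was copy-pasted between the two repositories
class SharedRepositoryCode {

    // example of a shared routine: it works on plain values, not on the generated DAO types
    String buildDisplayName(String firstName, String lastName) {
        return lastName + ", " + firstName;
    }

    // ... the rest of the duplicated logic moves here
}

class RepositoryOne {

    private final one.dao.UserDao userDao;
    private final SharedRepositoryCode shared = new SharedRepositoryCode();

    RepositoryOne(one.dao.UserDao userDao) {
        this.userDao = userDao;
    }

    public String displayName() {
        // pass-through: extract values from the generated DAO, delegate the logic
        return shared.buildDisplayName(userDao.getFirstName(), userDao.getLastName());
    }
}

class RepositoryTwo {

    private final two.dao.UserDao userDao;
    private final SharedRepositoryCode shared = new SharedRepositoryCode();

    RepositoryTwo(two.dao.UserDao userDao) {
        this.userDao = userDao;
    }

    public String displayName() {
        return shared.buildDisplayName(userDao.getFirstName(), userDao.getLastName());
    }

    public String somethingElse() {
        // the handful of extra operations stay here, next to the extra columns
        return userDao.getSomethingElse();
    }
}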