3-tier architecture correctness - Java

I have a project implemented in a (flawed) 3-tier architecture. My job is to make it more generic so that it is easy to add a new database to the project.
Concretely: there is a databaseFacade for an SQL database, and I have to make it more generic so that we can add other databases very easily; in this case, writing to a CSV file.
My idea for the database layer was to define an interface containing all the methods. Each database facade (whichever one you want to use) then implements this interface, so the layer becomes more generic.
Then I have some kind of DBManager class. This DBManager class reads a config file so it knows which database to use. Based on this info it creates an instance of the matching implementation and returns it to the application layer as the interface type.
However, this is where I'm not sure I'm correct. The application layer now has a DBManager class (everything correctly encapsulated, with only one public method that returns the facade) and, behind that, the DBFacade.
Any thoughts on the correctness of this? I'm having doubts.

I've seen a PHP system (Moodle) use almost exactly this pattern, and it works fine. All that happens is that the DB type is specified as a config variable, and the concrete DB access class is instantiated as the global DB manager object, providing the facade methods, e.g. get_records(), which returns a standardised array of row objects. It's arguable whether you would call this a facade or an adapter, but that's hardly a worry.
I'd say go for it with your current plan. You seem to have decoupled the layers properly and understood the purpose of the patterns. Also, the way your low level (DB) and high level (application controller) components both depend on a single DB facade interface in the middle is a good example of dependency inversion, so bonus points for that! :)

This is the correct approach. One minor quibble is that your DBManager actually follows the Factory pattern, and so should be called DatabaseFacadeFactory, assuming that your facade class is called DatabaseFacade.
As you become more comfortable with Java, check out Spring. It provides a lot of tools and techniques that automatically handle situations such as this, and remove the need for much of the boilerplate code. For more information, see dependency-injection.
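To make that concrete, here is a rough sketch of what such a setup could look like. All of the names below (DatabaseFacade, DatabaseFacadeFactory, the db.type config key and the example methods) are made up for illustration, not taken from the question:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.Properties;

    // The common contract every concrete database facade implements.
    interface DatabaseFacade {
        List<String> getAllClients();
        void saveClient(String client);
    }

    // SQL-backed implementation (bodies omitted for brevity).
    class SqlDatabaseFacade implements DatabaseFacade {
        public List<String> getAllClients() { /* run SELECT, map rows */ return List.of(); }
        public void saveClient(String client) { /* run INSERT */ }
    }

    // CSV-backed implementation.
    class CsvDatabaseFacade implements DatabaseFacade {
        public List<String> getAllClients() { /* read and parse the CSV file */ return List.of(); }
        public void saveClient(String client) { /* append a line to the CSV file */ }
    }

    // Reads the config file and hands the application layer the right facade.
    final class DatabaseFacadeFactory {
        public static DatabaseFacade create(String configPath) throws IOException {
            Properties config = new Properties();
            try (InputStream in = Files.newInputStream(Paths.get(configPath))) {
                config.load(in);
            }
            String type = config.getProperty("db.type", "sql");
            return "csv".equalsIgnoreCase(type) ? new CsvDatabaseFacade() : new SqlDatabaseFacade();
        }
    }

The application layer only ever sees DatabaseFacade and the factory's create() method, which matches the encapsulation described in the question.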

To me, it seems legit. I'm not an expert in software architecture yet, but what you describe sounds very similar to how JDBC was designed.

Related

Why don't we override the methods in Spring CRUD Repository

I am looking into Spring Data, and one thing I've noticed is that we're able to perform CRUD operations just by creating an interface which extends the CrudRepository interface; by default, we're given access to generated queries to the database via the method name.
I thought that whenever we implement an interface, we need to provide implementations for its methods. So why don't we override anything when we use an interface which extends the CrudRepository interface?
One of the goals of Spring Data is to make database access easy, without the need to manually write a lot of boilerplate code.
Traditionally, one of the things developers commonly did when working with a database was write DAOs (data access objects) with methods, where each method would run a specific query. Such methods were typically boilerplate code: simple, repetitive code that's a lot of work to write and maintain and that doesn't contain any business logic.
When you use Spring Data, all this code is automatically generated for you. The only thing you have to do is specify in a repository interface what query you want to do, and Spring Data then interprets the meaning of the method name to automatically generate the code that does the query for you.
That saves you a lot of time and helps you a great deal to keep your own code concise; it also helps with the prevention of bugs.
The implementation of a Spring Data repository interface is generated automatically at runtime. This isn't done by generating source code which is then compiled; behind the scenes, Spring Data creates the implementation of the interface directly at runtime.
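For example, a repository might look like this (the entity and field names are made up; depending on your Spring version the JPA annotations live in javax.persistence or jakarta.persistence):

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.springframework.data.repository.CrudRepository;

    // Minimal entity for the example.
    @Entity
    class Client {
        @Id
        Long id;
        String lastName;
    }

    // You only declare the interface; Spring Data supplies the implementation at runtime.
    interface ClientRepository extends CrudRepository<Client, Long> {

        // Derived query: Spring Data parses the method name ("find...ByLastName")
        // and generates something like "select c from Client c where c.lastName = ?1".
        List<Client> findByLastName(String lastName);
    }

There is no class anywhere in your code implementing ClientRepository; you just inject it and call findByLastName().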

DAO interface for multiple databases?

There is a pattern of making a DAO interface before the DAO implementation. I googled the advantages of this pattern, and one striking point was support for multiple databases.
Now, what I understand is that "multiple databases" here means different database engines rather than multiple data sources. Obviously, multiple data sources should not affect how DAO implementations are built from the DAO interface.
My question is: in what situations might we need to support multiple database engines serving the same data? Also, if such a need arises, how would the REST endpoints be managed to support the different databases?
Would they be something like /db1/courses/ and /db2/courses? Do correct me if I have made any wrong assumption or statement in this question.
I just wanted to add my answer to this, about beginning Spring development. This is one of the things that will not make sense at first. You will end up asking yourself:
There will only ever be one database, so why do this? It doesn't make sense.
Why would I define an interface when there will only ever be one implementation?
But really, neither of these is why you do it. It is the convention and pattern people are used to, and you will come to prefer it over time. There are some other reasons too:
Spring Data - this is an alternative to using an entity manager directly, whereby you only define interfaces and Spring actually creates beans which implement your repository functionality for you.
Design - ensuring you define an interface will help keep your repository a repository.
Easier mocking - although arguably you can still do this in Spring without defining an interface, it is still a bit cleaner when you want to replace the implementation with another (see the test sketch below).
But really it is just the Spring way; people will find it easier to understand your code if you do this.
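As a rough illustration of the mocking point, with an interface in place a unit test can swap in a fake implementation without touching a database. The repository, service and test below are hypothetical and assume JUnit 5 plus Mockito on the classpath:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    // Hypothetical repository interface and service, just to show why the interface helps.
    interface CourseRepository {
        List<String> findTitlesByYear(int year);
    }

    class CourseService {
        private final CourseRepository repository;
        CourseService(CourseRepository repository) { this.repository = repository; }
        int countCoursesIn(int year) { return repository.findTitlesByYear(year).size(); }
    }

    class CourseServiceTest {
        @Test
        void countsCoursesReturnedByTheRepository() {
            // Mocking the interface means no database is involved in this test.
            CourseRepository repository = mock(CourseRepository.class);
            when(repository.findTitlesByYear(2024)).thenReturn(List.of("Algebra", "Physics"));

            assertEquals(2, new CourseService(repository).countCoursesIn(2024));
        }
    }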
I came across a situation where I had to check two DBs and get the data; the second DB was a backup one.
So this was the flow:
RestController --> Service --> DBService
                                 --> DB1Repository --> Connect to DB1
                                 --> DB2Repository --> Connect to DB2
We can design it however we want; all that matters in the end is that we follow the SOLID principles.
Basically, the high-level components should not depend on the low-level components; both should depend on abstractions.
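A rough sketch of that flow, with hypothetical names (the Optional-based fallback is just one way to express "try the primary, then the backup"):

    import java.util.List;
    import java.util.Optional;

    // Abstraction both repositories implement.
    interface OrderRepository {
        Optional<List<String>> findOrders(String customerId);
    }

    class Db1Repository implements OrderRepository {
        public Optional<List<String>> findOrders(String customerId) {
            // query the primary database here
            return Optional.empty();
        }
    }

    class Db2Repository implements OrderRepository {
        public Optional<List<String>> findOrders(String customerId) {
            // query the backup database here
            return Optional.of(List.of());
        }
    }

    // The service depends only on the abstraction; which database answers is a detail.
    class DbService {
        private final OrderRepository primary;
        private final OrderRepository backup;

        DbService(OrderRepository primary, OrderRepository backup) {
            this.primary = primary;
            this.backup = backup;
        }

        List<String> findOrders(String customerId) {
            return primary.findOrders(customerId)
                          .orElseGet(() -> backup.findOrders(customerId).orElse(List.of()));
        }
    }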
I'll pop in here to describe a real-world example.
We recently wanted to swap a large production database (Oracle) for a different one (SQL Server).
For different areas of the database, we had different DAO interfaces and implementations, for example CustomerDAO, AccountsDAO, etc.
For each interface (like CustomerDAO) we had an implementation (CustomerDAOImplOracle).
It was relatively straightforward for us to write SQL Server versions of the DAOs (the SQL syntax and JDBC libraries were of course different) and swap them in with minimal changes to our business logic (the services which use the DAOs).
So CustomerDAOImplOracle was reimplemented as CustomerDAOImplSQLServer, and so on.
What we learned:
Interfaces provide good abstraction and allow for multiple implementations
The DAO layer allows us to "switch out" the database (or its client libraries) if necessary
Hiding implementation details of the database from the business logic reduces coupling and complexity

Java Architecture - Self managed classes vs Manager classes

I am using Spring with Hibernate.
The Hibernate model I am using is 'NodeInstanceLog', which is the object retrieved from the database.
My current structure:
At the moment, NodeInstanceLogDAO handles retrieving the data from the database.
The other option would be to change my structure so that NodeInstanceLog is fetchable and manages itself, i.e. it is able to retrieve its own data from the database.
What are the advantages and disadvantages of each?
It's a matter of separation of concerns. A model represents a part of your problem domain, while the DAO is concerned with getting data in and out of a datastore. Those are two completely different problems, requiring dedicated classes.
In general, the more you split up responsibilities, the more modular your code base is, with many advantages:
* our brains tend to be good at focusing on one small thing at a time, so reading (= maintaining) your code will be easier, as it's more structured
* testing is easier when different responsibilities are separated into small classes: a test can manipulate one simple, focused class at a time
* reuse is more likely: if you want to do something else with a model instance that has nothing to do with the DAO, you won't be dragging that DAO code along for nothing
Anyway, there is probably a lot more to say. Try googling "separation of concerns", "loose coupling", and so on. But take it from me: splitting responsibilities is the way to go :)
In plain Java, using DAOs / repositories is usually better, as otherwise your objects will need to contain quite a lot of database logic. Database logic is NOT business logic, and your model should only represent the business model.
Play is a framework that can weave a lot of the persistence logic automagically into your classes (using aspects); this way your model class has methods to query the DB, but it doesn't contain the logic itself.
If you're learning this stuff, I would suggest you implement both and experience the pains each solution creates (e.g. how do you deal with transactions? where do you get a DB connection from?).
I also suggest you read the book Patterns of Enterprise Application Architecture, in particular Active Record (having the logic weaved into your class) and Unit of Work (what Hibernate uses).
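To make the two options concrete, here is a rough sketch using the names from the question (the fields and methods are invented for illustration; the persistence calls themselves are omitted):

    import java.util.List;

    // Option 1: separate DAO, the model stays a plain data holder.
    class NodeInstanceLog {
        Long id;
        String message;
    }

    interface NodeInstanceLogDAO {
        List<NodeInstanceLog> findByNodeId(long nodeId);
        void save(NodeInstanceLog log);
    }

    // Option 2: Active Record style, the model fetches and saves itself.
    class SelfManagedNodeInstanceLog {
        Long id;
        String message;

        static List<SelfManagedNodeInstanceLog> findByNodeId(long nodeId) {
            // persistence code lives inside the model class...
            return List.of();
        }

        void save() {
            // ...and so does the insert/update logic
        }
    }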

DAO and Service layer design

I am developing a web application with Java EE 6. In order to minimize calls to the database, would it be a good idea to have the following classes:
A data access class (DAO) that only has basic CRUD methods: getAllClients, getAllProducts, getAllOrders, delete, update.
A service class which calls the CRUD methods but in addition provides filter methods, e.g. findClientByName, findProductByType, findProductByYear, findOrderFullyPaid/NotPaid, etc., which are built on top of the basic DAO methods.
Thank you
In my experience (albeit limited), DAO classes tend to contain all the database operations the application is allowed to perform. So in your case, the DAO will have methods such as getAllClients() and getClientByName(String name), etc.
Getting all the users in your DAO and iterating over them until you find the one you need results in an unnecessary waste of computation time and memory.
If you want to reduce the number of times your database is hit, you could implement some caching mechanism. An ORM framework such as Hibernate should be able to provide what you need, as shown here.
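As an illustration, one common Hibernate option is the second-level cache, so repeated lookups of the same entities don't hit the database each time. The entity below is made up, and the exact cache provider and property names depend on your Hibernate version:

    import javax.persistence.Cacheable;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    // Mark the entity as eligible for the second-level cache.
    @Entity
    @Cacheable
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    class Client {
        @Id
        Long id;
        String name;
    }

    // Plus, in your Hibernate/JPA configuration (values depend on the cache provider you pick):
    //   hibernate.cache.use_second_level_cache = true
    //   hibernate.cache.region.factory_class   = <your provider's region factory class>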
EDIT:
As per your comment question: no, your service will not be made redundant. What one usually does is use a Service layer to expose the DAO functionality. This basically keeps the DAO from being visible from the front end of your application. It also allows for extra methods, such as, for instance, public String getUserFormatted(String userName). This would make use of the getUserByName function offered by the DAO but provide some extra functionality on top of it.
The Service layer will also make itself useful should there be a change in specification and you now also need a web service to interface with your application. Having a service layer in between will allow the web service to query the DAO through the Service layer.
So basically, the DAO layer will still worry about the database stuff (CRUD Operations) while the service will adapt the data returned by the DAO without exposing the DAO.
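A small sketch of that layering, reusing the getUserByName/getUserFormatted names from above (the User fields and the formatting are invented for the example):

    // The DAO only worries about fetching the data.
    interface UserDAO {
        User getUserByName(String userName);
    }

    class User {
        String name;
        String email;
    }

    // The service exposes what the front end needs without leaking the DAO.
    class UserService {
        private final UserDAO userDAO;

        UserService(UserDAO userDAO) {
            this.userDAO = userDAO;
        }

        public String getUserFormatted(String userName) {
            User user = userDAO.getUserByName(userName);
            return user.name + " <" + user.email + ">";
        }
    }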
It's hard to say without more information, but I think it's probably a good idea to leverage your database for more than just CRUD operations. Databases are good at searching, provided you configure them correctly, so IMHO it's a good idea to let your database handle the searching in your find methods. This means that your find methods would probably go in your DAOs...
It's good to be aware of the implications of DB access on performance, but don't go overboard. Also, your approach implies that since your services are going to do the filtering, you are going to load a large amount of DB data into your application, which is a bad idea. The bottom line is that you should use your RDBMS as it is intended to be used, and worry about performance due to over-access when you can show it's a problem. I doubt you will run into that scenario.
I would say that you're better off having your DAO be more fine-grained than you've specified.
I'd suggest putting findClientByName, findProductByType, findProductByYear and findOrderFullyPaid/NotPaid on your DAO as well, in some form, because your database will most likely be better at filtering and sorting data than your in-memory code.
Imagine you have 10 years of data and you call findProductsByYear on your service class; it then calls getAllProducts and throws away 9 years of data in memory. You're far better off getting your database to return only the year you are interested in.
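For instance, a JPA-based DAO could push the filtering into the query itself rather than loading everything. The entity and field names below are made up for the example:

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    // Minimal entity for the example.
    @Entity
    class Product {
        @Id
        Long id;
        int releaseYear;
        String name;
    }

    class ProductDAO {

        @PersistenceContext
        private EntityManager entityManager;

        // The database does the filtering; only the matching rows come back.
        public List<Product> findProductsByYear(int year) {
            return entityManager
                    .createQuery("select p from Product p where p.releaseYear = :year", Product.class)
                    .setParameter("year", year)
                    .getResultList();
        }
    }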
Yes, this is the right way to do it.
The service will own the transactions. You should write the services as POJOs; that way you can expose them as SOAP or REST web services, EJBs, or anything else that you want later on.

Creating Service layer and DAO layer (interface+implementation) or implementation only

I am confused about how to structure the service layer and DAO layer:
In some examples I see people creating an interface + implementation for both the service and the DAO, and in other examples I see people creating implementations only, especially when the DAOs extend an AbstractDao class that contains generic methods for those DAOs. So I am confused about which approach to go for, why to choose one solution over the other, and what the best (most commonly used) practice is. Please advise.
I suggest creating interfaces for both the service and the DAO. Very often you will want to mock the service in unit tests of code that uses that service.
Also, Spring, for example, forces you to use interfaces when you are using certain Spring proxies, for example for transactions. So you should have an interface for the service.
The DAO is a more internal part, but I always try to use interfaces for DAOs too, to simplify testing.
I prefer interface + implementations for the following reasons:
Interfaces become contracts: they tell you what is available to call, and you never have to worry about the implementation, provided the result is what you expect.
You can create a customized implementation of the interface without breaking other implementations of the same interface (generally useful when writing unit tests). Customizing an implementation-only class can introduce errors that you don't notice easily.
It creates a framework that can be documented.
The implementing classes contain the business/application logic and conform to the interface contract.
I have only written implementations for the service layer and didn't bother with interfaces (except where I had to). I probably should get around to writing the interfaces, but I've had no problems so far. I am doing unit testing just fine without mocking the service layer.
Also, I don't have a DAO layer, as I am using Hibernate and a separate DAO layer seemed like overkill. A lot of my reasoning is based on this blog post, eloquently written by Bozho.
I think it is quite debatable (whether to have a DAO layer on top of Hibernate); however, I am quite happy with my decision. I pass in thick domain objects and then just make a call to the Hibernate session. Each method in the DAO layer would literally be only one line (session.persist(mObject), or similar).
One argument I heard against this is that a DAO layer would make it easier to change or remove the ORM at a later date. I am not sure that the time spent coding the DAO layer in the first place, plus the time spent coding the change, would be less than coding the change without a DAO layer at all. I have never had to change ORM technology anywhere I have worked, so it's a small risk.
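For context, the kind of DAO being called overkill here would look roughly like this, with every method a one-line delegation to the Hibernate session (class and method names are illustrative):

    import org.hibernate.SessionFactory;

    class ModelObjectDao {

        private final SessionFactory sessionFactory;

        ModelObjectDao(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        // Each method just forwards to the current Hibernate session.
        void save(Object entity) {
            sessionFactory.getCurrentSession().persist(entity);
        }

        void delete(Object entity) {
            sessionFactory.getCurrentSession().delete(entity);
        }
    }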
From my point of view, when you say "service" you should have interfaces. If you can't or won't provide that, then you don't have a contract between the service and the consumer, and it's not a service anymore; you can call it anything else.
The interface + implementation split is used in order to have loose coupling. You have the flexibility to change or switch implementations easily, without major changes to the code.
Think of a scenario where you are using Hibernate for persistence (the DAO layer) and you need to switch to JPA, iBatis, or any other ORM for that matter.
If you are using interfaces, you can simply write an implementation specific to JPA and "plug" it in in place of Hibernate. The service code remains the same.
Another argument for the interface + implementation model is that proxies for interfaces are supported by Java itself, while creating proxies for concrete classes requires a library such as CGLIB. These proxies are what make features such as transaction support possible.
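A minimal example of what "supported by Java itself" means, using java.lang.reflect.Proxy (the interface, implementation and "transaction" printouts are invented for the demo):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    interface OrderService {
        void placeOrder(String item);
    }

    class OrderServiceImpl implements OrderService {
        public void placeOrder(String item) {
            System.out.println("placing order for " + item);
        }
    }

    public class ProxyDemo {
        public static void main(String[] args) {
            OrderService target = new OrderServiceImpl();

            // The JDK can only proxy interfaces; this is the sort of wrapping
            // a framework does around your service calls (transactions, logging, ...).
            InvocationHandler handler = (proxy, method, methodArgs) -> {
                System.out.println("begin transaction");
                try {
                    return method.invoke(target, methodArgs);
                } finally {
                    System.out.println("commit transaction");
                }
            };

            OrderService proxied = (OrderService) Proxy.newProxyInstance(
                    OrderService.class.getClassLoader(),
                    new Class<?>[] { OrderService.class },
                    handler);

            proxied.placeOrder("book");
        }
    }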
Check out my post about "fastcode", an Eclipse-Spring plugin that generates the service layer off your DAOs. Works like a charm.
generate service/DAO layer for GWT/Spring/Hibernate/PostgreSQL
