Pattern to implement different storage strategies - java

I am going to implement the Data Mapper pattern to store data in different storages/databases.
What is the best OOP pattern to implement this concept?
For example, I have a User model class:
public class User
{
    private int id;
    private String name;
    private String surname;

    /* getters, setters and model-level business logic */
}
and an appropriate Data Mapper class:
public class UserMapper
{
    public User findById(int id)
    {
        // perform query to MySQL, Redis or another DB
    }

    /* other methods */
}
Is it a good idea to use the Strategy pattern by creating multiple storage strategy classes and then injecting them into the DataMapper class?
public class UserMySQLStorageStrategy extends UserStorageStrategy
{
    public User findById(int id)
    {
        // perform query to MySQL
    }
}

public class UserRedisStorageStrategy extends UserStorageStrategy
{
    public User findById(int id)
    {
        // perform query to Redis
    }
}

public class UserMapper
{
    protected UserStorageStrategy _storageStrategy;

    public UserMapper(UserStorageStrategy storageStrategy)
    {
        this._storageStrategy = storageStrategy;
    }

    public User findById(int id)
    {
        return this._storageStrategy.findById(id);
    }

    /* other methods */
}

Your strategy looks awfully like the mapper class itself. It may make sense to make your mapper and user objects into interfaces instead, and then your specific implementations choose how/where to store them. The strategy approach makes sense if your UserMapper class does many operations that are unrelated to storage and that do not need to change despite the difference in storage; but if all your UserMapper class does is the storage, then an interface and multiple implementations would be simpler.
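A minimal sketch of that interface-based alternative (the implementation class names here are illustrative, not from the question):

public interface UserMapper
{
    User findById(int id);
    /* other methods */
}

public class MySqlUserMapper implements UserMapper
{
    public User findById(int id)
    {
        // perform query to MySQL
        return null; // placeholder
    }
}

public class RedisUserMapper implements UserMapper
{
    public User findById(int id)
    {
        // perform query to Redis
        return null; // placeholder
    }
}

Callers depend only on the UserMapper interface, so choosing MySQL or Redis becomes a construction-time decision instead of an injected strategy.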

You do not need any particular OOP design pattern. What you require is an interface that provides the functionality.
Then your different data storages should implement it. And then you just need a strategy that provides the expected instance for the workflow of your program.

I'd make UserMapper an interface with multiple concrete implementation classes first, and call the interface UserDao.
I'd call the implementation classes User{Mysql|Redis|etc}DAO. If you find any common piece of code between them, it could be extracted into a common abstract base class.
At that point the logic of the UserMapper class could be moved to a UserDaoResolver, which chooses and returns the concrete implementation based on some input; or, if you use a dependency injection framework (like Spring), you can delegate that function to it. A sketch of such a resolver is below.
The current UserMapper caller would use the DAO implementation through its interface and obtain it by one of the methods mentioned above.
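A minimal sketch of such a resolver, without a DI framework (the storage keys and implementation names are illustrative):

import java.util.HashMap;
import java.util.Map;

public class UserDaoResolver
{
    private final Map<String, UserDao> daos = new HashMap<>();

    public UserDaoResolver()
    {
        daos.put("mysql", new UserMysqlDAO());
        daos.put("redis", new UserRedisDAO());
    }

    public UserDao resolve(String storage)
    {
        UserDao dao = daos.get(storage);
        if (dao == null)
        {
            throw new IllegalArgumentException("Unknown storage: " + storage);
        }
        return dao;
    }
}

With Spring, the same effect can be had by injecting all UserDao beans, so the resolver shrinks to a lookup.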

Instance created by new depends on @Service member operation

First, please let me introduce a minimal demo scene to explain the problem.
Let's say I have a strategy pattern interface.
public interface CollectAlgorithm<T> {
    public List<T> collect();
}
And an implementation of this strategy, the ConcreteAlgorithm.
public class ConcreteAlgorithm implements CollectAlgorithm<Integer> {
    @Resource
    QueryService queryService;

    @Override
    public List<Integer> collect() {
        // dummy ...
        return Lists.newArrayList();
    }
}
As you can see, the implementation depends on some query operation provided by a @Service component.
The ConcreteAlgorithm class will be created by new in some places, then the collect method will be called.
I've read some related links like Spring @Autowired on a class new instance, and I know that the above code cannot work, since the instance created by new has a @Resource annotated member.
I'm new to Spring/Java, and I wonder if there are some ways, or a different design, to make a scene like the above work.
I've thought about using a factory method, but it seems that it would involve many unchecked type assignments since I provided a generic interface.
UPDATE
To make it clearer, I'll add some detail about the problem.
I provide an RPC service for some consumers, with an interface like:
public interface TemplateRecommendService {
    List<Long> recommendTemplate(TemplateRecommendDTO recommendDTO);
}

@Service
public class TemplateRecommendServiceImpl implements TemplateRecommendService {
    @Override
    public List<Long> recommendTemplate(TemplateRecommendDTO recommendDTO) {
        TemplateRecommendContext context = TemplateRecommendContextFactory.getContext(recommendDTO.getBizType());
        return context.process(recommendDTO);
    }
}
As you can see, I create a different context based on a field passed by the user, which represents a different recommendation strategy. All the contexts should return List<Long>, but the pipeline inside each context is totally different from the others.
Generally there are three main stages in the context's process pipeline. Each stage's logic might be complicated and varied, so there exists another layer of the strategy pattern.
public abstract class TemplateRecommendContextImpl<CollectOut, PredictOut> implements TemplateRecommendContext {
    private CollectAlgorithm<CollectOut> collectAlgorithm;
    private PredictAlgorithm<CollectOut, PredictOut> predictAlgorithm;
    private PostProcessRule<PredictOut> postProcessRule;

    protected List<CollectOut> collect(TemplateRecommendDTO recommendDTO) {
        return collectAlgorithm.collect(recommendDTO);
    }

    protected List<PredictOut> predict(TemplateRecommendDTO recommendDTO, List<CollectOut> predictIn) {
        return predictAlgorithm.predict(recommendDTO, predictIn);
    }

    protected List<Long> postProcess(TemplateRecommendDTO recommendDTO, List<PredictOut> postProcessIn) {
        return postProcessRule.postProcess(recommendDTO, postProcessIn);
    }

    public /*final*/ List<Long> process(TemplateRecommendDTO recommendDTO) {
        // pipeline:
        // dataCollect -> CollectOut -> predict -> Precision -> postProcess -> Final
        List<CollectOut> collectOuts = collect(recommendDTO);
        List<PredictOut> predictOuts = predict(recommendDTO, collectOuts);
        return postProcess(recommendDTO, predictOuts);
    }
}
As for one specific RecommendContext, its creation looks like below:
public class ConcreteContextImpl extends TemplateRecommendContextImpl<GenericTempDO, Long> {
    // CollectOut = GenericTempDO, PredictOut = Long
    ConcreteContextImpl() {
        super();
        setCollectAlgorithm(new ShopDecorateCrowdCollect());
        setPredictAlgorithm(new ShopDecorateCrowdPredict());
        setPostProcessRule(new ShopDecorateCrowdPostProcess());
    }
}
Instead of using field-oriented autowiring, use constructor-oriented autowiring - that will force the user creating the implementation instance to provide the proper dependency during creation with new:
@Service
public class ConcreteAlgorithm implements CollectAlgorithm<Integer> {
    private QueryService queryService;

    @Autowired // or @Inject; you cannot use @Resource on a constructor
    public ConcreteAlgorithm(QueryService queryService) {
        this.queryService = queryService;
    }

    @Override
    public List<Integer> collect() {
        // dummy ...
        return Lists.newArrayList();
    }
}
There are 4 (+1 Bonus) possible approaches I can think of, depending on your "taste" and on your requirements.
1. Pass the service in the constructor.
When you create instances of your ConcreteAlgorithm class you provide the instance of the QueryService. Your ConcreteAlgorithm may need to extend a base class.
CollectAlgorithm<Integer> myalg = new ConcreteAlgorithm(queryService);
...
This works when the algorithm is a stateful object that needs to be created every time, or, with some variations, when you don't know the algorithm at all because it comes from another library (in which case you might have a factory or, in rare cases which most likely don't fit your scenario, create the object through reflection).
2. Turn your algorithm into a #Component
Annotate your ConcreteAlgorithm with the #Component annotation and then reference it wherever you want. Spring will take care of injecting the service dependency when the bean is created.
@Component
public class ConcreteAlgorithm implements CollectAlgorithm<Integer> {
    @Resource
    QueryService queryService;
    ....
}
This is the standard and usually preferred way in Spring. It works when you know ahead of time what all the possible algorithms are and such algorithms are stateless.
This is the typical scenario. I don't know if it fits your needs but I would expect most people to be looking for this particular option.
Note that in the above scenario the recommendation is to use constructor-based injection. In other words, I would modify your implementation as follows:
@Component
public class ConcreteAlgorithm implements CollectAlgorithm<Integer> {
    final QueryService queryService;

    @Autowired
    public ConcreteAlgorithm(QueryService queryService) {
        this.queryService = queryService;
    }

    @Override
    public List<Integer> collect() {
        // dummy ...
        return Lists.newArrayList();
    }
}
On the most recent versions of Spring you can even omit the @Autowired annotation.
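As a side note, once every algorithm is a Spring bean, Spring can also collect them all for you; a minimal sketch (AlgorithmRegistry is a hypothetical name, and the map keys are the bean names):

import java.util.Map;

@Service
public class AlgorithmRegistry {
    private final Map<String, CollectAlgorithm<Integer>> algorithms;

    // Spring injects every CollectAlgorithm<Integer> bean, keyed by bean name
    public AlgorithmRegistry(Map<String, CollectAlgorithm<Integer>> algorithms) {
        this.algorithms = algorithms;
    }

    public CollectAlgorithm<Integer> byName(String beanName) {
        return algorithms.get(beanName);
    }
}

This pairs naturally with the context factory in the question: selecting a strategy becomes a map lookup instead of a chain of new calls.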
3. Implement and call a setter
Add a setter for the QueryService and call it as needed.
CollectAlgorithm<Integer> myalg = new ConcreteAlgorithm();
myalg.setQueryService(queryService);
...
This works in scenarios like those of (1), but lifts you from the need of passing parameters to the constructor, which "may" help you get rid of reflection in some cases.
I don't endorse this particular solution, however, as it forces callers to know that they have to call the setQueryService method prior to invoking other methods. Quite error-prone.
4. Pass the QueryService directly to your collect method.
Possibly the easiest solution.
public interface CollectAlgorithm<T> {
    public List<T> collect(QueryService queryService);
}

public class ConcreteAlgorithm implements CollectAlgorithm<Integer> {
    @Override
    public List<Integer> collect(QueryService queryService) {
        // dummy ...
        return Lists.newArrayList();
    }
}
This works well if you want your interface to be a functional one, to be used in collections.
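Since the reworked interface has a single abstract method, it is effectively a functional interface; a minimal usage sketch (reusing Guava's Lists as in the snippets above, with a hypothetical myQueryService instance):

CollectAlgorithm<Integer> inline = queryService -> Lists.newArrayList(1, 2, 3);
List<Integer> result = inline.collect(myQueryService);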
Bonus: Spring's SCOPE_PROTOTYPE
Spring doesn't only allow you to instantiate singleton beans but also prototype beans. This effectively means it will act as a factory for you.
I will leave this to an external example, at the following URL:
https://www.boraji.com/spring-prototype-scope-example-using-scope-annotation
This "can" be useful in specific scenarios but I don't feel comfortable recommending it straight away as it's significantly more cumbersome.

Facade or Decorator

Context:
I have a REST service, let's say CustomerService, which for now has one method getCustomer(id, country). Now the requirement is that depending upon the country I have to perform different business logic, like accessing a different database or some custom rules, and then report that I received such a request.
Firstly, to solve different implementations depending upon country, I used the Factory pattern as shown below:
Common interface for all country-based implementations:
public interface CustomerServiceHandler {
    Customer getCustomer(String id, String country);
}
Then the factory:
public class CustomerServiceHandlerFactory {
    public CustomerServiceHandler getHandler(String country) {...}
}
Implementation Detail using Facade
Note this facade is called from the REST class, i.e. CustomerService:
public class CustomerServiceFacade {
    public Customer getCustomer(String id, String country) {
        // use factory to get handler and then handler.getCustomer
        Customer customer = factory.getHandler(country).getCustomer(id, country);
        // report the request
        reportingService.report("fetch-customer", ....);
        return customer;
    }
}
Going by the SRP (Single Responsibility Principle), this facade is not achieving a single objective. It is fetching the customer as well as reporting that such a request was received. So I thought of the decorator pattern as follows.
Implementation using Decorator Pattern:
// this is called from the REST layer
public class ReportingCustomerHandler implements CustomerServiceHandler {
    // this delegate is basically the default implementation and has the factory too
    private CustomerServiceHandler delegate;
    private ReportingService reporting;

    public Customer getCustomer(String id, String country) {
        Customer customer = delegate.getCustomer(id, country);
        reporting.report(....);
        return customer;
    }
}

// this is called from ReportingCustomerHandler
public class DefaultCustomerServiceHandler implements CustomerServiceHandler {
    private CustomerServiceHandlerFactory factory;

    public Customer getCustomer(String id, String country) {
        // get handler from factory; even the default implementation is provided by the factory
        CustomerServiceHandler handler = factory.getHandler(country);
        return handler.getCustomer(id, country);
    }
}
Note: In the second approach I am reusing the interface CustomerServiceHandler (shown in the factory code) for the Reporting and Default implementations also.
So what is the correct way, or what is the alternative to this if something more suitable is present.
Second part of the question
What if I have to maintain two different interfaces, i.e. one CustomerServiceHandler to implement the different countries' implementations and one to serve the REST layer? Then what can the design or alternative be? In this case I think a facade would fit.
So what is the correct way, or what is the alternative to this
You have a solid design here and great use of the factory pattern. What I offer are suggestions on this good work, but I think there are many ways to enhance what you have.
I can see where the CustomerServiceFacade method getCustomer is breaking the SRP. It combines retrieving the Customer with the reporting aspect. I agree that it would be cleaner to move reporting out of that method.
Then your object would look like this:
public class CustomerServiceFacade {
    public Customer getCustomer(String id, String country) {
        return factory.getHandler(country).getCustomer(id, country);
    }
}
So where do we put the reporting?
You could move/manage the reporting through a separate interface. This would allow flexibility in implementing different reporting approaches and make testing easier (ie mock the reporting piece).
public interface ReportService {
    void report(Customer c, String id, String country);
}
How does the REST layer access reporting?
Option 1: REST accesses various Customer functions through multiple objects
The implementation of ReportService can be injected into the REST Controller with the CustomerServiceFacade.
Not sure what framework you are using for REST, but here is what that might look like:
@GET
@Path("/customer/{Id}/{country}")
public Response getCustomer(@PathParam("Id") String id, @PathParam("country") String country) {
    Response r = null;
    // injected implementation of CustomerServiceFacade
    Customer c = customerServiceFacade.getCustomer(id, country);
    if (c != null) {
        // injected implementation of ReportService
        reportService.report(c, id, country);
    }
    else {
        // handle errors ...
    }
    return r;
}
Option 2: REST accesses various Customer functions through one Facade/Service
You could allow your service facade layer to serve the function of providing a simplified interface to a larger set of objects that provide capabilities. This could be done by having multiple customer-servicing methods that enable the REST layer to access various capabilities through one object, while still having each method adhere more closely to the SRP.
Here we inject CustomerServiceFacade into the REST Controller and it calls the two methods 1) to get the customer and 2) to handle reporting. The facade uses the implementation of the ReportService interface from above.
public class CustomerServiceFacade {
    public Customer getCustomer(String id, String country) {
        // call the correct CustomerServiceHandler (based on country)
        return factory.getHandler(country).getCustomer(id, country);
    }

    public void report(Customer c, String id, String country) {
        // call the reporting service
        reportService.report(c, id, country);
    }
}
I think this is a reasonable use of the Facade pattern while still having SRP within the actual methods.
If the reporting implementation differs by country in the same way that the Customer does you could use another factory.
public void report(Customer c, String id, String country) {
    // call the correct reporting service (based on country)
    rptFactory.getInstance(country).report(c, id, country);
}

How do Generics and Fields assist DAO Pattern in Standalone Java applications

// Base DAO
public abstract class BaseDAO<T extends BaseDTO> {
    public void update(T t) throws DBException {
        Field[] fieldsToInsert = t.getClass().getDeclaredFields();
        // code to update database object academic or event
    }

    public Integer create(T t) throws DBException {
        Field[] fieldsToInsert = t.getClass().getDeclaredFields();
        // code to create academic or event in database
    }
}

// Concrete DAOs
public class AcademicDAO extends BaseDAO<AcademicDTO> {
    // provide implementation
}

public class EventDAO extends BaseDAO<EventDTO> {
    // provide implementation
}

// Transfer objects
public class AcademicDTO extends BaseDTO {
    String title;
    String surname;
    // getters and setters
}

public class BaseDTO {
    protected Integer ID;

    public Integer getID() {
        return ID;
    }

    public void setID(Integer ID) {
        this.ID = ID;
    }
}
Hello guys, I have some sample code that follows the above structure to create a small Java application to manage academics and events. It is loosely following this pattern.
1 - You experts are more familiar with this pattern than I am. I would like to understand why generics are used in this case, so that DAOs can extend and implement a generic base class. It would be great if someone could show how generics may be advantageous here, using an example.
2 - I have also witnessed the use of Java Fields. Is there a link between generics and Fields?
I would like to document the DAO pattern in an academic report, but I am finding it difficult to understand how Generics and the reflective Field class play a part here. Do they support flexibility and loose coupling?
The code you've provided is a reusable set of logic to load and persist entities. Many times, in an application of non-trivial size, you'll wind up persisting many different types of objects. In this example, you can define as many objects as necessary, but only define the logic to actually save and load once. By asking the DTO what Field objects it has, the DAO can get at the data to help construct queries for loading and saving.
Generics allow you to use this pattern while maintaining type safety. AcademicDAO can only handle AcademicDTO; you can't use AcademicDAO to store EventDTO. Generics allow the instance of the class to rely on a more specific type when dealing with the Field objects. If you didn't have generics, the BaseDAO would take Object, and you wouldn't be able to access any methods except those that Object provides, because the JVM wouldn't know what class is provided, so it would have to limit its knowledge to that of Object. Using getClass().getDeclaredFields() bypasses that limitation because getClass() returns the actual class of the Object parameter.
Field is just a way to use reflection to access the values of the properties in each DTO. If you had to access the fields directly, with getTitle(), you couldn't reuse a generic base class to do your persistence. What would happen when you needed to access EventDTO? You would have to provide logic for that. Field allows you to skip that logic.
Edit:
To explain what I mean by accessing getID, you could do the following within BaseDAO because T is known to be a BaseDTO with a getID() method defined:
public abstract class BaseDAO<T extends BaseDTO> {
    public boolean update(T t) throws DBException {
        Integer id = t.getID();
        Field[] fields = t.getClass().getDeclaredFields();
        // Assuming you have a db object to execute queries using bind variables:
        boolean success = db.execute("UPDATE table SET ... WHERE id = ?", id.intValue());
        return success;
    }
}
If you had this instead (in a non-generic class):
public boolean update(Object o) throws DBException {
    // This line doesn't work, since Object doesn't have a getID() method.
    Integer id = o.getID();
    Field[] fields = o.getClass().getDeclaredFields();
    boolean success = db.execute("UPDATE table SET ... WHERE id = ?", id.intValue());
    return success;
}
You'd have to look through those Field objects, or ask for the ID field and assume it existed.
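To make the Field-driven part concrete, here is a minimal, hedged sketch of how the declared fields might drive an UPDATE statement inside BaseDAO (assuming every declared field maps to a same-named column, java.util and java.lang.reflect imports, and a hypothetical db helper that binds the listed values in order):

public boolean update(T t) throws DBException, IllegalAccessException {
    StringBuilder setClause = new StringBuilder();
    List<Object> values = new ArrayList<>();
    for (Field field : t.getClass().getDeclaredFields()) {
        field.setAccessible(true); // allow reading private fields
        if (setClause.length() > 0) {
            setClause.append(", ");
        }
        setClause.append(field.getName()).append(" = ?");
        values.add(field.get(t));
    }
    values.add(t.getID()); // id comes from BaseDTO
    return db.execute("UPDATE some_table SET " + setClause + " WHERE id = ?", values);
}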
For question 1. The use of generics allows the same implementations of update and create to be used regardless of the type of the DTO. Consider if you didn't use generics. Then the best you could do for the parameter type of update would be BaseDTO, but then you could call
academicDAO.update( eventDTO )
which doesn't make sense. With the code as you have it, this would be a type error. So the main advantage is: better type checking.
For question 2. The use of Fields allows a single implementation of update and create to work on DTO objects of various concrete types.

Is my DAO strategy ok?

I'm using Hibernate. The question is at the bottom.
The current strategy
It's simple.
First of all, I have a basic Dao<T>.
public class Dao<T> {
    private Class<T> persistentClass;
    private Session session;

    public Dao(Class<T> persistentClass) {
        this.persistentClass = persistentClass;
        this.session = HibernateUtil.getCurrentSession();
    }
It's nice as a base class and it passes the most common methods up to its Session.
public T get(Serializable id) {
    @SuppressWarnings("unchecked")
    T t = (T) this.session.get(this.persistentClass, id);
    return t;
}

protected Criteria getCriteria() {
    return this.session.createCriteria(this.persistentClass);
}
When there's a need to use queries on the model, it goes into a specific DAO for that piece of the model, which inherits from Dao<T>.
public class DaoTask extends Dao<Task> {
    public DaoTask() {
        super(Task.class);
    }

    public List<Task> searchActiveTasks() {
        @SuppressWarnings("unchecked")
        List<Task> list = (List<Task>) this.getCriteria()
                .add(Restrictions.eq("active", true))
                .list();
        return list;
    }
}
This approach has always worked well.
However...
However, today I found that many times an instance needs reattachment to the Session, and a line similar to the following ends up happening:
new Dao<Book>(Book.class).update(book);
... which I find to be bad, because
I don't like specifying the redundant Book.class
If ever a DaoBook arises, this construct will become obsolete.
So I turned Dao<T> into an abstract class, and went on to refactor the old code.
Question
In order to remove the Dao<T> references from the codebase, I thought of two approaches:
Create specific DAOs for every class that ever needs attachment, which would generate many almost empty DaoBooks and the sort.
Create a class that owns a Dao<Object> and exposes only the attachment methods (i.e. save(), update() etc).
I'm tending to go with #2, but I thought this "AttacherDao" pattern might be bad, so I'd like your opinion.
Any cons for #2? Also, do you find anything wrong with "the current strategy"?
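For clarity, a minimal sketch of what I mean by #2 (assuming Dao<T> exposes attachment methods like save() and update(), as in the line above):

public class AttacherDao {
    // Dao<T> is abstract now, so keep an anonymous subclass internally
    private final Dao<Object> dao = new Dao<Object>(Object.class) {};

    public void save(Object entity) {
        dao.save(entity);
    }

    public void update(Object entity) {
        dao.update(entity);
    }
    /* other attachment-only methods */
}

// usage: no redundant Book.class
new AttacherDao().update(book);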
Our approach is to have a DAO object (derived from a CommonDao) for each persistent class. In fact, we define an interface for each DAO class, and each DAO decides which interfaces are opened up.
Using the following code, the user cannot delete the PersistentClass.
interface PersistentClassDao {
    void save(PersistentClass persistentObject);
}

class PersistentClassDaoImpl extends CommonDao implements PersistentClassDao {
    public void save(PersistentClass persistentObject) {
        persist(persistentObject);
    }
}
Even though it has some additional overhead, this approach helps in unit testing the appropriate code before exposing an interface.
We've chosen an approach similar to lud0h's, with the following twist:
abstract class JdbcCrudDao<T extends IModelObject> {
    void create(T dbo) {}
    T findByFoo(String foo) { return null; }
    void update(T dbo) {}
    void delete(T dbo) {}
}

class BarDao extends JdbcCrudDao<Bar> {
}
But, the twist is that we selectively expose methods on the Dao through a facade and forward only those we absolutely must.
class BarController implements IController {
    private static final BarDao dao;
    // ...
    void update(IBar bar) {
        dao.update(bar);
    }
}
The only shortcoming in all this is that it requires some casting about if you wish to hide your database keys behind an interface type (which we do), but it's a pretty minor inconvenience versus the alternative (database code outside of the Daos).
A couple of questions:
Are you frequently creating your DAO to do a single task, or are these long-lived?
What about using a static function? Clearly your Book object can bind to the DAO function without the Book.class reference...
Otherwise, I'm a little worried about keeping the session object around instead of fetching whatever the current session is - isn't it considered "bad" to have long-lived session objects? I'm not a master of DAO, so maybe I'm missing something here.

Interface too general

In the Java code I'm working with, we have an interface to define our Data Access Objects (DAOs). Most of the methods take a parameter of a Data Transfer Object (DTO). The problem occurs when an implementation of the DAO needs to refer to a specific type of DTO. The method then needs to do a (to me) completely unnecessary cast of the DTO to SpecificDTO. Not only that, but the compiler can't enforce any type checking for specific implementations of the DAO, which should only take their specific types of DTOs as parameters.
My question is: how do I fix this in the smallest possible manner?
You could use generics:
DAO<SpecificDTO> dao = new SpecificDAO();
dao.save(new SpecificDTO());
etc.
Your DAO class would look like:
interface DAO<T extends DTO> {
    void save(T t);
}

class SpecificDAO implements DAO<SpecificDTO> {
    public void save(SpecificDTO dto) {
        // implementation.
    }
    // etc.
}
SpecificDTO would extend or implement DTO.
Refactoring to generics is no small amount of pain (even though it's most likely worth it).
This will be especially horrendous if code uses your DTO interface like so:
DTO user = userDAO.getById(45);
((UserDTO) user).setEmail(newEmail);
userDAO.update(user);
I've seen this done (in much more subtle ways).
You could do this:
public class DeprecatedDAO implements DAO
{
    public void save(DTO dto)
    {
        logger.warn("Use type-specific calls from now on", new Exception());
    }
}

public class UserDAO extends DeprecatedDAO
{
    @Deprecated
    public void save(DTO dto)
    {
        super.save(dto);
        save((UserDTO) dto);
    }

    public void save(UserDTO dto)
    {
        // do whatever you do to save the object
    }
}
This is not a great solution, but it might be easier to implement; your legacy code should still work, but it will produce warnings and stack traces to help you hunt the offending calls down, and you have a type-safe implementation as well.
