Where is the Balance Between Dependency Injection and Abstraction? - java

Many Architects and Engineers recommend Dependency Injection and other Inversion of Control patterns as a way to improve the testability of your code. There's no denying that Dependency Injection makes code more testable, but isn't it also a competing goal with Abstraction in general?
I feel conflicted! I wrote an example to illustrate this; it's not super-realistic and I wouldn't design it this way, but I needed a quick and simple example of a class structure with multiple dependencies. The first example is without Dependency Injection, and the second uses Injected Dependencies.
Non-DI Example
package com.stackoverflow.di;

public class EmployeeInventoryAnswerer
{
    /* In reality, at least the store name and product name would be
     * passed in, but this example can't be 8 pages long or the point
     * may be lost.
     */
    public void myEntryPoint()
    {
        Store oaklandStore = new Store("Oakland, CA");
        StoreInventoryManager inventoryManager = new StoreInventoryManager(oaklandStore);
        Product fancyNewProduct = new Product("My Awesome Product");

        if (inventoryManager.isProductInStock(fancyNewProduct))
        {
            System.out.println("Product is in stock.");
        }
    }
}

public class StoreInventoryManager
{
    protected Store store;
    protected InventoryCatalog catalog;

    public StoreInventoryManager(Store store)
    {
        this.store = store;
        this.catalog = new InventoryCatalog();
    }

    public void addProduct(Product product, int quantity)
    {
        this.catalog.addProduct(this.store, product, quantity);
    }

    public boolean isProductInStock(Product product)
    {
        return this.catalog.isInStock(this.store, product);
    }
}

public class InventoryCatalog
{
    protected Database db;

    public InventoryCatalog()
    {
        this.db = new Database("productReadWrite");
    }

    public void addProduct(Store store, Product product, int initialQuantity)
    {
        this.db.query(String.format(
            "INSERT INTO store_inventory SET store_id = %d, product_id = %d, quantity = %d",
            store.id, product.id, initialQuantity
        ));
    }

    public boolean isInStock(Store store, Product product)
    {
        QueryResult qr = this.db.query(String.format(
            "SELECT quantity FROM store_inventory WHERE store_id = %d AND product_id = %d",
            store.id, product.id
        ));

        if (qr.quantity.toInt() > 0)
        {
            return true;
        }
        return false;
    }
}
Dependency-Injected Example
package com.stackoverflow.di;

public class EmployeeInventoryAnswerer
{
    public void myEntryPoint()
    {
        Database db = new Database("productReadWrite");
        InventoryCatalog catalog = new InventoryCatalog(db);
        Store oaklandStore = new Store("Oakland, CA");
        StoreInventoryManager inventoryManager = new StoreInventoryManager(oaklandStore, catalog);
        Product fancyNewProduct = new Product("My Awesome Product");

        if (inventoryManager.isProductInStock(fancyNewProduct))
        {
            System.out.println("Product is in stock.");
        }
    }
}

public class StoreInventoryManager
{
    protected Store store;
    protected InventoryCatalog catalog;

    public StoreInventoryManager(Store store, InventoryCatalog catalog)
    {
        this.store = store;
        this.catalog = catalog;
    }

    public void addProduct(Product product, int quantity)
    {
        this.catalog.addProduct(this.store, product, quantity);
    }

    public boolean isProductInStock(Product product)
    {
        return this.catalog.isInStock(this.store, product);
    }
}

public class InventoryCatalog
{
    protected Database db;

    public InventoryCatalog(Database db)
    {
        this.db = db;
    }

    public void addProduct(Store store, Product product, int initialQuantity)
    {
        this.db.query(String.format(
            "INSERT INTO store_inventory SET store_id = %d, product_id = %d, quantity = %d",
            store.id, product.id, initialQuantity
        ));
    }

    public boolean isInStock(Store store, Product product)
    {
        QueryResult qr = this.db.query(String.format(
            "SELECT quantity FROM store_inventory WHERE store_id = %d AND product_id = %d",
            store.id, product.id
        ));

        if (qr.quantity.toInt() > 0)
        {
            return true;
        }
        return false;
    }
}
(Please feel free to make my example better if you have any ideas! It might not be the best example.)
In my example, I feel that Abstraction has been completely violated by EmployeeInventoryAnswerer having knowledge of underlying implementation details of StoreInventoryManager.
Shouldn't EmployeeInventoryAnswerer have the perspective of, "Okay, I'll just grab a StoreInventoryManager, give it the name of the product the customer is looking for and the store I want to check, and it will tell me if the product is in stock."? Shouldn't it know nothing about Databases or InventoryCatalogs, since from its perspective those are implementation details it need not concern itself with?
So, where's the balance between testable code with injected dependencies, and information-hiding as a principle of abstraction? Even if the middle classes are merely passing dependencies through, the constructor signature alone reveals irrelevant details, right?
More realistically, let's say this is a long-running background application processing data from a DBMS; at what "layer" of the call-graph is it appropriate to create and pass around a database connector, while still making your code testable without a running DBMS?
I'm very interested in learning about both OOP theory and practicality here, as well as clarifying what seems to be a paradox between DI and Information Hiding/Abstraction.

The Dependency Inversion Principle and, more specifically, Dependency Injection tackle the problem of how to make application code loosely coupled. This means that in many cases you want to prevent the classes in your application from depending on other concrete types, in case those dependent types contain volatile behavior. A volatile dependency is a dependency that, among other things, communicates with out-of-process resources, is non-deterministic, or needs to be replaceable. Tight coupling to volatile dependencies hinders testability, but also limits the maintainability and flexibility of your application.
But no matter what you do, and no matter how many abstractions you introduce, somewhere in your application you need to take a dependency on a concrete type. So you can’t get rid of this coupling completely—but this shouldn't be a problem: An application that is 100% abstract is also 100% useless.
What this means is that you want to reduce the amount of coupling between classes and modules in your application, and the best way of doing that is to have a single place in the application that depends on all concrete types and instantiates them for you. This is beneficial because:
You will only have one place in the application that knows about the composition of object graphs, instead of having this knowledge scattered throughout the application
You will only have one place to change if you want to change implementations, or intercept/decorate instances to apply cross-cutting concerns.
This place where you wire everything up should be in your entry-point assembly, because that assembly already depends on all other assemblies anyway, making it the most volatile part of your application.
According to the Stable-Dependencies Principle, dependencies should point in the direction of stability, and since the part of the application where you compose your object graphs will be the most volatile part, nothing should depend on it. That's why this place where you compose your object graphs should be in your entry-point assembly.
This entry point in the application where you compose your object graphs is commonly referred to as the Composition Root.
If you feel that EmployeeInventoryAnswerer should not know anything about databases and InventoryCatalogs, it might be the case that EmployeeInventoryAnswerer is mixing infrastructural logic (building up the object graphs) with application logic. In other words, it might be violating the Single Responsibility Principle. In that case your EmployeeInventoryAnswerer should not be the entry point. Instead, you should have a different entry point, and EmployeeInventoryAnswerer should only get a StoreInventoryManager injected. Your new entry point can then build up the object graph starting with EmployeeInventoryAnswerer and call its AnswerInventoryQuestion method (or whatever you decide to call it).
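Something along those lines, for instance (a minimal sketch using the question's classes; the Main class and the answerInventoryQuestion method name are illustrative, not prescriptive):
// Hypothetical entry point acting as the Composition Root.
public class Main
{
    public static void main(String[] args)
    {
        // The only place that knows about concrete types and how they are wired.
        Database db = new Database("productReadWrite");
        InventoryCatalog catalog = new InventoryCatalog(db);
        Store oaklandStore = new Store("Oakland, CA");
        StoreInventoryManager inventoryManager = new StoreInventoryManager(oaklandStore, catalog);

        EmployeeInventoryAnswerer answerer = new EmployeeInventoryAnswerer(inventoryManager);
        answerer.answerInventoryQuestion(new Product("My Awesome Product"));
    }
}

public class EmployeeInventoryAnswerer
{
    private final StoreInventoryManager inventoryManager;

    public EmployeeInventoryAnswerer(StoreInventoryManager inventoryManager)
    {
        this.inventoryManager = inventoryManager;
    }

    public void answerInventoryQuestion(Product product)
    {
        if (inventoryManager.isProductInStock(product))
        {
            System.out.println("Product is in stock.");
        }
    }
}
EmployeeInventoryAnswerer now only sees the StoreInventoryManager it actually uses; the Database and InventoryCatalog are composed behind its back.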
where's the balance between testable code with injected dependencies,
and information-hiding as a principle of abstraction?
The constructor is an implementation detail. Only the Composition Root knows about concrete types and it is, therefore, the only one calling those constructors. When a consuming class depends on abstractions as its incoming/injected dependencies (e.g. by specifying its constructor arguments as abstractions), the consumer knows nothing about the implementation, and that makes it easier to prevent leaking implementation details onto the consumer. If, on the other hand, the abstraction itself leaked implementation details, it would violate the Dependency Inversion Principle. And if the consumer decided to cast the dependency back to the implementation, it would in turn violate the Liskov Substitution Principle. Both violations should be prevented.
But even if you have a consumer that depends on a concrete component, that component can still do information-hiding—it doesn't have to expose its own dependencies (or other values) through public properties. And the fact that this component has a constructor that takes in the component's dependencies does not make it violate information-hiding, because it is impossible to retrieve the component's dependencies through its constructor (you can only insert dependencies through the constructor, not retrieve them). And you can't change the component's dependencies, because that component itself will be injected into the consumer, and you can't call a constructor on an already created instance.
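As a rough sketch of that idea, assuming InventoryCatalog is extracted as an interface (the concrete implementation, say a hypothetical SqlInventoryCatalog, would then be known only to the Composition Root):
public interface InventoryCatalog
{
    void addProduct(Store store, Product product, int initialQuantity);
    boolean isInStock(Store store, Product product);
}

public class StoreInventoryManager
{
    private final Store store;
    private final InventoryCatalog catalog; // an abstraction, not a concrete type

    public StoreInventoryManager(Store store, InventoryCatalog catalog)
    {
        this.store = store;
        this.catalog = catalog;
    }

    public boolean isProductInStock(Product product)
    {
        return catalog.isInStock(store, product);
    }
}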
As I see it, when talking about a "balance," you are providing a false choice. Instead, it's a matter of applying the SOLID principles correctly, because without applying the SOLID principles, you'll be in a bad place (from a maintainability perspective) anyway—and application of the SOLID principles undoubtedly leads to Dependency Injection.
at what "layer" of the call-graph is it appropriate to create and pass around a database connector
At the very least, the entry point knows about the database connection, because it is only the entry point that should read from the configuration file. Reading from the configuration file should be done up front and in one single place. This allows the application to fail fast if it is misconfigured and prevents you from having reads to the config file scattered throughout your application.
But whether the entry point should be responsible for creating the database connection itself depends on a lot of factors. I usually have some sort of ConnectionFactory abstraction for this, but YMMV.
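For illustration, such a ConnectionFactory abstraction might look roughly like this (a sketch; the names and the use of JDBC's DriverManager are assumptions, not a prescription):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public interface ConnectionFactory
{
    Connection openConnection();
}

// Concrete implementation, created in the Composition Root from configuration
// that was read up front.
public class DriverManagerConnectionFactory implements ConnectionFactory
{
    private final String connectionString;

    public DriverManagerConnectionFactory(String connectionString)
    {
        this.connectionString = connectionString;
    }

    @Override
    public Connection openConnection()
    {
        try
        {
            return DriverManager.getConnection(connectionString);
        }
        catch (SQLException ex)
        {
            throw new IllegalStateException("Could not open connection", ex);
        }
    }
}
Consumers depend on ConnectionFactory, so tests can substitute a factory that never touches a real DBMS.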
UPDATE
I don't want to pass around a Context or an AppConfig to everything and end up passing dependencies classes don't need
Passing dependencies a class itself doesn't need is typically not the best solution, and might indicate that you are violating the Dependency Inversion Principle and applying the Control Freak anti-pattern. Here's an example of such a problem:
public class Service extends ServiceAbs
{
    private IOtherService otherService;

    public Service(IDep1 dep1, IDep2 dep2, IDep3 dep3) {
        this.otherService = new OtherService(dep1, dep2, dep3);
    }
}
Here you see a class Service that takes in 3 dependencies, but it doesn't use them at all. It only forwards them to the OtherService's constructor, which it creates itself. When OtherService is not local to Service (i.e. it lives in a different module or layer), it means that Service violates the Dependency Inversion Principle—Service is now tightly coupled to OtherService. Instead, this is how Service should look:
public class Service implements IService
{
    private IOtherService otherService;

    public Service(IOtherService otherService) {
        this.otherService = otherService;
    }
}
Here Service only takes in what it really needs and doesn't depend on any concrete types.
but I also don't want to pass the same 4 things to several different classes
If you have a group of dependencies that are often all injected together into a consumer, chances are that you are violating the Single Responsibility Principle: the consumer might do too much—know too much.
There are several solutions for this, depending on the problem at hand. One thing that comes to mind is refactoring to Facade Services.
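For example (a sketch with made-up names): if a consumer always needs the same few collaborators to complete one conceptual task, they can be hidden behind a single Facade Service that the consumer depends on instead:
// Hypothetical collaborator abstractions (the names are made up for illustration).
interface InventoryReserver { void reserve(String productId, int quantity); }
interface PaymentGateway { void charge(String accountId, long amountCents); }
interface ShippingScheduler { void schedule(String orderId); }

// Consumers that previously needed all three now depend on one Facade Service.
interface OrderFulfillment {
    void fulfill(String orderId, String productId, String accountId, long amountCents);
}

class DefaultOrderFulfillment implements OrderFulfillment {

    private final InventoryReserver inventory;
    private final PaymentGateway payments;
    private final ShippingScheduler shipping;

    DefaultOrderFulfillment(InventoryReserver inventory, PaymentGateway payments, ShippingScheduler shipping) {
        this.inventory = inventory;
        this.payments = payments;
        this.shipping = shipping;
    }

    @Override
    public void fulfill(String orderId, String productId, String accountId, long amountCents) {
        // The three collaborators become an implementation detail of the facade.
        inventory.reserve(productId, 1);
        payments.charge(accountId, amountCents);
        shipping.schedule(orderId);
    }
}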
It might also be the case that those injected dependencies are cross-cutting concerns. It's often much better to apply cross-cutting concerns transparently, instead of injecting them into dozens or hundreds of consumers (which would violate the Open/Closed Principle). You can use the Decorator design pattern, the Chain-of-Responsibility design pattern, or dynamic interception for this.
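A minimal sketch of the decorator approach, using a made-up ProductService abstraction and java.util.logging purely for illustration:
import java.util.logging.Logger;

// The abstraction both the real service and the decorator implement.
interface ProductService {
    void addProduct(Product product, int quantity);
}

// Adds logging around any ProductService without that service knowing about it.
class LoggingProductServiceDecorator implements ProductService {

    private final ProductService decoratee;
    private final Logger logger;

    LoggingProductServiceDecorator(ProductService decoratee, Logger logger) {
        this.decoratee = decoratee;
        this.logger = logger;
    }

    @Override
    public void addProduct(Product product, int quantity) {
        logger.info("Adding product, quantity " + quantity);
        decoratee.addProduct(product, quantity);
    }
}
The Composition Root wraps the real implementation in the decorator; none of the consumers have to change.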

Related

How can I refactor my service to use the single responsibility principle?

I read "Clean Code" book ((c) Robert C. Martin) and try to use SRP(single responsibility principle). And I have some questions about it. I have some service in my application, and I do not know how can I refactor it so it matched the right approach. For example, I have service:
public interface SendRequestToThirdPartySystemService {
void sendRequest();
}
What does it do, judging by the class name? It sends a request to the third-party system. But I have this implementation:
@Slf4j
@Service
public class SendRequestToThirdPartySystemServiceImpl implements SendRequestToThirdPartySystemService {

    @Value("${topic.name}")
    private String topicName;

    private final EventBus eventBus;
    private final ThirdPartyClient thirdPartyClient;
    private final CryptoService cryptoService;
    private final Marshaller marshaller;

    public SendRequestToThirdPartySystemServiceImpl(EventBus eventBus, ThirdPartyClient thirdPartyClient, CryptoService cryptoService, Marshaller marshaller) {
        this.eventBus = eventBus;
        this.thirdPartyClient = thirdPartyClient;
        this.cryptoService = cryptoService;
        this.marshaller = marshaller;
    }

    @Override
    public void sendRequest() {
        try {
            ThirdPartyRequest thirdPartyRequest = createThirdPartyRequest();
            Signature signature = signRequest(thirdPartyRequest);
            thirdPartyRequest.setSignature(signature);
            ThirdPartyResponse response = thirdPartyClient.getResponse(thirdPartyRequest);
            byte[] serialize = SerializationUtils.serialize(response);
            eventBus.sendToQueue(topicName, serialize);
        } catch (Exception e) {
            log.error("Send request failed with exception: {}", e.getMessage());
        }
    }

    private ThirdPartyRequest createThirdPartyRequest() {
        ...
        return thirdPartyRequest;
    }

    private Signature signRequest(ThirdPartyRequest thirdPartyRequest) {
        byte[] elementForSignBytes = marshaller.marshal(thirdPartyRequest);
        Element element = cryptoService.signElement(elementForSignBytes);
        Signature signature = new Signature(element);
        return signature;
    }
}
What does it actually do? Create a request -> sign the request -> send the request -> send the response to a queue.
This service injects 4 other services: eventBus, thirdPartyClient, cryptoService and marshaller, and the sendRequest method calls each of them.
If I want to create a unit test for this service, I need to mock 4 services. I think that's too much.
Can somebody indicate how can this service be changed?
Change the class name and leave as is?
Split into several classes?
Something else?
The SRP is a tricky one.
Let's ask two questions:
What is a responsibility?
What are the different types of responsibilities?
One important thing about responsibilities is that they have a scope, you can define them at different levels of granularity, and they are hierarchical in nature.
Everything in your application can have a responsibility.
Let's start with Modules. Each module has responsibilities and can adhere to the SRP.
Then this Module can be made of Layers. Each Layer has a responsibility and can adhere to the SRP.
Each Layer is made of different Objects, Functions etc. Each Object and/or Function has responsibilities and can adhere to the SRP.
Each Object has Methods. Each Method can adhere to the SRP. Objects can contain other objects and so on.
Each Function or Method in an Object is made of statements and can be broken down into more Functions/Methods. Each statement can have responsibilities too.
Let's give an example. Let's say we have a Billing module. If this module is implemented in a single huge class, does this module adhere to the SRP?
From the point of view of the system, the module does indeed adhere to the SRP. The fact that it's a mess doesn't affect this fact.
From the point of view of the module, the class that represents this module doesn't adhere to the SRP as it will do a lot of other things, like communicate with DB, send Emails, do business logic etc.
Let's take a look at the different types of responsibilities.
When something should be done
How it should be done
Let's take an example.
public class UserService_v1 {
    public void SomeOperation(Guid userID) {
        var user = GetUserByID(userID);
        // do something with the user
    }

    public User GetUserByID(Guid userID) {
        var query = $"SELECT * FROM USERS WHERE ID = {userID}";
        var dbResult = db.ExecuteQuery(query);
        return CreateUserFromDBResult(dbResult);
    }

    public User CreateUserFromDBResult(DbResult result) {
        // parse and return User
    }
}

public class UserService_v2 {
    public void SomeOperation(Guid userID) {
        var user = UserRepository.getByID(userID);
        // do something with the user
    }
}
Let's take a look at these two implementations.
UserService_v1 and UserService_v2 do exactly the same thing, but in different ways. From the point of view of the System, these services adhere to the SRP as they contain operations related to Users.
Now let's take a look at what they actually do to complete their work.
UserService_v1 does these things:
Builds a SQL query string.
Calls the db to execute the query
Takes the specific DbResult and creates a User from it.
Does the operation on the User
UserService_v2 does these things:
Requests the User by ID from the repository
Does the operation on the User
UserService_v1 contains:
How specific query is build
How the specific DbResult is mapped to a User
When this query needs to be called (at the beginning of the operation in this case)
UserService_v2 contains:
When a User should be retrieved from the DB
UserRepository contains:
How specific query is build
How the specific DbResult is mapped to a User
What we do here is to move the responsibility of How from the Service to the Repository. This way each class has one reason to change. If how changes, we change the Repository. If when changes, we change the Service.
This way we create objects that collaborate with each other to do specific work, by dividing responsibilities. The tricky part is: which responsibilities do we divide?
If we have a UserService and OrderService we don't divide when and how here. We divide what so we can have one service per Entity in our system.
It's natural for these services to need other objects to do their work. We can of course add all of the responsibilities of what, when and how to a single object, but that just makes it messy, unreadable and hard to change.
In this regard the SRP helps us achieve cleaner code by having more, smaller parts that collaborate with and use each other.
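To make the division concrete, the repository that UserService_v2 delegates to might look roughly like this (sketched in Java with minimal assumed collaborator types, since the snippets above are pseudocode):
import java.util.UUID;

// Minimal assumed collaborator types, mirroring the pseudocode above.
interface Db { DbResult executeQuery(String query, Object... params); }
interface DbResult { String get(String column); }

class User {
    final String name;
    User(String name) { this.name = name; }
}

// The repository owns the "how": query building and result mapping.
class UserRepository {

    private final Db db;

    UserRepository(Db db) {
        this.db = db;
    }

    User getByID(UUID userID) {
        // How the user is fetched is the repository's concern.
        DbResult result = db.executeQuery("SELECT * FROM USERS WHERE ID = ?", userID);
        return mapToUser(result);
    }

    private User mapToUser(DbResult result) {
        // How a DbResult becomes a User is also the repository's concern.
        return new User(result.get("NAME"));
    }
}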
Let's take a look at your specific case.
If you can move the responsibility of how the request is created and signed into the ThirdPartyClient, your SendRequestToThirdPartySystemService will only decide when this request should be sent. This will remove Marshaller and CryptoService as dependencies from your SendRequestToThirdPartySystemService.
Also, you have SerializationUtils, which you should probably rename to Serializer to better capture the intent; Utils is a name we tend to stick on objects we just don't know how to name, and such classes often contain a lot of logic (and probably multiple responsibilities).
This will reduce the number of dependencies and your tests will have less things to mock.
Here's a version of the sendRequest method with less responsibilities.
@Override
public void sendRequest() {
    try {
        // params are not clear as you don't show them in your code
        ThirdPartyResponse response = thirdPartyClient.sendRequest(param1, param2);
        byte[] serializedMessage = SerializationUtils.serialize(response);
        eventBus.sendToQueue(topicName, serializedMessage);
    } catch (Exception e) {
        log.error("Send request failed with exception: {}", e.getMessage());
    }
}
From your code I'm not sure if you can also move the responsibility of serialization and deserialization to the EventBus, but if you can, it will remove serialization from your service as well. This will make the EventBus responsible for how it serializes and stores the things inside it, making it more cohesive. Other objects that collaborate with it will just tell it to send an object to the queue, not caring how the object gets there.
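To illustrate where the removed responsibilities could end up, a ThirdPartyClient along these lines would own request creation and signing (a sketch; the parameters and the no-arg constructors are assumptions based on the code shown in the question):
public class ThirdPartyClient {

    private final CryptoService cryptoService;
    private final Marshaller marshaller;

    public ThirdPartyClient(CryptoService cryptoService, Marshaller marshaller) {
        this.cryptoService = cryptoService;
        this.marshaller = marshaller;
    }

    // param1/param2 stand in for whatever data the request actually needs.
    public ThirdPartyResponse sendRequest(String param1, String param2) {
        ThirdPartyRequest request = createRequest(param1, param2);
        request.setSignature(sign(request));
        return getResponse(request);
    }

    private ThirdPartyRequest createRequest(String param1, String param2) {
        // build the request from the parameters (assumed no-arg constructor)
        return new ThirdPartyRequest();
    }

    private Signature sign(ThirdPartyRequest request) {
        byte[] bytes = marshaller.marshal(request);
        return new Signature(cryptoService.signElement(bytes));
    }

    private ThirdPartyResponse getResponse(ThirdPartyRequest request) {
        // perform the actual call to the third-party system (assumed no-arg constructor)
        return new ThirdPartyResponse();
    }
}
The calling service then depends only on ThirdPartyClient and EventBus, so its tests mock two collaborators instead of four.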

Abstraction Layer (Java)

I'm currently working on a project that involves creating an abstraction layer. The goal of the project is to support multiple implementations of server software in case I ever need to switch to a different one. The list of features to be abstracted is rather long, so I'm looking for a rather painless way to do it.
Other applications will be able to interact with my project and make calls that will eventually boil down to being passed to the server I'm using.
Herein lies the problem. I haven't much experience in this area and I'm really not sure how to make this not become a sandwich of death. Here's a chain of roughly what it's supposed to look like (and what I'm trying to accomplish).
/*
Software that is dependent on mine
|
Public API layer (called by other software)
|
Abstraction between API and my own internal code (this is the issue)
|
Internal code (this gets replaced per-implementation, as in, each implementation needs its own layer of this, so it's a different package of entirely different classes for each implementation)
|
The software I'm actually using to write this (which is called by the internal code)
*/
The abstraction layer (the one in the very middle, obviously) is what I'm struggling to put together.
Now, I'm only stuck on one silly aspect. How can I possibly make the abstraction layer something that isn't a series of
public void someMethod() {
    if (Implementation.getCurrentImplementation() == Implementation.TYPE1) {
        // whatever we need to do for this specific implementation
    } else {
        throw new NotImplementedException();
    }
}
(forgive the pseudo-code; also, imagine the same situation but for a switch/case since that's probably better than a chain of if's for each method) for each and every method in each and every abstraction-level class.
This seems very elementary but I can't come up with a logical solution to address this. If I haven't explained my point clearly, please explain with what I need to elaborate on. Maybe I'm thinking about this whole thing wrong?
Why not use inversion of control?
You have your set of abstractions, you create several implementations, and then you configure your public api to use one of the implementations.
Your API is protected by the set of interfaces that the implementations implement. You can add new implementations later without modifying the API code, and you can even switch at runtime.
I don't remember anymore whether inversion of control IS dependency injection, or whether DI is a form of IoC, but... the point is that you remove the responsibility of dependency management from your component.
Here, you are going to have
API layer (interface that the client uses)
implementations (infinite)
wrapper (that does the IoC by bringing the impl)
API layer:
// my-api.jar
public interface MyAPI {
    String doSomething();
}

public interface MyAPIFactory {
    MyAPI getImplementationOfMyAPI();
}
implementations:
// red-my-api.jar
public class RedMyAPI implements MyAPI {
    public String doSomething() {
        return "red";
    }
}

// green-my-api.jar
public class GreenMyAPI implements MyAPI {
    public String doSomething() {
        return "green";
    }
}

// black-my-api.jar
public class BlackMyAPI implements MyAPI {
    public String doSomething() {
        return "black";
    }
}
Some wrapper provides a way to configure the right implementation. Here, you can hide your switch/case in the factory, or load the impl from a config.
// wrapper-my-api.jar
public class NotFunnyMyAPIFactory implements MyAPIFactory {

    private Config config;

    public MyAPI getImplementationOfMyAPI() {
        if (config.implType == GREEN) {
            return new GreenMyAPI();
        } else if (config.implType == BLACK) {
            return new BlackMyAPI();
        } else if (config.implType == RED) {
            return new RedMyAPI();
        } else {
            // throw...
        }
    }
}

public class ReflectionMyAPIFactory implements MyAPIFactory {

    private Properties prop;

    public MyAPI getImplementationOfMyAPI() {
        try {
            return (MyAPI) Class.forName(prop.getProperty("myApi.implementation.className"))
                    .getDeclaredConstructor()
                    .newInstance();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}

// other possible strategies
// other possible strategies
The factory allows you to use several strategies to load the class. Depending on the solution, you only have to add a new dependency and change a configuration (and reload the app... or not) to change the implementation.
You might want to test the performances as well.
If you use Spring, you can use only the interface in your code and inject the right implementation from a configuration class (Spring is a DI container). But there is no need to use Spring; you can do the same thing in the Main entry point directly (you inject from as near to your entry point as possible).
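With Spring's Java configuration that could look roughly like this (a sketch; returning GreenMyAPI here is an arbitrary choice):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyAPIConfig {

    @Bean
    public MyAPI myAPI() {
        // Swap the implementation here (or pick one based on a property)
        // without touching any code that consumes MyAPI.
        return new GreenMyAPI();
    }
}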
The my-api.jar does not have dependencies (or maybe some towards the internal layers).
All the implementation jars depend on my-api.jar and on your internal code.
The wrapper jar depends on my-api.jar and on some of the implementation jars.
So the client loads the jar they want, uses the factory they want (or a configuration that injects the impl), and uses your code. It also depends on how you expose your API.

Java OOP in MVC Legacy

My question, overall, is in regards to best practices and efficiency. Today, my teacher and I had a discussion about OOP in MVC Legacy. We were going through a previous project of mine and the question is "What's the point?"
The way my project (and all my projects) are structured, it doesn't make sense to me. Here is an example followed by my code.
Controller - Gets the String values from the form/View and passes them to the Service class. Single Responsibility would state that this is all it is responsible for: not creating an object to pass, but creating a Map<String, Object> would be totally fine from my understanding.
Service Class - Following best practices/single responsibility, these methods are not supposed to do anything other than call the requested method/pass values.
DAO - The DAO is supposed to be responsible for transforming all data/objects into a usable form for the DB accessor and for returning them as such.
But why build an object just to tear it down? Especially when you could just pass a list of values down as a Map<String, Object> so all the values and columns match up?
The following are code snippets to help illustrate my question:
Service Class:
public class ClientService {

    private Client_SQL_DAO_Strategy dao;

    public ClientService(Client_SQL_DAO_Strategy dao) {
        setDaoStrategy(dao);
    }

    public void sendClientToStorage(Client client) throws SQLException, ClassNotFoundException {
        dao.sendClientToDatabase(client);
    }

    public void updateClient(List values) throws SQLException, ClassNotFoundException {
        dao.updateClient(values);
    }
}
DAO
public void saveClient(Client client) throws ClassNotFoundException, SQLException {
    List columns = new ArrayList<>();
    columns.add("Last_Name");
    columns.add("First_Name");
    columns.add("Business_Name");
    columns.add("Phone");

    List<Object> values = new ArrayList<>();
    values.add(client.getClientLastName());
    values.add(client.getClientFirstName());
    values.add(client.getClientBusiness());
    values.add(client.getClientPhone());

    accessor.createRecord(TABLE_NAME, columns, values);
}

public void updateClient(List listOfValues) throws ClassNotFoundException, SQLException {
    List<Object> columns = new ArrayList<>();
    columns.add("Last_Name");
    columns.add("First_Name");
    columns.add("Business_Name");
    columns.add("Phone");

    int primaryKey = Integer.valueOf(listOfValues.get(0).toString());

    accessor.updateRecord(TABLE_NAME, columns, listOfValues, PK_COLUMN, primaryKey);
}
Comparing the two methods provided in the DAO, which approach makes more sense? Creating the Client just to tear it down, or passing the associated values and columns to the accessor? A Map<String, Object> seems ideal for both methods.
And yes, I'm aware of the newer techniques as well, but at the current time in the semester Legacy is the lesson of the week.
Spring MVC implements the Model-View-Controller design pattern.
The responsibility of the Controller is to obtain/create/populate the Model and prepare the environment for the View.
The View is responsible for displaying Model data, in Spring MVC usually through JSP, but you can also specify View classes which do things like render Excel or PDF, for example.
The Model implements the domain logic. Depending on your implementation, this could be a "view model" containing only logic for the front end, or it can contain the real business rules. It should be a real class. Do not EVER use Map<String, Object>. Such usage of maps sacrifices type safety and is not OOP.
The Service class is like a controller class for external service coordination, like persistence, email, payment, etc.
The DAO class is a service provider just for persistence. It translates the object representation to database operations. This layer can be replaced by an ORM. Do not pass around Map<String, Object>!
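Applied to the DAO from the question, that means keeping the typed Client on the method signature and doing the column/value translation inside the DAO (a sketch along the lines of the existing saveClient method; it assumes the accessor's updateRecord accepts these lists, and the primary key could equally come from a getter on Client):
// Sketch: the DAO's public face stays typed; the translation to columns/values
// remains inside the DAO instead of leaking a List or Map to callers.
public void updateClient(Client client, int primaryKey) throws ClassNotFoundException, SQLException {
    List<String> columns = new ArrayList<>();
    columns.add("Last_Name");
    columns.add("First_Name");
    columns.add("Business_Name");
    columns.add("Phone");

    List<Object> values = new ArrayList<>();
    values.add(client.getClientLastName());
    values.add(client.getClientFirstName());
    values.add(client.getClientBusiness());
    values.add(client.getClientPhone());

    accessor.updateRecord(TABLE_NAME, columns, values, PK_COLUMN, primaryKey);
}
The service then calls dao.updateClient(client, id) and stays type-safe end to end.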
If the only external service used by your app is persistence, you can avoid separate Service and DAO classes, and defer separation until you require more services.
For more information about this type of object modeling, check out Domain Driven Design.

How to avoid anemic data model? Can repositories be injected into entities?

I have an immutable User entity:
public class User {
    final LocalDate lastPasswordChangeDate;
    // final id, name, email, etc.
}
I need to add a method that will return whether the user's password must be changed, i.e. it has not been changed for longer than the passwordValidIntervalInDays system setting.
The current approach:
public class UserPasswordService {

    private SettingsRepository settingsRepository;

    @Inject
    public UserPasswordService(SettingsRepository settingsRepository) {
        this.settingsRepository = settingsRepository;
    }

    public boolean passwordMustBeChanged(User user) {
        return user.lastPasswordChangeDate.plusDays(
            settingsRepository.get().passwordValidIntervalInDays
        ).isBefore(LocalDate.now());
    }
}
The question is: how do I make the above code more object-oriented and avoid the anemic domain model anti-pattern? Should the passwordMustBeChanged method be moved to User? If so, how should it access SettingsRepository: should the repository be injected into User's constructor, should a Settings instance be provided to the constructor, or should the passwordMustBeChanged method require a Settings instance to be passed in?
The code of Settings and SettingsRepository is not important, but for completeness, here it is:
public class Settings {

    int passwordValidIntervalInDays;

    public Settings(int passwordValidIntervalInDays) {
        this.passwordValidIntervalInDays = passwordValidIntervalInDays;
    }
}

public class SettingsRepository {
    public Settings get() {
        // load the settings from the persistent storage
        return new Settings(10);
    }
}
For a system-wide password expiration policy your approach is not that bad, as long as your UserPasswordService is a domain service, not an application service. Embedding the password expiration policy within User would be a violation of the SRP IMHO, which is not much better.
You could also consider something like (where the factory was initialized with the correct settings):
PasswordExpirationPolicy policy = passwordExpirationPolicyFactory().createDefault();
boolean mustChangePassword = user.passwordMustBeChanged(policy);

//class User
public boolean passwordMustBeChanged(PasswordExpirationPolicy policy) {
    return policy.hasExpired(currentDate, this.lastPasswordChangeDate);
}
If eventually the policy can be specified for individual users then you can simply store policy objects on User.
You could also make use of the ISP (Interface Segregation Principle) with your current design and implement a PasswordExpirationPolicy interface on your UserPasswordService service. That will give you the flexibility to refactor into real policy objects later on without having to change how the User interacts with the policy.
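A sketch of that ISP-based variant, reusing the names from the snippets above (the exact hasExpired signature is an assumption):
import java.time.LocalDate;
import javax.inject.Inject;

// The narrow abstraction the User depends on.
public interface PasswordExpirationPolicy {
    boolean hasExpired(LocalDate currentDate, LocalDate lastPasswordChangeDate);
}

// The existing service implements the policy interface, so it can be swapped
// for a real policy object later without changing User.
public class UserPasswordService implements PasswordExpirationPolicy {

    private final SettingsRepository settingsRepository;

    @Inject
    public UserPasswordService(SettingsRepository settingsRepository) {
        this.settingsRepository = settingsRepository;
    }

    @Override
    public boolean hasExpired(LocalDate currentDate, LocalDate lastPasswordChangeDate) {
        int validDays = settingsRepository.get().passwordValidIntervalInDays;
        return lastPasswordChangeDate.plusDays(validDays).isBefore(currentDate);
    }
}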
If you had a Password value object you could also make things slightly more cohesive, by having something like this (the password creation date would be embedded in the password VO):
//class User
public boolean passwordMustBeChanged(PasswordExpirationPolicy policy) {
    return this.password.hasExpired(policy);
}
Just to throw out another possible solution: implement a long-running process that does the expiration check and sends a command to a PasswordExpiredHandler that marks the user as having an expired password.
I have stumbled upon a document that provides an answer to my question:
A common problem in applying DDD is when an entity requires access to data in a repository or other gateway in order to carry out a business operation. One solution is to inject repository dependencies directly into the entity, however this is often frowned upon. One reason for this is because it requires the plain-old-(C#, Java, etc…) objects implementing entities to be part of an application dependency graph. Another reason is that it makes reasoning about the behavior of entities more difficult since the Single-Responsibility Principle is violated. A better solution is to have an application service retrieve the information required by an entity, effectively setting up the execution environment, and provide it to the entity.
http://gorodinski.com/blog/2012/04/14/services-in-domain-driven-design-ddd/

Inversion of Control, Dependency Injection and Strategy Pattern with examples in java

I am often confused by these three terms. These three look similar to me. Can someone please explain them to me clearly, with examples.
I have seen similar posts and don't understand completely.
Dependency Injection refers to the pattern of telling a class what its dependencies will be, rather than requiring the class to know where to find all of its dependencies.
So, for example, you go from this:
public class UserFetcher {

    private final DbConnection conn =
        new DbConnection("10.167.1.25", "username", "password");

    public List<User> getUsers() {
        return conn.fetch(...);
    }
}
to something like this:
public class UserFetcher {

    private final DbConnection conn;

    public UserFetcher(DbConnection conn) {
        this.conn = conn;
    }

    public List<User> getUsers() {
        return conn.fetch(...);
    }
}
This reduces coupling in the code, which is especially useful if you want to unit test UserFetcher. Now, instead of UserFetcher always running against a database found at 10.167.1.25, you can pass in a DbConnection to a test database. Or, even more useful in a fast test, you can pass in an implementation or subclass of DbConnection that doesn't even connect to a database, it just discards the requests!
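For example, a test double might look like this (a sketch; it assumes DbConnection's fetch method is overridable and has roughly this signature, otherwise you would extract an interface first):
import java.util.List;

// A test double: a DbConnection that never touches a real database.
public class FakeDbConnection extends DbConnection {

    private final List<User> cannedUsers;

    public FakeDbConnection(List<User> cannedUsers) {
        super("ignored-host", "ignored-user", "ignored-password");
        this.cannedUsers = cannedUsers;
    }

    @Override
    public List<User> fetch(String query) {
        // Ignore the query and return predictable data for the test.
        return cannedUsers;
    }
}

// In a unit test:
// UserFetcher fetcher = new UserFetcher(new FakeDbConnection(Arrays.asList(testUser)));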
However, this sort of primitive dependency injection makes wiring (providing an object with its dependencies) more difficult, because you've replaced accessing the dependency using a global variable (or a locally instantiated object) with passing the dependency around through the whole object graph.
Think of a case where UserFetcher is a dependency of AccountManager, which is a dependency of AdminConsole. Then AdminConsole needs to pass the DbConnection instance to AccountManager, and AccountManager needs to pass it to UserFetcher...even if neither AdminConsole nor AccountManager need to use the DbConnection directly!
An inversion of control container (Spring, Guice, etc) aims to make dependency injection easier by automatically wiring (providing) the dependencies. To do this, you tell your IoC container once how to provide an object (in Spring, this is called a bean) and whenever another object asks for that dependency, it will be provided by the container.
So our last example might look like this with Guice, if we used constructor injection:
public class UserFetcher {

    private final DbConnection conn;

    @Inject // or @Autowired for Spring
    public UserFetcher(DbConnection conn) {
        this.conn = conn;
    }

    public List<User> getUsers() {
        return conn.fetch(...);
    }
}
And we have to configure the IoC container. In Guice this is done via an implementation of Module; in Spring you configure an application context, often through XML.
public class MyGuiceModule extends AbstractModule {

    @Override
    public void configure() {
        bind(DbConnection.class).toInstance(
            new DbConnection("localhost", "username", "password"));
    }
}
Now when UserFetcher is constructed by Guice or Spring, the DbConnection is automatically provided.
Guice has a really good wiki article on the motivation behind dependency injection and on using an IoC container. It's worth reading all the way through.
The strategy pattern is just a special case of dependency injection, where you inject logic instead of an object (even though in Java, the logic will be encapsulated in an object). It's a way of decoupling independent business logic.
For example, you might have code like this:
public Currency computeTotal(List<Product> products) {
    Currency beforeTax = computeBeforeTax(products);
    Currency afterTax = beforeTax.times(1.10);
    return afterTax;
}
But what if you wanted to extend this code to a new jurisdiction, with a different sales tax scheme? You could inject the logic to compute the tax, like this:
public interface TaxScheme {
    public Currency applyTax(Currency beforeTax);
}

public class TenPercentTax implements TaxScheme {
    public Currency applyTax(Currency beforeTax) {
        return beforeTax.times(1.10);
    }
}

public Currency computeTotal(List<Product> products, TaxScheme taxScheme) {
    Currency beforeTax = computeBeforeTax(products);
    Currency afterTax = taxScheme.applyTax(beforeTax);
    return afterTax;
}
Inversion of control means that a runtime framework wires all components together (for example Spring). Dependency injection is a form of IoC (I don't know if another form of IoC exists) (see: http://en.wikipedia.org/wiki/Inversion_of_control).
The strategy pattern is a design pattern (defined by the GoF) where one algorithm can be replaced by another (see: http://en.wikipedia.org/wiki/Strategy_pattern). This is achieved by providing several implementations of the same interface. When using an IoC container like Spring, if you have several implementations of an interface and you can switch from one implementation to another by configuration, you are using the strategy pattern.
I also recommend reading the introduction chapter of Spring's documentation, which focuses on this issue: Introduction to Spring Framework
The first few paragraphs should do.
And this also links to: Inversion of Control Containers and the Dependency Injection pattern
Which also provides a motivational view of these very important core concepts.
