I've been following DDD principles (as laid out in the Eric Evans book on the topic), but when I recently started re-reading the book I noticed that I appear to have strayed from one of the principles for repositories...
"For each type of object that needs global access, create an object
that can provide the illusion of an in-memory collection..."
I've strayed from this in that I create a repository for every aggregate, and I have found that this has suited me well. Even when an aggregate is itself associated with another aggregate, it's a simple matter of referring to the associated aggregate's repository during creation of the entity (usually inside a factory).
The benefits I've found show up when performing operations such as caching in my repositories. It also really simplifies the divide between object creation/persistence and the domain.
Can somebody give me an example of where this "global access" is not appropriate, to help me understand where I've gone wrong?
I believe you are using the wrong terms... An aggregate can't contain another aggregate. You probably have an association between those aggregates.
In any case, every aggregate should have a repository; otherwise you are doing something wrong. It's also better to use a repository to load an aggregate rather than navigating an association from another aggregate. If you keep the association (probably because your ORM makes it easy), you haven't split the aggregates completely.
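A minimal sketch of what this split looks like, using hypothetical Customer/Order names and an in-memory map in place of real persistence: the Order aggregate holds only the Customer's id, and the associated aggregate is loaded through its own repository when needed.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical names for illustration: Order references Customer only by id,
// so the two aggregates stay fully split. The in-memory repository stands in
// for a real persistence mechanism.
class Customer {
    final String id;
    final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
}

class Order {
    final String id;
    final String customerId; // association by id, not an object reference
    Order(String id, String customerId) { this.id = id; this.customerId = customerId; }
}

class CustomerRepository {
    private final Map<String, Customer> store = new HashMap<>();
    void save(Customer c) { store.put(c.id, c); }
    Customer byId(String id) { return store.get(id); }
}

public class SplitAggregates {
    static String customerNameFor(Order order, CustomerRepository customers) {
        // The associated aggregate is loaded through its own repository,
        // only when it is actually needed.
        return customers.byId(order.customerId).name;
    }

    public static void main(String[] args) {
        CustomerRepository customers = new CustomerRepository();
        customers.save(new Customer("c1", "Alice"));
        Order order = new Order("o1", "c1");
        System.out.println(customerNameFor(order, customers));
    }
}
```

Because Order never holds a Customer object, the ORM has nothing to eagerly (or lazily) traverse between the two aggregates.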
I am starting with DDD and Spring JPA. The concept of separating the persistence and domain layers looks fine and works for me, but I see one problem with it: we lose lazy loading, am I right? Is it possible to map a domain object to an entity without loading all of the data from the database?
For that reason, it seems to me that it might be better to stop using one-to-many relationships in entities and aggregation in domain objects, so I'm asking for advice. Is it a good idea?
My idea is to delegate some methods from domain objects to services by no longer using aggregation in the domain.
So I want to change something like this (I have omitted the elements that are unnecessary for understanding the concept). This version makes it simpler to build business logic, because we have direct access to all objects, but we waste resources because we need to load a lot of data from the database each time we need some object:
to something like this. This version lets us control better which objects we need, but we lose convenience when building business logic, because we need to delegate some methods to services:
What do you think about it? Is it a good idea, or is there a better way to solve the problem? It is a little problematic to load so much data from the database just to restore objects in memory.
It is a little confusing for me, because the standard concepts of object-oriented programming seem problematic to implement in an application backed by an external database.
I don't know why you say that you always have to load the entire database.
You would load just the data of one aggregate, since a rule is that an aggregate references another by its id. So in a one-to-many relationship you have a list of ids.
A repository loads / stores the data of an aggregate.
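A sketch of that idea, with hypothetical Blog/Post names and plain maps standing in for the data store: loading the aggregate brings in only its own data plus the ids of the related aggregates, which are loaded on demand through their own repository.

```java
import java.util.List;
import java.util.Map;

// Hypothetical Blog/Post names for illustration. The Blog aggregate holds
// only the ids of its posts, so loading a Blog never touches the posts.
class Post {
    final String id;
    final String body;
    Post(String id, String body) { this.id = id; this.body = body; }
}

class Blog {
    final String id;
    final List<String> postIds; // one-to-many as a list of ids
    Blog(String id, List<String> postIds) { this.id = id; this.postIds = postIds; }
}

public class IdReferences {
    // Each repository loads/stores only the data of its own aggregate.
    static Blog loadBlog(Map<String, Blog> blogStore, String id) {
        return blogStore.get(id); // no posts are loaded here
    }

    static Post loadPost(Map<String, Post> postStore, String id) {
        return postStore.get(id); // posts are loaded on demand, by id
    }

    public static void main(String[] args) {
        Map<String, Blog> blogs = Map.of("b1", new Blog("b1", List.of("p1", "p2")));
        Map<String, Post> posts = Map.of("p1", new Post("p1", "hello"),
                                         "p2", new Post("p2", "world"));
        Blog blog = loadBlog(blogs, "b1");
        // Only when a post is actually needed do we go back to its repository.
        System.out.println(loadPost(posts, blog.postIds.get(0)).body);
    }
}
```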
I currently have two aggregate roots - Customer and AddressBook. Both have some invariants that need to be protected. Customer has a reference to AddressBook, and I am not sure whether that is the correct way to model my domain, because one cannot live without the other. Since domain objects should be created using factories, I feel I should not allow creation of a Customer without an AddressBook and vice versa, but obviously one needs to be created before the other. I hope that makes sense.
How should I address my problem?
The other question would be: can we create multiple aggregate roots in a single transaction? I've read that it should not be done in the case of updates.
I currently have two aggregate roots - Customer and AddressBook. Both have some invariants that need to be protected. Customer has a reference to AddressBook, and I am not sure whether that is the correct way to model my domain, because one cannot live without the other
If they really don't make sense without the other, you may want to review the design to see if they are really part of the same consistency boundary.
Can we create multiple aggregate roots in a single transaction?
Technically, yes. It may not be a good idea.
When all of the logically distinct aggregates are stored together, then creating them in a single transaction is straightforward.
But that also introduces a constraint: that those aggregates need to be stored "together". If all of your aggregates are in the same relational database, an all or nothing transaction is not going to be a problem. On the other hand, if each aggregate is persisted into a document store, then you need a store that allows you to insert multiple documents in the same write.
And if your aggregates should happen to be stored in different document stores, then coordinating the writes becomes even more difficult.
I would like to create a closely associated AddressBook along with it... Maybe a domain event would be a more suitable option?
Perhaps; using a domain event to signal a handler to invoke another transaction is a common pattern for automating work. See Evolving Business Processes a la Lokad for a good introduction to process managers.
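A bare-bones sketch of the shape of that pattern (all names here are hypothetical, and a synchronous in-process dispatcher stands in for real messaging or a process manager): creating the first aggregate publishes an event, and a handler creates the second aggregate in a follow-up step.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch: creating a Customer publishes a CustomerCreated
// event, and a handler creates the AddressBook in a separate follow-up
// "transaction". A real system would dispatch asynchronously.
class CustomerCreated {
    final String customerId;
    CustomerCreated(String customerId) { this.customerId = customerId; }
}

public class EventDriven {
    static final List<Consumer<CustomerCreated>> handlers = new ArrayList<>();
    // customerId -> addressBookId, standing in for the AddressBook store
    static final Map<String, String> addressBooks = new HashMap<>();

    static void publish(CustomerCreated event) {
        handlers.forEach(h -> h.accept(event));
    }

    public static void main(String[] args) {
        // Handler: second step that creates the AddressBook.
        handlers.add(e -> addressBooks.put(e.customerId, "ab-" + e.customerId));
        // First transaction: create the Customer, then publish the event.
        publish(new CustomerCreated("c1"));
        System.out.println(addressBooks.get("c1"));
    }
}
```

The two writes no longer need to share one transaction or even one data store; the trade-off is that the system is briefly in an intermediate state until the handler runs.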
In our code base we make extensive use of DAOs - in essence, a layer that exposes a low-level read/write API, where each DAO maps to a table in the database.
My question is: should the DAO's update methods take entity ids or entity references as arguments if we have different kinds of updates on an entity?
For example, say we have customers and addresses. We could have
customer.address = newAddress;
customerDao.updateCustomerAddress(customer);
or we could have
customerDao.updateCustomerAddress(customer.getId(), newAddress);
Which approach would you say is better?
The latter is more convenient: if we have the entity, we always have the id, so it will always work. The converse is not true, though; if we only have the id, the first approach would have to be preceded by loading the entity before performing the update.
In DDD we have Aggregates and Repositories. Aggregates ensure that the business invariants hold and Repositories handle the persistence.
I recommend that Aggregates should be pure, with no dependencies on any infrastructure code; that is, Aggregates should not know anything about persistence.
Also, you should use the Ubiquitous language in your domain code. That being said, your code should look like this (in the application layer):
customer = customerRepository.loadById(customerId);
customer.changeAddress(address);
customerRepository.save(customer);
I assume your question is
Which approach of the two is better?
I would prefer the second approach. It states clearly what will be done: the object to update will be freshly loaded, and it is absolutely clear that only the address will be updated. The first approach leaves room for doubt. What happens if customer.name has a new value as well? Will it also be updated?
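A small sketch of what the second approach might look like, with a map standing in for the customer table (all names here are hypothetical): the method signature itself limits what can change.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the id-plus-value style of DAO update: the method
// spells out exactly which column changes, so no other field can be touched.
public class ExplicitUpdateDao {
    private final Map<Long, String> addressColumn = new HashMap<>(); // stands in for the table

    void insert(long customerId, String address) {
        addressColumn.put(customerId, address);
    }

    // Only the address can change here.
    void updateCustomerAddress(long customerId, String newAddress) {
        addressColumn.put(customerId, newAddress);
    }

    String addressOf(long customerId) {
        return addressColumn.get(customerId);
    }

    public static void main(String[] args) {
        ExplicitUpdateDao dao = new ExplicitUpdateDao();
        dao.insert(1L, "Old Street 1");
        dao.updateCustomerAddress(1L, "New Street 2");
        System.out.println(dao.addressOf(1L));
    }
}
```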
I'm trying to put together a project in which I have to persist some entity classes using different Spring Data repositories (GemFire, JPA, MongoDB, etc.). As the data that needs to go into these repositories is more or less the same, I was wondering whether I can use the same entity class for all of them, to save myself from converting one object into another.
I got it working for GemFire and JPA, but the entity class is already starting to look a bit weird:
@Id // spring-data-gemfire
@javax.persistence.Id // jpa
@GeneratedValue
private Long id;
So far I can see following options:
Create separate, interface-based entity (domain) classes - trying to re-use the same class looks like a bit of premature optimization.
Externalize XML-based mapping for JPA - I'm not sure whether the GemFire and MongoDB mappings can be externalized.
Use different concrete entity classes and a copy constructor/converter for the conversion.
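Option 3 might be sketched like this, with hypothetical CustomerRecord/Customer names: the domain class stays free of mapping annotations, and a copy constructor/converter bridges the two representations.

```java
// Hypothetical sketch of option 3: separate concrete classes per concern,
// with a small converter between the persistence model and the domain model.
class CustomerRecord {           // the shape the data store wants
    Long id;
    String name;
    CustomerRecord(Long id, String name) { this.id = id; this.name = name; }
}

class Customer {                 // clean domain object, no mapping annotations
    final Long id;
    final String name;
    Customer(Long id, String name) { this.id = id; this.name = name; }
    // "Copy constructor" from the persistence representation
    Customer(CustomerRecord record) { this(record.id, record.name); }
    CustomerRecord toRecord() { return new CustomerRecord(id, name); }
}

public class Converters {
    public static void main(String[] args) {
        CustomerRecord stored = new CustomerRecord(42L, "Alice");
        Customer domain = new Customer(stored);   // store -> domain
        CustomerRecord back = domain.toRecord();  // domain -> store
        System.out.println(back.id + ":" + back.name);
    }
}
```

The cost is the conversion boilerplate; the gain is that each store-specific class can carry exactly the annotations its store needs.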
Been literally hitting my head against the wall to find the best approach - Any response is much appreciated. Thanks
If by weird you mean that your application domain objects/entity classes are starting to accumulate many different, but separate, mapping annotations for the different data stores in which those entities will be persisted (some of them even semantically the same, e.g. Spring Data Commons' o.s.data.annotation.Id and JPA's @javax.persistence.Id), then I suppose that is understandable.
The annotation pollution only increases as the number of representations of your entities grows. For example, think Jackson annotations for JSON mapping, or JAXB for XML, etc. Pretty soon you have more meta-data than actual data. :-)
However, it is more a matter of preference, convenience, simplicity, really.
Some developers are purists and like to externalize everything. Others like to keep information (meta-data) close to the code using it. Certain patterns have even emerged to address these types of concerns... DTOs, Bounded Contexts (see Fowler's BoundedContext, which has a strong correlation to DDD and Microservices).
Personally, I use the following rules when designing and applying architectural principles/decisions in my code, especially when introducing something new:
Simplicity
Consistency
DRY
Test
Refactor
(along with a few others as well... good OOD, SoC, SOLID, Design Patterns, etc).
In that order, too. If something starts getting too complex, refactor and simplify it. Be consistent in what you do by following/using patterns and conventions; familiarity is one key to consistency. But don't keep repeating yourself either.
At the end of the day, it is really about maintaining the application. Will someone else who picks up where you left off be able to understand the organization and logic quickly, and be able to maintain it... simplicity is king. It does not mean it is so simple it is not viable or valuable. Even complex things can be simple if organized properly. However, breaking things apart and introducing abstractions can have hidden costs (see closing thoughts).
To more concretely answer (a few of) your questions...
I am not certain about MongoDB, but (Spring Data) GemFire does not have an external mapping. Minimally, @Region (on the entity class) and @Id are required, along with @PersistenceConstructor if your entity class has more than one constructor. For example.
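A minimal sketch of such a mapped class (annotation and package names here assume Spring Data GemFire's annotation model and may differ between versions):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.PersistenceConstructor;
import org.springframework.data.gemfire.mapping.annotation.Region;

@Region("Customers")          // maps the class to a GemFire region
public class Customer {

    @Id
    private Long id;
    private String name;

    @PersistenceConstructor   // needed when more than one constructor exists
    public Customer(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Customer(String name) {
        this(null, name);
    }
}
```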
This sounds suspiciously like DTOs. Personally, I think Bounded Contexts are a better, more natural model of the application's data, since the domain model should not be unduly tied to any persistent store or external representation (e.g. JSON, XML, etc.). The application domain model is the one true state of the application, and it should model the concept it represents in a natural way, not superficially to satisfy some representation or persistent store (hence the mapping/conversion).
Anyway, try not to beat yourself up too much. It is all about managing complexity. Try to let yourself just do and use testing and other feedback loops to find an answer that is right for your application. You'll know.
Hope this helps.
I'm a bit puzzled trying to figure out the differences between these three. Presume I have a Customer -> Address relation; the (JPA) detached entity will have this as well (eager loading presumed). Where is the need to have an additional aggregate root? Where is the need to have a DTO? Is it all more or less the same?
One of the reasons might be that the JPA compliant Entity has some info the client is simply not interested in, e.g. #Entity, #Id, #OneToMany.
I can convert it easily to JSON/XML using JAX-RS/-WS and almost every client can deal with it, so where is the need for having it? Is it all almost the same or do I miss something important?
You will create aggregate roots if you follow DDD principles, whether you're using JPA or not. This is one of the very fundamental building blocks in DDD. From Eric Evans' DDD book:
"Aggregates mark off the scope within which invariants have to be
maintained at every stage of the lifecycle. The following patterns,
factories and repositories, operate on aggregates."
DTOs and detached entities are related to JPA (technical constraints). An aggregate root is also an entity. When the aggregate root becomes unmanaged (by the persistence context), it is called a detached entity.
Perhaps your question can be rephrased as: should I return the aggregate root as a detached entity or as a DTO? The answer is subjective and depends on your environment.
The benefit of returning the aggregate root as a detached entity is that you don't need to create a new DTO class. You can also call methods owned by the aggregate root. The disadvantage is that you usually won't populate the complete object graph, for performance reasons, because some aggregate roots can have a very deep hierarchy. This will lead to lazy-loading exceptions if not handled properly.
Returning a DTO instead of the aggregate root is considered a more robust design. You will need to create a new DTO class for every 'use case' of the aggregate root. This may be too cumbersome for a small system, but if you're using DDD, I believe your requirements are complex.
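A sketch of the DTO-per-use-case idea, with hypothetical names: the DTO exposes only what one use case needs, so internals of the aggregate never reach the client.

```java
// Hypothetical sketch: a small DTO tailored to one use case, populated
// from the aggregate root so the client never sees the entity itself.
class Customer {                   // aggregate root (simplified)
    final Long id;
    final String name;
    final String internalNotes;    // not for clients
    Customer(Long id, String name, String internalNotes) {
        this.id = id;
        this.name = name;
        this.internalNotes = internalNotes;
    }
}

class CustomerSummaryDto {         // only what this use case needs
    final Long id;
    final String name;
    CustomerSummaryDto(Customer c) {
        this.id = c.id;
        this.name = c.name;
    }
}

public class DtoMapping {
    public static void main(String[] args) {
        Customer root = new Customer(7L, "Alice", "vip, handle with care");
        CustomerSummaryDto dto = new CustomerSummaryDto(root);
        System.out.println(dto.id + ":" + dto.name); // internalNotes never leaks
    }
}
```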
Hmm, I don't really understand what your real need is. What problem are you trying to solve by using DDD concepts or DTOs?
A DTO is not comparable to an aggregate root (which is an entity as well): an entity has data and behavior, while a DTO is simply data.
So the domain model should be domain-driven ;-), and some building blocks are useful to implement it, e.g. Entity, Aggregate... And when you use an ORM, it can be hard to isolate your domain, so you have to try to keep your domain pure, with as little noise as possible. There are many strategies for achieving this.
You can find more there : http://elegantcode.com/2009/11/13/dtos-ddd-the-anemic-domain-model/