How to manage many-to-many relations in a microservices architecture? - java

If I have the below entities in two separate microservices:
class Employee {
    @Id
    private Long employeeId;
    private String name;
    ...
}

class Department {
    @Id
    private Long deptId;
    private String name;
    ...
}
How can I add a many-to-many relation between the entities?
I thought of combining the two entities into one entity on the gateway:
class Empl_Dept {
    List<Long> employeeIds;
    List<Long> departmentIds;
}
so the junction table would be on the gateway side.
Is there a better solution?

Assuming you have your domains modeled properly, this seems like an easy fix with Integration Events.
Add an EmployeeIds table to your Departments service and a DepartmentIds table to your Employees service. When you make, break, or change an assignment between an Employee and a Department, publish an EmployeeDepartmentUpdated event that both services subscribe to. Then each service can process the event and update its own data to stay in sync.
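A minimal sketch of what the event and a subscriber might look like (all names are illustrative, the broker/subscription wiring is omitted, and EmployeeIdStore stands in for whatever local persistence the Departments service uses):

public class EmployeeDepartmentUpdated {
    public enum ChangeType { ASSIGNED, UNASSIGNED }

    private final Long employeeId;
    private final Long departmentId;
    private final ChangeType changeType;

    public EmployeeDepartmentUpdated(Long employeeId, Long departmentId, ChangeType changeType) {
        this.employeeId = employeeId;
        this.departmentId = departmentId;
        this.changeType = changeType;
    }

    public Long getEmployeeId() { return employeeId; }
    public Long getDepartmentId() { return departmentId; }
    public ChangeType getChangeType() { return changeType; }
}

// Inside the Departments service: keeps the local EmployeeIds table in sync.
public class EmployeeDepartmentUpdatedHandler {

    private final EmployeeIdStore employeeIds; // assumed local persistence interface

    public EmployeeDepartmentUpdatedHandler(EmployeeIdStore employeeIds) {
        this.employeeIds = employeeIds;
    }

    public void handle(EmployeeDepartmentUpdated event) {
        if (event.getChangeType() == EmployeeDepartmentUpdated.ChangeType.ASSIGNED) {
            employeeIds.add(event.getDepartmentId(), event.getEmployeeId());
        } else {
            employeeIds.remove(event.getDepartmentId(), event.getEmployeeId());
        }
    }
}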
You do NOT want to start putting data into your gateway API; that's not what it's for (and it means that if you have multiple gateways to the same back-end services, only one will know that information).
Embrace Eventual Consistency and your microservices journey will be the better for it!
EDIT:
To your question about the impact of Events on performance and complexity, the answers are "no" and "yes."
First, no, I would not expect event-based integration to have a negative impact on system performance. In fact, its asynchronous nature makes event processing a separate concern from API responsiveness.
I'm sure there are ways to build a Service Oriented Architecture (SOA, of which microservices is essentially a subset) without a messaging plane, but in my experience having one is a fantastic way to let loosely-coupled communication happen.
Any direct call between services, regardless of protocol (HTTP, gRPC, etc.), means tight coupling between those services. Endpoint names, arguments, etc. are all opportunities for breaking changes. When you use messaging, each service is responsible for emitting backward-compatible events, and every other service can choose which events it cares about, subscribe to them, and never have any knowledge of whether the emitting service is running, dead, or changed.
To your second question, the answer is absolutely "yes" - event processing is additional complexity. However, it's part of the complexity you sign up for (and far from the worst of it) when you choose a microservices architecture style. Distributed authorization, keeping UI performant with multiple back-end calls organized between multiple services, fault tolerance and health/performance monitoring are all (at least in my experience) bigger challenges.
For the record, we use a hosted instance of RabbitMQ from CloudAMQP.com and it works great. Performance is good, they have lots of scalable packages to choose from, and we've had zero issues with performance or downtime. The latest RabbitMQ 3.8 release now includes OAuth as well so we are currently working to integrate our Authz flows with our message broker and will have a nice end-to-end security solution.

Each microservice should have its own database, so a junction table makes no sense here because it's a relational solution.
You may consider one of these two approaches: if Empl_Dept is a domain object, put it in a third microservice; if it isn't, put the employees relation into Department and vice versa.
I hope it helps.

I am assuming this many-to-many map is required in a third microservice. Then it's straightforward: you create a map table between employee and department.
If you have only those two microservices, then you will have to add an employee_department_map in one service and a department_employee_map in the second service.
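For instance, a minimal sketch of such a map table on the employee side, assuming JPA (entity and column names are illustrative):

@Entity
@Table(name = "employee_department_map")
public class EmployeeDepartmentMap {

    @Id
    @GeneratedValue
    private Long id;

    private Long employeeId;   // id of the local Employee entity
    private Long departmentId; // id owned by the other service - stored as a plain value, no foreign key
}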
While designing a microservice, always assume that you can't change anything in the other service (treat it as if it were a third-party API).

Related

How do we choose the business logic when building a Jhipster application?

When creating an entity with the JHipster helper, it asks:
? Do you want to use separate service class for your business logic? (Use arrow keys)
> No, the REST controller should use the repository directly
  Yes, generate a separate service class
  Yes, generate a separate service interface and implementation
In which case should I use which option?
What are the benefits and flaws of each solution?
Is it possible to change easily the architecture once everything is set?
IMHO it depends on how complex your application is going to be and how long you plan on having to maintain it.
If your domain model is quite simple and your REST controllers are straightforward CRUD operations without complex mapping, you can get away without using a separate service layer.
If your domain model or interactions get more complex, you might need a 'Separation of Concerns': your controller classes should just map REST calls from/to the correct DTOs for the REST API, while business logic and coordination between different entities go in a service class that has nothing to do with the REST API. In the long term, that makes it easier to change the REST API separately from the business logic.
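As a rough sketch of that separation, assuming Spring MVC (ArticleService, ArticleDTO, and the publish method are illustrative):

@RestController
@RequestMapping("/api/articles")
public class ArticleResource {

    private final ArticleService articleService;

    public ArticleResource(ArticleService articleService) {
        this.articleService = articleService;
    }

    @PostMapping
    public ArticleDTO create(@RequestBody ArticleDTO dto) {
        // the controller only maps HTTP requests to/from DTOs;
        // business rules live in the service layer
        return articleService.publish(dto);
    }
}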
Some blog posts to read:
https://www.petrikainulainen.net/software-development/design/understanding-spring-web-application-architecture-the-classic-way/
https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-architecture.html
Then there's the decision whether to use interfaces or not. The main advantages of using interfaces used to be that they allowed better testing and avoided coupling modules too closely. But since about 2010 there has been a lot of discussion about whether they are worth the overhead. Maybe start with the discussion underneath Adam Bien's original post:
https://www.adam-bien.com/roller/abien/entry/service_s_new_serviceimpl_why

Should I use microservices or mapped entities to share data between multiple applications?

I have a scenario with multiple applications, each of them taking care of its own data. But as the business is coupled, each application needs access to the others' data.
We basically have a database dependency strategy, sharing entities with some applications or mapping views in others. It's created a dependency hell!
How can microservices help us in such a scenario?
How is accessing multiple URLs a better strategy than using database JOINs?
(e.g. EntityA has a mapping dependency on EntityB. If I use a microservice strategy, I would have to call /apirest/resourceA and /apirest/resourceA/resourceB, right? How would this be better/faster than a select * from entityA inner join entityB?)
How can I decouple the data between all the applications (it's like, 10 applications that, at some point, access the same data)?
Any material/articles/technologies indication?
Microservices define a clear boundary. They will not miraculously make everything faster.
Database joins are viable solutions too. If you have very strong coupling between your data, microservices might not be the right option. Microservices allow your different services to use technology independent of the rest. Say you have services A and B. You could use a relational database in service A for data that is strongly coupled and needs transactional guarantees, and a graph database in service B to manage its relationship-heavy data. Services A and B do not care what the other side is using, since it is hidden behind the service boundary.
If you can divide your data into domains with minimal interaction, these domain boundaries are a good starting point for your services. Services will eventually have to reference entities of another service. Two possible options are either always calling the other service or keeping a minimal local copy of the remote data (only the primary key of the remote entity, for example). The local copy needs to be kept up to date, of course (with events?). At the moment we are struggling with exactly this point...
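For instance, the local copy can be as small as a reference entity that an event handler keeps current (a sketch, assuming JPA; the names follow the EntityA/EntityB example from the question):

@Entity
public class EntityBRef {

    @Id
    private Long entityBId;  // primary key owned by the remote service
    private String name;     // optional denormalized field, refreshed via events
}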
If you google microservices you will drown in information. I like https://martinfowler.com/articles/microservices.html as a starting point.
I am sure I did not touch all aspects of microservices here, this is just meant to be a short hint in a direction to start your journey...

Microservices Restful API - DTOs or not?

REST API - DTOs or not?
I would like to re-ask this question in a microservices context. Here is the quote from the original question:
I am currently creating a REST-API for a project and have been reading
article upon article about best practices. Many seem to be against
DTOs and simply just expose the domain model, while others seem to
think DTOs (or User Models or whatever you want to call it) are bad
practice. Personally, I thought that this article made a lot of sense.
However, I also understand the drawbacks of DTOs with all the extra
mapping code, domain models that might be 100% identical to their
DTO-counterpart and so on.
Now, my question:
I am more aligned towards using one object through all the layers of my application (in other words, just exposing the domain object rather than creating a DTO and manually copying over each field). The differences between my REST contract and my domain object can be addressed using Jackson annotations like @JsonIgnore, @JsonProperty(access = Access.WRITE_ONLY), or @JsonView. And if there are one or two fields that need a transformation which cannot be done with a Jackson annotation, I will write custom logic to handle just that (trust me, I haven't come across this scenario even once in my 5+ years working on REST services).
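A sketch of that annotation-driven approach (field names are made up for illustration):

public class Employee {

    private Long id;
    private String name;

    @JsonIgnore                // never exposed over the API
    private String internalNotes;

    @JsonProperty(access = JsonProperty.Access.WRITE_ONLY)
    private String password;   // accepted on input, never serialized back
}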
I would like to know if I am missing any really bad effects of not copying the domain object to a DTO.
I would vote for using DTOs and here is why:
Different requests (events) vs. your DB entities. It often happens that your requests/responses differ from what you have in the domain model. This especially makes sense in a microservice architecture, where you have a lot of events coming from other microservices. For instance, you have an Order entity, but the event you get from another microservice is OrderItemAdded. Even if half of the events (or requests) are the same as the entities, it still makes sense to have DTOs for all of them in order to avoid a mess.
Coupling between the DB schema and the API you expose. When using entities, you basically expose how you model your DB in a particular microservice. In MySQL you would probably want your entities to have relations, so they will be pretty massive in terms of composition. In other types of DBs you would have flat entities without lots of inner objects. This means that if you use entities to expose your API and want to change your DB from, say, MySQL to Cassandra, you'll need to change your API as well, which is obviously a bad thing.
Consumer-Driven Contracts. This is probably related to the previous bullet, but DTOs make it easier to ensure that communication between microservices does not break as they evolve. Because contracts and the DB are not coupled, this is simply easier to test.
Aggregation. Sometimes you need to return more than you have in one single DB entity. In this case, your DTO will be just an aggregator.
Performance. Microservices imply a lot of data transfer over the network, which may cause performance issues. If the clients of your microservice need less data than you store in the DB, you should send them less data. Again: just make a DTO and your network load will decrease.
Forget about LazyInitializationException. DTOs don't involve lazy loading or proxying, as opposed to domain entities managed by your ORM.
The DTO layer is not that hard to maintain with the right tools. Usually the problem when mapping entities to DTOs and back is that you need to set the right fields manually for each conversion. It's easy to forget to update the mapping when adding new fields to the entity and the DTO, but fortunately there are plenty of tools that can do this task for you. For instance, we used MapStruct on our project: it generates the conversions for you automatically, at compile time.
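For illustration, a minimal MapStruct mapper (Order and OrderDto are assumed to exist):

import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface OrderMapper {

    OrderMapper INSTANCE = Mappers.getMapper(OrderMapper.class);

    // implementations are generated at compile time; matching fields are copied,
    // and the build can be configured to fail on unmapped properties
    OrderDto toDto(Order order);
    Order toEntity(OrderDto dto);
}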
The Pros of Just exposing Domain Objects
The less code you write, the less bugs you produce.
Despite having extensive (arguably) test cases in our code base, I have come across bugs due to missed or wrong copying of fields from domain to DTO or vice versa.
Maintainability - less boilerplate code.
If I have to add a new attribute, I don't have to add it to the domain object, the DTO, the mapper, and the test cases. And don't tell me this can be achieved with a reflection-based bean-copy util; that defeats the whole purpose.
Lombok, Groovy, Kotlin, I know, but they only save me the getter/setter headache.
DRY
Performance
I know this falls under the category of "premature performance optimization is the root of all evil". But still, this will save some CPU cycles by not having to create (and later garbage-collect) at least one more object per request.
Cons
DTOs will give you more flexibility in the long run
If only I ever needed that flexibility. At least, whatever I have come across so far are CRUD operations over HTTP that I can manage with a couple of @JsonIgnores. And if there are one or two fields that need a transformation which cannot be done with a Jackson annotation, as I said earlier, I can write custom logic to handle just that.
Domain Objects getting bloated with Annotations.
This is a valid concern. If I use JPA or MyBatis as my persistence framework, the domain object might carry those annotations, and then there will be Jackson annotations on top. In my case this hardly applies, though: I am using Spring Boot and can get away with application-wide properties like mybatis.configuration.map-underscore-to-camel-case: true and spring.jackson.property-naming-strategy: SNAKE_CASE.
Short story: at least in my case, the cons don't outweigh the pros, so it makes no sense to repeat myself by having a new POJO as a DTO. Less code, fewer chances of bugs. So I am going ahead with exposing the domain object and not having a separate "view" object.
Disclaimer: This may or may not be applicable to your use case. This observation is per my use case (basically a CRUD API with 15-ish endpoints).
The decision is a much simpler one in case you use CQRS because:
For the write side, you use Commands, which are already DTOs; Aggregates - the rich behavior objects in your domain layer - are not exposed/queried, so there is no problem there.
For the read side, because you use a thin layer, the objects fetched from persistence should already be DTOs. There should be no mapping problem, because you can have a read model for every use case. In the worst case you can use something like GraphQL to select only the fields you need.
If you do not split reads from writes, then the decision is harder, because there are tradeoffs in both solutions.
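A rough sketch of that split (all names are illustrative):

// Write side: the command is already a DTO.
public class AddOrderItemCommand {
    public Long orderId;
    public Long productId;
    public int quantity;
}

// Read side: a flat read model shaped for one specific screen or use case,
// fetched directly by the thin query layer - no mapping from rich Aggregates.
public class OrderSummaryReadModel {
    public Long orderId;
    public String customerName;
    public BigDecimal total;
}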

Why are java hibernate-mapped objects encouraged to be POJO's?

Is it discouraged to have "extra" functionality inside a Hibernate JavaBean? For example, a "save", "publish", or even static "get by id" method? And other potential variables such as locks, bells, and whistles?
If so, where are we supposed to put these extra features that belong with each object we are dealing with? If, for example, we created a wrapper class ArticleWrapper containing the POJO Article as a private member variable, without a mapping to Hibernate, it wouldn't work, because Hibernate can only fetch a list of Articles, not a list of ArticleWrappers.
I guess the reason is that Hibernate entities follow a specific, well-tested pattern, but that is not the only pattern for dealing with database manipulation.
The one you describe sounds more like the Active Record pattern.
Some frameworks implement Active Record, and their object models mix data and functionality together, pretty much like you describe. I have seen this pattern in Ruby on Rails' Active Record and in Python's Django framework.
In this pattern every domain object represents a row in the database and carries both data and behavior.
Martin Fowler, in his book Patterns of Enterprise Application Architecture (and the corresponding catalog page), mentions a few other well-known ways to deal with your data source layer:
Table Data Gateway
Row Data Gateway
Data Mapper
Active Record
The book and the catalog delve into many other patterns for object-relational mapping.
Layered Design
In the classical way you describe with Hibernate, the entities are just placeholders for data and contain no logic whatsoever. Under this pattern you would most likely have a data access layer or repository layer around your entities that deals with recovering entities from the underlying data source and updating them back.
This layer is the one that deals with CRUD operations.
interface ArticleRepository {
    Article findById(Integer articleId);
    List<Article> findByAuthor(Integer authorId);
    Article save(Article article);
    void delete(Integer articleId);
}
On top of this layer, you have a service layer which is the one that exposes the business logic to the users of your application.
interface ArticleService {
    void publishArticle(String author, Date date, String title, String contents);
    List<Article> getFeaturedArticles(Date date);
    void unpublishArticle(Integer articleId);
}
On top of this layer, most likely you define some form of integration layer to expose this service layer to the application users in many different ways, like through RESTful or SOAP Web services, or RMI, EJBs or whatever other technology you know out there.
By not putting any kind of logic in your entities, they serve well their purpose of data carriers and can be reused in different service layers if necessary.
You may want to take a look at a framework like Spring Data, which fosters this type of design. It makes it much clearer where every piece should go.
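With Spring Data JPA, for instance, the repository above shrinks to an interface whose implementation is generated (a sketch, assuming Article has an authorId property):

public interface ArticleRepository extends JpaRepository<Article, Integer> {

    // findById, save, delete, etc. are inherited;
    // this query is derived from the method name at startup
    List<Article> findByAuthorId(Integer authorId);
}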
I guess writing code is not only about making it compile and run. To make code clean and easy to maintain, developers have come up with various design patterns, such as the Data Access Object (DAO) pattern for this particular situation. Following good OO practices makes developers' work more efficient, especially on big projects; with time, code that doesn't have a decent level of cohesion and looks like a bucket of dirty laundry becomes impossible to maintain. Remember: just because you can do something doesn't mean you should.

How can I resolve the conflict between loose coupling/dependency injection and a rich domain model?

Edit: This is not a conflict on the theoretical level but a conflict on an implementation level.
Another Edit:
The problem is not having domain models as data-only/DTOs versus a richer, more complex object map where Order has OrderItems and some calculateTotal logic. The specific problem arises when, for example, that Order needs to grab the latest wholesale prices of its OrderItems from some web service in China. So you have some Spring service running that allows calls to this PriceQuery service in China. Order has calculateTotal, which iterates over every OrderItem, gets the latest price, and adds it to the total.
So how would you ensure that every Order has a reference to this PriceQuery service? How would you restore it upon de-serialization, loading from the DB, and fresh instantiation? This is my exact question.
The easy way would be to pass a reference into the calculateTotal method, but what if your object uses this service internally throughout its lifetime? What if it's used in 10 methods? It gets messy to pass references around every time.
Another way would be to move calculateTotal out of Order and into OrderService, but that breaks OO design and moves us back towards the old "Transaction Script" way of doing things.
Original post:
Short version:
Rich domain objects require references to many components, but these objects get persisted or serialized, so any references they hold to outside components (Spring beans in this case: services, repositories, anything) are transient and get wiped out. They need to be re-injected when the object is de-serialized or loaded from the DB, but this is extremely ugly and I can't see an elegant way to do it.
Longer version:
For a while now I've practiced loose coupling and DI with the help of Spring. It's helped me a lot in keeping things manageable and testable. A while ago, however, I read Domain-Driven Design and some Martin Fowler. As a result, I've been trying to convert my domain models from simple DTOs (usually simple representations of a table row, just data no logic) into a more rich domain model.
As my domain grows and takes on new responsibilities, my domain objects are starting to require some of the beans (services, repositories, components) that I have in my Spring context. This has quickly become a nightmare and one of the most difficult parts of converting to a rich domain design.
Basically there are points where I am manually injecting a reference to the application context into my domain:
when an object is loaded from a Repository or other responsible entity, since the component references are transient and obviously don't get persisted
when an object is created from a Factory, since a newly created object lacks the component references
when an object is de-serialized in a Quartz job or some other place, since the transient component references get wiped
First, it's ugly because I'm handing the object an application-context reference and expecting it to pull out the components it needs by name. This isn't injection, it's direct pulling.
Second, the code is ugly because in all of those places I need logic for injecting the appContext.
Third, it's error-prone because I have to remember to do this in all those places for all those objects, which is harder than it sounds.
There has got to be a better way and I'm hoping you can shed some light on it.
I would venture to say that there are many shades of gray between having an "anemic domain model" and cramming all of your services into your domain objects. And quite often, at least in business domains and in my experience, an object might actually be nothing more than just the data; for example, whenever the operations that can be performed on that particular object depend on a multitude of other objects and some localized context - say, an address.
In my review of the domain-driven literature on the net, I have found a lot of vague ideas and writings, but I was unable to find a proper, non-trivial example of where the boundaries between methods and operations should lie and, what's more, how to implement that with the current technology stack. So for the purpose of this answer, I will make up a small example to illustrate my points:
Consider the age-old example of Orders and OrderItems. An "anemic" domain model would look something like:
class Order {
    Long orderId;
    Date orderDate;
    Long receivedById; // user which received the order
}

class OrderItem {
    Long orderId;      // order to which this item belongs
    Long productId;    // product id
    BigDecimal amount;
    BigDecimal price;
}
In my opinion, the point of domain-driven design is to use classes to better model the relationships between entities. So, a non-anemic model would look something like:
class Order {
    Long orderId;
    Date orderDate;
    User receivedBy;
    Set<OrderItem> items;
}

class OrderItem {
    Order order;
    Product product;
    BigDecimal amount;
    BigDecimal price;
}
Supposedly, you would be using an ORM solution to do the mapping here. In this model, you would be able to write a method such as Order.calculateTotal() that sums up amount*price for each order item.
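For example (a sketch, assuming the obvious getters on OrderItem):

public BigDecimal calculateTotal() {
    BigDecimal total = BigDecimal.ZERO;
    for (OrderItem item : items) {
        total = total.add(item.getPrice().multiply(item.getAmount()));
    }
    return total;
}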
So, the model would be rich, in a sense that operations that make sense from a business perspective, like calculateTotal, would be placed in an Order domain object. But, at least in my view, domain-driven design does not mean that the Order should know about your persistence services. That should be done in a separate and independent layer. Persistence operations are not part of the business domain, they are the part of the implementation.
And even in this simple example, there are many pitfalls to consider. Should the entire Product be loaded with each OrderItem? If there is a huge number of order items and you need a summary report for a huge number of orders, would you use Java, loading objects into memory and invoking calculateTotal() on each order? Or is an SQL query a much better solution from every aspect? That is why a decent ORM solution like Hibernate offers mechanisms for solving precisely these kinds of practical problems: lazy loading with proxies for the former and HQL for the latter. What good would a theoretically sound model be if report generation took ages?
Of course, the entire issue is quite complex, much more than I'm able to write or consider in one sitting. And I'm not speaking from a position of authority, but from simple, everyday practice in deploying business apps. Hopefully, you'll get something out of this answer. Feel free to provide some additional details and examples of what you're dealing with...
Edit: Regarding the PriceQuery service, and the example of sending an email after the total has been calculated, I would make a distinction between:
the fact that an email should be sent after price calculation
what part of an order should be sent? (this could also include, say, email templates)
the actual method of sending an email
Furthermore, one has to wonder: is sending an email an inherent ability of an Order, or yet another thing that can be done with it, like persisting it, serializing it to different formats (XML, CSV, Excel), etc.?
What I would do, and what I consider a good OOP approach is the following. Define an interface encapsulating operations of preparing and sending an email:
interface EmailSender {
    public void setSubject(String subject);
    public void addRecipient(String address, RecipientType type);
    public void setMessageBody(String body);
    public void send();
}
Now, inside Order class, define an operation by which an order "knows" how to send itself as an email, using an email sender:
class Order {
    ...
    public void sendTotalEmail(EmailSender sender) {
        sender.setSubject("Order " + this.orderId);
        sender.addRecipient(receivedBy.getEmailAddress(), RecipientType.TO);
        sender.addRecipient(receivedBy.getSupervisor().getEmailAddress(), RecipientType.BCC);
        sender.setMessageBody("Order total is: " + calculateTotal());
        sender.send();
    }
}
Finally, you should have a facade for your application operations, a point where the actual response to a user action happens. In my opinion, this is where you should obtain (via Spring DI) the actual implementations of services. For example, this could be a Spring MVC controller class:
public class OrderEmailController extends BaseFormController {
    // injected by Spring
    private OrderManager orderManager; // persistence
    private EmailSender emailSender;   // actual sending of email

    public ModelAndView processFormSubmission(HttpServletRequest request,
            HttpServletResponse response, ...) {
        String id = request.getParameter("id");
        Order order = orderManager.getOrder(id);
        order.sendTotalEmail(emailSender);
        return new ModelAndView(...);
    }
}
Here's what you get with this approach:
domain objects don't contain services, they use them
domain objects are decoupled from actual service implementation (e.g. SMTP, sending in separate thread etc.), by the nature of the interface mechanism
service interfaces are generic and reusable, but don't know about any actual domain objects. For example, if Order gets an extra field, you need to change only the Order class.
you can mock services easily, and test domain objects easily
you can test actual services implementations easily
I don't know if this is up to the standards of certain gurus, but it is a down-to-earth approach that works reasonably well in practice.
Regarding:
"What if your Order needs to send out an e-mail every time the total is calculated?"
I would employ events.
If it has some meaning for you when an order computes its total, let it raise an event, e.g. eventDispatcher.raiseEvent(new ComputedTotalEvent(this)).
Then you listen for this type of event and call back your order, as said before, to let it format an email template, and you send it.
Your domain objects remain lean, with no knowledge of this requirement.
In short, split your problem into two requirements:
- I want to know when an order computes its total;
- I want to send an email when an order has a (new and different) total.
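A sketch of that event-based split (the dispatcher and subscription wiring are illustrative, not any particular framework; EmailSender is the interface from the answer above):

// Raised by Order when it computes a new total:
// eventDispatcher.raiseEvent(new ComputedTotalEvent(this));
public class ComputedTotalEvent {
    private final Order order;

    public ComputedTotalEvent(Order order) { this.order = order; }
    public Order getOrder() { return order; }
}

// Listener that satisfies the email requirement, keeping Order lean.
public class TotalEmailListener {
    private final EmailSender emailSender;

    public TotalEmailListener(EmailSender emailSender) {
        this.emailSender = emailSender;
    }

    public void onComputedTotal(ComputedTotalEvent event) {
        event.getOrder().sendTotalEmail(emailSender);
    }
}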
I've found the answer, at least for those using Spring:
6.8.1. Using AspectJ to dependency inject domain objects with Spring
The simplest approach I can think of is to add some logic to your data access layer that injects a domain object with its dependencies before returning it to a higher layer (usually called the service layer). You could annotate each class's properties to indicate what needs to get wired up. If you're not on Java 5+, you could implement an interface for each component that needs to be injected, or even declare it all in XML and feed that data to the context that will do the wiring. If you want to get fancy, you could pull this out into an aspect and apply it globally across your data access layer, so that all methods that return domain objects wire them up just after they are returned.
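Concretely, that section of the Spring documentation describes the @Configurable annotation (a sketch; PriceQueryService stands in for the pricing service from the question, and AspectJ compile-time or load-time weaving must be enabled for this to work):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

@Configurable
public class Order {

    // re-injected by the woven aspect whenever an Order is instantiated with new;
    // the Spring docs also describe support for deserialized instances
    @Autowired
    private transient PriceQueryService priceQuery;

    // ... rest of the domain logic ...
}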
Perhaps what you want is a kind of reference object that serializes as a global reference (a URI, for instance) and can resurrect itself as a proxy when de-serialized elsewhere.
The Identity Map pattern may help with your scenario. Check the article "Patterns In Practice" by Jeremy Miller, where he discusses this pattern.
