Microservices with a shared lib dependency - Java

I'm working on a microservice project, and I have a question about best practices.
We are using Java Spring, and all of our models are packaged in a single JAR. Each microservice depends on this JAR to function. Is it okay for a microservice to depend on a JAR containing models outside of its scope like this, or is it better practice to split this JAR up?

There's a very good article by Bartosz Jedrzejewski on this.
To quote a relevant part from his article:
If the service code should be completely separate, but we need to consume possibly complicated responses in the clients, then the clients should write their own libraries for consuming the service.
By using client libraries for consuming the service, the following benefits are achieved:
The service is fully decoupled from the clients and no services depend on one another; the library is separate and client-specific. It can even be technology-specific if we have a mix of technologies.
Releasing a new version of the service is not coupled with the clients; they may not even need to know whether backward compatibility is still there, since it is the clients who maintain the library.
The clients are now DRY: no needless code is copy-pasted.
It is quicker to integrate with the service, and this is achieved without losing any of the microservices benefits.
This solution is not entirely new; it was successfully implemented in Scott Logic projects, is recommended in "Building Microservices" by Sam Newman (highly recommended), and similar ideas can be seen in many successful microservices architectures.
There are some pitfalls as well, so it's better to read the entire article...

Sharing domain models is an indicator of bad design. If services share a domain, they should not be split. In a microservices architecture, teams working on one service should be able to modify their domain objects at any time without impacting other services/teams.
There can be exceptions though, e.g. if the model objects are non-specific enough to be reusable in any service. As an example, a geometry domain could be contained in a geometry library. There can be other exceptions.

Related

How to share Java models between microservices in a microservice architecture

I am designing the architecture of my new app. I chose a microservice architecture. I noticed that I have models that are used by different microservices. I want to know if there is a way to share model code between microservices instead of writing it in each microservice.
By the way I am using the spring boot framework for my app.
You should only be sharing models that define the API of your microservice, e.g. Protobuf .proto files or the Java classes generated from them.
This is normally done by creating either a separate project or converting your microservice projects into multi-module projects, where one of the modules is a thin API module with the interface definition.
There is nothing wrong with sharing code between microservices, but you have to be careful. Share too many internal implementation details and you end up with a distributed monolith instead of microservices.
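For illustration, here is a minimal sketch of what such a thin API module might contain, with a hypothetical UserService; only the interface and the wire-format DTO are shared, while the implementation stays private to the service:

import java.io.Serializable;

// api module: the only code shared with consumers (names are illustrative)
public interface UserService {
    UserDto findUser(String id);
}

// A plain DTO describing the wire format, not the service's internal domain model
public class UserDto implements Serializable {
    private String id;
    private String displayName;

    public UserDto() { }  // no-arg constructor for (de)serializers

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getDisplayName() { return displayName; }
    public void setDisplayName(String displayName) { this.displayName = displayName; }
}

The service module implements UserService against its own internal entities; consumers depend only on the API module, so internal refactorings do not ripple out to them.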
You can create a separate project with the common models, build a JAR from that project, and add the JAR as a dependency in the other microservices.
But from practical experience, it is a nightmare to maintain such a common project, because for every change you have to create a new version and update the build scripts of all the microservices.
In my opinion we should not share models among microservices.
In a microservices architecture, each service is absolutely independent of the others and must hide the details of its internal implementation.
If you share the model you are coupling the microservices and losing one of their greatest advantages: each team can develop its microservice without restrictions and without needing to know how the other microservices evolve. Remember that you can even use a different language in each one; this would be difficult if you start to couple microservices.
https://softwareengineering.stackexchange.com/questions/290922/shared-domain-model-between-different-microservices
If you are draconian about this decision you will run into unsatisfactory conditions one way or the other. It depends on your SDLC and team dynamics. At Ipswitch, we have many services that all collaborate and there are highly shared concepts like device and monitor. Having each service have its own model for those would be unsustainable. We did that in one case and the translation just created extra work and introduced inconsistency defects. But that whole system is built together and by one large dev team. So sharing makes the most sense there. But for an enterprise, where you have multiple teams and multiple SDLCs across microservices, it makes more sense to isolate the models to avoid coupling. Even then, however, a set of closely collaborating services that are managed by a given team can certainly share a model if the team accepts the risk/benefit of doing so. There is nothing wrong with that beyond academics and philosophy.
So in short, share minimally but also avoid unnecessary work for your team.
You could move your model classes to a different project/repository and add it as a dependency to your microservices that need to share it.
Not sure if your microservices use Swagger, but you can use Swagger Codegen to generate your models.
For example, if you have a UserService which accepts and/or returns a User object, the consumer of UserService can use the Swagger Codegen plugin to auto-generate the User class at build time.
You can use the Swagger Codegen Maven or Gradle plugin pretty easily.
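To give a rough idea of the output, a codegen-produced model is essentially a plain annotated POJO along these lines (the names here are illustrative, and the exact shape depends on the generator version and options):

import com.fasterxml.jackson.annotation.JsonProperty;

// Illustrative shape of a Swagger Codegen-generated model class
public class User {
    @JsonProperty("id")
    private Long id;

    @JsonProperty("email")
    private String email;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

Because the class is regenerated at build time from the service's API description, the consumer never hand-maintains it.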

Migrating multi-module project to microservices?

I have a multi-module application. To be more explicit, these are Maven modules where high-level modules depend on low-level modules.
Below are some of the modules:
user-management
common-services
utils
emails
For example: if the user-management module wants to use any services from the utils module, it can simply call them, as utils is already a dependency of user-management.
To convert my project to a true microservices architecture, I believe I need to turn each module into an independently deployable service, where each module is a WAR
that provides its services over HTTP (mainly as RESTful web services). Is that correct, or does anything else need to be taken care of as well?
Probably each module now has to be secured and get an authentication layer as well?
If that's the crux of microservices, I really do not understand why someone asks whether you have worked on microservices; to me it is not a tool/platform/framework but a simple
concept: divide your monolithic application into a smaller set of deployable modules whose services are available over HTTP. Isn't it? Maybe it's another buzzword.
Update:
Obviously there are advantages to going the microservices way, like independently testable modules, scalability (each can be deployed on a separate machine), loose coupling, etc., but I see that I need to handle two complex concerns as well:
Authentication: for each module I need to ensure it authenticates the request, which is not the case right now
Transaction: I cannot maintain transaction atomicity across different services, which I can do very easily at present
What you have understood is perfectly good, and you have found exactly the area where microservices get complex compared to monoliths (distributed transactions), but let me clear up some points about microservices.
Microservices don't just mean independent services exposed over HTTP: a microservice can communicate with other services either synchronously or asynchronously. REST is one solution, applicable to synchronous communication, but you can communicate asynchronously too, e.g. message-driven using Kafka or HornetQ; for synchronous communication, an underlying service can also be called over the Thrift protocol.
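As a minimal sketch of the asynchronous style with the plain Kafka Java client (the topic name and payload here are made up for illustration):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Fire-and-forget: the consuming service picks the event up on its
            // own schedule, so the two services never block on each other.
            producer.send(new ProducerRecord<>("order-events", "order-42", "{\"status\":\"CREATED\"}"));
        }
    }
}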
Microservices follow the SRP: the beauty of microservices is that each service concentrates on only one business-domain use case, so it handles only one domain object's functionality. But a utils module holds common methods, so every microservice depends on it, and even a small change in utils forces a rebuild of all the other microservices. That violates microservice principles, so dissolve the utils module and make its code local to each service.
Handling authentication: to be specific, a microservice can be one of three types:
a. Core service: just performs a domain operation (like account creation/update/deletion).
b. Aggregate service: calls one or more core services, gathers the results, and performs some operation on them.
c. Edge service: exposed to a client (like mobile/browser etc.). We sometimes call it a gateway service; the crux of this service is to take a user request and, based on the URL, forward it to the actual microservice. So it is the ideal place to put authentication if it is common to all microservices, as sketched below.
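As a sketch of that idea, a plain servlet filter at the edge service could reject unauthenticated requests before they are forwarded (the header name and the token check are placeholders):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Runs once at the gateway so the core services behind it can stay auth-free
public class EdgeAuthFilter implements Filter {
    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String token = ((HttpServletRequest) req).getHeader("Authorization");
        if (token == null || !token.startsWith("Bearer ")) {  // placeholder for real token validation
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        chain.doFilter(req, res);  // authenticated: forward to the actual microservice
    }

    @Override
    public void destroy() { }
}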
Handling distributed transactions: yes, this is the hardest part of microservices, but you can achieve it in an event-driven/message-driven way. Every action pops an event; a subscriber of that event receives it, performs some operation, and generates another event. In case of failure it generates a reverse event which compensates for the first event.
For example, say microservice A debits 100 rupees and creates an AccountDebited event. Microservice B then tries to credit the account. If it succeeds, B creates AccountCredited, which is received by A and produces another event, AmountTransfered. In case of failure, B generates an AccountCreditedFailed event, which is received by A and triggers a reverse event, AccountSpecialCredit, that maintains atomicity.
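A rough sketch of that compensation flow in code, using the event names from the example; the AccountRepository and EventBus abstractions are assumed placeholders for your persistence and messaging layers:

// Hypothetical event types for the transfer flow described above
record AccountDebited(String txId, long amount) { }
record AccountCredited(String txId, long amount) { }
record AccountCreditedFailed(String txId, long amount) { }

interface AccountRepository { void credit(String txId, long amount); }
interface EventBus { void publish(Object event); }

// Inside microservice B: react to the debit event published by A
class CreditHandler {
    private final AccountRepository accounts;
    private final EventBus events;

    CreditHandler(AccountRepository accounts, EventBus events) {
        this.accounts = accounts;
        this.events = events;
    }

    void on(AccountDebited e) {
        try {
            accounts.credit(e.txId(), e.amount());
            events.publish(new AccountCredited(e.txId(), e.amount()));
        } catch (Exception ex) {
            // A listens for this and issues the compensating event
            // (AccountSpecialCredit) that restores atomicity.
            events.publish(new AccountCreditedFailed(e.txId(), e.amount()));
        }
    }
}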
What you have is mostly correct, but you appear to be considering some things as requirements when they are not, and you are forgetting one very important characteristic that microservices are supposed to have.
The main characteristics of microservices are statelessness and independence. Whether they are "WAR" modules and whether they provide their services over "HTTP" (and certainly whether they are RESTful) are secondary concerns and you may hear arguments to the contrary.
Statelessness means that no individual microservice may contain state. (Except for caches.) Microservices are supposed to always delegate the task of persisting data to some database module so they don't keep any state in memory. The idea is that this way, if one microservice fails, (or if an entire machine containing many microservices fails,) you can just route incoming requests to another instance (or another machine) and everything will continue working.
(Of course, if you want my opinion, it is just a cowardly acknowledgement of the fact that we don't know how to write reliable highly concurrent software, but the database guys are smart and they seem to have figured it all out, so we will just delegate the problem of maintaining our state to the software that they have written.)
In my opinion microservice architecture marries well with DDD.
I think you should consider your multi-module project as a "monolith" and base your microservice separation on domain concepts, not on Maven modules.
Ex: do not create a microservice called "utils", but rather a microservice called "accounts" or "user-management" or whatever your domain is. I think that without domain-driven design it kind of loses its usefulness.
It is really easy afterwards to work on different aspects of the domain, knowing that each is separated from the rest. You should also check out hexagonal architecture by Alistair Cockburn.
How you split your application depends on the type of modules you have. If a module contains business logic then it makes sense to create a new service and communicate via HTTP or messaging. On the other hand, if your module has no business logic, but just a set of helper functions, it might be better to extract it to a separate public/private Maven package and just use it as a dependency.
Yes, microservice is a buzzword that only recently became popular, but the concept has been around for a while. It also comes with more than that: while it gives the benefits of scaling and independent service deployments, it comes at the price of the complexity of managing and orchestrating a large number of services.
For example, in a monolithic application, when you call a function from another module you know for sure that it is always available. With microservices, some of the services might go down because of a disruption or deployment, in which case you have to think about handling these situations gracefully (for example by applying the circuit breaker pattern).
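For instance, a small sketch with Resilience4j; the service name, the remote call, and the fallback are placeholders:

import java.util.function.Supplier;
import io.github.resilience4j.circuitbreaker.CallNotPermittedException;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;

public class UserClient {
    private final CircuitBreaker breaker = CircuitBreaker.ofDefaults("user-service");

    public String fetchUser(String id) {
        // After repeated failures the breaker opens and calls fail fast
        // instead of piling up on a service that is already down.
        Supplier<String> guarded = CircuitBreaker.decorateSupplier(breaker, () -> callRemoteService(id));
        try {
            return guarded.get();
        } catch (CallNotPermittedException e) {
            return fallbackUser(id);  // breaker is open: degrade gracefully
        }
    }

    private String callRemoteService(String id) {
        throw new UnsupportedOperationException("placeholder for the real HTTP call");
    }

    private String fallbackUser(String id) {
        return "{\"id\":\"" + id + "\",\"source\":\"cache\"}";
    }
}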
There are many other things to consider when doing microservices, and there is a lot of literature available on this topic. I read Microservices: From Design to Deployment from Nginx. It's short and sweet.
So when people ask you "Have you worked with microservices before?", I guess they want to know if you are familiar with, and have some experience with, all the pitfalls of this concept.
In one way you are correct: from the outside, microservices look like this. But when you go inside, as you rightly mention, there are the two complex concerns:
Authentication: for each module I need to ensure it authenticates the request, which is not the case right now
Transaction: I cannot maintain transaction atomicity across different services, which I can do very easily at present
Apart from these, there are various things one needs to understand, otherwise building and deploying microservices will be very tough:
I am mentioning some of them here; the complete list you can see in my post:
What exactly is a microservice? Some say it should not exceed 1,000 lines of code. Some say it should fit one bounded context (if you don't know what a bounded context is, don't bother with it right now; keep reading).
Even before deciding on what the "micro"service will be, what exactly is a service?
Microservices do not allow updating multiple entities at once; how will I maintain consistency between entities?
Should I have a single database cluster for all my microservices?
What is this eventual consistency thing everyone is talking about?
How will I collate data which is composed of multiple entities residing in different services?
What would happen if one service goes down? How would the dependent services behave?
Should I make a sync invocation between microservices to always get consistent data?
How will I manage version upgrades to a few or all microservices? Is it always possible to do it without downtime?
And the last unavoidable question - how do I test the entire application as an integrated application?
How to do circuit breaking? (one service going down should not impact the others)
CI/CD pipelines and .......
What we understood while starting our journey in microservices:
The design approach for breaking a business problem into microservices is domain-driven design.
Use a platform that supports microservices development (we used Lagom), which addresses some of the above concerns out of the box.
So, all in all, when moving towards a multi-process architecture communicating over REST or some other method, new considerations need to be taken care of that are not directly visible in a monolith, and people want to know whether you are aware of those considerations or not.

Dropwizard: handling multiple Dropwizard instances

As I'm developing microservices using Dropwizard, I'm trying to find a balance between having many resources on one running instance/application of Dropwizard versus many instances.
For example, I have a project-A with 3 resources. In another project, project-B, I would like to use one of the resources in project-A. The resource in common is related to user data.
Now I have options like:
make an HTTP call to the user resource in project-A from project-B; I can use the client approach of Dropwizard here
as the user resource is common, take it out of project-A into, say, project-C; then I need to create client code in both project-A and project-B
extract a JAR containing the user code and use it in project-B; this will avoid making HTTP calls
Another point where I would like an expert opinion is how to balance/minimize the network calls associated with communication between different instances of a microservice. In general, should one use HTTP to communicate between different instances, or can some other inter-process communication approach be used for performance, particularly if different instances are on the same system?
I feel this could be a common problem/confusion for newcomers to the world of microservices, and hence I would like to know any general guidelines or best practices.
Many thanks,
Pradeep
make an HTTP call to the user resource in project-A from project-B; I can use the client approach of Dropwizard here
I would not pursue this option if I were you. It's going to slow down your service unnecessarily, create potential logging headaches, and just feels wrong. The only time this might make sense is when the code is out of your control (but even then there's probably a better solution).
as the user resource is common, take it out of project-A into, say, project-C; then I need to create client code in both project-A and project-B
extract a JAR containing the user code and use it in project-B; this will avoid making HTTP calls
It sounds like project-A and project-B are logically different units with some common dependencies. It might make sense to consider a multi-module project (a multi-module Maven project if you're using Maven). You could have a module containing any common code (and resources) that gets referenced by the separate project modules. This is where Maven really excels, since it can manage all these dependencies for you. It's like a combination of the last two options you listed.
One of the main advantages of microservices is the opportunity to release and deploy each of them separately. Whatever option you choose, make sure you don't lose this property.
Another property of a microservice should be that it has only one responsibility. So it is all about finding the right boundaries for your services (in DDD terms, "bounded contexts"), and indeed it is not easy to find the right boundaries; it is a balancing act.
For instance in your theoretical case:
If the communication between A and C will be very chatty, then it is not a great idea to extract C.
If A and C have a different lifecycle (business-wise), then it is a good idea to extract C.
That's essentially a design choice: are you ready to trade the simplicity of each of your small services against the complexity of having to orchestrate them and the resulting overall latency?
If you choose the small-service approach, you could stick to the documentation guidelines at http://dropwizard.io/manual/core.html#organizing-your-project : one project with three modules, for the api (which can be referenced from consumers), the application, and the optional client (also potentially used by consumers).
Other questions you will have to answer:
- will each of your services be hosted in a separate SCM repository... or not?
- could (should?) each of your services have its own version?
If you feel the user is a bounded context in itself, i.e. user management covering user registration, authentication, etc., this can certainly be a separate microservice. However, you should invoke the user API from a single API gateway, convert the result to a JWT token, and pass it on to your other APIs in a header.
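As a sketch of the gateway side with the jjwt library (the claims and the key handling here are simplified placeholders):

import javax.crypto.SecretKey;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;

public class GatewayTokenIssuer {
    // In practice the key would be shared configuration, not generated per instance
    private final SecretKey key = Keys.secretKeyFor(SignatureAlgorithm.HS256);

    // Called after the user microservice has authenticated the credentials
    public String issueToken(String userId) {
        return Jwts.builder()
                .setSubject(userId)
                .signWith(key)
                .compact();
    }
}

The gateway then forwards the token in a header (commonly Authorization: Bearer <token>), and the downstream services only verify the signature instead of re-authenticating.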
In another case, if your business use case requires invoking multiple microservices, that logic (orchestration) should be developed in a composite service layer.
Regarding inter-microservice communication: having services talk to each other through API calls takes you back to point-to-point communication, introducing a lot of complexity that is difficult to manage in a large project.
As per bounded-context theory, no transaction should go beyond one microservice. However, in real-world scenarios I think we still have dependencies, at least for the validation of reference data; for example, an order service needs to validate product IDs. In this case, the best I can think of is to have eventing between the microservices to feed each other with the reference data. You can try event sourcing for generating business events and async I/O for publish/subscribe.
Thanks,
Amit

External Systems Integration Best Practice

A quick question on what the best practice is for integrating with external systems.
We have a system that deals with Companies, which we represent by our own objects. We also use an external system, via SOAP, that returns an Organization object. They are very similar but not the same (ours is a subset of theirs).
My question is: should we wrap the SOAP service in a Facade so we return only Company objects to our application, or should we return another type of object (e.g. OrgCompany), or even just use the Organization object in our code?
The SOAP service and Organization object are defined by an external company (a bank), who we have no control over.
Any advice and justification is much appreciated.
My two cents: introducing external objects into an application is always a problem, especially during maintenance. A small service change might lead to a big code change in the application.
It's always good to have a layer of abstraction between the external service and the application. I would suggest creating a service layer which translates the external service objects into your application's domain objects, and using those within the application. A clear separation/decoupling helps a lot in maintenance.
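A minimal sketch of such a translation layer, assuming hypothetical Organization (external) and Company (internal) classes:

// External type as returned by the bank's SOAP service (shape assumed for illustration)
class Organization {
    private final String orgId;
    private final String orgName;

    Organization(String orgId, String orgName) {
        this.orgId = orgId;
        this.orgName = orgName;
    }

    String getOrgId() { return orgId; }
    String getOrgName() { return orgName; }
}

// Our own domain object: only the subset of fields we care about
class Company {
    private final String id;
    private final String name;

    Company(String id, String name) {
        this.id = id;
        this.name = name;
    }
}

// The translation layer: the rest of the application never sees Organization
class CompanyService {
    Company findCompany(String id) {
        Organization org = callSoapService(id);               // the external SOAP call
        return new Company(org.getOrgId(), org.getOrgName()); // translate at the boundary
    }

    private Organization callSoapService(String id) {
        return new Organization(id, "ACME Corp");             // stubbed for the sketch
    }
}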
Your decision here is how you want to manage external code dependencies in your application. Some factors that should play into your decision:
1) How often will the API change, and what's the expected nature of the changes?
2) What's the utility of your application outside its dependencies? If you removed the SOAP service dependency, would your app still serve a purpose?
A defensive approach is to build a facade or adapter around the SOAP service, so that your code depends only on your own object model. This gives you a lot of control and relatively loose coupling between your code/logic and the service. The price you pay for this control is that when the SOAP contract changes, you must usually also change a layer of your code.
A different approach is to use the objects you're getting from the WSDL directly. This is beneficial when it doesn't make sense to introduce a level of indirection in your application, i.e. your application is just a feeder into a different system and the whole point of the app is to stuff the Organization object into a JMS pipeline or something similar. If the SOAP API contract never changes and you don't expect the output of your app to change much, then introducing an extra layer of indirection will just hinder the readability of your codebase long term.
Most J2EE developers tend to take the former approach, in my experience, both because of the nature of their applications and because they want to separate their application logic from the details of the data source.
Hope this helps.
I can't think of any situation where it's good to use objects that another company controls. The first thing you should do is bridge those objects into your own. Also, by having your own objects, you can expand their functionality beyond what the third party provides (for example, if in the future you need to talk to more than one Company-object provider).
Look at the Adapter pattern.
I'd support Sridhar's suggestion; I'd just like to add that for translating external service objects to your application domain you can use Dozer:
http://dozer.sourceforge.net/documentation/mappings.html
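Once the field mappings are configured as described in the linked docs, usage is essentially a one-liner; this sketch assumes the same hypothetical Organization and Company types as above (note that Dozer populates the target through its no-arg constructor and setters, so the real Company would need those):

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

public class OrganizationTranslator {
    private final Mapper mapper = new DozerBeanMapper();

    public Company toCompany(Organization org) {
        // Same-named fields are copied automatically; mismatched names
        // are resolved via the XML or API mapping configuration.
        return mapper.map(org, Company.class);
    }
}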
I typically always adapt externally defined domain objects to an internal representation.
I also create a comprehensive suite of tests against the external domain object that will quickly highlight any problems if the external vendor produces a new release.
The Enterprise Service Bus architecture might be useful here:
"Its primary use is in Enterprise Application Integration of heterogeneous and complex landscapes." (from Wikipedia)
I would check out open source Mule if you are looking for an open source solution

Concerns with managing JAX-WS artifacts

I'm developing an application that makes heavy use of web services, and I will be developing both the client and server ends of it. I'd like to use JAX-WS (which I am new to), because it seems to be the future of web services for Java, but I have a number of concerns related to the artifacts. None of these concerns is a deal-breaker, but collectively JAX-WS seems to create a lot of inconvenience. Since I'm new to JAX-WS, perhaps there are things I am unaware of that would alleviate my concerns.
Here are my concerns:
I anticipate having a fairly large number of POJOs that are passed between client and server (for lack of a better term, I'll call these transport objects). I would like to include documentation and business logic in these objects (for starters: equals, hashCode, toString). If I have business logic in these classes, then I cannot use wsimport to create the annotations for them, and I have to manage those by hand. That seems cumbersome and error-prone.
I have a choice between having the build system create the artifacts, or having developers create the artifacts and check them into source control. If the artifacts are produced by the build system, then whenever a member of the team updates an API, everyone must regenerate the artifacts in their own development environment. If the artifacts are produced by developers and checked into source control, then any time a member of the team renames or deletes an API, they must remember to delete the wrapper artifacts. Either approach seems cumbersome. What's the best practice here?
wsimport creates all the artifacts in the same package. I will be creating multiple services, and I will have some transport objects that are shared, so I need to wsimport all my services into the same package. If two services have an API with the same name, the wrapper artifacts will collide.
I anticipate having at least a hundred APIs across my services. This means at least 200 wrapper classes. That seems like a huge amount of clutter: lots and lots of classes that are of no interest for development. To make matters worse, these wrapper classes will reside in the same package as the transport objects, which will be some of the most heavily used classes in my system. The signal-to-noise ratio is very low for the most important package in my system.
Any pointers anyone can give me to ease development of my application would be greatly appreciated.
If you have control over both the client and the server, you don't really have to generate the client with wsimport. I currently do it as follows: one project defines the API for the web service. The API consists of the interface and all classes of the "transfer objects". Another project implements the service. You can now distribute the API to the client, who can use the service and leverage all your additional business methods.
Assuming ServiceInterface is your service interface, a client might look like this:
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

// Create the service from its WSDL, then get a dynamic proxy for the hand-written interface
Service s = Service.create(
        new URL("http://example.com/your_service?wsdl"),
        new QName("http://example.com/your_namespace", "YourServiceName"));
ServiceInterface yourService = s.getPort(
        new QName("http://example.com/your_namespace", "YourPortName"),
        ServiceInterface.class);
And just like that you have a service client. That way you can use all your methods (1), you have full control over your packages (3), and you don't have any wrapper classes lying around, as they are all generated at runtime (4). I think (2) is solved by this as well.
Your question is quite large, so if I fail to address a point sufficiently, leave a comment and I'll try to go into more detail.
