I am designing the architecture of my new app. I chose a microservice architecture. In my architecture I noticed that I have models that are used by different microservices. I want to know if there is a way to share model code between microservices instead of writing it in each microservice.
By the way, I am using the Spring Boot framework for my app.
You should only be sharing models that define the API of your micro-service, e.g. Protobuf .proto files or the Java classes generated from them.
This is normally done by creating either a separate project or converting your micro-service projects into multi-module projects, where one of the modules is a thin API module containing the interface definitions.
There is nothing wrong with sharing code between micro-services, but you have to be careful. Share too many internal implementation details and you end up with a distributed monolith instead of micro-services.
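For illustration, a thin API module might contain nothing more than the service contract and its transport types. A minimal sketch, with all names hypothetical:

// UserApi.java -- lives in the thin API module; only this contract is shared.
public interface UserApi {
    UserDto findUser(long id);
}

// UserDto.java -- transport type defining the service's public shape.
public class UserDto {
    private long id;
    private String name;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

The implementation module and any consumers both depend on this API module, so internal implementation classes never leak across service boundaries.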
You can create a separate project with the common models, build a jar from it, and add that jar as a dependency in the other microservices.
But from practical experience, it's a nightmare to maintain such a common project, because for every change you have to create a new version and update the build scripts of all the microservices.
In my opinion, we should not share models among microservices.
In a microservices architecture, each service is absolutely independent of the others and must hide the details of its internal implementation.
If you share the model, you are coupling your microservices and losing one of their greatest advantages: each team can develop its microservice without restrictions and without needing to know how the other microservices evolve. Remember that you can even use a different language in each one; this would be difficult if you start coupling your microservices.
https://softwareengineering.stackexchange.com/questions/290922/shared-domain-model-between-different-microservices
If you are draconian about this decision you will run into unsatisfactory conditions one way or the other. It depends on your SDLC and team dynamics. At Ipswitch, we have many services that all collaborate and there are highly shared concepts like device and monitor. Having each service have its own model for those would be unsustainable. We did that in one case and the translation just created extra work and introduced inconsistency defects. But that whole system is built together and by one large dev team. So sharing makes the most sense there. But for an enterprise, where you have multiple teams and multiple SDLCs across microservices, it makes more sense to isolate the models to avoid coupling. Even then, however, a set of closely collaborating services that are managed by a given team can certainly share a model if the team accepts the risk/benefit of doing so. There is nothing wrong with that beyond academics and philosophy.
So in short, share minimally but also avoid unnecessary work for your team.
You could move your model classes to a different project/repository and add it as a dependency to your microservices that need to share it.
Not sure if your microservices use Swagger, but you can use Swagger Codegen to generate your models.
For example, if you have a UserService which accepts and/or returns a User object, the consumer of the UserService can use the Swagger Codegen plugin to auto-generate the User class at build time.
You can use the Swagger Codegen Maven or Gradle plugin pretty easily.
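As a rough sketch of what consuming the generated code can look like (the class and method names below are hypothetical; Swagger Codegen derives them from your API spec and plugin configuration):

// UserApi and User are assumed to be generated by Swagger Codegen at build
// time; the actual names depend on the spec's operationIds and definitions.
public class UserServiceConsumer {
    public static void main(String[] args) throws Exception {
        UserApi userApi = new UserApi();                              // generated API stub
        userApi.getApiClient().setBasePath("http://localhost:8080"); // assumed service URL
        User user = userApi.getUserById(42L);                        // generated model class
        System.out.println(user.getName());
    }
}

This keeps the User model out of your source tree entirely: it is regenerated on every build from the service's contract.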
Related
Say I have 5 different microservices. Each uses one common DTO, UserDTO.java, or an entity, User.java.
Since these DTOs are to be accessed by all of them, I came up with two approaches:
To place all these DTOs in all the microservices
To create a project, say Project-commons, keep all the common DTOs there, and then add a dependency on this project in all microservices
I know that sharing common DTOs across microservices would kill the concept/principles of microservices, but I wanted to know: are there any other approaches available at all?
Actually, sharing a common code library does not kill the concept/principle of keeping changes isolated to one microservice, as long as you strictly version that library. That way, one microservice team can decide it needs new functionality in the shared library, and can bump the version number of the library and add the new functionality. Existing microservices that use that library won't be affected until they independently decide to start using the newer version of that library.
So this is what we do. We have jar files that we share across our microservices, but we push specific versions of those jar files to Nexus and reference those jar files by version number in each microservice.
Note that this concept is no different than two microservices depending on a third party component, like an Apache Commons library. Just because Apache comes out with a new version of a library doesn't mean anyone's binaries change. Each codebase that depends on that library can decide if and when it moves to the new version independent of what version another codebase might be using or what the most recent version of the library might be.
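As a sketch of why strict versioning preserves that isolation (the library, class, and version numbers below are all hypothetical):

// Hypothetical shared library, version 1.3.0, pushed to Nexus.
// Services that still declare 1.2.0 keep resolving the old binary
// and are completely untouched by this release.
public final class MoneyUtils {

    private MoneyUtils() { }

    // Present since 1.2.0: behavior unchanged, so old consumers stay safe.
    public static long toCents(double amount) {
        return Math.round(amount * 100);
    }

    // Added in 1.3.0 for one team's needs; other services pick it up only
    // when they explicitly bump their declared dependency version.
    public static String formatCents(long cents) {
        return String.format("%d.%02d", cents / 100, Math.abs(cents % 100));
    }
}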
I'm working on a microservice project, and I have a question about best practices.
We are using Java Spring, and all of our models are packaged in a single JAR. Each microservice depends on this JAR to function. Is it okay for a microservice to depend on a JAR containing models outside of its scope like this, or is it better practice to split this JAR up?
A very good article by Bartosz Jedrzejewski here
To quote a relevant part from his article...
If the service code should be completely separate, but we need to consume possibly complicated responses in the clients, then the clients should write their own libraries for consuming the service.
By using client libraries to consume the service, the following benefits are achieved:
The service is fully decoupled from the clients and no services depend on one another: the library is separate and client-specific. It can even be technology-specific if we have a mix of technologies.
Releasing a new version of the service is not coupled with the clients: they may not even need to know about it as long as backward compatibility is preserved, since it is the clients who maintain the library.
The clients are now DRY: no needless code is copy-pasted.
It is quicker to integrate with the service, and this is achieved without losing any of the benefits of microservices.
This solution is not something entirely new: it was successfully implemented in Scott Logic projects, is recommended in "Building Microservices" by Sam Newman (highly recommended), and similar ideas can be seen in many successful microservices architectures.
There are some pitfalls as well; better to read the entire article...
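To make the idea concrete, here is a minimal sketch of a client-owned consumer library, assuming a plain HTTP/JSON service, Java 11's HttpClient, and Jackson on the classpath (all names are hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// The client team owns this wrapper and its local UserView model, so the
// service never has to publish client code of its own.
public class UserServiceClient {

    private final HttpClient http = HttpClient.newHttpClient();
    private final ObjectMapper mapper = new ObjectMapper()
            // Ignore fields this client does not use, so the service can
            // evolve its response without breaking us.
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
    private final String baseUrl;

    public UserServiceClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public UserView fetchUser(long id) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/users/" + id)) // assumed endpoint
                .header("Accept", "application/json")
                .build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
        return mapper.readValue(response.body(), UserView.class);
    }

    // Local, client-specific view of the response: only the fields we need.
    public static class UserView {
        public long id;
        public String name;
    }
}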
Sharing domain models is an indicator of bad design. If services share a domain, they should not be split. In a microservices architecture, teams working on one service should be able to modify their domain objects at any time without impacting other services or teams.
There can be exceptions, though, e.g. if the model objects are generic enough to be reusable in any service. As an example, a geometry domain could be contained in a geometry library. There can be other exceptions.
As I'm developing micro-services using Dropwizard, I'm trying to find a balance between having many resources on one running instance/application of Dropwizard versus many instances.
For example, I have a project-A with 3 resources. In another project, project-B, I would like to use one of the resources of project-A. The resource in common is related to user data.
Now I have options like:
Make an HTTP call to the user resource in project-A from project-B. I can use the Dropwizard client approach here.
As the user resource is common, I can move it out of project-A into, say, project-C. Then I need to create client code in both project-A and project-B.
I can extract a jar containing the user code and use it in project-B. This will avoid making HTTP calls.
Another point where I would like expert opinion is how to balance/minimize the network calls associated with communication between different microservice instances. In general, should one use HTTP to communicate between different instances, or can another inter-process communication approach be used for performance's sake (particularly if different instances are on the same system)?
I feel this could be a common problem/confusion for newcomers to the world of micro-services, and hence I would like to know any general guidelines or best practices.
Many thanks,
Pradeep
Make an HTTP call to the user resource in project-A from project-B. I can use the Dropwizard client approach here.
I would not pursue this option if I were you. It's going to slow down your service unnecessarily, create potential logging headaches, and just feels wrong. The only time this might make sense is when the code is out of your control (but even then there's probably a better solution).
As the user resource is common, I can move it out of project-A into, say, project-C. Then I need to create client code in both project-A and project-B.
I can extract a jar containing the user code and use it in project-B. This will avoid making HTTP calls.
It sounds like project A and project B are logically different units with some common dependencies. It might make sense to consider a multi-module project (or a multi-module Maven project if you're using Maven). You could have a module containing any common code (and resources) that gets referenced by separate project modules. This is where Maven really excels since it can manage all these dependencies for you. It's like a combination of the last two options you listed.
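A typical layout for that suggestion might look like this (module names are illustrative):

parent/
    pom.xml        (packaging "pom", listing the modules below)
    common/        (shared user resource and model code)
    project-a/     (depends on common)
    project-b/     (depends on common)

Each service module still builds and deploys as its own artifact; only the genuinely shared code lives in common.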
One of the main advantages of micro-services is the opportunity to release and deploy each of them separately. Whatever option you choose, make sure you don't lose this property.
Another property of a micro-service should be that it has only one responsibility. So it is all about finding the right boundaries for your services (in DDD-terms 'bounded contexts'), and indeed it is not easy to find the right boundaries. It is a balancing act.
For instance in your theoretical case:
If the communication between A and C will be very chatty, then it is not a great idea to extract C.
If A and C have a different lifecycle (business-wise), then it is a good idea to extract C.
That's essentially a design choice: are you ready to trade the simplicity of each of your small services against the complexity of having to orchestrate them and the resulting overall latency?
If you choose the small-service approach, you could stick to the documentation guidelines at http://dropwizard.io/manual/core.html#organizing-your-project : one project with three modules, for the api (which can be referenced by consumers), the application, and the optional client (also potentially used by consumers).
Other questions you will have to answer:
- each of your services will be hosted in a separate SCM repository... or not
- each of your services could (should?) have its own version
If you feel the user is a bounded context in itself, covering user management such as registration, authentication, etc., then it can certainly be a separate microservice. However, you should invoke the user API from a single API gateway, convert the result into a JWT token, and pass it on to your other APIs in a header.
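For instance, a downstream call might simply forward the gateway-issued token in the Authorization header. A sketch using Java 11's HttpClient; the URL and token handling are illustrative, not a complete security setup:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DownstreamCaller {

    private final HttpClient http = HttpClient.newHttpClient();

    // jwt is the token minted at the API gateway after the user API call.
    public String callOrders(String jwt) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://orders-service/api/orders")) // hypothetical URL
                .header("Authorization", "Bearer " + jwt)            // pass the JWT along
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}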
In another case, if your business use case requires invoking multiple microservices, that logic (orchestration) should be developed in a composite service layer.
Regarding inter-microservice communication: services talking to each other through API calls takes you back to point-to-point communication, which introduces a lot of complexity and is difficult to manage in a large project.
As per bounded-context theory, no transaction should span more than one microservice. However, in real-world scenarios I think we still have dependencies, at least for validating reference data; for example, an order service needs to validate product IDs. In this case, the best approach I can think of is eventing between microservices to feed each other the reference data. You can try event sourcing for generating business events and async I/O for publish/subscribe.
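As one possible sketch of such eventing, using Kafka as the publish/subscribe transport (the topic name, payload, and broker address are assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// The product service publishes reference-data events; the order service
// subscribes and keeps a local cache of valid product IDs, so it never has
// to call the product service synchronously while validating an order.
public class ProductEventPublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(
                    "product-events",                        // assumed topic
                    "product-42",                            // key: the product ID
                    "{\"event\":\"ProductCreated\",\"id\":42}"));
        }
    }
}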
Thanks,
Amit
I'm developing an application that makes heavy use of web services. I will be developing both the client and server ends of this application. I'd like to use JAX-WS (which I am new to), because it seems to be the future of web services for Java, but I have a number of concerns related to the artifacts. None of these concerns is a deal-breaker, but collectively, JAX-WS seems to create a lot of inconvenience. I'm new to JAX-WS, so perhaps there are things I am unaware of that would alleviate my concerns.
Here are my concerns:
I anticipate having a fairly large number of POJOs that are passed between client and server (for lack of a better term, I'll call these transport objects). I would like to include documentation and business logic in these objects (for starters: equals, hashCode, toString). If I have business logic in these classes, then I cannot use wsimport to create the annotations for them, and I have to manage those by hand. That seems cumbersome and error-prone.
I have a choice of having the build system create artifacts, or having developers create artifacts and check them into source control. If artifacts are produced by the build system, then whenever a member of the team updates an API, everyone must generate artifacts in their own development environments. If artifacts are produced by developers and checked into source control, any time a member of the team renames or deletes an API, he must remember to delete wrapper artifacts. Either approach seems to be cumbersome. What's the best practice here?
wsimport creates all the artifacts in the same package. I will be creating multiple services, and I will have some transport objects that are shared, and therefore I need to wsimport all my services into the same package. If two services have an API with the same name, the wrapper artifacts will collide.
I anticipate having at least a hundred APIs in my services. This means at least 200 wrapper classes. That seems like a huge amount of clutter: lots and lots of classes that are of no interest for development. To make matters worse, these wrapper classes will reside in the same package as the transport objects, which will be some of the most heavily used classes in my system. The signal-to-noise ratio is very low for the most important package in my system.
Any pointers anyone can give me to ease development of my application would be greatly appreciated.
If you have control over both the client and the server you don't really have to generate the client with wsimport. I currently do it as follows: One project defines the API for the web service. The API consists of the interface and all classes of the "transfer objects". Another project implements the service. You can now distribute the API to the client who can now use the service and may leverage all your additional business methods.
Assuming ServiceInterface is your service interface, a client might look like this:
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

// Create a dynamic service proxy from the deployed WSDL.
Service s = Service.create(
    new URL("http://example.com/your_service?wsdl"),
    new QName("http://example.com/your_namespace", "YourServiceName"));
// Obtain the port through your hand-written service interface.
ServiceInterface yourService = s.getPort(
    new QName("http://example.com/your_namespace", "YourPortName"),
    ServiceInterface.class);
And just like that you have a service client. That way you can use all your methods (1), you have full control over your packages (3) and you don't have any wrapper classes lying around as they are all generated at runtime (4). I think (2) is solved by this as well.
Your question is quite large, so if I fail to address a point sufficiently, leave a comment and I'll try to go into more detail.
My domain classes and persistence logic (Hibernate) are in one project called model. This jar is included in all of my apps.
Packaged com.company.model & com.company.persistence
Another jar, Utils.jar, contains DateTime, String, Thread, and other general helper classes. This again is included in all of my apps.
Packaged com.company.utils
I have a CXF/Spring app that exposes services for manipulating my data: CRUD functionality and ALL other common functions. This is the 'way in' to my database for any app I design.
Packaged com.company.services, running on the Glassfish app server
I have other apps that use the web services (Spring-injected) to manipulate my data, including a web app that will use YUI widgets and the XML/JSON from the web services for a nice, smooth UI.
I understand it's not really a question! I suppose I'm looking for confirmation that this is how others are designing their software, and that my architecture makes good, logical sense. Obviously there are security concerns; I will want some applications to be allowed to access only service x. I will address these later.
Sounds good.
It also depends on the type of application you're developing and its specific requirements (does it have to be deployed every week, in several locations, etc.).
But so far it sounds good enough.
It looks like you can formulate a question from here in the future for some specific scenario.
Since this is not a question, mine is not really an answer. CW
My only comment would be to put the persistence and Hibernate classes into a separate module, so that the model module can be purely beans/POJOs/your domain classes.
Here's how I've organized a few multi-module projects before:
project-data - contains domain classes and DAOs (interfaces only)
project-services - "Business logic" layer services; makes use of the DAO interfaces. Depends on project-data.
project-hibernate - Hibernate implementation of the DAO interfaces. Depends on project-data.
Conceivably, if I were to use some other sort of data-access method, I would just create a separate module for that. Client apps could then choose which modules to depend on.
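A sketch of how that module boundary plays out in code, with hypothetical names (Hibernate mapping configuration omitted; fully qualified names stand in for per-file imports):

// project-data: domain class, no persistence dependencies (User.java)
public class User {
    private Long id;
    private String name;
    // getters/setters omitted for brevity
}

// project-data: DAO contract, interface only (UserDao.java)
public interface UserDao {
    java.util.Optional<User> findById(long id);
    void save(User user);
}

// project-hibernate: implementation module, depends on project-data (HibernateUserDao.java)
public class HibernateUserDao implements UserDao {

    private final org.hibernate.SessionFactory sessionFactory;

    public HibernateUserDao(org.hibernate.SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Override
    public java.util.Optional<User> findById(long id) {
        try (org.hibernate.Session session = sessionFactory.openSession()) {
            return java.util.Optional.ofNullable(session.get(User.class, id));
        }
    }

    @Override
    public void save(User user) {
        try (org.hibernate.Session session = sessionFactory.openSession()) {
            session.beginTransaction();
            session.saveOrUpdate(user);
            session.getTransaction().commit();
        }
    }
}

Swapping Hibernate for another data-access technology then means adding a new implementation module, not touching project-data.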
The only suggestion I might have is that when you're creating services/models, you group them by subpackage name, i.e.
com.company.model.core
com.company.service.core
com.company.model.billing
com.company.service.billing
Also, be careful to ensure that no controller code (manipulating your UI) ends up in the services.