I have a web service that returns an XML file. Now I want to create a new web service that uses part of the information contained in the output of the first web service. My question is: which is the better solution?
1. Call the first web service and use its result in the new web service.
2. Recreate part of the first web service's code and create a new web service that is independent of the first.
I know that with the first solution I can manage the code better, but it may be slower and generate more network traffic.
On the other hand, the second proposed solution doesn't reuse code, and it will be more difficult to maintain the project. I would like a complete answer with supporting arguments.
Thank you
Each has its own pros and cons. I'm trying to describe them below:
Using the service has the advantage of not engineering (dev and QA) the same functionality twice. You can call a remote service and just use the functionality it provides. That's why services exist, isn't it? Also, when the service evolves, you do not have to worry about anything like updating your local jar. However, the service runs as a separate component, so it needs to be monitored and operationally managed.
Using a jar/library in place of a service has the advantage of "locality". Code executed within the same process performs much better: you avoid creating a socket connection and dealing with the complexities of calling another service. However, you might have to abstract out the functionality as a library.
It altogether depends on "what" functionality you are looking at: how complex it is to implement, whether it will evolve frequently, and whether it would be useful to other sets of clients. If it's complex, evolving, and useful to a multitude of clients, the answer is "write a service".
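Whichever option you choose, it can help to hide the decision behind an interface so that a local library implementation and a remote call stay interchangeable. A minimal sketch (all names and the URL are made up; Java 11's HttpClient is assumed):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The caller sees only this interface, never the local/remote decision.
interface CustomerLookup {
    String findCustomerXml(String id) throws Exception;
}

// Option 2: the functionality runs in-process via shared code.
class LocalCustomerLookup implements CustomerLookup {
    public String findCustomerXml(String id) {
        return "<customer id=\"" + id + "\"/>"; // placeholder for real library code
    }
}

// Option 1: delegate to the existing web service over the network.
class RemoteCustomerLookup implements CustomerLookup {
    private final HttpClient client = HttpClient.newHttpClient();

    public String findCustomerXml(String id) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/customers/" + id))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}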
Experts advise avoiding "remote" solutions for as long as possible, as they bring complexity in testing, debugging, maintaining, etc. Just imagine: how would you handle the situation when the first WS goes down? How would you test all this? How would you handle the situation when the first WS responds with some invalid data? Lots of questions arise here.
It is really hard to give you the "right" answer, because it always depends on context and your needs. Sometimes there is just no choice but to implement a distributed solution.
For inspiration, I would strongly recommend reading the "Patterns of Enterprise Application Architecture" book by Martin Fowler.
Diplomatic question. It depends on the organization, and on the strategy, architecture, and design you have followed.
My suggestion is:
Go with Option 1:
That is, divide and rule, if it is a service-oriented architecture with many consumers of your service.
Go with Option 2:
If you are both the consumer and the provider of this service.
I'm working on a microservice project, and I have a question about best practices.
We are using Java Spring, and all of our models are packaged in a single JAR. Each microservice depends on this JAR to function. Is it okay for a microservice to depend on a JAR containing models outside of its scope like this, or is it better practice to split this JAR up?
A very good article by Bartosz Jedrzejewski here
To quote a relevant part from his article...
If the service code should be completely separate, but we need to consume possibly complicated responses in the clients, then clients should write their own libraries for consuming the service.
By using client-libraries for consuming the service the following benefits are achieved:
Service is fully decoupled from the clients and no services depend on one another - the library is separate and client-specific. It can even be technology-specific if we have a mix of technologies
Releasing a new version of the service is not coupled with clients - they may not even need to know whether backward compatibility is still there; it is the clients who maintain the library
The clients are now DRY - no needless code is copy-pasted
It is quicker to integrate with the service - this is achieved without losing any of the microservices benefits
This solution is not something entirely new - it was successfully implemented in Scott Logic projects, is recommended in "Building Microservices" by Sam Newman (highly recommended), and similar ideas can be seen in many successful microservices architectures.
There are some pitfalls as well; better read the entire article...
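To make the client-library idea concrete: such a library is often just a thin, client-owned wrapper around the HTTP call. A sketch only (the service, names, and URL are hypothetical; Java 11's HttpClient is assumed):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Maintained by the client team, not the service team: it exposes only the
// operations and fields this particular client actually needs.
public class OrderServiceClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;

    public OrderServiceClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public String fetchOrderStatus(String orderId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/orders/" + orderId + "/status"))
                .header("Accept", "application/json")
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // real code would map the JSON onto a small client-side DTO
    }
}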
Sharing domain models is an indicator of bad design. If services share a domain, they should not be split. With microservices, teams working on one service should be able to modify their domain objects at any time without impacting other services/teams.
There can be exceptions, though, e.g. if the model objects are non-specific enough to be reusable in any service. As an example, a domain of geometry could be contained in a geometry library. There can be other exceptions.
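As a sketch, such a safely shareable model is a generic, immutable value type with no service-specific behavior:

// Generic and immutable, with no ties to any one service's domain; this is
// the kind of model that can live in a shared library without coupling teams.
public final class Point {
    private final double x;
    private final double y;

    public Point(double x, double y) {
        this.x = x;
        this.y = y;
    }

    public double distanceTo(Point other) {
        double dx = x - other.x;
        double dy = y - other.y;
        return Math.sqrt(dx * dx + dy * dy);
    }
}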
I'm "playing" with Axon Framework with some small examples where the query and command services (and the logic behind them) are running as separated applications in several Docker containers.
Everything works fine so far, and I've started looking into event versioning. I haven't implemented it yet, but I like the idea of sharing the events as an API via JSON schema. However, I've got stuck on combining that idea with the potential need for event upcasters.
If I understand the approach correctly, every listening component has to upcast the events independently, so it might be a good idea to share the upcasters; there is no need for different implementations, right? But then the upcasters seem to become part of the API, or am I missing something?
How do you deal with that situation? Or, generally, what are the best practices for API definitions in such a scenario?
In a microservices environment with distinct repositories for the different services, I feel it is commonplace to have a dedicated module/package/repository for the API of a given microservice, or a dedicated module for the shared language within a Bounded Context.
Especially when following the notion of the Bounded Context, where every service within the context speaks the same language, this to me emphasizes the requirement to share the created upcasters as well.
So, shortly: yes, I would group the upcasters together with the API in question.
Schema languages typically also have solutions in place to support several versions of a message, for example. Thus, if you were to use a schema language as your core API, that would also include a (albeit different) form of upcaster.
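For concreteness, here is a minimal sketch of what a shared upcaster might look like, assuming Axon's SingleEventUpcaster, a Jackson-based serializer, and a made-up OrderCreatedEvent that gained a currency field in revision 2.0:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.axonframework.serialization.SimpleSerializedType;
import org.axonframework.serialization.upcasting.event.IntermediateEventRepresentation;
import org.axonframework.serialization.upcasting.event.SingleEventUpcaster;

// Rewrites revision 1.0 events on the fly as they are read from the store.
public class OrderCreatedEvent1to2Upcaster extends SingleEventUpcaster {

    private static final SimpleSerializedType OLD_TYPE =
            new SimpleSerializedType("com.example.OrderCreatedEvent", "1.0");

    @Override
    protected boolean canUpcast(IntermediateEventRepresentation ir) {
        return ir.getType().equals(OLD_TYPE);
    }

    @Override
    protected IntermediateEventRepresentation doUpcast(IntermediateEventRepresentation ir) {
        return ir.upcastPayload(
                new SimpleSerializedType("com.example.OrderCreatedEvent", "2.0"),
                JsonNode.class,
                payload -> ((ObjectNode) payload).put("currency", "EUR")); // default for old events
    }
}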
This is my 2 cents on the situation; hope this helps you out!
I have a multi-module application. To be more explicit, these are Maven modules where high-level modules depend on low-level modules.
Below are some of the modules:
user-management
common-services
utils
emails
For example: if the user-management module wants to use any services from the utils module, it can call them directly, as utils is already a declared dependency of user-management.
To convert my project to a true microservices architecture, I believe I need to turn each module into an independently deployable service, where each module is a WAR that provides its services over HTTP (mainly as RESTful web services). Is that correct, or does anything else need to be taken care of as well?
Probably each module now has to be secured and get an authentication layer as well?
If that's the crux of microservices, I really do not understand why someone would ask whether you have worked on microservices, since to me it's not a tool/platform/framework but simply the concept of dividing your monolithic application into a smaller set of deployable modules whose services are available over HTTP. Isn't it? Maybe it's just another buzzword.
Update:
Obviously there are advantages to going the microservices way, like independently testable modules, scalability (each service can be deployed on a separate machine), loose coupling, etc., but I see that I also need to handle two complex concerns:
Authentication: for each module I need to ensure it authenticates the request, which is not the case right now.
Transaction: I cannot maintain transaction atomicity across different services, which I can do very easily at present.
What you have understood is correct, and you have found exactly the area where microservices get complex compared to monoliths (distributed transactions), but let me clear up some points about microservices.
Microservice doesn't mean independent services exposed over HTTP: a microservice can communicate with other services either synchronously or asynchronously. REST is one solution, applicable to synchronous communication, but you can communicate asynchronously too, e.g. message-driven with Kafka or HornetQ. In synchronous communication, an underlying service can also be called over the Thrift protocol.
Microservices follow the SRP: the beauty of microservices is that each service concentrates on only one business-domain use case, so it handles only one domain object's functionality. But the utils module holds common methods, so every microservice would depend on it, and even a small change in utils would force a rebuild of all the other microservices; that violates microservice principles. So dissolve the utils module and make its code local to each service.
Handling authentication: to be specific, a microservice can be one of three types:
a. Core service: just performs a domain operation (like account creation/update/deletion).
b. Aggregate service: calls one or more core services, gathers the results, and performs some operation on them.
c. Edge service: exposed to a client (like mobile/browser etc.). We sometimes call it a gateway service; the crux of this service is to take a user request and, based on the URL, forward it to the actual microservice. So it is the ideal place to put authentication if it is common to all microservices; see the sketch after this list.
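To illustrate that last point, the edge service could authenticate with something as simple as a servlet filter; this is only a sketch, and the token check is a placeholder, not a real scheme:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Authenticate once at the edge so the core services stay focused on domain logic.
public class EdgeAuthFilter implements Filter {

    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String token = ((HttpServletRequest) req).getHeader("Authorization");
        if (token == null || !token.startsWith("Bearer ")) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return; // rejected before the request reaches any core service
        }
        chain.doFilter(req, res); // authenticated: forward to the routed microservice
    }

    public void destroy() { }
}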
Handling distributed transactions: yes, this is the hardest part of microservices, but you can achieve it in an event-driven/message-driven way. Every action emits an event; a subscriber of that event receives it, performs some operation, and generates another event. In case of failure, it generates a reverse event which compensates for the first one.
For example, say in microservice A we debited 100 rupees, so we create an AccountDebited event. Now in microservice B we try to credit the account. If it succeeds, we create AccountCredited, which is received by A and produces another event, AmountTransfered. In case of failure, we generate an AccountCreditedFailed event, which is received by A and triggers a reverse event, AccountSpecialCredit, which restores atomicity.
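A skeletal version of service A's side of that flow, in plain Java (the event types and publish() are placeholders for whatever broker you use):

// Service A reacts to B's outcome and either completes or compensates.
public class ServiceAEventHandler {

    // Happy path: B credited the account, so A can complete the transfer.
    public void on(AccountCredited event) {
        publish(new AmountTransfered(event.txId()));
    }

    // Failure path: B could not credit, so A compensates its earlier debit.
    public void on(AccountCreditedFailed event) {
        publish(new AccountSpecialCredit(event.txId(), event.amount()));
    }

    private void publish(Object event) {
        // hand the event to your broker (Kafka, HornetQ, ...) here
    }

    record AccountCredited(String txId) { }
    record AccountCreditedFailed(String txId, long amount) { }
    record AmountTransfered(String txId) { }
    record AccountSpecialCredit(String txId, long amount) { }
}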
What you have is mostly correct, but you appear to be considering some things as requirements when they are not, and you are forgetting one very important characteristic that microservices are supposed to have.
The main characteristics of microservices are statelessness and independence. Whether they are "WAR" modules and whether they provide their services over "HTTP" (and certainly whether they are RESTful) are secondary concerns and you may hear arguments to the contrary.
Statelessness means that no individual microservice may contain state. (Except for caches.) Microservices are supposed to always delegate the task of persisting data to some database module so they don't keep any state in memory. The idea is that this way, if one microservice fails, (or if an entire machine containing many microservices fails,) you can just route incoming requests to another instance (or another machine) and everything will continue working.
(Of course, if you want my opinion, it is just a cowardly acknowledgement of the fact that we don't know how to write reliable highly concurrent software, but the database guys are smart and they seem to have figured it all out, so we will just delegate the problem of maintaining our state to the software that they have written.)
In my opinion, microservice architecture marries well with DDD.
I think you should consider your multi-module project as a "monolith" and base your microservice separation on domain concepts, not on Maven projects.
For example: do not create a microservice called "utils", but rather one called "accounts" or "user-management" or whatever your domains are. I think that without domain-driven design it kind of loses its usefulness.
Afterwards, it is really easy to work on different aspects of the domain, knowing each is separated from the rest. You should also check out hexagonal architecture by Alistair Cockburn.
How you split your application depends on the type of modules you have. If a module contains business logic, then it makes sense to create a new service and communicate via HTTP or messaging. On the other hand, if a module has no business logic, but just a set of helper functions, it might be better to extract it into a separate public/private Maven package and use it as a dependency.
Yes, microservice is a buzzword that only recently became popular, but the concept has been around for a while. And it comes with more than that: while it gives the benefits of scaling and independent service deployments, it carries the price of the complexity of managing and orchestrating a large number of services.
For example, in a monolithic application, when you call a function from another module you know for sure that it is always available. With microservices, a service might go down because of a disruption or a deployment, in which case you have to think about handling these situations gracefully (for example, by applying the circuit breaker pattern, sketched below).
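To make the circuit breaker idea concrete, here is a deliberately naive sketch; production code would normally use a library such as Hystrix or Resilience4j:

import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Callable;

// After N consecutive failures, calls are short-circuited to a fallback for a
// cool-down period instead of piling up timeouts against a dead service.
public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private final Duration coolDown;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public SimpleCircuitBreaker(int failureThreshold, Duration coolDown) {
        this.failureThreshold = failureThreshold;
        this.coolDown = coolDown;
    }

    public <T> T call(Callable<T> remoteCall, T fallback) {
        if (openedAt != null && Instant.now().isBefore(openedAt.plus(coolDown))) {
            return fallback; // circuit is open: fail fast
        }
        try {
            T result = remoteCall.call();
            consecutiveFailures = 0;
            openedAt = null; // healthy again: close the circuit
            return result;
        } catch (Exception e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now(); // trip the circuit
            }
            return fallback;
        }
    }
}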
There are many other things to consider when doing microservices, and there is a lot of literature available on the topic. I read Microservices: From Design to Deployment from NGINX. It's short and sweet.
So when people ask you "Have you worked with microservices before?", I guess they want to know whether you are familiar with, and have some experience of, all the pitfalls of this concept.
In one way you are correct: from the outside, microservices look like this. But when you go inside, as you rightly mention, there are two complex concerns:
Authentication: for each module I need to ensure it authenticates the request, which is not the case right now.
Transaction: I cannot maintain transaction atomicity across different services, which I can do very easily at present.
Apart from these, there are various things one needs to understand, otherwise building and deploying microservices will be very tough:
I am mentioning some of them here; the complete list you can see in my post:
What exactly is a microservice? Some say it should not exceed 1,000 lines of code. Some say it should fit one bounded context (if you don't know what a bounded context is, don't bother with it right now; keep reading).
Even before deciding on what the "micro"service will be, what exactly is a service?
Microservices do not allow updating multiple entities at once; how will I maintain consistency between entities?
Should I have a single database cluster for all my microservices?
What is this eventual consistency thing everyone is talking about?
How will I collate data which is composed of multiple entities residing in different services?
What would happen if one service goes down? How would the dependent services behave?
Should I make a sync invocation between microservices to always get consistent data?
How will I manage version upgrades to a few or all microservices? Is it always possible to do it without downtime?
And the last unavoidable question - how do I test the entire application as an integrated application?
How do I do circuit breaking? (One service going down should not impact the others.)
CI/CD pipelines, and so on.
What we understood while starting our journey with microservices is:
The design approach for breaking a business problem into microservices is Domain-Driven Design.
A platform that supports microservices development helps; we used Lagom, which addresses some of the above concerns out of the box.
So, all in all, when moving towards a multi-process architecture communicating via REST or other methods, new considerations need to be taken care of that are not directly visible in a monolith, and people want to know whether you are aware of those considerations or not.
Quick question on what is the best practice for integrating with external systems.
We have a system that deals with Companies, which we represent by our own objects. We also use an external system, via SOAP, that returns an Organization object. They are very similar, but not the same (ours is a subset of theirs).
My question is: should we wrap the SOAP service in a facade so we return only Company objects to our application, should we return another type of object (e.g. OrgCompany), or should we just use the Organization object in our code?
The SOAP service and Organization object are defined by an external company (a bank), who we have no control over.
Any advice and justification is much appreciated.
My two cents: introducing external objects into an application is always a problem, especially during maintenance. A small service change might lead to a big code change in the application.
It's always good to have a layer of abstraction between the external service and the application. I would suggest creating a service layer which translates external service objects into your application's domain objects and using those within the application. A clear separation/decoupling helps a lot in maintenance.
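A sketch of what that translation layer could look like; every name here is invented, and the generated SOAP client is reduced to an interface:

// The application only ever sees Company; Organization stays at the boundary.
public class CompanyService {

    private final OrganizationPort port; // stands in for the generated SOAP client

    public CompanyService(OrganizationPort port) {
        this.port = port;
    }

    // The only place in the codebase that knows about the bank's Organization.
    public Company findCompany(String registrationNumber) {
        Organization external = port.getOrganization(registrationNumber);
        Company company = new Company();
        company.setName(external.getLegalName());
        company.setRegistrationNumber(registrationNumber);
        return company;
    }

    interface OrganizationPort { Organization getOrganization(String regNo); }
    interface Organization { String getLegalName(); }

    public static class Company {
        private String name;
        private String registrationNumber;
        public void setName(String name) { this.name = name; }
        public void setRegistrationNumber(String value) { this.registrationNumber = value; }
    }
}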
Your decision here is how you want to manage external code dependencies in your application. Some factors that should play into your decision:
1) How often will the API change, and what's the expected nature of the changes?
2) What's the utility of your application outside its dependencies? If you removed the SOAP service dependency, would your app still serve a purpose?
A defensive approach is to build a facade or adapter around the SOAP service, so that your code only depends on your object model. This gives you a lot of control and relatively loose coupling between your code/logic and the service. The price you pay for this control is that when the SOAP contract changes, you must usually also change a layer of your code.
A different approach is to use the objects you're getting from the WSDL directly. This is beneficial when it doesn't make sense to introduce a level of indirection between the client code and your application, i.e. your application is just a feeder into a different system and the whole point of the app is to stuff the Organization object into a JMS pipeline or something similar. If the SOAP API contract never changes and you don't expect the output of your app to change much, then introducing an extra layer of indirection will just hinder the readability of your codebase long term.
In my experience, most J2EE developers tend to take the former approach, both because of the nature of their applications and because they want to separate their application logic from the details of the data source.
hope this helps.
I can't think of any situation where it's good to use objects that another company controls. The first thing you should do is bridge those objects into your own. Also, by having your own objects, you can extend their functionality beyond what the third party provides (for example, if in the future you need to talk to more than one Company object provider).
Look at the Adapter pattern.
I'd support Sridhar's suggestion; I'd just like to add that for translating external service objects to your application domain you can use Dozer:
http://dozer.sourceforge.net/documentation/mappings.html
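Assuming the Organization and Company types from the question are plain JavaBeans, the Dozer call itself is a one-liner (a sketch; fields whose names don't match go in a mapping XML like the one linked above):

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

// Dozer copies properties whose names match automatically; everything else
// is declared in the mapping file.
public class CompanyTranslator {

    private final Mapper mapper = new DozerBeanMapper();

    public Company toCompany(Organization organization) {
        return mapper.map(organization, Company.class);
    }
}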
I typically always adapt externally defined domain objects to an internal representation.
I also create a comprehensive suite of tests against the external domain objects that will quickly highlight any problems if the external vendor produces a new release.
The Enterprise Service Bus (ESB) architecture might be useful here.
Its primary use is in Enterprise Application Integration of
heterogeneous and complex landscapes.
(from Wikipedia)
I would check out the open-source Mule ESB if you are looking for an open-source solution.
I'm going to write my first Java-based web app, and I'm sort of lost as to how to begin.
Firstly, I would like a web app and a desktop app that do pretty much the same thing, without the hackish idea of embedding a web browser into the desktop app, because that doesn't let me easily make changes to the desktop app without affecting the web app and vice versa.
Now, here are my questions.
Right now, I have a bunch of POJOs, and they communicate with a single class that currently uses a flat file as a "database". In production, I would use a legitimate database and just change that single class. Is this a good idea? Will I be able to go from POJOs to a web app?
Should I use a framework? I would like to have this app written pretty soon; seeing that all the business logic is there, I just need to wrap it so it's usable. I don't want to spend an extreme amount of time learning, say, Spring (which AFAIK is huge), but I don't want to keep reinventing the wheel throughout my app either. I can always just use JSP and scriptlets...
If you said yes to the above, what framework(s) do you suggest? Please note that I would like a framework that I can start using in maybe 3-4 weeks of learning.
Will I have to start from scratch with the POJOs that I have written? They're well over 30k LOC, so if that's the case, I'll be hesitant.
You will need:
a web framework. Since you have a Swing background, JSF 2 will be your best bet (everything will be painful, of course, but JSF will get you up and going quickly and will help you avoid the most tragic mistakes). Also, wrapping business POJOs into web GUIs is the main use case for JSF and its biggest focus.
a "glue framework". One thing that is very different in web applications, as opposed to desktop ones, is that you cannot create view components by yourself - they must be created when the browser requests a page. So you have to find a way to create the view objects and deliver to them all the references to the POJOs that represent logic, some of which may have very different lifecycles (this is not a problem on the desktop, but on the web you have to distinguish between POJOs that live along with the whole application, with a single user session, with a single request, and so on).
The "glue framework" could also provide the additional benefit of managing transactions. You have three choices:
Spring. It's not half as complex as you think; you only need to learn some basic stuff.
EJB. You would need a real application server, like GlassFish or JBoss.
Bare JSF, which has good support for dependency injection; the only drawback is the lack of automatic transaction management.
If I were in your position, I would go with bare JSF 2.0 - this way you only need to learn one new technology. At first, try to avoid libraries like PrimeFaces - they usually work worse than advertised.
Edit - an addendum:
or: what is "dependency injection"? (abridged and simplified)
When a request comes to a web application, a new task starts in a new thread (well, the thread is probably recycled, but that's not important).
The application has already been running for some time, and most of the objects you are going to need are already built and should not be created again: you have your database connection pool, maybe some parts of the business layer; it is also possible that the request is just one of many requests made during one session, and you already have a bunch of POJOs that the user is working on. The question is: how do you get references to those objects?
You could arrange your application so that resources are available through some static fields. They may be singletons themselves, or they could be acquired through a singleton locator. This tends to work, but it is out of fashion (hard to test, hard to refactor, hard to reuse, and lifecycles are hard-coded into the application). The real code could look like this:
public void doSomething() {
    // obtained through a static singleton locator (hard-coded lifecycle)
    CustomerService cs = AppManager.getInstance().getCustomerService();
    System.out.println(cs.getVersion());
}
If you need clustering and session management, you could build a special kind of broker that knows about, and can hand out, all kinds of needed objects. Each type of object would be registered as a factory under a different name. This also works and is implemented in Java as JNDI. The actual client code would look like this:
public void doSomething() throws Exception {
    // look the object up by its registered JNDI name
    CustomerService cs = (CustomerService) new InitialContext().lookup("some_fancy_looking_name_in_reality_just_string");
    System.out.println(cs.getVersion());
}
The last way is the nicest. Since your initial object is not created by you, but by the server just after the HTTP request arrives (details depend on the technology you choose, but your entry point might be a JSF managed bean or some kind of action controller), you can just declare which references you need and let the server take care of finding them for you. This is called "dependency injection". Your code acts as if everything had been taken care of before it was ever launched; Spring, an EJB container, CDI, or JSF takes care of the rest. The code would look like this (just an example):
@EJB
CustomerService cs;

public void doSomething() {
    System.out.println(cs.getVersion()); // cs was injected by the container
}
Note:
when you use DI, it really uses one of the two former methods under the hood. The good thing is: you do not have to know which one, and in some cases you can even switch them without altering your code;
the exact means of registering components for injection differs from framework to framework. It might be a piece of Java code (as in Guice), an XML file (classic Spring), or an annotation (classic EJB 3). Most of the mentioned technologies support several kinds of configuration; see the sketch below for one of them.
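For example, modern Spring also lets you register components in plain Java configuration; a sketch, reusing the made-up CustomerService from the examples above:

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class AppConfig {

    @Bean // the container manages and injects this singleton
    CustomerService customerService() {
        return new CustomerService();
    }
}

class CustomerService {
    String getVersion() { return "1.0"; }
}

class Demo {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(AppConfig.class)) {
            CustomerService cs = ctx.getBean(CustomerService.class);
            System.out.println(cs.getVersion());
        }
    }
}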
You should definitely use a framework, as otherwise sooner or later you'll end up writing your own.
If you use Maven, then simply typing mvn archetype:generate will give you a huge list of frameworks to choose from, and it'll set up all of the scaffolding for you, so you can just play with a few frameworks until you find the one that works for you.
Spring has good documentation and is surprisingly easy to get started with. Don't be put off by the pages of documentation! You could use JPA to store stuff in the database; you should (in theory) just be able to annotate your existing POJOs to denote primary keys and so on, and it should just work. You can also use JSPs within Spring if that makes life easier.
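For instance, turning an existing POJO into a JPA entity usually means adding a handful of annotations rather than rewriting it; a sketch with a made-up Customer class (javax.persistence assumed):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id; // primary key, generated by the database

    private String name; // plain fields map to columns by default

    protected Customer() { } // JPA requires a no-arg constructor

    public Customer(String name) {
        this.name = name;
    }
}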
... I have a bunch of POJOs and they communicate with a single class that, right now, uses a flat file as a "database"; of course, in production, I would use a legitimate database and just change that single class. Is this a good idea? Will I be able to go from POJOs to a web app?
Qualified yes. If the POJOs are sane, you should not have many problems. Many people use Hibernate.
Should I use a framework? I would like to have this app written pretty soon; seeing that all the business logic is there, I just need to wrap it so it's usable. I don't want to spend an extreme amount of time learning, say, Spring (which AFAIK is huge), but I don't want to keep reinventing the wheel throughout my app either. I can always just use JSP and scriptlets...
Probably. Spring is huge, but things like Grails or Roo can help.
If you want a responsive web app, you will need some kind of rich client (AJAX), which may require a lot of your code to run on the client. That means writing a lot of JavaScript or using GWT, and it will be a pain. It probably will not be so easy to just "wrap it": if you have written a Swing app, then basically that code will need to run on the client.
If you said yes to the above, what framework(s) do you suggest? Please note that I would like a framework that I can start using in maybe 3-4 weeks of learning.
I like Groovy and Grails - Grails uses Spring MVC, Spring, and Hibernate. But there are Roo, Play, and others.
Will I have to start from scratch with the POJOs that I have written? They're well over 30k LOC, so if that's the case, I'll be hesitant.
The code that will run on the server can probably be left mostly alone. The code that has to run on the client needs to be rewritten in JavaScript, or maybe you can get some reuse out of it by using GWT.
The Play Framework is doing great things; I would recommend it highly. Having worked with EJB apps and Tomcat/Servlet/Spring apps, it's a breath of fresh air. After framework installation you get a working app in a few seconds. It reminds me of Ruby on Rails or Node.js, with the type-safety of Java.
Much quicker turnaround on getting started, faster development cycles, and a clearer configuration model than previous Java web app frameworks.
http://www.playframework.com/