When building RESTful services, I always come up against the issue of how to develop a client library that can be distributed to users of the system.
To take a simple example, say there is an entity called Person, and you want to support basic CRUD functionality through your RESTful service.
To save a person, the client needs to call the POST method and pass the appropriate data structure, say in JSON.
To find people by birthday, your service will reply with a response containing a list of Person objects.
To delete a person, your service will respond with a success or failure message.
From the above examples, there are already two objects that may be shared with the client: the Person object and the response object. I have tried a few ways of accomplishing this:
Include the server-side Person object in the client library. The downsides to this approach are:
The client code becomes tightly coupled with your server code. Any change on the server side requires the client to update during the same release.
The Person object may contain dependencies or annotations used for persistence or serialization. The client cares nothing about these libraries but is forced to include them.
Include a subclass of Map which is not directly tied to the Person object but contains some helper methods to set the required fields.
This gives looser coupling, but it could result in silent errors when the server's data structure changes.
Use a descriptor like Apache Thrift, WADL, or JSON Schema to generate client objects at compile time. This solves the issue of object dependencies but still creates a hard dependency; it is almost like creating a WSDL for SOAP. However, this approach is not widely used and it is sometimes difficult to find examples.
What's the best way to publish a client JAR for your application, so that:
It is easy for clients to use
It does not create tight coupling and has some tolerance for server-side changes
If your answer is better documentation of the API, what is a good tool to generate such documents from Java annotations and POJOs?
This is a common problem, regardless of the protocol used for communication.
In some of the REST APIs we've been working with recently (JAX-RS based), we create DTO objects. These are just dumb POJOs (with some additional annotations so JAXB can do the marshalling/unmarshalling for us automatically). We build these as a submodule (in Maven) and provide them as a JAR, so that any other project using our API can use the DTOs if it wishes. Obviously, if you want to provide your own client library, it can make use of these DTOs. Having them provided as a separate JAR (which any app can depend on) means clients aren't pulling in crazy dependencies that they don't need (your whole server-side code).
This keeps things fairly well decoupled.
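For illustration, such a DTO might look like this (a minimal sketch; the class and field names are invented, not taken from any real project):

    // A "dumb" DTO from the shared JAR: only JAXB annotations, no persistence
    // or other server-side dependencies leak out to the client.
    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlRootElement;

    @XmlRootElement(name = "person")
    @XmlAccessorType(XmlAccessType.FIELD)
    public class PersonDto {
        private Long id;
        private String firstName;
        private String lastName;
        private String birthday; // kept as an ISO-8601 string to stay dependency-free

        // getters and setters omitted for brevity
    }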
On the other hand, you really don't need to provide a client. It's REST after all. Provided your REST API is well constructed and follows HATEOAS principles, your API should be easily crawlable/browsable, i.e. you shouldn't need any other descriptive scheme. If you need WADLs or other similar constructs, your API probably isn't very RESTful.
Related
I have a microservice which is responsible for serving cached data over REST endpoints. I have two big projects which need this microservice.
My confusion is whether I should use RestTemplate to call the REST endpoints or use the client JAR for the microservice in my big projects.
If I use RestTemplate I will need to add the POJOs for the request and response, etc. My senior developer is insisting on using RestTemplate, but I don't quite like that approach.
Can someone advise?
To make client REST calls easier, especially when more than one project is involved, it is good practice to design a client wrapper that calls your desired endpoint, for example:
SystemApiClient client = new SystemApiClient();
List<Article> articles = client.getArticles("popular");
By designing such a client wrapper, it's easy to make a JAR file out of it and share it across your microservices. For ease of updating, you can also upload each new version to a local Nexus repository and easily pick up the update in your projects when a new one is available.
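A minimal sketch of how such a wrapper might be implemented internally (assuming Spring's RestTemplate is on the classpath; the base URL and endpoint are invented):

    import java.util.Arrays;
    import java.util.List;
    import org.springframework.web.client.RestTemplate;

    public class SystemApiClient {

        private final RestTemplate restTemplate = new RestTemplate();
        private final String baseUrl = "http://example.com/api"; // assumed base URL

        public List<Article> getArticles(String category) {
            // the wrapper hides URL construction and deserialization from callers
            Article[] articles = restTemplate.getForObject(
                    baseUrl + "/articles?category={category}", Article[].class, category);
            return Arrays.asList(articles);
        }
    }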
It depends.
If you are using Spring throughout your project, then you should go for RestTemplate. Just for calling an endpoint there is no need to pull in another JAR. Senior people / architects often suggest sticking to a consistent set of libraries to maintain application standards and to avoid a whole bunch of libraries offering similar functionality.
If your application is developed in another framework or language, you can use any library that supports HTTP requests. It doesn't matter what client-side code you use to access a REST endpoint.
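For example, a direct RestTemplate call from one of the big projects could be as small as this (the endpoint and response POJO are hypothetical):

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.client.RestTemplate;

    RestTemplate restTemplate = new RestTemplate();
    ResponseEntity<CachedDataResponse> response = restTemplate.getForEntity(
            "http://cache-service/api/data/{key}", CachedDataResponse.class, "someKey");
    CachedDataResponse body = response.getBody(); // the request/response POJOs still live in your project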
In service-based architectures such as microservices, it's always preferable to have services called through REST endpoints. But again, the big question is what you should use: JAR or WAR. That depends heavily on the type of project and its architecture. In this case it is microservices, and Uncle Bob describes it beautifully here: http://blog.cleancoder.com/uncle-bob/2014/09/19/MicroServicesAndJars.html
I'm currently stuck between two options:
1) Store the object's information in the file.xml that is returned to my application at initialization, display it when the GUI is loaded, and then perform asynchronous calls to my backend whenever the object is edited via the GUI (saving to the file.xml in the process).
-or-
2) Make the whole thing asynchronous, so that when my custom object is brought up for editing by the end user the application queries the backend for the object, returns the XML to be displayed in the GUI, and then makes another asynchronous call if something was changed.
Either way I see many cons to both of these approaches. I really only need one representation of the object (on the backend) and would rather not manage the front-end version of the object, the conversion of my object to an XML representation, and then breaking that out into another object on the Flex front end to be used in data grids.
Is there a better way to do this that allows me to only manage my backend Java object and create the interface to it on the front end, without worrying about the asynchronous nature of it and multiple representations of the same object?
You should look at Granite Data Services: http://www.graniteds.org. If you are using Hibernate, it should be your first choice, as BlazeDS is not as advanced. Granite implements a great facade in Flex to access backend Java objects, with custom serialization in AMF, support for lazy loading, and an entity cache on the Flex side with bean validation. Overall, it is a top-down approach with generation of AS3 classes from your Java classes.
If you need real-time features you can push data changes to the Flex client (Gravity module) and resolve conflicts on the front side or implement conflict resolvers on the backend.
Still, you will eventually have to deal with advanced conflicts (with some "deprecated" Flex objects to work with on the server: you don't want to deal with that). A basic safeguard, for instance, is to add a version field and automatically reject manipulation of stale objects on the backend (there are many ways to do that); you will then have to implement a custom way for the Flex client to update itself to the current state, which implies that some work could be dropped (data lost) on the Flex client.
If not many people work on the same objects in your Flex application, this will not happen often, much like in a distributed VCS.
Depending on your real-time needs (how frequently does your Java object change? This is the most important question), you can choose to "cache" changes on the Flex side and then update the whole thing at once (but you'll get troublesome conflicts if changes have happened in the meantime), or you can check the server side every time (Granite enables this) with fewer conflicts (and when one happens, it is simpler), but you'll probably write more code to synchronize objects and generate more network traffic.
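As a concrete illustration of the version-field idea mentioned above, a backend entity could rely on JPA optimistic locking (a minimal sketch, assuming Hibernate/JPA; the entity name is invented):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Document {

        @Id
        private Long id;

        @Version
        private long version; // a stale update triggers an OptimisticLockException,
                              // which the Flex client must then resolve

        // other fields, getters and setters omitted
    }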
Quick question on what is the best practice for integrating with external systems.
We have a system that deals with Companies, which we represent with our own objects. We also use an external system via SOAP that returns an Organization object. They are very similar but not the same (ours is a subset of theirs).
My question is: should we wrap the SOAP service in a facade so that we return only Company objects to our application, should we return another type of object (e.g. OrgCompany), or should we just use the Organization object directly in our code?
The SOAP service and Organization object are defined by an external company (a bank), who we have no control over.
Any advice and justification is much appreciated.
My two cents: introducing external objects into an application is always a problem, especially during maintenance. A small service change might lead to a big code change in the application.
It's always good to have a layer of abstraction between the external service and the application. I would suggest creating a service layer which translates the external service objects into your application domain objects and using those within the application. A clear separation / decoupling helps a lot in maintenance.
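A sketch of such a translation layer (the field names are assumptions, since the real Organization contract belongs to the bank):

    public class OrganizationTranslator {

        // Converts the bank's SOAP Organization into our own Company domain object,
        // mapping only the subset of fields the application actually needs.
        public Company toCompany(Organization organization) {
            Company company = new Company();
            company.setName(organization.getLegalName());
            company.setRegistrationNumber(organization.getRegistrationId());
            return company;
        }
    }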
Your decision here is how you want to manage external code dependencies in your application. Some factors that should play into your decision:
1) How often will the API change, and what's the expected nature of the changes?
2) What's the utility of your application outside its dependencies? If you removed the SOAP service dependency, would your app still serve a purpose?
A defensive approach is to build a facade or adapter around the SOAP service, so that your code only depends on your own object model. This gives you a lot of control and relatively loose coupling between your code/logic and the service. The price that you pay for this control is that when the SOAP contract changes, you usually must also change a layer of your code.
A different approach is to use the objects you're getting from the WSDL directly. This is beneficial when it doesn't make sense to introduce a level of indirection between the client code and the service, i.e. your application is just a feeder into a different system and the whole point of the app is to stuff the Organization object into a JMS pipeline or something similar. If the SOAP API contract never changes and you don't expect the output of your app to change much, then introducing an extra layer of indirection will just hinder the readability of your codebase long term.
Most J2EE developers tend to take the former approach in my experience, both because of the nature of their applications and because they want to separate their application logic from the details of the data source.
Hope this helps.
I can't think of any situation where it's good to use objects that another company controls. The first thing you should do is bridge those objects into your own. Also, by having your own objects, you can expand their functionality beyond what is provided by the third party you connect to (for example, if in the future you need to talk to more than one Company object provider).
Look at the Adapter pattern.
I'd support Sridhar's suggestion; I'd just like to add that for translating external service objects to your application domain you can use Dozer:
http://dozer.sourceforge.net/documentation/mappings.html
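A hedged example of what that could look like with the classic Dozer API (the two mapped classes come from the question above; everything else is an assumption):

    import org.dozer.DozerBeanMapper;
    import org.dozer.Mapper;

    Mapper mapper = new DozerBeanMapper();
    // copies matching fields from the external Organization into our Company;
    // custom field mappings go into the XML mapping file linked above
    Company company = mapper.map(organization, Company.class);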
I typically adapt externally defined domain objects to an internal representation.
I also create a comprehensive suite of tests against the external domain object, that will highlight any problems quickly if the external vendor produces a new release.
The Enterprise Service Bus architecture might be useful here. Its primary use is in Enterprise Application Integration of heterogeneous and complex landscapes. (from Wikipedia)
I would check out Mule if you are looking for an open source solution.
At work, we currently have a WSDL interface as well as a semi-RESTful interface that we're looking to expand upon and take it to the next level.
The main application runs using Servlets + JSPs as well as Spring.
The idea is that the REST and WSDL are interfaces for an API that will be designed. These (and potentially other things in the future) are simply channels through which clients will be able to integrate with the API.
I'm wondering whether there are any suggestions or recommendations on frameworks / methodologies, etc. for implementing that underlying API, or does it make sense simply to create some Spring beans which are called by either the WSDL or the REST layer?
Hope that makes sense.
Have a look at Eunicate, it is great. You are using Spring; Spring has had SOAP support for a while, and Spring 3 has support for REST (creating and consuming).
Your approach makes sense. Probably the most important advice is to make the external API layer as thin as possible. You can use Axis, Apache CXF, Jersey, etc. to handle the implementation of the REST or SOAP protocols, but the implementation of those services should just load the passed-in data into a common request object and pass that into a separate service that handles the request and returns a response object, which the external API layer will marshal into the correct format for you.
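As a rough sketch of that "thin layer" idea, a JAX-RS resource could do nothing but translate and delegate (all class and method names here are invented):

    import javax.ws.rs.Consumes;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/orders")
    public class OrderResource {

        private final OrderService orderService = new OrderService(); // shared internal service

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        @Produces(MediaType.APPLICATION_JSON)
        public OrderResponse create(OrderRequest request) {
            // no business logic here: the resource only passes the common request
            // object to the service and lets JAX-RS marshal the response
            return orderService.placeOrder(request);
        }
    }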
This approach works especially well when you have a competitor providing similar services and you want to make it easy for their customers to switch. You just build a new external API that mirrors the competitor's and simply translates their format into your internal API model; provided your services are functionally equivalent, you're done.
This is a really late response, but I have a different view on this topic. The traditional way, as we know it, is to unmarshal XML to Java and marshal Java to XML. However, if the WSDL changes, that is effectively a structural change in the code, which again requires a deployment.
Instead, if we list the fields mentioned in the WSDL in a persistent store, load the mappings into memory, and prepare our structures based on these mappings, we would have far fewer changes to make. Thus, IMO, instead of using existing libraries, a configurable approach to unmarshalling and marshalling should be taken.
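One possible reading of that idea, sketched very roughly (every name here is invented; this is not an existing library, just an interpretation of the configurable approach described above):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Properties;

    public class ConfigurableMarshaller {

        // Maps internal field names to the field names the WSDL expects, based on an
        // external mapping file, so a contract change means editing configuration
        // rather than recompiling marshalling code.
        public Map<String, Object> toExternalFields(Map<String, Object> internalRecord) throws IOException {
            Properties mappings = new Properties();
            try (FileInputStream in = new FileInputStream("wsdl-field-mappings.properties")) {
                mappings.load(in); // e.g. companyName=LegalEntityName
            }
            Map<String, Object> externalRecord = new LinkedHashMap<>();
            for (String internalField : mappings.stringPropertyNames()) {
                externalRecord.put(mappings.getProperty(internalField), internalRecord.get(internalField));
            }
            return externalRecord;
        }
    }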
I'm developing an application that makes heavy use of web services. I will be developing both the client and server ends of this application. I'd like to use JAX WS (which I am new to), because it seems to be the future for web services for Java, but I have a number of concerns related to the artifacts. None of these concerns is a deal-breaker, but collectively, JAX WS seems to create a lot of inconvenience. I'm new to JAX WS, so perhaps there are things I am unaware of that would alleviate my concerns.
Here are my concerns:
I anticipate having a fairly large number of POJOs that are passed between client and server (for lack of a better term, I'll call these transport objects). I would like to include documentation and business logic in these objects (for starters, equals, hashCode, toString). If I have business logic in these classes, then I cannot use wsimport to create the annotations for them, and I have to manage those by hand. That seems cumbersome and error-prone.
I have a choice of having the build system create artifacts, or having developers create artifacts and check them into source control. If artifacts are produced by the build system, then whenever a member of the team updates an API, everyone must generate artifacts in their own development environments. If artifacts are produced by developers and checked into source control, any time a member of the team renames or deletes an API, he must remember to delete wrapper artifacts. Either approach seems to be cumbersome. What's the best practice here?
wsimport creates all the artifacts in the same package. I will be creating multiple services, and I will have some transport objects that are shared, and therefore I need to wsimport all my services into the same package. If two services have an API with the same name, the wrapper artifacts will collide.
I anticipate having at least a hundred APIs in my services. This means at least 200 wrapper classes. That seems like a huge amount of clutter: lots and lots of classes that are of no interest for development. To make matters worse, these wrapper classes will reside in the same package as the transport objects, which will be some of the most heavily used classes in my system. The signal-to-noise ratio is very low for the most important package in my system.
Any pointers anyone can give me to ease development of my application would be greatly appreciated.
If you have control over both the client and the server, you don't really have to generate the client with wsimport. I currently do it as follows: one project defines the API for the web service; the API consists of the service interface and all the "transfer object" classes. Another project implements the service. You can now distribute the API project to the client, who can use the service and leverage all your additional business methods.
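For illustration, the shared API project might contain little more than something like this (the names are assumptions):

    import javax.jws.WebService;

    @WebService
    public interface ServiceInterface {
        // the transfer objects (e.g. Person) live in the same shared API project
        Person findPerson(long id);
    }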
Assuming ServiceInterface is your service interface, a client might look like this:
// Create a dynamic JAX-WS client directly from the WSDL; no generated artifacts required
Service s = Service.create(
        new URL("http://example.com/your_service?wsdl"),
        new QName("http://example.com/your_namespace", "YourServiceName"));
// Obtain a proxy that implements the shared ServiceInterface
ServiceInterface yourService = s.getPort(
        new QName("http://example.com/your_namespace", "YourPortName"),
        ServiceInterface.class);
And just like that you have a service client. That way you can use all your methods (1), you have full control over your packages (3) and you don't have any wrapper classes lying around as they are all generated at runtime (4). I think (2) is solved by this as well.
Your question is quite broad, so if I fail to address a point sufficiently, leave a comment and I'll try to go into more detail.