Let's assume I develop two services called ProductAPI and OrderAPI. Both of them use a Common Domain Model (a common Entity hierarchy).
Both services are ultimately exposed as RPC (SOAP or REST).
The OrderAPI internally invokes ProductAPI.
In the case of REST:
We can develop ProductAPI using JAX-RS and implement a "ProductAPI REST Client" to be used within OrderAPI to access ProductAPI.
This client can use the same class hierarchy to deserialize the JSON into the same classes used in ProductAPI.
So, there is no intermediate format conversion.
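For illustration, a minimal sketch of such a client using the JAX-RS 2.0 Client API, assuming the shared Product domain class and a JSON provider (e.g. Jackson) on the classpath; the URI layout is made up:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class ProductRestClient {

    private final Client client = ClientBuilder.newClient();
    private final String baseUri;

    public ProductRestClient(String baseUri) {
        this.baseUri = baseUri;
    }

    // The JSON response is deserialized straight into the shared domain class.
    public Product findProduct(long id) {
        return client.target(baseUri)
                     .path("products/{id}")
                     .resolveTemplate("id", id)
                     .request(MediaType.APPLICATION_JSON)
                     .get(Product.class);
    }
}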
In the case of SOAP:
We develop ProductAPI using JAX-WS (or Axis2, etc.) and expose the service via a WSDL.
In this case, we have to implement a "ProductAPI SOAP Client" from the exposed WSDL (perhaps using a stub-generation tool).
Here the client classes are generated from the XSD definitions in the WSDL, and we have to do an additional format conversion if we want to use the same Common Domain Model classes.
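To make that extra step concrete, here is a hedged sketch of the kind of conversion this implies, assuming a generated stub class ProductDTO and the shared domain class Product (the field names are illustrative):

// Maps the wsimport/Axis2-generated stub type onto the shared domain model.
public final class ProductConverter {

    private ProductConverter() {
    }

    public static Product toDomain(ProductDTO dto) {
        Product product = new Product();
        product.setId(dto.getId());
        product.setName(dto.getName());
        product.setPrice(dto.getPrice());
        return product;
    }
}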
My questions:
1) In the case of SOAP, is there a way to skip this format conversion?
2) In an enterprise application (like eCommerce), is it good practice to avoid this kind of intermediate data conversion for performance reasons?
To move data between processes, the data needs to be serialized. You can certainly use a different and better format than SOAP for the serialized data, but if you already have SOAP, it will be more work.
Is it worth trading programmer effort for performance in this kind of project? My guess is no, since the performance will probably be good enough anyway. And if it isn't, the usual advice is to measure and identify bottlenecks before optimizing anything.
Related
We are a complete SOA shop (Java only) and we use SOAP for data transfer. Currently we are in the process of centralizing the database work for a specific component so the other components can fetch data from one application using SOAP.
My argument is that it is good to centralize, but adding SOAP between database calls adds a lot of latency. I want an RMI/EJB type of implementation so we get serialized objects and reduce the marshalling overhead. I like the way EJBs are implemented and would like to use them. But the data that we return does not come from a single table at all, so I cannot return a database table entity; the data might come from 20 or more other tables.
So, in our current system we have custom entities which were created to map to heavy SQL queries (not tied to one table).
Can EJBs be used for this type of environment? If so, are there libraries readily available to map the result of a query to entities?
Unfortunately our in-house system is very old; we use Java 1.4.
This can be done, but it is going to be painful. There was a reason EJB 3.0 entity beans were created: these sorts of complex requirements are really quite difficult to map via the old 2.x entity bean XML files.
If you are really building a new SOA layer to represent your database content, why would you do this with a technology that has been obsolete for almost 10 years?
Worse, building this with EJB 2.x and then using RMI/EJB will bind all of your other applications to this same outdated technology. Very few people would choose to start a new EJB 2.1 project.
I honestly believe that you are better off using SOAP for your service instead of EJB; at least it won't couple you to an obsolete platform. Current best practice prefers REST for entity transfer and reserves SOAP for RPC-style interactions, but there are lots of good libraries for mapping your database tables to SOAP, many of which work out of the box with common RDBMSs.
Finally, if you are determined to do this, I'd suggest you first do a test. Build a test framework to actually see if the SOAP deserialization is a significant cost component. Compare it to the cost of the network transport. Unless these entities are in the megabyte range, deserialization will be a tiny fraction of your overall application time.
When building RESTful services, I always come up against the issue of how to develop a client library that can be distributed to users of the system.
To take a simple example, say there is an entity called Person, and you want to support basic CRUD functionality through your RESTful service.
To save a person, the client needs to call the POST method and pass the appropriate data structure, say in JSON.
To find people by birthday, your service will reply with a response containing a list of Person objects.
To delete a person, your service will respond with a success or failure message.
From the above examples, there are already two objects that may be shared with the client: the Person object and the response object. I have tried a few ways of accomplishing this:
Including the Person object from your server in the client library. The downsides to this approach are:
The client code becomes tightly coupled with your server code. Any changes on the server side will require the client to update in the same release.
The Person object may contain dependencies or annotations used for persistence or serialization. The client cares nothing about these libraries but is forced to include them.
Include a subclass of Map which is not directly tied to the Person object but contains some helpers to set the required fields.
This gives looser coupling, but it could result in silent errors when the data structure on the server changes.
Use a descriptive file like Apache Thrift, WADL, or JSON Schema to generate client objects at compile time. This solves the issue of object dependencies but still creates a hard dependency. It is almost like creating a WSDL for SOAP. However, this approach is not widely used and it is sometimes difficult to find examples.
What's the best way to publish a client JAR for your application, so that:
It's easy for clients to use
It does not create tight coupling and has some tolerance for server-side changes
If your answer is better documentation of the API, what is a good tool to generate these documents from Java annotations and POJOs?
This is a common problem, regardless of the protocol used for communication.
In some of the REST APIs we've been working with recently (JAX-RS based), we create DTO objects. These are just dumb POJOs (with some additional annotations for JAXB to do some marshalling/unmarshalling for us automatically). We build these as a submodule (in maven) and provide them as a JAR so that any other projects using our API can use the DTOs if they wish. Obviously, if you want to provide your own client library, it can make use of these DTOs. Having them provided as a separate JAR (which any app can depend on) means clients aren't pulling in crazy dependencies that they don't need (your whole serverside code).
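As a rough illustration of what such a DTO module can contain (the PersonDto name and its fields are made up for the example), assuming JAXB annotations for the automatic marshalling/unmarshalling:

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Dumb POJO shared between server and clients; no persistence or server-side dependencies.
@XmlRootElement(name = "person")
@XmlAccessorType(XmlAccessType.FIELD)
public class PersonDto {

    private String id;
    private String firstName;
    private String lastName;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}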
This keeps things fairly well decoupled.
On the other hand, you really don't need to provide a client. It's REST after all. Provided your REST API is well constructed and follows HATEOAS principles, your API should be easily crawlable/browsable, i.e. you shouldn't need any other descriptive scheme. If you need WADLs or other similar constructs, your API probably isn't very RESTful.
I need to create 5 methods on the server side which will work with binary data. The remote clients are an applet and JavaScript. The client will send files to the server, and the server must parse these files and then return the response as XML/JSON.
So I am confused - is it good practice to use REST-service in this case? Or should I use a servlet?
My colleague told me:
"Creating REST-service that will be used only by one Application isn't
good. REST must be created only when it will be used by many apps. And
REST has some disadvantages over servlet: REST is slower than servlet;
it more difficult to write thread-safe REST than servlet"
However, I see some disadvantages with using a servlet: I need to send the name of the function I want to call (e.g. as an extra HTTP parameter),
and then inside the doPost method perform the following switch:
switch (functionName) {
    case "function1":
        function1();
        break;
    case "function2":
        function2();
        break;
    // ... more case statements ...
    default:
        // unknown function name
        break;
}
In the case of REST I can simply use different URLs for different functions.
Also, in the case of REST, it is more convenient to return JSON/XML from the server.
You are confusing two paradigms here:
REST is a software architecture “style”;
Servlet is a server-side technology.
You can, for example, implement REST-like services using Servlets.
Well,
I wouldn't agree with your colleague's opinion that it isn't good to have a REST API used by only one application, since you may decide in the future to have different applications use the same REST API. If I were you I would choose pure REST. Why?
If you're using a framework for the REST implementation (let's say Apache CXF or Jersey) you get lots of stuff out of the box: you write a POJO and you get a REST resource, with serialization and deserialization to and from, let's say, JSON handled for you (eventually you may need to implement some JsonProviders, but this is not a big deal) - see the sketch after this list.
It is intuitive to work with (if you design your REST APIs well).
It is very easily consumable by JavaScript clients (especially if you're using jQuery or something like that).
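A minimal sketch of the "write a POJO and get JSON for free" point, matched to the files-in/JSON-out scenario from the question. It assumes Jersey (or any JAX-RS implementation) with a JSON provider such as Jackson on the classpath; the paths and the ParseResult type are invented for the example.

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Each "function" gets its own URL, so no dispatching switch is needed.
@Path("/files")
public class FileResource {

    // A dumb result POJO; the JSON provider serializes it automatically.
    public static class ParseResult {
        public String status;
        public int bytesRead;
    }

    // POST /files/parse with the raw file in the request body; JSON comes back.
    @POST
    @Path("parse")
    @Consumes(MediaType.APPLICATION_OCTET_STREAM)
    @Produces(MediaType.APPLICATION_JSON)
    public ParseResult parse(byte[] fileContent) {
        ParseResult result = new ParseResult();
        result.status = "ok";
        result.bytesRead = fileContent.length;
        return result;
    }
}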
However, it strongly depends on what exactly you want to do; if you have some heavy transactional logic, REST could be quite tricky. If you're only going to make POST requests (without using the other HTTP methods) you might want to use a servlet, since you won't have to work with additional frameworks and pull in more dependencies. Note that REST is more or less an architectural concept and it does not contradict the Servlet technology; if you're stubborn enough you can build a REST API with servlets alone :-).
Hope I've helped.
First, you're speaking in terms of two different paradigms. It's kinda apples-and-oranges.
REST is a style of service that uses HTTP operations (GET, PUT, etc.) to read and write the state of resources. Think of resources as "nouns" and "things".
Servlet, on the other hand, is a software specification originally provided by Sun Microsystems for connecting HTTP requests to custom Java code. Servlets often speak in terms of method calls: "verbs" and "actions".
Since your question implies that you are looking to deal with input->output methods, just a plain servlet should do the job.
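For what it's worth, here is a hedged sketch of what that plain servlet might look like for the files-in/JSON-out case; the parsing step and the JSON payload are placeholders.

import java.io.IOException;
import java.io.InputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FileParseServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Read the uploaded binary data from the request body.
        int totalBytes = 0;
        byte[] buffer = new byte[8192];
        try (InputStream in = request.getInputStream()) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                totalBytes += read; // parse the file here instead of just counting bytes
            }
        }

        // Write the result back as JSON by hand (or via a library such as Jackson).
        response.setContentType("application/json");
        response.getWriter().write("{\"status\":\"ok\",\"bytesRead\":" + totalBytes + "}");
    }
}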
I cannot see any problem with using Jersey to create a REST service. I know that I'm a REST-taliban, but it's really simple to implement this kind of architecture using JAX-RS, so... why not?
Your colleague says "REST must be created only when it will be used by many apps", but I cannot see how this can be true. Why can't I create a REST service for a single app?
It sounds like your colleague is applying premature optimisation. If you can write it with a JAX-RS library quickly, do so. If it then proves to be the bottleneck, only then should you take the time to rewrite it as a servlet.
In my experience the performance overhead of JAX-RS is not big enough to justify the development and maintenance overhead of writing the equivalent directly in servlets, where the problem maps well to JAX-RS.
Depending on your container version, Jersey (or any other JAX-RS implementation) will still use a Servlet for dispatching requests to the appropriate handler.
If your application is truly RESTful, then JAX-RS will do what you want. Otherwise, consider using a FrontController to interpret the Request and forward it to the appropriate handler.
Also, don't confuse XML or JSON with REST. You will get these for free in most (if not all) JAX-RS implementations, but these implementations still delegate content marshalling to other libraries (e.g. JAXB).
Here is a similar link where he has done it with a plain servlet:
http://software.danielwatrous.com/restful-java-servlet/
A quick question on what the best practice is for integrating with external systems.
We have a system that deals with Companies, which we represent by our own objects. We also use an external system via SOAP that returns an Organization object. They are very similar, but not the same (ours is a subset of theirs).
My question is, should we wrap the SOAP service via a Facade so we return only Company objects to our application, or should we return another type of object (e.g. OrgCompany), or even just use the Organization object in our code.
The SOAP service and Organization object are defined by an external company (a bank), who we have no control over.
Any advice and justification is much appreciated.
My two cents: introducing external objects into an application is always a problem, especially during maintenance. A small service change might lead to a big code change in the application.
It's always good to have a layer of abstraction between the external service and the application. I would suggest creating a service layer which translates the external service objects into your application's domain objects and using those within the application. A clear separation / decoupling helps a lot in maintenance.
Your decision here is how you want to manage external code dependencies in your application. Some factors that should play into your decision:
1) How often will the API change, and what's the expected nature of the changes?
2) What's the utility of your application outside its dependencies? If you removed the SOAP service dependency, would your app still serve a purpose?
A defensive approach is to build a facade or adapter around the SOAP service, so that your code only depends on your own object model. This gives you a lot of control and a relatively loose coupling between your code/logic and the service. The price you pay for this control is that when the SOAP contract changes, you usually also have to change a layer of your code.
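A rough sketch of what such a facade could look like, assuming the bank's generated Organization stub and your own Company domain class; the port type, method names, and fields are invented for the example.

// Facade/adapter: the rest of the application only ever sees Company objects.
public class CompanyLookupFacade {

    private final OrganizationPort soapPort; // generated JAX-WS port (illustrative name)

    public CompanyLookupFacade(OrganizationPort soapPort) {
        this.soapPort = soapPort;
    }

    public Company findCompany(String registrationNumber) {
        Organization organization = soapPort.getOrganization(registrationNumber);
        return toCompany(organization);
    }

    // The translation is isolated here, so a WSDL change only touches this class.
    private Company toCompany(Organization organization) {
        Company company = new Company();
        company.setName(organization.getLegalName());
        company.setRegistrationNumber(organization.getRegistrationId());
        return company;
    }
}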
A different approach is to use the objects you're getting from the WSDL directly. This is beneficial when it doesn't make sense to introduce a level of indirection between your application and the generated client code, i.e. your application is just a feeder into a different system and the whole point of the app is to stuff the Organization object into a JMS pipeline or something similar. If the SOAP API contract never changes and you don't expect the output of your app to change much, then introducing an extra layer of indirection will just hinder the readability of your codebase long term.
In my experience, most J2EE developers tend to take the former approach, both because of the nature of their applications and because they want to separate their application logic from the details of the data source.
Hope this helps.
I can't think of any situation where it's good to use objects that another company controls. The first thing you should do is bridge those objects into your own. Also, by having your own objects, you can expand their functionality beyond what the third party provides (for example, if in the future you need to talk to more than one Company object provider).
Look at the Adapter pattern.
I'd support Sridhar's suggestion; I'd just like to add that for translating external service objects to your application domain you can use Dozer:
http://dozer.sourceforge.net/documentation/mappings.html
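For example, a minimal Dozer-based translation might look roughly like this, assuming matching field names (or an XML mapping file for the ones that differ); Organization and Company are the types from the question:

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

public class OrganizationMapper {

    // DozerBeanMapper is thread-safe and intended to be created once and reused.
    private final Mapper mapper = new DozerBeanMapper();

    public Company toCompany(Organization organization) {
        // Copies matching fields; custom field mappings go into the Dozer mapping XML.
        return mapper.map(organization, Company.class);
    }
}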
I typically always Adapt externally defined domain objects to an internal representation.
I also create a comprehensive suite of tests against the external domain object that will highlight any problems quickly if the external vendor produces a new release.
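As a hedged illustration, such a test can be as simple as pinning the fields your adapter relies on (JUnit 4 shown; the CompanyAdapter and the Organization accessors are assumptions):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrganizationAdapterTest {

    // If the vendor renames or retypes a field in a new release, this breaks immediately.
    @Test
    public void adapterCopiesTheFieldsWeDependOn() {
        Organization organization = new Organization();
        organization.setLegalName("Acme Ltd");
        organization.setRegistrationId("12345");

        Company company = new CompanyAdapter().toCompany(organization);

        assertEquals("Acme Ltd", company.getName());
        assertEquals("12345", company.getRegistrationNumber());
    }
}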
The Enterprise Service Bus architecture might be useful here:
"Its primary use is in Enterprise Application Integration of heterogeneous and complex landscapes." (from Wikipedia)
I would check out open source Mule if you are looking for an open source solution.
At work, we currently have a WSDL interface as well as a semi-RESTful interface that we're looking to expand upon and take it to the next level.
The main application runs using Servlets + JSPs as well as Spring.
The idea is that the REST and WSDL are interfaces for an API that will be designed. These (and potentially other things in future) are simply a method through which clients will be able to integrate with the interface.
I'm wondering if there are any suggestions or recommendations on frameworks / methodologies, etc. for implementing that underlying API, or does it make sense simply to create some Spring beans which are called by either the WSDL or the REST layer?
Hope that makes sense.
Have a look at Enunciate; it is great. You are using Spring, and Spring has had support for SOAP for a while; Spring 3 also has support for REST (creating and consuming).
Your approach makes sense. Probably the most important advice is to make the external API layer as thin as possible. You can use Axis, Apache CXF, Jersey, etc. to handle the implementation of the REST or SOAP protocols, but the implementation of those services should just load the passed in data into a common request object, and pass that into a separate service that handles the request and returns a response object which the external API layer will marshall into the correct format for you.
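A hedged sketch of the shape this suggests, with the protocol-specific layer kept thin and a shared service behind it; all of the class names here (ProductRestEndpoint, ProductService, GetProductRequest, GetProductResponse) are invented for the example.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Thin JAX-RS endpoint: it only translates HTTP into a common request object
// and hands off to the shared service that the SOAP endpoint also calls.
@Path("/products")
public class ProductRestEndpoint {

    private final ProductService productService = new ProductService(); // normally injected as a Spring bean

    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public GetProductResponse get(@PathParam("id") String id) {
        GetProductRequest request = new GetProductRequest();
        request.setProductId(id);
        return productService.getProduct(request);
    }
}

A matching SOAP endpoint would do the same translation into GetProductRequest and call the same ProductService, so the business logic lives in exactly one place.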
This approach works especially well when you have a competitor providing similar services and you want to make it easy for their customers to switch. You just build a new external API that mirrors the competitor's and simply translate their format to your internal API model; provided your services are functionally equivalent, you're done.
This is a really late response, but I have a different view on this topic. The traditional way, as we know it, is to unmarshal XML to Java and marshal Java to XML. However, if the WSDL changes, that is effectively a structural change in the code, which again requires a deployment.
Instead, if we list the fields mentioned in the WSDL in a persistent store, load the mappings into memory, and prepare our structures based on these mappings, far fewer code changes are needed. Thus, IMO, instead of using existing libraries, a configurable approach to unmarshalling and marshalling should be taken.
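As a rough, hedged sketch of that idea (not a production-ready mapper): keep the WSDL-field-to-domain-field mappings as data and apply them reflectively, so a renamed field becomes a configuration change rather than a code change. All names below are illustrative.

import java.lang.reflect.Field;
import java.util.Map;

// Copies values between objects according to an externally loaded field mapping,
// e.g. {"legalName" -> "name", "registrationId" -> "registrationNumber"}.
public class ConfigurableMapper {

    private final Map<String, String> fieldMappings;

    public ConfigurableMapper(Map<String, String> fieldMappings) {
        this.fieldMappings = fieldMappings; // loaded from a database or a properties file
    }

    public void map(Object source, Object target) {
        for (Map.Entry<String, String> entry : fieldMappings.entrySet()) {
            try {
                Field from = source.getClass().getDeclaredField(entry.getKey());
                Field to = target.getClass().getDeclaredField(entry.getValue());
                from.setAccessible(true);
                to.setAccessible(true);
                to.set(target, from.get(source));
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("Bad mapping: " + entry, e);
            }
        }
    }
}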